CN115964661B - Rotary machine fault diagnosis method and system based on domain adversarial network - Google Patents

Rotary machine fault diagnosis method and system based on domain adversarial network


Publication number
CN115964661B
CN115964661B (application CN202310015521.3A)
Authority
CN
China
Prior art keywords
domain
network
fault diagnosis
training
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310015521.3A
Other languages
Chinese (zh)
Other versions
CN115964661A
Inventor
李学艺
郁天宇
李岱优
何秋实
解志杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeast Forestry University
Original Assignee
Northeast Forestry University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeast Forestry University filed Critical Northeast Forestry University
Priority to CN202310015521.3A priority Critical patent/CN115964661B/en
Publication of CN115964661A publication Critical patent/CN115964661A/en
Application granted granted Critical
Publication of CN115964661B publication Critical patent/CN115964661B/en

Abstract

The embodiment of the invention discloses a rotary machine fault diagnosis method and system based on a domain adversarial network, wherein the method comprises the following steps: acquiring parameter data of the rotary machine to be tested; inputting the parameter data into a pre-trained fault diagnosis model to obtain a fault diagnosis result of the rotary machine to be tested; the fault diagnosis model is obtained by training target samples based on a group convolutional neural network, and the target samples are obtained by performing sample expansion processing on limited samples with a least squares generative adversarial network. This solves the technical problem in the prior art that the accuracy of rotary machine fault diagnosis results is low due to an insufficient sample size, and achieves the technical effect of ensuring the accuracy of rotary machine fault diagnosis results under small-sample conditions.

Description

Rotary machine fault diagnosis method and system based on domain adversarial network
Technical Field
The invention relates to the technical field of intelligent mechanical manufacturing, and in particular to a rotary machine fault diagnosis method and system based on a domain adversarial network.
Background
Rotary machines are among the most common components in industry, and timely and accurate fault diagnosis of rotary machines is critical to the normal operation of equipment. Currently, fault diagnosis methods based on artificial intelligence have been widely used in the fault diagnosis of gears. However, for the composite fault modes of rotating machinery, most current intelligent diagnosis methods identify a composite fault mode as a single fault mode and neglect the connection between composite faults and single faults. For a deep learning diagnosis model, modeling the many possible composite component faults in a given system increases the complexity of the model and greatly increases the number of model parameters. Therefore, an insufficient sample size of composite fault data may affect the accuracy of the diagnostic result.
Disclosure of Invention
Therefore, the embodiment of the invention provides a rotary machine fault diagnosis method and system based on a domain adversarial network, which at least partially solve the technical problem in the prior art of low accuracy of rotary machine fault diagnosis results caused by an insufficient sample size, thereby achieving the technical effect of ensuring the accuracy of rotary machine fault diagnosis results under small-sample conditions.
In order to achieve the above object, the embodiment of the present invention provides the following technical solutions:
a rotary machine fault diagnosis method based on a domain adversarial network, the method comprising:
acquiring parameter data of the rotary machine to be tested;
inputting the parameter data into a pre-trained fault diagnosis model to obtain a fault diagnosis result of the rotary machine to be tested;
the fault diagnosis model is obtained by training target samples based on a group convolutional neural network, and the target samples are obtained by performing sample expansion processing on limited samples with a least squares generative adversarial network.
In some embodiments, training with target samples based on a group convolutional neural network to obtain the fault diagnosis model specifically includes:
performing expansion processing on the limited samples by using a least squares generative adversarial network to obtain a plurality of target sample data, and forming a target domain from all the target sample data;
pre-training the original signals of the source domain through a group convolutional neural network to obtain source domain signals;
training the source domain signals and the target domain signals in a domain adversarial network to generate the fault diagnosis model.
In some embodiments, expanding the limited samples using the least squares generative adversarial network further comprises:
using the generator to pull the generated vibration signal toward the decision boundary while confusing the discriminator.
In some embodiments, in the least squares generative adversarial network, the minimum loss function of the generator is:

\min_G V_{LSGAN}(G) = \frac{1}{2}\mathbb{E}_{z \sim p_z(z)}\big[(D(G(z)) - c)^2\big]

and the minimum loss function of the discriminator is:

\min_D V_{LSGAN}(D) = \frac{1}{2}\mathbb{E}_{x \sim p_{data}(x)}\big[(D(x) - b)^2\big] + \frac{1}{2}\mathbb{E}_{z \sim p_z(z)}\big[(D(G(z)) - a)^2\big]

where G is the generator, D is the discriminator, z is the noise, p_{data}(x) is the probability distribution of the real data x, p_z(z) is the probability distribution of the noise z, \mathbb{E}_{x \sim p_{data}(x)} and \mathbb{E}_{z \sim p_z(z)} denote expectations, and a, b, c are constants with b - c = 1 and b - a = 2.
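For reference, a minimal PyTorch sketch of these two objectives is given below; the mean-squared-error form follows the equations above, while the specific constants a = -1, b = 1, c = 0 are merely one admissible choice satisfying b - c = 1 and b - a = 2, and the function names are illustrative rather than taken from the patent.

```python
import torch

# One admissible choice of constants with b - c = 1 and b - a = 2 (assumption).
A, B, C = -1.0, 1.0, 0.0

def discriminator_loss(D, G, real_x, z):
    """1/2 E[(D(x) - b)^2] + 1/2 E[(D(G(z)) - a)^2]."""
    fake_x = G(z).detach()                          # do not backpropagate into G here
    loss_real = 0.5 * ((D(real_x) - B) ** 2).mean()
    loss_fake = 0.5 * ((D(fake_x) - A) ** 2).mean()
    return loss_real + loss_fake

def generator_loss(D, G, z):
    """1/2 E[(D(G(z)) - c)^2]: pulls generated samples toward the decision boundary."""
    return 0.5 * ((D(G(z)) - C) ** 2).mean()
```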
In some embodiments, the source domain original signal is pre-trained through a group convolutional neural network, specifically comprising:
acquiring an original signal of a source domain, and converting the original signal into a feature map sample;
inputting the feature map samples into a pre-stored group convolutional neural network, grouping the feature map samples, and convolving each group of feature map samples independently;
after the convolution of each group of feature maps is completed, stacking and concatenating the outputs to complete the pre-training.
In some embodiments, training the source domain signal and the target domain signal in a domain adversarial network specifically comprises:
given the source domain data x_s and the corresponding labels y_s, performing a multi-layer nonlinear transformation with a group convolutional neural network structure to obtain the depth feature representation G_f(x_s; θ_f), where θ_f denotes the parameters of each layer of the feature extractor G_f, including weights and biases;
inputting the extracted features into the classifier G_y of the domain adaptation method to obtain the corresponding output G_y(G_f(x_s); θ_y), where θ_y denotes the parameters of each layer of G_y, and outputting the predicted label of each sample.
In some embodiments, the loss function of the domain adaptation method is:

E = L_y(x_s, y_s) + λ L_d(x_s, y_t)

where E denotes the total loss of the network, L_y denotes the classification loss of the network on the source domain data, L_d denotes the loss of the distribution matching module, x denotes the input features, y denotes the labels of the source domain, and λ denotes the loss weight.
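As an illustration of how this total loss could be assembled, the following sketch assumes that L_y is a cross-entropy loss on the labeled source data and that L_d is a binary source-versus-target domain classification loss; both of these concrete choices, as well as all function and variable names, are assumptions rather than details specified by the patent.

```python
import torch
import torch.nn.functional as F

def total_loss(class_logits_s, labels_s, domain_logits_s, domain_logits_t, lam=1.0):
    """E = L_y(x_s, y_s) + lambda * L_d, with L_d taken here as a binary
    source-vs-target domain classification loss (an assumption).
    Domain logits are assumed to have shape (N, 1)."""
    l_y = F.cross_entropy(class_logits_s, labels_s)           # classification loss on source data
    domain_logits = torch.cat([domain_logits_s, domain_logits_t], dim=0)
    domain_labels = torch.cat([torch.zeros(len(domain_logits_s), 1),
                               torch.ones(len(domain_logits_t), 1)], dim=0)
    l_d = F.binary_cross_entropy_with_logits(domain_logits, domain_labels)
    return l_y + lam * l_d
```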
The invention also provides a rotary machine fault diagnosis system based on a domain adversarial network, the system comprising:
the data acquisition unit is used for acquiring parameter data of the rotary machine to be tested;
the result output unit is used for inputting the parameter data into a pre-trained fault diagnosis model so as to obtain a fault diagnosis result of the rotary machine to be tested;
The fault diagnosis model is obtained by training target samples based on a group convolutional neural network, and the target samples are obtained by performing sample expansion processing on limited samples with a least squares generative adversarial network.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method as described above when executing the program.
The invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method as described above.
According to the domain adversarial network-based rotary machine fault diagnosis method provided by the invention, parameter data of the rotary machine to be tested are acquired; the parameter data are input into a pre-trained fault diagnosis model to obtain a fault diagnosis result of the rotary machine to be tested; the fault diagnosis model is obtained by training target samples based on a group convolutional neural network, and the target samples are obtained by performing sample expansion processing on limited samples with a least squares generative adversarial network. In this way, the fault diagnosis method provided by the invention uses a fault diagnosis model trained on samples expanded by the generative adversarial network, so that an accurate fault diagnosis result can be obtained once the raw data are input into the fault diagnosis model. Faults of rotating machinery such as gears can therefore be diagnosed effectively and accurately with relatively little training data, which solves the technical problem in the prior art of low accuracy of rotary machine fault diagnosis results caused by an insufficient sample size and achieves the technical effect of ensuring the accuracy of rotary machine fault diagnosis results under small-sample conditions.
Further, in the training process of the fault diagnosis model, a least squares generative adversarial network is used to expand the limited samples to obtain a plurality of target sample data, and all the target sample data form the target domain; the original signals of the source domain are pre-trained through a group convolutional neural network to obtain source domain signals; and the source domain signals and the target domain signals are trained in a domain adversarial network to generate the fault diagnosis model. In this way, the model training method expands the limited target sample data with a least squares generative adversarial network, and by changing the objective function it overcomes the low quality of the vibration signals generated by a conventional generative adversarial network and its unstable training process. The original vibration signals of the source domain are pre-trained with a group convolutional neural network, and the grouped network effectively reduces the number of network parameters. Finally, the source domain signals and the target domain signals are trained in the domain adversarial network so that the differently distributed data in the target domain can be diagnosed, which improves the model training effect and ensures the accuracy of the output fault diagnosis results.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It will be apparent to those of ordinary skill in the art that the drawings in the following description are exemplary only and that other implementations can be obtained from the extensions of the drawings provided without inventive effort.
The structures, proportions, sizes, etc. shown in the present specification are shown only for the purposes of illustration and description, and are not intended to limit the scope of the invention, which is defined by the claims, so that any structural modifications, changes in proportions, or adjustments of sizes, which do not affect the efficacy or the achievement of the present invention, should fall within the ambit of the technical disclosure.
FIG. 1 is a flowchart of an embodiment of the domain adversarial network-based rotary machine fault diagnosis method according to the present invention;
FIG. 2 is a schematic diagram of the domain adaptation process of the method of FIG. 1 in one particular use scenario;
FIG. 3 is a second flowchart of an embodiment of the domain adversarial network-based rotary machine fault diagnosis method according to the present invention;
FIG. 4 is a network architecture diagram of a group convolutional neural network provided by the present invention;
FIG. 5 is a schematic diagram of a minimum loss function provided by the present invention;
FIG. 6 is a third flowchart of an embodiment of the domain adversarial network-based rotary machine fault diagnosis method according to the present invention;
FIG. 7 is a schematic diagram of a training process of a fault diagnosis model provided by the present invention;
FIG. 8 is a general flow chart of the method provided by the present invention in a specific application scenario;
FIG. 9 is a time waveform diagram of vibration data under various conditions in the specific application scenario shown in FIG. 8;
FIG. 10 is a line graph of the diagnostic results in the specific application scenario illustrated in FIG. 8;
FIG. 11 is a block diagram illustrating an exemplary embodiment of the domain adversarial network-based rotary machine fault diagnosis system according to the present invention;
fig. 12 is a schematic diagram of an entity structure of an electronic device according to the present invention.
Detailed Description
Other advantages and effects of the present invention will become apparent to those skilled in the art from the following detailed description, which, by way of illustration, describes certain specific embodiments but not all embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
First, some technical terms related to the present invention will be explained:
Transfer learning is a research field within machine learning. It focuses on reusing existing problem-solving models and extending them to other, different but related problems. For example, the knowledge (or model) used to identify cars may also be adapted to obtain the ability to identify trucks.
GAN is short for Generative Adversarial Nets, also called a generative adversarial network. A GAN contains two networks: one is used to generate data and is called the "generator"; the other is used to judge whether the generated data is close to the real data and is called the "discriminator".
In order to solve the problem in the prior art of poor fault diagnosis accuracy caused by the small number of samples available for rotating machinery such as gears, the invention provides a rotary machine fault diagnosis method based on a domain adversarial network, which improves the accuracy of fault diagnosis through a pre-trained fault diagnosis model that requires only a small sample size.
Referring to fig. 1, fig. 1 is a flowchart of an embodiment of the domain adversarial network-based rotary machine fault diagnosis method provided by the present invention.
In a specific embodiment, as shown in fig. 1 and fig. 2, the domain adversarial network-based rotary machine fault diagnosis method provided by the invention comprises the following steps:
S101: acquiring parameter data, such as vibration amplitude, of the rotary machine to be tested;
s102: inputting the parameter data into a pre-trained fault diagnosis model to obtain a fault diagnosis result of the rotary machine to be tested;
the fault diagnosis model is obtained by training target samples based on a group convolutional neural network, and the target samples are obtained by performing sample expansion processing on limited samples with a least squares generative adversarial network.
In the above specific embodiment, the domain adversarial network-based rotary machine fault diagnosis method provided by the present invention acquires parameter data of the rotary machine to be tested; the parameter data are input into a pre-trained fault diagnosis model to obtain a fault diagnosis result of the rotary machine to be tested; the fault diagnosis model is obtained by training target samples based on a group convolutional neural network, and the target samples are obtained by performing sample expansion processing on limited samples with a least squares generative adversarial network. In this way, the fault diagnosis method provided by the invention uses a fault diagnosis model trained on samples expanded by the generative adversarial network, so that an accurate fault diagnosis result can be obtained once the raw data are input into the fault diagnosis model. Faults of rotating machinery such as gears can therefore be diagnosed effectively and accurately with relatively little training data, which solves the technical problem in the prior art of low accuracy of rotary machine fault diagnosis results caused by an insufficient sample size and achieves the technical effect of ensuring the accuracy of rotary machine fault diagnosis results under small-sample conditions.
The training process of the fault diagnosis model is shown in fig. 3; training with the target samples based on the group convolutional neural network to obtain the fault diagnosis model specifically includes the following steps:
S301: performing expansion processing on the limited samples using a least squares generative adversarial network to obtain a plurality of target sample data, and constructing the target domain from all the target sample data. That is, the least squares generative adversarial network is used to expand the limited target sample data, and changing the objective function improves the quality of the generated vibration signals and the stability of the training process.
S302: the original signals of the source domain are pre-trained through the group convolutional neural network to obtain the source domain signals. In a specific use scenario, pre-training the original signals of the source domain through the group convolutional neural network effectively reduces the number of network parameters, and the group convolution method reduces the resource requirements on the computer.
In principle, group convolution operates by grouping the input feature maps and convolving each group separately. After each group is convolved, the outputs are stacked and concatenated as the output channels of that layer. As shown in FIG. 4, the input data is divided into three groups (the number of groups is g); the grouping is done only along the depth dimension, and C1/g determines the number of channels per group. Assume that the size of the input feature map is C1 × H × W and the number of output feature maps is C2. The number of input feature maps per group is then C1/g, and the number of output feature maps per group is C2/g. The size of each convolution kernel is (C1/g) × h × w, and the total number of convolution kernels is still C2; in this case, the number of convolution kernels per group is C2/g, and each kernel is convolved only with the input maps of its own group. The total number of convolution kernel parameters is C2 × (C1/g) × h × w, so it is reduced to 1/g of that of a standard convolution. In the feature map output by a conventional convolution, each point is computed from C1 × h × w points of the input feature map; in the feature map output by a group convolution, each point is computed from (C1/g) × h × w points of the input feature map.
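The parameter saving described above can be checked with a grouped convolution layer; the PyTorch sketch below uses channels-first tensors and illustrative channel counts (C1 = 16, C2 = 64, g = 4) that are not taken from the patent.

```python
import torch
import torch.nn as nn

C1, C2, h, w, g = 16, 64, 3, 3, 4                 # illustrative sizes (not from the patent)

standard = nn.Conv2d(C1, C2, kernel_size=(h, w), padding=1, bias=False)
grouped  = nn.Conv2d(C1, C2, kernel_size=(h, w), padding=1, groups=g, bias=False)

print(standard.weight.numel())   # C2 * C1 * h * w       = 64 * 16 * 3 * 3 = 9216
print(grouped.weight.numel())    # C2 * (C1/g) * h * w   = 64 *  4 * 3 * 3 = 2304, i.e. 1/g

x = torch.randn(1, C1, 20, 35)   # each group convolves only its own C1/g input maps;
y = grouped(x)                   # the group outputs are concatenated along the channel axis
print(y.shape)                   # torch.Size([1, 64, 20, 35])
```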
S303: the source domain signal and the target domain signal are trained in a domain countermeasure network to generate a fault diagnosis model so that different distributed data in the target domain can be diagnosed. More specifically, during sample processing, gear vibration signals of five different health states under five different loads are collected, and for convenience of description, the gear vibration signals collected under five different loads are represented by G1 to G5, and the five different health states are represented by H1 to H5. LSGANs (least squares generation countermeasure network) is used to generate the vibration signal and extend the limited vibration signal to the same amount as the source signal. The DANN model is based on a group convolutional neural network, first, the source domain signal is used to pre-train the feature extractor, and then the whole DANN model is trained with the domain countermeasure network to achieve efficient fault diagnosis of gears (gear is an example in this embodiment) under different loads based on limited data. Eight transfer learning schemes are designed, G1, G2 and G3 are used as a group to perform transfer learning based on limited samples, G4 and G5 are used as a group to perform transfer learning based on limited samples, and each transfer learning scheme is trained and tested in five health states simultaneously. The PCB acceleration sensor is used for collecting vibration signals, and the sampling frequency is 20.45kHz.
In order to reduce the least squares loss, the generator must pull the generated vibration signal toward the decision boundary while confusing the discriminator; therefore, when the least squares method is used to generate the adversarial network, the method further comprises:
using the generator to pull the generated vibration signal toward the decision boundary while confusing the discriminator.
In some embodiments, as shown in FIG. 5, when the adversarial network is generated using the least squares method, the minimum loss function of the generator is:

\min_G V_{LSGAN}(G) = \frac{1}{2}\mathbb{E}_{z \sim p_z(z)}\big[(D(G(z)) - c)^2\big]

and the minimum loss function of the discriminator is:

\min_D V_{LSGAN}(D) = \frac{1}{2}\mathbb{E}_{x \sim p_{data}(x)}\big[(D(x) - b)^2\big] + \frac{1}{2}\mathbb{E}_{z \sim p_z(z)}\big[(D(G(z)) - a)^2\big]

where G is the generator, D is the discriminator, z is the noise, p_{data}(x) is the probability distribution of the real data x, p_z(z) is the probability distribution of the noise z, \mathbb{E}_{x \sim p_{data}(x)} and \mathbb{E}_{z \sim p_z(z)} denote expectations, and a, b, c are constants with b - c = 1 and b - a = 2.
In the solving process, the convergence of LSGANs can be proved in the same way as for GANs: with G fixed, the optimal D can be found by setting the derivative of the objective function with respect to D to zero, which gives the optimal solution D^*(x):

D^*(x) = \frac{b\, p_{data}(x) + a\, p_g(x)}{p_{data}(x) + p_g(x)}

where p_g(x) denotes the distribution of the generated data. The additional term \mathbb{E}_{x \sim p_{data}(x)}[(D(x) - c)^2] does not affect the value of V_{LSGAN}(G), since it does not contain the parameters of G. Letting b - c = 1 and b - a = 2 and substituting D^*(x), one obtains

2C(G) = \int_{\mathcal{X}} \frac{\big(2 p_g(x) - (p_{data}(x) + p_g(x))\big)^2}{p_{data}(x) + p_g(x)}\, dx = \chi^2_{Pearson}\big(p_{data} + p_g \,\|\, 2 p_g\big)

so that minimizing the generator loss minimizes the Pearson \chi^2 divergence. When a, b and c are properly selected, the Pearson \chi^2 divergence is obtained, and since a, b and c cannot be 0, the gradient does not vanish.
In some embodiments, as shown in fig. 6 and 7, pre-training the original signals of the source domain through the group convolutional neural network specifically comprises the following steps:
s601: acquiring an original signal of the source domain, and converting the original signal into feature map samples;
s602: inputting the feature map samples into a pre-stored group convolutional neural network, grouping the feature map samples, and convolving each group of feature map samples independently;
s603: after the convolution of each group of feature maps is completed, the outputs are stacked and concatenated to complete the pre-training.
In some embodiments, training the source domain signal and the target domain signal in a domain adversarial network specifically comprises:
given the source domain data x_s and the corresponding labels y_s, performing a multi-layer nonlinear transformation with a group convolutional neural network structure to obtain the depth feature representation G_f(x_s; θ_f), where θ_f denotes the parameters of each layer of the feature extractor G_f, including weights and biases;
inputting the extracted features into the classifier G_y of the domain adaptation method to obtain the corresponding output G_y(G_f(x_s); θ_y), where θ_y denotes the parameters of each layer of G_y, and outputting the predicted label of each sample.
The method maps the source domain and the target domain, whose data follow different distributions, into a high-dimensional feature space, and a distribution matching module is used to reduce the distance between them in that space and to improve the classification accuracy of the network on the target domain. Thus, the loss function of the domain adaptation method is:

E = L_y(x_s, y_s) + λ L_d(x_s, y_t)   (Equation 5)

where E denotes the total loss of the network, L_y denotes the classification loss of the network on the source domain data, L_d denotes the loss of the distribution matching module, x denotes the input features, y denotes the labels of the source domain, and λ denotes the loss weight. When λ = 0, the final loss function of the network is simply the loss of a conventional group convolutional neural network.
In principle, the main purpose of LSGAN is to generate data that is highly similar to the training samples. In domain adaptation, the data of the target domain can be treated directly as generated samples and the data of the other domain as real data. At this point the generator no longer generates new samples but acts as a feature extractor, learning the features of the domain data. The classifier uses the negative log likelihood as its loss function, which can be expressed as:

L_y^i(\theta_f, \theta_y) = -\log G_y\big(G_f(x_i)\big)_{y_i}

The loss function of the domain discriminator is likewise defined by a negative log likelihood and can be expressed as:

L_d^i(\theta_f, \theta_d) = -\Big[ d_i \log G_d\big(G_f(x_i)\big) + (1 - d_i) \log\big(1 - G_d(G_f(x_i))\big) \Big]

where d_i indicates whether the i-th sample comes from the source domain or the target domain, and G_d is the domain discriminator with parameters θ_d. The overall loss function of the network is:

E(\theta_f, \theta_y, \theta_d) = \sum_i L_y^i(\theta_f, \theta_y) - \lambda \sum_i L_d^i(\theta_f, \theta_d)
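The two negative log-likelihood terms above can be written down directly; in the sketch below they are expressed with PyTorch's nll_loss and binary cross-entropy, and the sign of the domain term is assumed to be handled by a gradient reversal layer (see the sketch after the next paragraph), so the two losses are simply added. All names are illustrative, not taken from the patent.

```python
import torch
import torch.nn.functional as F

def classifier_loss(class_log_probs, labels):
    """L_y: negative log likelihood of the true class, -log G_y(G_f(x))_y."""
    return F.nll_loss(class_log_probs, labels)

def domain_loss(domain_prob, domain_label):
    """L_d: negative log likelihood of the domain label d_i (0 = source, 1 = target)."""
    return F.binary_cross_entropy(domain_prob, domain_label)

def dann_objective(class_log_probs, labels, domain_prob, domain_label, lam=1.0):
    # With a gradient reversal layer in front of the domain discriminator the two
    # terms can simply be added: the reversal realizes the minus sign of the
    # -lambda * L_d term with respect to the feature extractor parameters.
    return classifier_loss(class_log_probs, labels) + lam * domain_loss(domain_prob, domain_label)
```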
To obtain domain-invariant features from this objective, the network parameters of the domain discriminator are optimized so that the objective is maximized with respect to them, i.e. the discriminator distinguishes the source domain data from the target domain data as well as possible; at the same time, the parameters of the feature extractor are optimized to minimize the classification loss while confusing the discriminator, so that the input samples become indistinguishable by domain. Therefore, the optimized network parameters can be obtained with the following iterative updating strategy:

(\hat{\theta}_f, \hat{\theta}_y) = \arg\min_{\theta_f, \theta_y} E(\theta_f, \theta_y, \hat{\theta}_d), \qquad \hat{\theta}_d = \arg\max_{\theta_d} E(\hat{\theta}_f, \hat{\theta}_y, \theta_d)

The algorithm is optimized by introducing a gradient reversal layer (sketched below). When the source domain and the target domain share the same label space, the DANN method can effectively reduce the distribution difference between the source domain and the target domain and realize knowledge transfer between different tasks.
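The gradient reversal layer mentioned above is commonly implemented as an autograd function that is the identity in the forward pass and multiplies the gradient by -λ in the backward pass; the following is a standard PyTorch sketch of that idea, not code taken from the patent.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None    # None: no gradient w.r.t. lambda

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# features = feature_extractor(x)                              # G_f
# domain_pred = domain_discriminator(grad_reverse(features))   # G_d sees reversed gradients
```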
Furthermore, the least squares generative adversarial network adopted by the invention is not concerned only with whether a sample is classified correctly, and its loss saturates at only one point, so the training process is more stable. The group convolution scheme can generate more feature maps with the same number of parameters and the same amount of computation, and a larger number of feature maps means that more information can be encoded. Then, domain adversarial training of the neural network (DANN) extracts features, predicts labels and classifies domains. The basic principle of the domain adaptation method is to build a neural network model and a distribution matching module and to map the source domain and the target domain, whose data follow different distributions, into a high-dimensional feature space; the distribution matching module is used to reduce the distance between them in that space and to improve the classification accuracy of the network on the target domain.
In the above embodiment, during the training of the fault diagnosis model, the limited samples are expanded with a least squares generative adversarial network to obtain a plurality of target sample data, and all the target sample data form the target domain; the original signals of the source domain are pre-trained through a group convolutional neural network to obtain source domain signals; and the source domain signals and the target domain signals are trained in a domain adversarial network to generate the fault diagnosis model. In this way, the model training method expands the limited target sample data with a least squares generative adversarial network, and by changing the objective function it overcomes the low quality of the vibration signals generated by a conventional generative adversarial network and its unstable training process; the original vibration signals of the source domain are pre-trained with a group convolutional neural network, which effectively reduces the number of network parameters; finally, the source domain signals and the target domain signals are trained in the domain adversarial network to diagnose the differently distributed data in the target domain, which improves the model training effect and ensures the accuracy of the output fault diagnosis results. In addition, compared with the prior art, the rotary machine fault diagnosis method based on the domain adversarial network has the following advantages:
1. a group convolution network with fewer parameters than a standard CNN is provided for fault diagnosis of rotary machinery;
2. a least squares loss function is introduced, and a least squares GAN is provided to enhance the vibration signal;
3. the original vibration signal is used directly for fault diagnosis of the rotary machine, without any time-frequency domain conversion;
4. domain adversarial training of a neural network with a group convolution model is established to diagnose limited target domain data.
The rotary machine fault diagnosis method based on the domain adversarial network provided by the invention is briefly described below using a specific use scenario as an example. As shown in fig. 8, the method includes the following steps:
step one: collecting vibration signals of the rotary machine under different health conditions and different loads;
step two: using a least squares generative adversarial network (LSGAN) to expand the limited target sample data, where changing the objective function improves the quality of the generated vibration signals and the stability of the training process;
step three: pre-training an original signal of a source domain through a group convolution neural network, wherein the group training network effectively reduces network parameters;
step four: training the source domain signals and the target domain signals in a domain adversarial network to diagnose differently distributed data in the target domain;
Step five: collecting gear vibration signals of five different health states under five different loads, wherein G1-G5 represent the gear vibration signals collected under the five different loads, and H1-H5 represent the five different health states;
step six: the LSGANs are used to generate vibration signals and expand the limited vibration signals to the same amount as the source signals; the DANN model is based on the group convolutional neural network: first, the source domain signals are used to pre-train the feature extractor, and then the whole DANN model is trained with the domain adversarial network to achieve effective fault diagnosis of gears under different loads based on limited data;
step seven: eight transfer learning schemes are designed, G1, G2 and G3 are used as a group to perform transfer learning based on limited samples, G4 and G5 are used as a group to perform transfer learning based on limited samples, and each transfer learning scheme is trained and tested in five health states simultaneously.
Specifically, the loss functions of the LSGAN described in step two are:

\min_G V_{LSGAN}(G) = \frac{1}{2}\mathbb{E}_{z \sim p_z(z)}\big[(D(G(z)) - c)^2\big], \qquad \min_D V_{LSGAN}(D) = \frac{1}{2}\mathbb{E}_{x \sim p_{data}(x)}\big[(D(x) - b)^2\big] + \frac{1}{2}\mathbb{E}_{z \sim p_z(z)}\big[(D(G(z)) - a)^2\big]

The convergence proof of LSGANs follows the proof for GANs: with G fixed, the optimal D can be found by setting the derivative of the objective function with respect to D to zero, which gives the optimal solution D^*(x):

D^*(x) = \frac{b\, p_{data}(x) + a\, p_g(x)}{p_{data}(x) + p_g(x)}

where p_g(x) denotes the distribution of the generated data. The additional term \mathbb{E}_{x \sim p_{data}(x)}[(D(x) - c)^2] does not affect the value of V_{LSGAN}(G), since it does not contain the parameters of G. Letting b - c = 1 and b - a = 2 and substituting D^*(x), one obtains

2C(G) = \int_{\mathcal{X}} \frac{\big(2 p_g(x) - (p_{data}(x) + p_g(x))\big)^2}{p_{data}(x) + p_g(x)}\, dx = \chi^2_{Pearson}\big(p_{data} + p_g \,\|\, 2 p_g\big)

When a, b and c are properly selected, the Pearson \chi^2 divergence is obtained, and since a, b and c cannot be 0, the gradient does not vanish.
Specifically, to reduce the least squares loss, the generator must pull the generated vibration signal toward the decision boundary while confusing the discriminator.
Further, in the pre-training step, the group convolution method reduces the resource requirements on the computer. Group convolution operates by grouping the input feature maps and convolving each group separately; after each group is convolved, the outputs are stacked and concatenated as the output channels of that layer. The input data is divided into three groups (the number of groups is g); the grouping is done only along the depth dimension, and C1/g determines the number of channels per group. Assume that the size of the input feature map is C1 × H × W and the number of output feature maps is C2. The number of input feature maps per group is C1/g, and the number of output feature maps per group is C2/g. The size of each convolution kernel is (C1/g) × h × w, and the total number of convolution kernels is still C2; in this case, the number of convolution kernels per group is C2/g, and each kernel is convolved only with the input maps of its own group. The total number of convolution kernel parameters is C2 × (C1/g) × h × w, so it is reduced to 1/g of that of a standard convolution. In the feature map output by a conventional convolution, each point is computed from C1 × h × w points of the input feature map; in the feature map output by a group convolution, each point is computed from (C1/g) × h × w points of the input feature map.
Further, in the domain adversarial network training step, given the source domain data x_s and the corresponding labels y_s, a multi-layer nonlinear transformation is performed with a group convolutional neural network structure to obtain the depth feature representation G_f(x_s; θ_f), where θ_f denotes the parameters of each layer of the feature extractor G_f, including weights and biases. The extracted features are then input into the classifier G_y to obtain the corresponding output G_y(G_f(x_s); θ_y).
Softmax is used to output the predicted label of each sample. The source domain and the target domain, whose data follow different distributions, are mapped into a high-dimensional feature space, and the distribution matching module is used to reduce the distance between them in that space and to improve the classification accuracy of the network on the target domain. The loss function of domain adaptation learning is as follows.
E = L_y(x_s, y_s) + λ L_d(x_s, y_t)

where E denotes the total loss of the network, L_y denotes the classification loss of the network on the source domain data, L_d denotes the loss of the distribution matching module, x is the input feature, y is the label of the source domain, and λ is the weight of the two parts; when λ = 0, the final loss function of the network is simply the loss of a conventional group convolutional neural network.
The primary purpose of LSGAN is to generate data that is highly similar to the training samples. In domain adaptation, following the principle of LSGANs, the data of the target domain can be used directly as generated samples and the data of the other domain as real data. At this point the generator no longer generates new samples but acts as a feature extractor, learning the features of the domain data. The classifier uses the negative log likelihood as its loss function, which can be expressed as:

L_y^i(\theta_f, \theta_y) = -\log G_y\big(G_f(x_i)\big)_{y_i}

The loss function of the domain discriminator is likewise defined by a negative log likelihood and can be expressed as:

L_d^i(\theta_f, \theta_d) = -\Big[ d_i \log G_d\big(G_f(x_i)\big) + (1 - d_i) \log\big(1 - G_d(G_f(x_i))\big) \Big]

where d_i indicates whether the i-th sample comes from the source domain or the target domain, and G_d is the domain discriminator with parameters θ_d. The overall loss function of the network is:

E(\theta_f, \theta_y, \theta_d) = \sum_i L_y^i(\theta_f, \theta_y) - \lambda \sum_i L_d^i(\theta_f, \theta_d)
To obtain domain-invariant features from this objective, the network parameters of the domain discriminator are optimized so that the objective is maximized with respect to them, i.e. the discriminator distinguishes the source domain data from the target domain data as well as possible; at the same time, the parameters of the feature extractor are optimized to minimize the classification loss while confusing the discriminator, so that the input samples become indistinguishable by domain. Therefore, the optimized network parameters can be obtained with the following iterative updating strategy:

(\hat{\theta}_f, \hat{\theta}_y) = \arg\min_{\theta_f, \theta_y} E(\theta_f, \theta_y, \hat{\theta}_d), \qquad \hat{\theta}_d = \arg\max_{\theta_d} E(\hat{\theta}_f, \hat{\theta}_y, \theta_d)

The algorithm is optimized by introducing a gradient reversal layer. When the source domain and the target domain share the same label space, the DANN method can effectively reduce the distribution difference between the source domain and the target domain and realize knowledge transfer between different tasks. Gears in 5 different health states were tested under 5 different loads, and the collected vibration signals are shown in fig. 9.
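Putting the pieces together, one training iteration of this min-max strategy might look like the sketch below: the domain discriminator is driven to separate the domains while the reversed gradient pushes the feature extractor to confuse it. The module interfaces, the single optimizer over all parameters, and the assumption that G_d outputs two logits are illustrative choices, not details fixed by the patent.

```python
import torch
import torch.nn.functional as F

def dann_train_step(G_f, G_y, G_d, grad_reverse, opt, x_s, y_s, x_t, lam):
    """One joint update: classification on source data plus a reversed-gradient
    domain loss on source and target data (interfaces are assumed)."""
    opt.zero_grad()
    f_s, f_t = G_f(x_s), G_f(x_t)
    cls_loss = F.cross_entropy(G_y(f_s), y_s)              # L_y on labeled source samples

    feats = torch.cat([f_s, f_t], dim=0)
    d_logits = G_d(grad_reverse(feats, lam))                # gradient reversal layer
    d_labels = torch.cat([torch.zeros(len(f_s)), torch.ones(len(f_t))]).long()
    dom_loss = F.cross_entropy(d_logits, d_labels)          # L_d (G_d assumed to output 2 logits)

    (cls_loss + dom_loss).backward()    # G_d learns to separate the domains, while the
    opt.step()                          # reversed gradient makes G_f confuse it
    return cls_loss.item(), dom_loss.item()
```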
In this embodiment, the data set used in the present invention may be as shown in table 1. Gear vibration signals of five different health states under five different loads are collected. G1-G5 represent gear vibration signals collected under five different loads. H1 to H5 represent five different health states. Under the same load, there are 2928 samples in the source domain, only 100 samples in the target domain, each sample containing 2800 data points.
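A minimal sketch of the expansion step is shown below: once the LSGAN has been trained, its generator is sampled until the limited target-domain set reaches the desired size. The sample length of 2800 points and the counts of 100 and 2928 samples follow the description above, while the generator interface and the noise dimension are assumptions.

```python
import torch

def expand_target_domain(generator, limited_samples, target_count, noise_dim=100):
    """Pad a limited target-domain set with LSGAN-generated vibration signals
    until it contains target_count samples (noise_dim is an assumption)."""
    generator.eval()
    needed = target_count - limited_samples.shape[0]
    with torch.no_grad():
        z = torch.randn(needed, noise_dim)
        synthetic = generator(z).reshape(needed, -1)   # e.g. 2800 points per signal
    return torch.cat([limited_samples, synthetic], dim=0)

# e.g. 100 real target samples of 2800 points expanded to the source-domain size of 2928:
# expanded = expand_target_domain(G, real_target_samples, target_count=2928)
```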
Table 1 details of the collected data
The network architecture used in the present invention mainly comprises two modules: LSGANs and DANN. The LSGANs are used to generate the vibration signal and extend the limited vibration signal to the same amount as the source signal. The DANN model is based on a group convolutional neural network, first, the source domain signal is used for pre-training the feature extractor, and then the whole DANN model is trained by using the domain countermeasure network, so that effective fault diagnosis of gears under different loads is realized based on limited data. The main network structure of the LSGANs is shown in table 2. The DANN is mainly composed of a feature extractor, a classifier, and a domain discriminator.
TABLE 2 Main network structure of the LSGANs
Layer (type) | Parameters | Output shape
Generator
Input layer | (40, 70, 1) | (none, 40, 70, 1)
Dense layer | 128 | (none, 128)
Dense layer | 256 | (none, 256)
Dense layer | 1024 | (none, 1024)
Dense layer | 2800 | (none, 2800)
Reshape layer | - | (none, 40, 70, 1)
Discriminator
Input layer | (40, 70, 1) | (none, 40, 70, 1)
Dense layer | 1024 | (none, 1024)
Dense layer | 256 | (none, 256)
Dense layer | 128 | (none, 128)
Dense layer | 1 | (none, 1)
TABLE 3 Main network structure of the feature extractor
Layer (type) | Parameters | Output shape
Input layer | (40, 70, 1) | (none, 40, 70, 1)
Convolutional layer | 16, (3, 3) | (none, 40, 70, 16)
Maximum pooling layer | (2, 2) | (none, 20, 35, 16)
Group convolution layer | 32, (3, 3), groups = 4 | (none, 20, 35, 32)
Group convolution layer | 64, (3, 3), groups = 4 | (none, 20, 35, 64)
Maximum pooling layer | (2, 2) | (none, 10, 17, 64)
Table 4 Main network structure of the classifier
Layer (type) | Parameters | Output shape
Group convolution layer | 128, (3, 3), groups = 4 | (none, 10, 17, 128)
Dense layer | 128 | (none, 128)
Dense layer | 64 | (none, 64)
Dense layer | 5 | (none, 5)
Table 5 Main network structure of the domain discriminator
Layer (type) | Parameters | Output shape
Group convolution layer | 128, (3, 3), groups = 4 | (none, 10, 17, 128)
Dense layer | 128 | (none, 128)
Dense layer | 64 | (none, 64)
Dense layer | 1 | (none, 1)
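Table 3 can be read off directly as a layer stack; the PyTorch sketch below reproduces it with channels-first tensors (so the (40, 70, 1) inputs become (1, 40, 70)). The padding that preserves the listed spatial sizes and the ReLU activations are assumptions, since the table does not specify them.

```python
import torch
import torch.nn as nn

# Feature extractor following Table 3 (channels-first; padding=1 keeps the
# spatial size of the 3x3 convolutions, matching the listed output shapes).
feature_extractor = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),            # (16, 40, 70)
    nn.ReLU(),
    nn.MaxPool2d(2),                                        # (16, 20, 35)
    nn.Conv2d(16, 32, kernel_size=3, padding=1, groups=4),  # group convolution, (32, 20, 35)
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, padding=1, groups=4),  # group convolution, (64, 20, 35)
    nn.ReLU(),
    nn.MaxPool2d(2),                                        # (64, 10, 17)
)

x = torch.randn(8, 1, 40, 70)       # a batch of 40x70x1 "feature map samples"
print(feature_extractor(x).shape)   # torch.Size([8, 64, 10, 17])
```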
In this specific use scenario, eight transfer learning schemes were designed; the specific working conditions of the eight transfer learning schemes are shown in table 6. G1, G2 and G3 form one group for transfer learning based on limited samples, and G4 and G5 form another group for transfer learning based on limited samples. Each transfer learning scheme was trained and tested on the five health states simultaneously.
Table 6 Datasets of the transfer learning schemes
Transfer task | Source load | Target load | Health states
G1→G3 | 30 N·m | 40 N·m | H1, H2, H3, H4, H5
G2→G3 | 35 N·m | 40 N·m | H1, H2, H3, H4, H5
G3→G1 | 40 N·m | 30 N·m | H1, H2, H3, H4, H5
G3→G2 | 40 N·m | 35 N·m | H1, H2, H3, H4, H5
G1→G2 | 30 N·m | 35 N·m | H1, H2, H3, H4, H5
G2→G1 | 35 N·m | 30 N·m | H1, H2, H3, H4, H5
G4→G5 | 45 N·m | 50 N·m | H1, H2, H3, H4, H5
G5→G4 | 50 N·m | 45 N·m | H1, H2, H3, H4, H5
The results of the above experiments are analyzed as follows. The results of the proposed method on the eight different transfer tasks are shown in fig. 10. The number of samples in the target domain is 10, 30, 50 and 100, respectively. To show the trend more clearly, the eight transfer learning schemes are divided into two groups, plotted in fig. 10(a) and fig. 10(b), respectively. As the number of target domain samples increases, the diagnostic accuracy of the proposed model increases. For example, when the number of samples is 10, the diagnostic accuracy is between 73% and 86%, and as the number of samples increases, the accuracy gradually rises to 93% or more. This indicates that the proposed method can effectively diagnose the health state of the gear. For G3→G1 and G2→G1, the accuracy changes relatively little as the number of target domain samples increases from 30 to 50, and changes sharply from 50 to 100. From the trend it can be seen that, within a certain sample interval, feature extraction for transfer learning to G1 goes through a bottleneck period; this slow increase is offset to some extent as the number of target domain samples grows. The overall trend of the accuracy is relatively stable, indicating that the method proposed in this study is generally stable and effective.
For the features learned by the domain adversarial transfer learning, t-SNE visualization shows that gears in the five different health states are clearly separated, with distinct boundaries, and the feature ranges of the source domain and the target domain overlap closely. Although a few samples are misclassified, the domain adversarial training method presented here works well for fault diagnosis across different domains, showing that the proposed method is effective for cross-domain fault diagnosis when samples are limited.
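The t-SNE visualization referred to above can be reproduced with scikit-learn on the features produced by the trained feature extractor; colouring the embedding by the five health states H1 to H5 follows the text, while the perplexity and plotting details below are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(features, labels):
    """Project learned features to 2-D and colour them by health state H1-H5."""
    emb = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(features)
    for state in np.unique(labels):
        mask = labels == state
        plt.scatter(emb[mask, 0], emb[mask, 1], s=8, label=f"H{int(state) + 1}")
    plt.legend()
    plt.title("t-SNE of features learned by domain adversarial training")
    plt.show()

# plot_tsne(extracted_features.reshape(len(labels), -1), labels)   # labels in {0, ..., 4}
```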
According to the domain adversarial network-based rotary machine fault diagnosis method, the limited target sample data are expanded with a least squares generative adversarial network, and changing the objective function overcomes the low quality of the vibration signals generated by a conventional generative adversarial network and its unstable training process; the original vibration signals of the source domain are then pre-trained with a group convolutional neural network, which effectively reduces the number of network parameters; finally, the source domain signals and the target domain signals are trained in the domain adversarial network to diagnose the differently distributed data in the target domain, so that a relatively accurate fault diagnosis result can be obtained.
In addition to the above method, the present invention also provides a rotary machine fault diagnosis system based on a domain adversarial network; as shown in fig. 11, the system comprises:
a data acquisition unit 1101, configured to acquire parameter data of a rotating machine to be tested;
a result output unit 1102, configured to input the parameter data into a pre-trained fault diagnosis model, so as to obtain a fault diagnosis result of the rotating machine to be tested;
the fault diagnosis model is obtained by training target samples based on a group convolutional neural network, the target samples are obtained by performing sample expansion processing on limited samples with an adversarial network, and the adversarial network is generated using the least squares method.
In some embodiments, training with target samples based on a group convolutional neural network to obtain the fault diagnosis model specifically includes:
performing expansion processing on the limited samples by using a least squares generative adversarial network to obtain a plurality of target sample data, and forming a target domain from all the target sample data;
pre-training the original signals of the source domain through a group convolutional neural network to obtain source domain signals;
training the source domain signals and the target domain signals in a domain adversarial network to generate the fault diagnosis model.
In some embodiments, expanding the limited samples using the least squares generative adversarial network further comprises:
using the generator to pull the generated vibration signal toward the decision boundary while confusing the discriminator.
In some embodiments, in the least squares generative adversarial network, the minimum loss function of the generator is:

\min_G V_{LSGAN}(G) = \frac{1}{2}\mathbb{E}_{z \sim p_z(z)}\big[(D(G(z)) - c)^2\big]

and the minimum loss function of the discriminator is:

\min_D V_{LSGAN}(D) = \frac{1}{2}\mathbb{E}_{x \sim p_{data}(x)}\big[(D(x) - b)^2\big] + \frac{1}{2}\mathbb{E}_{z \sim p_z(z)}\big[(D(G(z)) - a)^2\big]

where G is the generator, D is the discriminator, z is the noise, p_{data}(x) is the probability distribution of the real data x, p_z(z) is the probability distribution of the noise z, \mathbb{E}_{x \sim p_{data}(x)} and \mathbb{E}_{z \sim p_z(z)} denote expectations, and a, b, c are constants with b - c = 1 and b - a = 2.
In some embodiments, the source domain original signal is pre-trained through a group convolutional neural network, specifically comprising:
acquiring an original signal of a source domain, and converting the original signal into a feature map sample;
inputting the feature map samples into a pre-stored group convolutional neural network, grouping the feature map samples, and convolving each group of feature map samples independently;
after the convolution of each group of feature maps is completed, the outputs are stacked and concatenated to complete the pre-training.
In some embodiments, training the source domain signal and the target domain signal in a domain adversarial network specifically comprises:
given the source domain data x_s and the corresponding labels y_s, performing a multi-layer nonlinear transformation with a group convolutional neural network structure to obtain the depth feature representation G_f(x_s; θ_f), where θ_f denotes the parameters of each layer of the feature extractor G_f, including weights and biases;
inputting the extracted features into the classifier G_y of the domain adaptation method to obtain the corresponding output G_y(G_f(x_s); θ_y), where θ_y denotes the parameters of each layer of G_y, and outputting the predicted label of each sample.
In some embodiments, the loss function of the domain adaptation method is:

E = L_y(x_s, y_s) + λ L_d(x_s, y_t)

where E denotes the total loss of the network, L_y denotes the classification loss of the network on the source domain data, L_d denotes the loss of the distribution matching module, x denotes the input features, y denotes the labels of the source domain, and λ denotes the loss weight.
Fig. 12 illustrates a physical structure diagram of an electronic device. As shown in fig. 12, the electronic device may include: a processor 1210, a communication interface (Communications Interface) 1220, a memory 1230 and a communication bus 1240, wherein the processor 1210, the communication interface 1220 and the memory 1230 communicate with each other via the communication bus 1240. The processor 1210 may invoke logic instructions in the memory 1230 to perform the methods described above.
In addition, the logic instructions in the memory 1230 described above may be implemented in the form of software functional units and sold or used as a stand-alone product, stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, are capable of performing the methods described above.
In yet another aspect, the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, is implemented to perform the above methods.
The apparatus embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the present invention may be implemented in a combination of hardware and software. When the software is applied, the corresponding functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on the computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The foregoing detailed description of the invention has been presented for purposes of illustration and description, and it should be understood that the foregoing is by way of illustration and description only, and is not intended to limit the scope of the invention.

Claims (4)

1. A rotary machine fault diagnosis method based on a domain adversarial network, the method comprising:
acquiring parameter data of the rotary machine to be tested;
inputting the parameter data into a pre-trained fault diagnosis model to obtain a fault diagnosis result of the rotary machine to be tested;
the fault diagnosis model is obtained by training target samples based on a group convolutional neural network, wherein the target samples are obtained by performing sample expansion processing on limited samples with a least squares generative adversarial network;
training with the target samples based on the group convolutional neural network to obtain the fault diagnosis model specifically comprises the following steps:
performing expansion processing on the limited samples by using the least squares generative adversarial network to obtain a plurality of target sample data, and forming a target domain from all the target sample data;
pre-training the original signals of the source domain through the group convolutional neural network to obtain source domain signals;
training the source domain signals and the target domain signals in a domain adversarial network to generate the fault diagnosis model;
wherein expanding the limited samples using the least squares generative adversarial network further comprises the following steps:
using the generator to pull the generated vibration signal toward the decision boundary while confusing the discriminator;
wherein, in the least squares generative adversarial network, the minimum loss function of the generator is:

\min_G V_{LSGAN}(G) = \frac{1}{2}\mathbb{E}_{z \sim p_z(z)}\big[(D(G(z)) - c)^2\big]

and the minimum loss function of the discriminator is:

\min_D V_{LSGAN}(D) = \frac{1}{2}\mathbb{E}_{x \sim p_{data}(x)}\big[(D(x) - b)^2\big] + \frac{1}{2}\mathbb{E}_{z \sim p_z(z)}\big[(D(G(z)) - a)^2\big]

where G is the generator, D is the discriminator, z is the noise, p_{data}(x) is the probability distribution of the real data x, p_z(z) is the probability distribution of the noise z, \mathbb{E}_{x \sim p_{data}(x)} and \mathbb{E}_{z \sim p_z(z)} denote expectations, and a, b, c are constants with b - c = 1 and b - a = 2;
the method for pre-training the original signals of the source domain through the group convolution neural network specifically comprises the following steps:
acquiring an original signal of a source domain, and converting the original signal into a feature map sample;
inputting the feature map samples into a pre-stored group convolutional neural network, grouping the feature map samples, and convolving each group of feature map samples independently;
after the convolution of each group of feature maps is completed, stacking and concatenating the outputs to complete the pre-training;
training the source domain signal and the target domain signal in the domain adversarial network specifically comprises:
given the source domain data x_s and the corresponding labels y_s, performing a multi-layer nonlinear transformation with the group convolutional neural network structure to obtain the depth feature representation G_f(x_s; θ_f), where θ_f denotes the parameters of each layer of G_f, including weights and biases;
inputting the extracted features into the classifier G_y of the domain adaptation method to obtain the corresponding output G_y(G_f(x_s); θ_y), where θ_y denotes the parameters of each layer of G_y, and outputting the predicted label of each sample;
the loss function of the domain adaptation method is:

E = L_y(x_s, y_s) + λ L_d(x_s, y_t)

where E denotes the total loss of the network, L_y denotes the classification loss of the network on the source domain data, L_d denotes the loss of the distribution matching module, x denotes the input features, y denotes the labels of the source domain, and λ denotes the loss weight.
2. A rotary machine fault diagnosis system based on a domain adversarial network, the system comprising:
the data acquisition unit is used for acquiring parameter data of the rotary machine to be tested;
the result output unit is used for inputting the parameter data into a pre-trained fault diagnosis model so as to obtain a fault diagnosis result of the rotary machine to be tested;
The fault diagnosis model is obtained by training a target sample based on a group convolutional neural network, wherein the target sample is obtained by performing sample expansion processing on a limited sample by using a least square countermeasure network;
training is performed by utilizing a target sample based on a group convolution neural network to obtain the fault diagnosis model, and the method specifically comprises the following steps:
performing expansion processing on the limited samples by using a least square generation countermeasure network to obtain a plurality of target sample data, and forming a target domain by using all the target sample data;
pre-training an original signal of a source domain through a group convolution neural network to obtain a source domain signal;
training the source domain signal and the target domain signal in a domain countermeasure network to generate a fault diagnosis model;
wherein expanding the limited samples with the least squares generative adversarial network further comprises:
pulling the generated vibration signals toward the decision boundary with the generator while confusing the discriminator;
wherein, in the least squares generative adversarial network, the loss function minimized by the generator is:
min_G V(G) = (1/2) E_{z~p_z(z)}[(D(G(z)) - c)^2]
and the loss function minimized by the discriminator is:
min_D V(D) = (1/2) E_{x~p_data(x)}[(D(x) - b)^2] + (1/2) E_{z~p_z(z)}[(D(G(z)) - a)^2]
where G is the generator, D is the discriminator, z is the noise, p_data(x) is the probability distribution of the real data x, p_z(z) is the probability distribution of the noise z, E_{x~p_data(x)} and E_{z~p_z(z)} denote the corresponding expectations, and a, b and c are constants satisfying b - c = 1 and b - a = 2;
the pre-training of the source-domain original signals through the group convolutional neural network specifically comprises:
acquiring the original signals of the source domain and converting them into feature map samples;
inputting the feature map samples into a pre-stored group convolutional neural network, grouping the feature map samples, and convolving each group of feature map samples independently;
after the convolution of each group of feature map samples is completed, stacking and concatenating the group outputs to complete the pre-training;
the training of the source domain signal and the target domain signal in the domain adversarial network specifically comprises:
given source domain data x_s and its predicted label y_s, performing a multi-layer nonlinear transformation with the group convolutional neural network structure to obtain the deep feature representation G_f(x_s; θ_f), where θ_f denotes the parameters of each layer of G_f, including weights and biases;
inputting the extracted features into the classifier G_y of the domain adaptation method to obtain the corresponding output G_y(G_f(x_s); θ_y), where θ_y denotes the parameters of each layer of G_y, and outputting a predicted label for each sample;
the loss function of the domain adaptation method is:
E = L_y(x_s, y_s) + λ·L_d(x_s, y_t)
where E denotes the total loss of the network, L_y denotes the classification loss of the network on the source-domain data, L_d denotes the loss of the distribution matching module, x denotes the input features, y denotes the source-domain labels, and λ denotes the loss weight.
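As referenced in the result output unit of the claim above, the following is a hypothetical end-to-end inference sketch for the system, assuming PyTorch; preprocess and FAULT_CLASSES are illustrative placeholders, not names defined by the patent.

```python
import torch

FAULT_CLASSES = ["normal", "single fault", "compound fault"]   # placeholder label set

def diagnose(model, raw_signal, preprocess):
    """Data acquisition -> feature-map sample -> pre-trained model -> fault class."""
    model.eval()
    x = preprocess(raw_signal)                 # e.g. convert the raw vibration signal to a feature map
    with torch.no_grad():
        logits = model(x.unsqueeze(0))         # add a batch dimension and run the trained model
    return FAULT_CLASSES[int(logits.argmax(dim=1))]
```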
3. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method of claim 1 when executing the program.
4. A non-transitory computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the method according to claim 1.
CN202310015521.3A 2023-01-04 2023-01-04 Rotary machine fault diagnosis method and system based on domain-by-domain network Active CN115964661B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310015521.3A CN115964661B (en) 2023-01-04 2023-01-04 Rotary machine fault diagnosis method and system based on domain-by-domain network

Publications (2)

Publication Number Publication Date
CN115964661A CN115964661A (en) 2023-04-14
CN115964661B true CN115964661B (en) 2023-09-08

Family

ID=87361365

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310015521.3A Active CN115964661B (en) 2023-01-04 2023-01-04 Rotary machine fault diagnosis method and system based on domain-by-domain network

Country Status (1)

Country Link
CN (1) CN115964661B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110751207A (en) * 2019-10-18 2020-02-04 四川大学 Fault diagnosis method for anti-migration learning based on deep convolution domain
CN110866365A (en) * 2019-11-22 2020-03-06 北京航空航天大学 Mechanical equipment intelligent fault diagnosis method based on partial migration convolutional network
CN111898634A (en) * 2020-06-22 2020-11-06 西安交通大学 Intelligent fault diagnosis method based on depth-to-reactance-domain self-adaption
CN112183581A (en) * 2020-09-07 2021-01-05 华南理工大学 Semi-supervised mechanical fault diagnosis method based on self-adaptive migration neural network
CN112215279A (en) * 2020-10-12 2021-01-12 国网新疆电力有限公司 Power grid fault diagnosis method based on immune RBF neural network
CN114295377A (en) * 2021-12-13 2022-04-08 南京工业大学 CNN-LSTM bearing fault diagnosis method based on genetic algorithm
CN114358124A (en) * 2021-12-03 2022-04-15 华南理工大学 Rotary machine new fault diagnosis method based on deep-antithetical-convolution neural network
CN114997211A (en) * 2022-04-22 2022-09-02 南京航空航天大学 Cross-working-condition fault diagnosis method based on improved countermeasure network and attention mechanism
CN115099270A (en) * 2022-06-16 2022-09-23 浙江大学 Bearing fault diagnosis method under variable load based on sub-domain adaptive countermeasure network
CN115374820A (en) * 2022-08-23 2022-11-22 江苏科技大学 Rotary machine cross-domain fault diagnosis method based on multi-source sub-domain adaptive network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111651937B (en) * 2020-06-03 2023-07-25 苏州大学 Method for diagnosing faults of in-class self-adaptive bearing under variable working conditions

Also Published As

Publication number Publication date
CN115964661A (en) 2023-04-14

Similar Documents

Publication Publication Date Title
CN111898095B (en) Deep migration learning intelligent fault diagnosis method, device, storage medium and equipment
CN111369563B (en) Semantic segmentation method based on pyramid void convolutional network
CN109711532B (en) Acceleration method for realizing sparse convolutional neural network inference aiming at hardware
CN112161784B (en) Mechanical fault diagnosis method based on multi-sensor information fusion migration network
WO2023020388A1 (en) Gearbox fault diagnosis method and apparatus, gearbox signal collection method and apparatus, and electronic device
CN112906644B (en) Mechanical fault intelligent diagnosis method based on deep migration learning
Li et al. A deep transfer nonnegativity-constraint sparse autoencoder for rolling bearing fault diagnosis with few labeled data
CN108416755A (en) A kind of image de-noising method and system based on deep learning
CN107316046A (en) A kind of method for diagnosing faults that Dynamic adaptiveenhancement is compensated based on increment
CN113065581B (en) Vibration fault migration diagnosis method for reactance domain self-adaptive network based on parameter sharing
CN114048769A (en) Multi-source multi-domain information entropy fusion and model self-optimization method for bearing fault diagnosis
CN115563536A (en) Rolling bearing fault diagnosis method based on subdomain self-adaptation
CN110657984A (en) Planetary gearbox fault diagnosis method based on reinforced capsule network
CN114548199A (en) Multi-sensor data fusion method based on deep migration network
CN112257751A (en) Neural network pruning method
CN115688040A (en) Mechanical equipment fault diagnosis method, device, equipment and readable storage medium
CN115809596A (en) Digital twin fault diagnosis method and device
Cao et al. An antinoise fault diagnosis method based on multiscale 1DCNN
CN115290326A (en) Rolling bearing fault intelligent diagnosis method
CN115964661B (en) Rotary machine fault diagnosis method and system based on domain-by-domain network
CN111368969A (en) Feature map processing method and device based on residual error neural network and storage medium
CN111783335B (en) Transfer learning-based few-sample structure frequency response dynamic model correction method
CN113539517B (en) Method for predicting time sequence intervention effect
CN115204292A (en) Cross-equipment vibration fault migration diagnosis method based on PSFEN
CN115600134A (en) Bearing transfer learning fault diagnosis method based on domain dynamic impedance self-adaption

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant