CN118427681A - Cross-working condition open-set fault diagnosis method and equipment based on self-supervision contrast learning enhancement - Google Patents
- Publication number
- CN118427681A (application No. CN202410499381.6A)
- Authority
- CN
- China
- Prior art keywords
- sample
- enhancement
- fault
- fault diagnosis
- learning module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention belongs to the technical field of intelligent equipment fault diagnosis and discloses a cross-working-condition open-set fault diagnosis method and device based on self-supervised contrastive learning enhancement, comprising the following steps. Step one: provide an intelligent fault diagnosis model comprising a sample enhancement module, a discriminative feature learning module and an open-set class learning module, and enhance the source-domain and target-domain samples with the sample enhancement module. Step two: the discriminative feature learning module extracts salient discriminative features from the target-domain samples based on a self-supervised contrastive learning strategy, and likewise extracts salient discriminative features from the source-domain samples. Step three: the open-set class learning module detects known-class and unknown-class faults based on the salient discriminative features and a clamped confidence rule; through a pseudo-label consistency training strategy, the open-set class learning module achieves cross-working-condition data distribution alignment and fault-feature knowledge migration. The invention solves the problem of low fault recognition accuracy.
Description
Technical Field
The invention belongs to the technical field of intelligent equipment fault diagnosis, and in particular relates to a cross-working-condition open-set fault diagnosis method and device based on self-supervised contrastive learning enhancement.
Background
With the rapid development of technology and the advancement of industry, large-scale equipment is gradually becoming intelligent and modernized; in turn, the wide use of large-scale mechanical equipment has promoted industrial development. The large-scale application of intelligent equipment greatly improves production efficiency and reduces the manpower burden. However, in practical industrial production environments, mechanical equipment often operates under severe conditions for long periods, and some critical components are prone to failure. For equipment with high reliability requirements in particular, a fault may cause huge property losses and even casualties. Therefore, in modern industrial production, condition monitoring and intelligent fault diagnosis of equipment are of great significance and can effectively ensure the safe and stable operation of large-scale equipment.
Signal-processing-based methods apply advanced signal denoising, decomposition, demodulation and filtering techniques to highlight or extract fault-signature information, but they rely heavily on expert empirical knowledge; moreover, while effectively removing noise, these methods may also remove some signature signal components of interest, and when multiple signal components are mixed together it is difficult to obtain effective fault-signature information. Fault diagnosis technologies based on traditional machine learning require manually designed features and parameter tuning, which limits them in industrial big-data analysis; shallow models struggle to mine high-dimensional features, and the independent design of feature mining and decision-making limits their performance. Equipment fault diagnosis methods based on deep learning can automatically and effectively extract deep features: training on massive data through a deep neural network yields an effective feature extractor and classifier, effectively completing fault-mode identification, reducing labor cost, and promoting the automation and intelligent development of equipment.
Deep-learning-based equipment fault diagnosis methods assume that the training data and test data share the same fault modes and the same distribution. However, in actual cross-working-condition diagnosis scenarios, the acquired training data and test data differ, and it is difficult to obtain all fault modes in the training stage. That is, unknown fault modes may appear in practical applications, causing the performance of conventional deep learning models to degrade and fault recognition accuracy to drop sharply.
Disclosure of Invention
Aiming at the defects or improvement demands of the prior art, the invention provides a cross-working-condition open-set fault diagnosis method and device based on self-supervised contrastive learning enhancement, which solve the problem of low fault recognition accuracy caused by existing fault prediction methods ignoring the difference between training data and test data.
To achieve the above object, according to one aspect of the present invention, there is provided a cross-working-condition open-set fault diagnosis method based on self-supervised contrastive learning enhancement, the fault diagnosis method comprising the following steps:
Step one, providing an intelligent fault diagnosis model comprising a sample enhancement module, a discriminative feature learning module and an open-set class learning module; the sample enhancement module is adopted to enhance the source-domain and target-domain samples.
Step two, the discriminative feature learning module extracts salient discriminative features from the target-domain samples based on a self-supervised contrastive learning strategy, and likewise extracts salient discriminative features from the source-domain samples.
Step three, the open-set class learning module detects known-class faults and unknown-class faults based on the salient discriminative features and the clamped confidence rule; through a pseudo-label consistency training strategy, the open-set class learning module achieves cross-working-condition data distribution alignment and fault-feature knowledge migration.
Further, the sample enhancement module adopts two strategies, weak sample enhancement and strong sample enhancement. Weak enhancement randomly adds Gaussian noise to the original signal; strong enhancement comprises randomly adding Gaussian noise to the original signal, random scaling, random stretching and random cutting. The weak enhancement strategy is applied to the source domain to generate source-domain enhanced samples, and both the strong and weak enhancement strategies are applied to the target domain to generate target-domain mixed enhanced samples.
Further, the discriminative feature learning module adopts a one-dimensional convolutional neural network structure whose input is of size 64 × 1 × 1024, comprising 4 convolutional layers and 5 fully-connected neural network layers; each convolutional layer is followed by a BN layer and a ReLU layer. The BN layer transforms and reconstructs the output feature map of the preceding convolutional layer by introducing learnable parameters, and the ReLU layer applies nonlinear activation to the feature map.
Further, the discriminative feature learning module performs automatic feature extraction on the enhanced source-domain samples and calculates the second loss to obtain fault-discriminative features. The corresponding formula is:
L_sce = −Σ_c q(c | x^{s,w}) log p(c | x^{s,w})
where L_sce represents the second loss of sample x, x^{s,w} represents a weakly enhanced source-domain sample, q(c | x^{s,w}) represents the true probability distribution of the sample, and p(c | x^{s,w}) represents the predicted probability distribution of the weakly enhanced sample output by the neural network.
Further, the discriminative feature learning module performs automatic feature extraction on the mixed-enhancement samples of the target-domain training set; the contrastive self-supervised learning strategy gathers fault features of the same category to the same cluster center and disperses fault features of different categories to different cluster centers, achieving feature disentanglement and separation. The corresponding formula is:
L_con = −Σ_j log [ exp(sim(z_j^{t,s}, z_j^{t,w})/τ) / Σ_{k≠j} exp(sim(z_j^{t,s}, z_k^{t,w})/τ) ]
where L_con represents the first loss of sample x, z_j^{t,s} and z_j^{t,w} represent the fault features of the j-th target-domain sample after strong and weak enhancement respectively, τ is a temperature hyper-parameter that accelerates convergence, and sim(·,·) represents a similarity measure between two features.
Further, the salient discriminative fault feature map obtained by the discriminative feature learning module is input to the open-set class learning module, and the third loss is calculated; the optimization target of the open-set class learning module is:
Lpcl=Ltk+Ltuk (13)
where L_pcl represents the third loss of the samples, comprising a known-class fault detection loss L_tk and an unknown-class fault detection loss L_tuk; E(·,·) represents a cross-entropy function, W_tk represents the pseudo-label matrix of known-class faults, and W_tunk represents the pseudo-label matrix of unknown-class faults. Faults of known and unknown classes can be detected through the third loss together with the pseudo-label clamped confidence rule.
Further, step one further includes performing parameter optimization on the intelligent fault diagnosis model; specifically, the discriminative feature learning module and the open-set class learning module of the intelligent equipment fault diagnosis model are optimized according to the first loss, the second loss and the third loss.
Further, the calculation formula of the total optimization loss L_total is:
Ltotal=Lsce+λLcon+γLpcl (16)
where L_sce represents the second loss, and λ and γ are balance weights for the first loss L_con and the third loss L_pcl, respectively.
The invention also provides a cross-working-condition open-set fault diagnosis system based on self-supervised contrastive learning enhancement, comprising a memory and a processor, wherein the memory stores a computer program, and when the processor executes the computer program, it performs the cross-working-condition open-set fault diagnosis method based on self-supervised contrastive learning enhancement described above.
The invention also provides a computer readable storage medium storing machine executable instructions that, when invoked and executed by a processor, cause the processor to implement a cross-condition open-set fault diagnosis method based on self-supervised contrast learning enhancement as described above.
In general, compared with the prior art, the cross-working-condition open-set fault diagnosis method and device based on self-supervised contrastive learning enhancement provided by the invention mainly have the following beneficial effects:
1. Sample enhancement, discriminative feature learning and open-set class learning are performed on the training set. Self-supervised contrastive learning clusters sample features of the same class and disperses features of different samples, so that known faults and newly occurring unknown faults are correctly identified, fault diagnosis precision is improved, the performance degradation caused by working-condition changes and new fault modes is overcome, and fault recognition accuracy is further improved.
2. The fault diagnosis method makes full use of the differing properties of the fault features of different fault modes: after data enhancement, the actually collected signals are input into a one-dimensional five-layer convolutional neural network for discriminative feature learning to realize feature extraction, and the open-set class learning module detects known and unknown faults from the features extracted by the network and performs fault-mode diagnosis.
3. The fault diagnosis method requires no complex preprocessing of the original signal and is robust to the influence of working-condition changes and environmental noise.
4. The fault diagnosis method selects high-confidence samples to generate high-quality pseudo labels that supervise the training process, realizing cross-working-condition alignment and migration of fault features; it can effectively improve fault diagnosis accuracy and raise the level of intelligent equipment fault diagnosis.
Drawings
Fig. 1(a), 1(b) and 1(c) are schematic diagrams of intelligent fault diagnosis under variable working conditions and of the application defects of the closed-set fault diagnosis method under variable working conditions;
FIG. 2 is a flow chart of a cross-working condition open-set fault diagnosis method based on self-supervision contrast learning enhancement provided by the invention;
FIG. 3 is a block diagram of a cross-condition open-set fault diagnosis based on self-supervision contrast learning enhancement.
Detailed Description
The present invention will be described in further detail below with reference to the drawings and embodiments, in order to make the objects, technical solutions and advantages of the present invention clearer. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. In addition, the technical features of the embodiments of the present invention described below may be combined with each other as long as they do not conflict.
Referring to fig. 1 and 2, the invention provides a cross-working-condition open-set fault diagnosis method based on self-supervised contrastive learning enhancement. Monitoring signals acquired under different working conditions are normalized to reduce the influence of abnormal peaks. Using the proposed feature extraction and fault recognition method, high-dimensional robust discriminative features of the source domain and target domain are extracted by the discriminative feature learning module; positive and negative sample pairs are constructed from the weakly enhanced and strongly enhanced target-domain sample features, and by maximizing the mutual information between positive and negative pairs, features of the same fault category are clustered and fault features of different categories are dispersed, so that salient discriminative fault features are obtained. Meanwhile, the open-set class learning module based on the clamped confidence rule detects known-class faults and unknown-class fault modes on the trained feature map with salient discriminative features, improving open-set fault diagnosis performance. High-confidence samples are selected to generate pseudo labels, and pseudo-label consistency training realizes cross-working-condition fault-feature migration.
Referring to fig. 3, the fault diagnosis method mainly includes the following steps:
Step one, providing an intelligent fault diagnosis model comprising a sample enhancement module, a discriminative feature learning module and an open-set class learning module; the sample enhancement module is used to enhance the source-domain and target-domain samples.
The discriminative feature learning module adopts a one-dimensional convolutional neural network structure with an input of size 64 × 1 × 1024, mainly comprising 4 convolutional layers and 5 fully-connected neural network layers. The convolution kernel sizes are 15 × 1 and 3 × 1, the numbers of convolution kernels are 16, 32, 64, 128 and 256, and the strides are 2 and 3. Each convolutional layer is followed by a BN layer and a ReLU layer. The BN layer transforms and reconstructs the output feature map of the preceding convolutional layer by introducing learnable parameters, and the ReLU layer applies nonlinear activation to the feature map. The function of the discriminative feature learning module is to extract salient high-dimensional discriminative features from the source-domain and target-domain samples.
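As a concrete illustration, a backbone of this shape can be sketched in PyTorch. This is a minimal sketch, not the patented implementation: the kernel sizes (a wide 15-tap first kernel, 3-tap later kernels) and the 4-convolution Conv-BN-ReLU structure follow the description, while the uniform stride of 2, the padding, the pooling placement, and the single linear layer mapping 128 channels to a 256-dimensional feature (standing in for the stack of fully-connected layers) are assumptions.

```python
import torch
import torch.nn as nn

class DiscriminativeFeatureExtractor(nn.Module):
    """Sketch of a 1-D CNN feature extractor: 4 conv blocks (Conv1d + BN + ReLU),
    average pooling after the 2nd and 4th blocks, global pooling, then a
    fully-connected layer producing a 256-dim feature vector."""
    def __init__(self):
        super().__init__()
        chans = [1, 16, 32, 64, 128]     # channel progression per the description
        kernels = [15, 3, 3, 3]          # wide first kernel, small later kernels
        layers = []
        for i in range(4):
            layers.append(nn.Conv1d(chans[i], chans[i + 1], kernels[i],
                                    stride=2, padding=kernels[i] // 2))
            layers.append(nn.BatchNorm1d(chans[i + 1]))  # BN after each conv
            layers.append(nn.ReLU(inplace=True))         # nonlinear activation
            if i in (1, 3):                              # avg pool after 2nd/4th conv
                layers.append(nn.AvgPool1d(2))
        self.conv = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool1d(1)              # global average pooling
        self.fc = nn.Linear(128, 256)                    # first FC layer -> 256-dim

    def forward(self, x):                                # x: (batch, 1, 1024)
        z = self.pool(self.conv(x)).flatten(1)           # (batch, 128)
        return self.fc(z)                                # (batch, 256)

model = DiscriminativeFeatureExtractor()
features = model(torch.randn(64, 1, 1024))               # a batch of 64 signals
print(features.shape)                                    # torch.Size([64, 256])
```

The global pooling head makes the feature dimension independent of the exact stride choices, which is why the sketch tolerates the ambiguity in the stated strides.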
The open-set fault diagnosis problem under variable working conditions means that the working conditions of the data sets in the training and test phases are inconsistent, and that the label spaces of the source-domain (training phase) and target-domain (test phase) data are also inconsistent.
In this embodiment, a three-axis acceleration sensor, a rotation-speed sensor and a current sensor are installed on the fault test bench to complete data acquisition under different fault modes. Max–min normalization is applied to the acquired signals to eliminate the influence of signal scale under different fault modes; the calculation formula is:
x̂_i = (x_i − min(x_i)) / (max(x_i) − min(x_i))
where x_i denotes the i-th input sample, min(x_i) and max(x_i) represent the minimum and maximum values in sample x_i respectively, x̂_i represents the i-th normalized signal sample, and N represents the number of samples. For each fault mode, 1024 sampling points are taken as one sample to divide the data set, and the normalized original signal is used as the input signal sample.
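The normalization and sample-division step can be sketched as follows (the toy sinusoidal signal is purely illustrative):

```python
import numpy as np

def minmax_normalize(x):
    """Per-sample max-min normalization: maps a raw signal into [0, 1]."""
    x = np.asarray(x, dtype=np.float64)
    return (x - x.min()) / (x.max() - x.min())

def segment(signal, length=1024):
    """Split a long monitoring signal into non-overlapping 1024-point samples."""
    n = len(signal) // length
    return np.stack([signal[i * length:(i + 1) * length] for i in range(n)])

raw = np.sin(np.linspace(0, 100 * np.pi, 4096)) * 3.0 + 5.0  # toy vibration signal
samples = np.stack([minmax_normalize(s) for s in segment(raw)])
print(samples.shape)                 # (4, 1024)
print(samples.min(), samples.max())  # 0.0 1.0
```

Normalizing per sample (rather than over the whole record) matches the stated goal of removing scale differences between fault modes.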
The sample enhancement module enhances the source-domain and target-domain samples respectively, adopting two strategies: weak sample enhancement and strong sample enhancement. Weak enhancement randomly adds Gaussian noise to the original signal; strong enhancement comprises operations such as randomly adding Gaussian noise, random scaling, random stretching and random cutting. The weak enhancement strategy is applied to the source domain to generate source-domain enhanced samples, and both the strong and weak enhancement strategies are applied to the target domain to generate target-domain mixed enhanced samples.
Specifically, gaussian noise is randomly added: adding Gaussian noise to the original signal randomly, wherein the corresponding formula is as follows:
x:=x+n (2)
where x represents a one-dimensional input signal and n represents Gaussian noise following the distribution N(1, 0.01).
Random scaling: multiplying the input signal by a random scaling factor; the corresponding formula is:
x:=α*x (3)
where x represents a one-dimensional input signal and α represents a scaling factor following the distribution N(1, 0.01).
Random stretching: the signal is resampled at a random ratio, and equal length is ensured by zero-padding or truncation.
Random cutting: part of the signal is randomly masked; the corresponding formula is:
x:=mask*x (4)
where x represents a one-dimensional input signal and mask represents a binary sequence in which a sub-sequence at random positions is zero, so that part of the signal is randomly covered.
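The four enhancement operations, and their composition into the weak and strong strategies, can be sketched in NumPy as below. The stretch-ratio range and mask fraction are illustrative assumptions not fixed by the description; the noise and scaling distributions follow equations (2) and (3) as stated.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(x):
    """Eq. (2): add Gaussian noise n ~ N(1, 0.01) (std 0.1) to the signal."""
    return x + rng.normal(1.0, 0.1, size=x.shape)

def random_scale(x):
    """Eq. (3): multiply by a random scaling factor alpha ~ N(1, 0.01)."""
    return rng.normal(1.0, 0.1) * x

def random_stretch(x, lo=0.8, hi=1.2):
    """Resample at a random ratio, then zero-pad or truncate to equal length."""
    ratio = rng.uniform(lo, hi)                        # ratio range is an assumption
    idx = np.linspace(0, len(x) - 1, int(len(x) * ratio))
    y = np.interp(idx, np.arange(len(x)), x)           # linear resampling
    out = np.zeros(len(x))
    out[:min(len(x), len(y))] = y[:len(x)]             # pad with zeros or truncate
    return out

def random_mask(x, frac=0.1):
    """Eq. (4): x := mask * x, zeroing a random contiguous sub-sequence."""
    width = int(frac * len(x))
    start = rng.integers(0, len(x) - width)
    mask = np.ones(len(x))
    mask[start:start + width] = 0.0
    return mask * x

def weak_aug(x):
    return add_noise(x)

def strong_aug(x):
    return random_mask(random_stretch(random_scale(add_noise(x))))

x = np.sin(np.linspace(0, 8 * np.pi, 1024))
xw, xs = weak_aug(x), strong_aug(x)
print(xw.shape, xs.shape)  # (1024,) (1024,)
```

Both strategies preserve the 1024-point sample length, so enhanced samples feed the same network input.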
In addition, the intelligent fault diagnosis model needs to be initialized before use; the parameters of the intelligent equipment fault diagnosis model are θ, θ_sce, θ_con and θ_pcl, respectively.
Step two, the discriminative feature learning module extracts salient discriminative features from the target-domain samples based on the self-supervised contrastive learning strategy, and likewise extracts salient discriminative features from the source-domain samples.
In this embodiment, the acquired and divided training samples of N different fault modes form a source-domain training set D_s = {(x_i^s, y_i^s)}, where x_i^s represents the i-th sample in the source-domain training set and y_i^s represents the fault mode corresponding to that sample. Training samples collected under another working condition form a target-domain training set D_t = {x_j^t}. If a sample x_j^t therein does not belong to a known fault mode of the training set, its fault mode is treated as an unknown fault mode.
After sample enhancement, automatic feature extraction is performed on the source-domain training samples, and the second loss is calculated to obtain salient fault-discriminative features, ensuring that each known fault category can be correctly identified. The formula is:
L_sce = −Σ_c q(c | x^{s,w}) log p(c | x^{s,w})
where L_sce represents the second loss of sample x, x^{s,w} represents a weakly enhanced source-domain sample, q(c | x^{s,w}) represents the true probability distribution of the sample, and p(c | x^{s,w}) represents the predicted probability distribution of the weakly enhanced sample output by the neural network. The second loss drives all source-domain training samples to be classified as correctly as possible, so that the salient discriminative features of the different fault modes in the source-domain samples are learned.
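Read this way, the second loss is an ordinary cross-entropy between one-hot true labels and the softmax prediction on the weakly enhanced source sample. A minimal NumPy sketch (the softmax classifier outputs and one-hot encoding are assumptions standing in for the network head):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the class axis."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def second_loss(logits_weak, labels, num_classes):
    """L_sce: cross-entropy between the true label distribution q and the
    predicted distribution p of weakly enhanced source-domain samples."""
    p = softmax(logits_weak)
    q = np.eye(num_classes)[labels]   # one-hot true distribution
    return -np.mean(np.sum(q * np.log(p + 1e-12), axis=1))

logits = np.array([[4.0, 0.0, 0.0],   # toy network outputs for 2 samples
                   [0.0, 5.0, 1.0]])
print(second_loss(logits, np.array([0, 1]), 3))
```

The loss approaches zero as the network grows confident on the correct class, which is the "classified as correctly as possible" behavior described above.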
The ReLU layer performs nonlinear activation on the feature map, and the calculation formula is as follows:
z=max(0,x) (7)
Meanwhile, a one-dimensional average pooling layer is connected after each of the second and fourth convolutional layers, ensuring that the overall character of the features is preserved while downsampling is completed and avoiding the loss of high-dimensional fault-feature information.
Weakly enhanced samples of the different fault modes in the target domain are input to the discriminative feature learning module to obtain salient discriminative fault-feature information. The module performs automatic feature extraction on the mixed-enhancement samples of the target-domain training set; with the contrastive self-supervised learning strategy, the weakly enhanced samples serve as supervisory information to train the feature extractor to extract the salient discriminative fault features of the strongly enhanced samples. Self-supervised contrastive learning gathers fault features of the same category to the same cluster center and disperses fault features of different categories to different cluster centers, achieving feature disentanglement and separation. For any category, all samples of that category are clustered as closely as possible to the center of the same category and kept away from the cluster centers of other categories, reducing the intra-class distance and enlarging the inter-class distance; the self-supervised learning target loss is thereby preliminarily optimized. The calculation formula is:
L_con = −Σ_j log [ exp(sim(z_j^{t,s}, z_j^{t,w})/τ) / Σ_{k≠j} exp(sim(z_j^{t,s}, z_k^{t,w})/τ) ]
where L_con represents the first loss of sample x, z_j^{t,s} and z_j^{t,w} represent the fault features of the j-th target-domain sample after strong and weak enhancement respectively, τ is a temperature hyper-parameter that accelerates convergence, and sim(·,·) represents a similarity measure between two features. Positive and negative sample pairs are constructed from the weakly and strongly enhanced target-domain samples; by maximizing the mutual information between positive pairs, features of the same fault category are clustered and fault features of different categories are dispersed, yielding salient discriminative fault features and ensuring the accuracy of fault-mode identification.
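The first loss has the shape of an InfoNCE-style contrastive objective: the strong and weak views of the same target-domain sample form a positive pair, other samples act as negatives, and cosine similarity is assumed here as the measure sim(·,·). A minimal NumPy sketch:

```python
import numpy as np

def info_nce(z_strong, z_weak, tau=0.5):
    """First loss L_con: for each sample j, (z_strong[j], z_weak[j]) is the
    positive pair; all other weak-view features are negatives. Cosine
    similarity divided by the temperature tau scores each pair."""
    zs = z_strong / np.linalg.norm(z_strong, axis=1, keepdims=True)
    zw = z_weak / np.linalg.norm(z_weak, axis=1, keepdims=True)
    sim = zs @ zw.T / tau                    # pairwise similarity matrix
    # row-wise log-softmax; the positive pair sits on the diagonal
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))                 # toy features for 8 samples
loss_aligned = info_nce(z, z)                # identical views: easiest case
loss_random = info_nce(z, rng.normal(size=(8, 16)))  # unrelated views
print(loss_aligned < loss_random)            # True
```

Minimizing this loss pulls the two views of the same sample together and pushes different samples apart, which is exactly the intra-class contraction and inter-class dispersion described above.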
Step three, the open-set class learning module detects known-class faults and unknown-class faults based on the salient discriminative features and the clamped confidence rule; through the pseudo-label consistency training strategy, the open-set class learning module achieves cross-working-condition data distribution alignment and fault-feature knowledge migration.
In this embodiment, the salient discriminative fault feature map obtained by the discriminative feature learning module is input to the open-set class learning module, and the third loss is calculated. The module consists of three fully-connected layers of sizes 256 × 128, 128 × 64 and 64 × n, where n is the number of fault-mode classes in the training set. On the feature map input to the last fully-connected layer, the known/unknown-class detection clamped confidence rule and the pseudo-label consistency training strategy are applied to the known and unknown classes respectively.
Known-class detection: samples of the same fault category tend to lie at the same cluster center. Therefore, samples located at cluster centers are selected to generate high-confidence pseudo labels, and by bounding the maximum known-class probability from below, the weakly enhanced target-domain fault samples at cluster centers can be pseudo-labeled. When the maximum class probability of any weakly enhanced fault sample is greater than or equal to a given threshold, the sample is judged to be of a known class; otherwise it is judged to be of an unknown class. The pseudo-label matrix for known-class sample detection based on the clamped confidence rule can be expressed as:
w_j^{tk} = 1, if max_c p(c | x_j^{t,w}) ≥ T_0; 0, otherwise
where w_j^{tk} is an entry of the pseudo-label matrix W_tk and T_0 denotes the predefined known-class confidence threshold.
Unknown class detection: since the weakly enhanced samples are more similar to the original signal samples, the classifier output probability of the weakly enhanced samples is taken as the classification prediction probability. When the prediction probability of the unknown class fault sample is greater than or equal to a given threshold, the fault sample is considered as an unknown candidate sample. Thus, the unknown class detection lower bound based on the pinch confidence rule may be defined as
Wherein,The unknown class fault samples representing candidates, T 1 ε [0,0.1] represent a predefined threshold. According to this rule, the discriminant learning module adaptively assigns small probability values to unknown classes and large probability values to samples of known classes. In order to effectively separate known and unknown class fault samples, the maximum value of the fault class probabilities should be limited so that they are more easily distinguished. When the maximum value of the fault class probabilities is less than or equal to a given threshold, identifying candidate unknown class fault samples as unknown class fault samples. Otherwise, the candidate unknown class fault samples will be discarded. The unknown class detection upper bound based on the pinch confidence rule may be defined as:
wherein W_tunk represents the pseudo tag matrix of unknown-class faults, the maximum fault-class probability is taken within the candidate unknown-class fault samples, and T_2 ∈ [0, 0.5] is a predefined threshold.
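As a minimal sketch of the clamping (pinch) confidence rule above — assuming a classifier that outputs softmax probabilities over the K known fault classes plus a separate unknown-class probability; the function name and default thresholds are illustrative, not from the patent:

```python
import numpy as np

def clamping_confidence_pseudo_labels(probs, unk_score,
                                      t_known=0.9, t1=0.05, t2=0.3):
    """probs:     (N, K) softmax probabilities over known fault classes
                  for the weakly enhanced target-domain samples.
       unk_score: (N,) unknown-class prediction probability per sample.
       Returns one-hot pseudo tag rows for confident known samples and a
       boolean mask of confirmed unknown samples; other samples are left
       unlabeled (discarded)."""
    max_p = probs.max(axis=1)
    # Known class detection: lower-bound the maximum known-class probability.
    known_mask = max_p >= t_known
    # Unknown class detection, lower bound: candidate if the unknown
    # prediction probability reaches t1 (t1 in [0, 0.1]).
    candidate = unk_score >= t1
    # Upper bound: confirm a candidate only if its maximum known-class
    # probability stays at or below t2 (t2 in [0, 0.5]).
    unknown_mask = candidate & (max_p <= t2)
    pseudo = np.zeros_like(probs)
    pseudo[known_mask, probs[known_mask].argmax(axis=1)] = 1.0
    return pseudo, known_mask, unknown_mask
```

Samples caught between the two bounds (confident for neither decision) receive no pseudo tag, which is what keeps the supervision high-confidence.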
Known-class and unknown-class faults are detected using the clamping confidence rule, and pseudo tag matrices are generated for tag-consistency learning: known-class fault samples are clustered according to the cluster centers learned by the distinguishing feature learning module, while unknown-class samples are separated out and gathered in their own cluster center. The optimization target of the open set class learning module includes:
L_pcl = L_tk + L_tuk (13)
wherein L_pcl represents the third loss of the samples, comprising the known-class fault detection loss L_tk and the unknown-class fault detection loss L_tuk; E(·,·) represents the cross entropy function; W_tk represents the pseudo tag matrix of known-class faults; and W_tunk represents the pseudo tag matrix of unknown-class faults. Through the third loss with the pseudo tag clamping confidence rule, known-class and unknown-class faults can be detected effectively, so that known-class faults are identified as accurately as possible and unknown-class faults are assigned to the unknown-fault class. Training with high-confidence pseudo tags as supervision information for the target domain improves fault recognition accuracy. Through pseudo tag consistency learning, the decision boundary of the classifier is adaptively adjusted according to the sparsely distributed regions of the unlabeled fault signals, domain knowledge migration is completed, and the performance degradation caused by distribution differences under changing working conditions is eliminated.
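A minimal numerical sketch of the third loss in Eq. (13), assuming E(·,·) is a mean cross entropy and that rows of the pseudo tag matrices corresponding to discarded samples are all zero, so that they contribute nothing to the loss (function names and this zero-row convention are illustrative assumptions):

```python
import numpy as np

def cross_entropy(pred, target, eps=1e-12):
    # E(.,.): mean cross entropy between predicted probabilities and
    # pseudo tag rows; an all-zero row (discarded sample) adds no loss.
    return -(target * np.log(pred + eps)).sum(axis=1).mean()

def third_loss(pred_weak, w_tk, w_tunk):
    # L_pcl = L_tk + L_tuk (Eq. 13): known-class detection loss plus
    # unknown-class detection loss on the weakly enhanced predictions.
    l_tk = cross_entropy(pred_weak, w_tk)     # known-class term
    l_tuk = cross_entropy(pred_weak, w_tunk)  # unknown-class term
    return l_tk + l_tuk
```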
Step one further comprises performing parameter optimization on the intelligent fault diagnosis model; specifically, the parameters of the distinguishing feature learning module and the open set class learning module of the equipment intelligent fault diagnosis model are optimized according to the first loss, the second loss and the third loss.
Specifically, to optimize all deep neural network parameters end to end, this embodiment sets the following optimization target loss L_total, calculated as:
L_total = L_sce + λL_con + γL_pcl (16)
where λ and γ are the balance parameters of the first loss L_con and the third loss L_pcl, respectively.
Therefore, by back-propagating each sample, the parameters of the distinguishing feature learning module and the open set class learning module of the equipment intelligent fault diagnosis model can be optimized according to the gradient of the loss L_total, calculated as follows:
wherein θ_k denotes the parameters of the distinguishing feature learning module and the open set class learning module at the k-th iteration, μ is the learning rate of the model, and θ_{k+1} denotes the corresponding parameters at the (k+1)-th iteration. Through iterative optimization, the finally optimized distinguishing feature learning module and open set class learning module of the equipment intelligent fault diagnosis model are obtained.
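The total objective of Eq. (16) and the gradient step can be sketched on a scalar parameter as follows (the balance values λ, γ and learning rate μ below are illustrative; in practice the gradient is obtained by back-propagation over all network parameters):

```python
def total_loss(l_sce, l_con, l_pcl, lam=0.1, gamma=1.0):
    # L_total = L_sce + lambda * L_con + gamma * L_pcl  (Eq. 16)
    return l_sce + lam * l_con + gamma * l_pcl

def sgd_step(theta_k, grad_total, mu=1e-3):
    # theta_{k+1} = theta_k - mu * dL_total/dtheta  (one update)
    return theta_k - mu * grad_total
```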
In addition, in this embodiment the data collection of the test-phase samples is completed on the fault test bench equipment. In the test phase, the rotating speed of the equipment during signal acquisition differs from that of the training phase, and more fault modes are collected than in the training phase. A test set is obtained through the same data normalization and sample enhancement as the training set; if a sample therein does not belong to any fault mode in the training set, it is set as an unknown fault mode.
For the samples of the test set, the failure mode detection is divided into three steps:
s801, data enhancement is carried out on a test sample;
S802, inputting the enhanced sample into the distinguishing feature learning module to obtain significant distinguishing fault features;
S803, inputting the significant distinguishing fault characteristics into an open set class learning module to detect known class faults and unknown class faults.
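Assuming the two trained modules are available as callables (the interfaces below are hypothetical, chosen only to illustrate the flow), steps S801 to S803 amount to a three-stage pipeline:

```python
def diagnose(test_signal, weak_augment, feature_module, open_set_module):
    # S801: data enhancement of the test sample (weak enhancement, since
    # weakly enhanced samples stay close to the original signal).
    x = weak_augment(test_signal)
    # S802: extract significant distinguishing fault features.
    feats = feature_module(x)
    # S803: detect known-class vs. unknown-class faults from the features.
    return open_set_module(feats)
```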
According to the fault diagnosis method, the designed distinguishing feature learning module adaptively learns the significant distinguishing fault features contained in the acquired signals, while clustering sample features of the same category and scattering sample features of different categories. The designed open set class learning module detects known-class and unknown-class faults from the significant distinguishing fault features obtained by training, and meanwhile selects high-confidence samples to generate high-quality pseudo tags to supervise the training process, realizing distribution alignment and migration of fault features across working conditions. The method can thus effectively improve the accuracy of fault diagnosis and raise the level of intelligent fault diagnosis of equipment.
The invention also provides a cross-working condition open-set fault diagnosis system based on self-supervision and contrast learning enhancement, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor executes the cross-working condition open-set fault diagnosis method based on self-supervision and contrast learning enhancement when executing the computer program.
The invention also provides a computer readable storage medium storing machine executable instructions that, when invoked and executed by a processor, cause the processor to implement a cross-condition open-set fault diagnosis method based on self-supervised contrast learning enhancement as described above.
It will be readily appreciated by those skilled in the art that the foregoing description is merely a preferred embodiment of the invention and is not intended to limit the invention, but any modifications, equivalents, improvements or alternatives falling within the spirit and principles of the invention are intended to be included within the scope of the invention.
Claims (10)
1. A cross-working condition open-set fault diagnosis method based on self-supervision contrast learning enhancement, characterized by comprising the following steps:
Step one, providing an intelligent fault diagnosis model, wherein the intelligent fault diagnosis model comprises a sample enhancement module, a distinguishing feature learning module and an open set class learning module; the sample enhancement module is adopted to enhance the source domain samples and the target domain samples;
Step two, the distinguishing feature learning module extracts significant distinguishing features from the target domain samples based on a self-supervised contrastive learning strategy, and meanwhile extracts significant distinguishing features from the source domain samples;
Step three, the open set class learning module detects known-class faults and unknown-class faults based on the significant distinguishing features and the clamping confidence rule; the open set class learning module realizes data distribution alignment and fault feature knowledge migration across working conditions through a pseudo tag consistency training strategy.
2. The self-supervision contrast learning enhancement-based cross-working condition open-set fault diagnosis method as claimed in claim 1, wherein: the sample enhancement module adopts two strategies, weak sample enhancement and strong sample enhancement; weak sample enhancement randomly adds Gaussian noise to the original signal, and strong sample enhancement comprises randomly adding Gaussian noise to the original signal together with random scaling, stretching and cropping; a weak sample enhancement strategy is adopted for the source domain to generate source domain enhanced samples, and both the strong and weak sample enhancement strategies are adopted for the target domain to generate target domain mixed enhanced samples.
3. The self-supervision contrast learning enhancement-based cross-working condition open-set fault diagnosis method as claimed in claim 1, wherein: the distinguishing feature learning module adopts a one-dimensional convolutional neural network structure with an input of size 64 × 1 × 1024, and comprises 4 convolutional layers and 5 fully connected neural network layers; each convolutional layer is followed by a BN layer and a ReLU layer; the BN layer transforms and reconstructs the output feature map of the preceding convolutional layer by introducing learnable parameters, and the ReLU layer applies nonlinear activation to the feature map.
4. The self-supervision contrast learning enhancement-based cross-working condition open-set fault diagnosis method as claimed in claim 1, wherein: the distinguishing feature learning module performs automatic feature extraction on the enhanced source domain samples and calculates a second loss to obtain fault distinguishing features; the corresponding formula is:
wherein L_sce represents the second loss of sample x, computed between the true probability distribution of the source domain weakly enhanced sample and the predicted probability distribution output for it by the neural network.
5. The self-supervision contrast learning enhancement-based cross-working condition open-set fault diagnosis method as claimed in claim 4, wherein: the distinguishing feature learning module performs automatic feature extraction on the mixed-enhancement samples of the target domain training set, and uses a contrastive self-supervised learning strategy to gather fault features of the same category at the same cluster center and to disperse fault features of different categories to different cluster centers, realizing feature disentanglement and separation; the corresponding formula is:
wherein L_con represents the first loss of sample x, the fault features of the j-th target domain sample are obtained through its strongly and weakly enhanced versions, τ is a hyper-parameter to accelerate convergence, and a similarity measure is computed between the two features.
6. The self-supervision contrast learning enhancement-based cross-working condition open-set fault diagnosis method as claimed in claim 5, wherein: the significant distinguishing fault feature map obtained by the distinguishing feature learning module is input into the open set class learning module, and a third loss is calculated; the optimization target of the open set class learning module is:
L_pcl = L_tk + L_tuk (13)
wherein L_pcl represents the third loss of the samples, comprising the known-class fault detection loss L_tk and the unknown-class fault detection loss L_tuk; E(·,·) represents the cross entropy function; W_tk represents the pseudo tag matrix of known-class faults; W_tunk represents the pseudo tag matrix of unknown-class faults; known-class and unknown-class faults can be detected through the third loss with the pseudo tag clamping confidence rule.
7. The self-supervision contrast learning enhancement-based cross-working condition open-set fault diagnosis method as claimed in claim 6, wherein: step one further comprises performing parameter optimization on the intelligent fault diagnosis model; specifically, the parameters of the distinguishing feature learning module and the open set class learning module of the equipment intelligent fault diagnosis model are optimized according to the first loss, the second loss and the third loss.
8. The self-supervision contrast learning enhancement-based cross-working condition open-set fault diagnosis method as claimed in claim 7, wherein: the calculation formula of the optimization target loss L_total is as follows:
L_total = L_sce + λL_con + γL_pcl (16)
wherein L_sce represents the second loss; λ and γ are the balance parameters of the first loss L_con and the third loss L_pcl, respectively.
9. A cross-working condition open-set fault diagnosis system based on self-supervision contrast learning enhancement is characterized in that: the system comprises a memory and a processor, wherein the memory stores a computer program, and the processor executes the cross-working condition open-set fault diagnosis method based on self-supervision contrast learning enhancement according to any one of claims 1 to 8 when executing the computer program.
10. A computer-readable storage medium, characterized by: the computer-readable storage medium stores machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the self-supervised contrast learning enhancement-based cross-operating mode open set fault diagnosis method of any of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410499381.6A CN118427681A (en) | 2024-04-24 | 2024-04-24 | Cross-working condition open-set fault diagnosis method and equipment based on self-supervision contrast learning enhancement |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118427681A true CN118427681A (en) | 2024-08-02 |
Family
ID=92322659
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||