CN110728377A - Intelligent fault diagnosis method and system for electromechanical equipment

Intelligent fault diagnosis method and system for electromechanical equipment

Info

Publication number
CN110728377A
CN110728377A (application CN201911000874.6A)
Authority
CN
China
Prior art keywords
data
test data
training
fault diagnosis
training data
Prior art date
Legal status
Granted
Application number
CN201911000874.6A
Other languages
Chinese (zh)
Other versions
CN110728377B (en)
Inventor
李沂滨
宋艳
郭庆稳
王代超
Current Assignee
Shandong University
Original Assignee
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN201911000874.6A priority Critical patent/CN110728377B/en
Publication of CN110728377A publication Critical patent/CN110728377A/en
Application granted granted Critical
Publication of CN110728377B publication Critical patent/CN110728377B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 - Machine learning
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01H - MEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
    • G01H 17/00 - Measuring mechanical vibrations or ultrasonic, sonic or infrasonic waves, not provided for in the preceding groups
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01M - TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M 99/00 - Subject matter not provided for in other groups of this subclass
    • G01M 99/005 - Testing of complete machines, e.g. washing-machines or mobile phones


Abstract

The present disclosure provides an intelligent fault diagnosis method and system for electromechanical equipment. Past fault data of a target machine are acquired to form training data, and data collected from the target machine in real time form test data. A domain adaptive network model is constructed and trained: the outputs of the two kinds of data are labeled, the difference between the training data and the test data is minimized, and features of the training data are extracted and classified. The trained model is then used to assign pseudo labels to the test data, and the whole network model is retrained at least once using the weighted pseudo-labeled test data together with the original training data. Finally, the retrained model predicts and classifies the test data to obtain the fault diagnosis result of the machine, further improving diagnosis accuracy.

Description

Intelligent fault diagnosis method and system for electromechanical equipment
Technical Field
The disclosure belongs to the technical field of fault diagnosis, and relates to an intelligent fault diagnosis method and system for electromechanical equipment.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Machine damage in modern industry severely affects the production safety, work efficiency and product quality of the Industrial Internet of Things (IIoT). Performing fault diagnosis with vibration or current signals before an accident occurs can improve the reliability of the IIoT in manufacturing and industrial production.
At present, machine learning algorithms such as Artificial Neural Networks (ANN), Support Vector Machines (SVM) and Extreme Learning Machines (ELM) are widely applied to intelligent fault diagnosis. To the inventors' knowledge, many existing documents address damage or fault determination of industrial machines with machine learning algorithms, but most of these are data-driven fault diagnosis methods that require feature selection based on the researchers' experience together with an effective classifier; the selected features therefore have no direct relation to the classifier, and the classification accuracy and precision are not high.
In recent years, deep learning has attracted attention for its outstanding performance in tasks such as image classification, data mining and speech recognition, and many fault diagnosis works based on deep learning have been carried out. However, to the inventors' knowledge, these studies do not consider the case where the acquisition conditions of the training data and the test data are inconsistent: they assume that the training and test data sets are collected under the same conditions. For test data obtained under a different working environment, a model trained only on the training data may therefore have poor generalization performance.
Disclosure of Invention
To solve the above problems, the disclosure provides an intelligent fault diagnosis method and system for electromechanical equipment that fully consider the case where the training data and the test data are acquired under inconsistent conditions, thereby further improving diagnosis accuracy.
According to some embodiments, the following technical scheme is adopted in the disclosure:
an intelligent fault diagnosis method for electromechanical equipment comprises the following steps:
acquiring past fault data of a target machine to form training data;
acquiring real-time acquisition data of a target machine to form test data;
constructing a domain self-adaptive network model, training the network model, marking the output of different data, minimizing the difference between training data and test data, and extracting and classifying the characteristics of the training data;
obtaining pseudo labels of the test data by using the trained model, and retraining the whole network model at least once by using the weighted pseudo label test data and the original training data;
and predicting and classifying the test data by using the retrained model to obtain a fault diagnosis result of the machine.
According to the technical scheme provided by the disclosure, the case where the training data and the test data are acquired under inconsistent conditions is fully considered: the feature outputs of the training samples and the test samples are labeled separately, and the difference between the training data and the test data is minimized, ensuring the applicability of the model. Meanwhile, to exploit the useful information contained in the predictions on the test data set, the domain adaptive network model is further optimized by retraining it with these predictions together with the original training data set, which effectively improves classification precision and accuracy and hence diagnosis accuracy.
As an alternative embodiment, the domain adaptive network model specifically includes a feature extraction network, a feature domain adaptive network, and a classification network, which are connected in sequence.
As an alternative embodiment, the input of the feature extraction network is a segmented one-dimensional original signal, and the convolution kernels of the first two layers of the feature extraction network have lengths greater than 10.
Data at the current time instant in a machine bearing signal may be correlated with data relatively far away from it, so a long convolution kernel can capture more useful information than a short one. Making the convolution kernels of the first two layers of the feature extraction network longer than a set value therefore ensures accurate and effective feature extraction.
As an alternative embodiment, the feature domain adaptive network is configured to label the output of the feature extraction network, label the feature outputs of the training samples and the test samples as 1 and 0, respectively, and then input the labeled data into the two fully-connected layers to minimize the difference between the training data and the test data.
As an alternative embodiment, the specific procedure for minimizing the difference between the training data and the test data is to minimize the loss function of the feature domain adaptive network, i.e. the difference between the outputs of the training data and the test data.
As an alternative embodiment, the input to the classification network is a training data set of the output of the feature extraction network.
As an alternative embodiment, the specific process of retraining the domain adaptive network model with the predictions on the test data includes taking the predicted result of each test sample as its pseudo label, retraining the domain adaptive network with the training data set and the pseudo-labeled test data set, and introducing sample weights into the classification loss function.
As an alternative embodiment, the cross entropy loss function of the retrained classification network is as follows:

Loss_cross-entropy = -η·Σ Y·log(Ŷ) - λ·Σ Ỹ_tt·log(Ŷ_tt)

where Y and Ŷ are the true labels and the prediction output of the training data, Ỹ_tt and Ŷ_tt are the pseudo labels of the test data and the corresponding prediction output of the classifier, and η and λ are the loss function weights of the training data and the test data, respectively, with η ≥ λ.
As an alternative embodiment, the retraining process can be applied cyclically, i.e., the test data classification results predicted by the currently retrained domain adaptive network model serve as input for the next round of training.
An intelligent fault diagnosis system for electromechanical devices, comprising:
the sample data construction module is configured to acquire past fault data of the target machine to form training data; acquiring real-time acquisition data of a target machine to form test data;
a network model construction module configured to construct a domain adaptive network model, train the network model, label the output of different data, minimize the difference between training data and test data, and extract and classify the characteristics of the training data;
the retraining module is configured to obtain pseudo labels of the test data by using the trained model, and retrain the whole network model at least once by using the weighted pseudo label test data and the original training data;
and the result output module is configured to predict and classify the test data by using the retrained model to obtain a fault diagnosis result of the machine.
A computer readable storage medium, having stored therein a plurality of instructions adapted to be loaded by a processor of a terminal device and to execute the steps of the method for intelligent fault diagnosis for mechatronic devices.
A terminal device comprising a processor and a computer readable storage medium, the processor being configured to implement instructions; the computer readable storage medium is used for storing a plurality of instructions, and the instructions are suitable for being loaded by a processor and executing the steps of the intelligent fault diagnosis method for the electromechanical device.
Compared with the prior art, the beneficial effects of the present disclosure are:
the method applies a Domain Adaptive Network (DAN) to fault diagnosis of a machine, and meanwhile, when a DAN model is constructed, the difference between features of different fields is minimized, and meanwhile, an optimal classification model based on labeled training data is trained, so that the method has good applicability.
The present disclosure proposes a retraining strategy that can further improve diagnostic accuracy using information from unlabeled test data.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure and are not to limit the disclosure.
FIG. 1 is a schematic diagram of the structure and training process of a DAN;
FIG. 2 is a diagram illustrating classification results of test data predicted based on a DAN training model;
FIG. 3 is a diagram of the structure and training process of the DAN-R;
FIG. 4 is a diagram illustrating classification results for predicting test data based on a DAN-R training model;
FIG. 5 is a graphical representation of a comparison of classification results for the first experiment of DAN and DAN-R on a Paderborn dataset;
FIG. 6 shows, in the left column, t-SNE feature visualization results on the CWRU dataset and, in the right column, the corresponding confusion matrices: (a) 1772 → 1750; (b) 1772 → 1730; (c) 1750 → 1772; (d) 1750 → 1730; (e) 1730 → 1772; (f) 1730 → 1750.
FIG. 7 is a confusion matrix comparison of DAN (left column) and DAN-R (right column): (a) A → B; (b) A → C; (c) B → A; (d) B → C; (e) C → A; (f) C → B.
FIG. 8 is a t-SNE plot and confusion matrix for the Paderborn dataset (Table 4): (a) the t-SNE result of DAN; (b) t-SNE results for DAN-R; (c) the confusion matrix result of DAN-R.
FIG. 9 shows the results of DAN and DAN-R on A → C: (a) classification accuracy of DAN-R for different numbers of retraining rounds; (b) features obtained from the second convolutional layer of the DAN feature extraction network; (c)-(j) features obtained from the second convolutional layer of the DAN-R feature extraction network after 1 to 8 retraining rounds.
FIG. 10 is a structural diagram of the system relied on by a comparative embodiment.
the specific implementation mode is as follows:
the present disclosure is further described with reference to the following drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit example embodiments according to the present disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
A retraining-strategy-based Domain Adaptive Network (DAN-R) is used for fault diagnosis. The method first minimizes the feature difference between the training data and the test fault data while maximizing the classification accuracy on the training data. The trained model is then used to assign pseudo labels to the test data. Finally, the whole network is retrained with the weighted pseudo-labeled test data and the original training data. The main contributions of this embodiment are as follows:
the DAN is used for fault diagnosis to minimize the differences between different domain features while training an optimal classification model based on labeled training data.
The retraining strategy utilizes information from unlabeled test data to further improve diagnostic accuracy.
The implementation of the DAN-R comprises two parts: the domain adaptive network (DAN) and the retraining strategy, which are described separately below.
The first part is the construction of the domain adaptive network:
the DAN comprises a feature extraction network, a feature domain self-adaptive network and a classification network. The sum of the loss functions of the latter two networks is used for training the assistant feature extraction network. The frame of the DAN is shown in fig. 1.
1) Feature extraction network: the input to the feature extraction network is a segmented one-dimensional raw signal. To learn effective features from these segmented signals, the convolution kernels and max-pooling kernels in this network are much longer than in conventional CNNs. For example, the most common convolution kernel lengths in conventional CNNs are 1, 3 or 5, whereas the kernels in the first two layers of the feature extraction network of the DAN have length 17. Unlike images, the data at the current time instant in a bearing signal may be correlated with data far away from it, so long convolution kernels provide more useful information than short ones.
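As an illustration only, a minimal Keras sketch of such a feature extraction sub-network is given below. The long kernels of length 17 in the first two convolutional layers come from the text above, and the batch normalization, leaky ReLU and small-kernel third layer follow the parameter description given later (Table 1); the filter counts, pooling sizes and the assumed small kernel size of 3 are illustrative placeholders, not values disclosed by the patent.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_feature_extractor(segment_length=4096):
    # segmented one-dimensional raw signal, one channel
    inp = layers.Input(shape=(segment_length, 1))
    x = inp
    for filters in (16, 32):                      # assumed filter counts
        x = layers.Conv1D(filters, kernel_size=17, padding="same")(x)  # long kernel (length 17)
        x = layers.BatchNormalization()(x)
        x = layers.LeakyReLU()(x)
        x = layers.MaxPooling1D(pool_size=16)(x)  # assumed pooling size
    x = layers.Conv1D(64, kernel_size=3, padding="same")(x)            # small-kernel layer (assumed size 3)
    x = layers.BatchNormalization()(x)
    x = layers.LeakyReLU()(x)
    x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.Flatten()(x)
    return models.Model(inp, x, name="feature_extractor")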
2) Feature domain adaptive network: this network first labels the outputs of the feature extraction network, marking the feature outputs of the training samples and the test samples as 1 and 0, respectively. The labeled data are then fed into the subsequent fully connected layers. The task of this network is to minimize the difference between the training data and the test data. Suppose that, after the fully connected layers, the outputs for the training data D_tr and the test data D_tt are f(D_tr) and f(D_tt), respectively, where f(·) denotes the processing of the fully connected layers. The purpose of the feature domain adaptive network is then to achieve

min(Loss_ds) = min(||f(D_tr) - f(D_tt)||)    (1)

where Loss_ds denotes the loss function of the feature domain adaptive network.
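The following small function sketches how the discrepancy of equation (1) can be computed for a batch; taking the norm of the difference between the batch-mean outputs is an assumption, since the text only specifies a norm between f(D_tr) and f(D_tt).

import tensorflow as tf

def domain_discrepancy_loss(f_train, f_test):
    # f_train, f_test: fully connected outputs for a batch of training data D_tr
    # and a batch of test data D_tt, shape (batch, features)
    mean_train = tf.reduce_mean(f_train, axis=0)
    mean_test = tf.reduce_mean(f_test, axis=0)
    return tf.norm(mean_train - mean_test)        # Loss_ds = ||f(D_tr) - f(D_tt)||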
3) Classification network: the input to the classification network is the training-data output of the feature extraction network. For the multi-class problem, the cross-entropy loss function is calculated as shown in equation (2):

Loss_cross-entropy = -Σ Y·log(Ŷ)    (2)

where Y represents the true labels of the training data and Ŷ is the prediction output of the classification network.
In the DAN, to simultaneously achieve a small difference between the training and test data and a low cross-entropy loss, the loss function of the entire DAN network is

Loss_whole = α·Loss_cross-entropy + β·Loss_ds    (3)

where α and β are the weighting factors of the two losses. In the present embodiment, both α and β are set to 1.
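A hedged sketch of one joint training step implementing equation (3) is shown below; feature_extractor, domain_net and classifier are assumed to be Keras models built elsewhere (for example as sketched above), and the optimizer wiring is illustrative rather than part of the patent.

import tensorflow as tf

cce = tf.keras.losses.CategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-4)

@tf.function
def dan_train_step(x_train, y_train, x_test, alpha=1.0, beta=1.0):
    with tf.GradientTape() as tape:
        feat_tr = feature_extractor(x_train, training=True)
        feat_tt = feature_extractor(x_test, training=True)
        loss_ce = cce(y_train, classifier(feat_tr, training=True))              # Eq. (2)
        loss_ds = domain_discrepancy_loss(domain_net(feat_tr, training=True),
                                          domain_net(feat_tt, training=True))   # Eq. (1)
        loss_whole = alpha * loss_ce + beta * loss_ds                           # Eq. (3)
    variables = (feature_extractor.trainable_variables
                 + domain_net.trainable_variables
                 + classifier.trainable_variables)
    grads = tape.gradient(loss_whole, variables)
    optimizer.apply_gradients(zip(grads, variables))
    return loss_whole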
The second part is a retraining strategy:
after the DAN training is completed, a classification network model can be obtained, and then a prediction result of the test data is obtained. Although the prediction results have erroneous classification results, the correct classification results contain useful information, which means that they can be used to optimize the DAN model. Therefore, the present embodiment retrains the DAN network (DAN-R) using the prediction results of the test data. Let the predicted result of the test data be its pseudo label, and note the test data with pseudo label as
Figure BDA0002241269150000091
The training method of the feature domain adaptive network and the feature extraction network in the DAN-R is the same as that of the DAN, but the training method of the classification network is different, and as shown in fig. 3, the DAN-R trains the classification network using a training data set and a pseudo label test data set. In addition, to reduce the impact of false labels in the pseudo labels and enhance the effectiveness of the training data set, DAN-R introduces sample weights in the classification loss function. The cross entropy loss function of the retrained classification network is as follows:
Figure BDA0002241269150000092
wherein Y and
Figure BDA0002241269150000093
the true label and prediction output representing the training data,
Figure BDA0002241269150000094
and
Figure BDA0002241269150000095
is the pseudo label of the test data and the prediction output of the classifier, eta and lambda are the loss function weights of the training data and the test data respectively, and eta is more than or equal to lambda. The retraining strategy provided by the embodiment can be recycled, namely the test data classification result predicted based on the current DAN-R model can be used for next DAN-R training
Figure BDA0002241269150000101
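A minimal sketch of the weighted retraining loss of equation (4) and of the cyclic retraining loop follows; train_one_round is an assumed helper that performs one full DAN training pass with the given pseudo labels, and the default values of eta and lam are placeholders within the ranges stated below.

import tensorflow as tf

def retraining_loss(y_train, y_train_pred, y_pseudo, y_test_pred, eta=1.0, lam=0.5):
    ce = tf.keras.losses.CategoricalCrossentropy()
    # Eq. (4): weighted cross-entropy over labeled training data and pseudo-labeled test data
    return eta * ce(y_train, y_train_pred) + lam * ce(y_pseudo, y_test_pred)

def retrain_dan(model, x_train, y_train, x_test, rounds=3, eta=1.0, lam=0.5):
    for _ in range(rounds):
        # pseudo labels predicted by the current model become targets for the next round
        y_pseudo = tf.one_hot(tf.argmax(model.predict(x_test), axis=1),
                              depth=y_train.shape[1])
        model = train_one_round(model, x_train, y_train, x_test, y_pseudo, eta, lam)  # assumed helper
    return model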
The third part is the determination of the DAN-R parameters:
The detailed parameters of the DAN-R are shown in Table 1. The feature extraction network adopts two large-kernel one-dimensional convolutional layers and one small-kernel one-dimensional convolutional layer, each followed by a max-pooling layer; the feature domain adaptive network adopts three fully connected layers; the classification network adopts two fully connected layers. Batch Normalization (BN) and a Leaky Rectified Linear Unit (Leaky ReLU) are used after each convolutional layer, and a dropout operation is used after each internal fully connected layer. In the experiments, η is equal to 1, and λ ranges from 0.1 to 1 with a step of 0.1. In Table 1, Convolution 1D denotes a one-dimensional convolutional layer, BN denotes batch normalization, Leaky ReLU denotes the leaky rectified linear unit, Maxpooling 1D denotes a max-pooling layer, Flatten denotes flattening to a one-dimensional vector, and Dropout denotes the dropout operation.
TABLE 1 DAN-R detailed parameters
[table provided as an image in the original publication]
Results and analysis of the experiments
This section first introduces the data sets and the evaluation index for the experimental results, then presents the experimental results on the Case Western Reserve University (CWRU) data set and the Paderborn data set, and finally gives the experimental analysis.
The experiments in this example were carried out on a GeForce RTX 2080 graphics card using Keras running on TensorFlow. The optimizer was Adam, the number of training epochs was 120, the learning rate was 0.0005 and decreased by 50% every 50 epochs, and the batch size was 16.
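A minimal sketch of this training configuration, assuming a compiled Keras model named model and pre-segmented arrays x_train and y_train, is:

import tensorflow as tf

def lr_schedule(epoch, lr):
    # halve the initial learning rate of 0.0005 every 50 epochs
    return 5e-4 * (0.5 ** (epoch // 50))

optimizer = tf.keras.optimizers.Adam(learning_rate=5e-4)
callbacks = [tf.keras.callbacks.LearningRateScheduler(lr_schedule)]
# model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=120, batch_size=16, callbacks=callbacks)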
The present embodiment uses the Paderborn dataset and the CWRU dataset to evaluate the classification performance of the DAN-R. The Paderborn dataset contains samples of three states: inner-ring fault, outer-ring fault and healthy state. This example performs two experiments using the Paderborn dataset; see Tables 2, 3 and 4. The bearing codes and the settings of each code are shown in Tables 2 and 3. In the first experiment, the data of one working condition are used as the training data set and the data of another working condition as the test data set. The data used to train and test the method in the second experiment are shown in Table 4. The present embodiment splits the original signals of the Paderborn dataset into data segments of equal length, which are then input into the DAN-R; each data segment contains 4096 points.
The CWRU dataset contains samples of four states: healthy, outer-ring fault, inner-ring fault and ball fault. The sampling frequency used in this example is 48 kHz. For each case, three motor speeds (1772, 1750 and 1730 rpm) and three fault diameters (0.007, 0.014 and 0.021 inches) are used. For each data combination, this embodiment randomly samples 5000 segments of length 4096, i.e., 60000 samples covering the 4 states for each motor speed. To test the generalization performance of the different methods, a model is trained with data from one motor speed, and data from the other speeds are used as test data.
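The two data preparations described above can be sketched as follows; the segment counts and window length are taken from the description, while the helper names and the use of NumPy are illustrative.

import numpy as np

def split_into_segments(signal, seg_len=4096):
    # Paderborn data: split a raw 1-D signal into consecutive equal-length segments
    n_segments = len(signal) // seg_len
    return signal[:n_segments * seg_len].reshape(n_segments, seg_len, 1)

def random_windows(signal, n_windows=5000, win_len=4096, seed=0):
    # CWRU data: randomly sample fixed-length windows from a raw 1-D record
    rng = np.random.default_rng(seed)
    starts = rng.integers(0, len(signal) - win_len, size=n_windows)
    return np.stack([signal[s:s + win_len] for s in starts])[..., np.newaxis]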
TABLE 2 three states and data codes for the Paderborn dataset
[table provided as an image in the original publication]
TABLE 3 data Collection Condition and data codes for the Paderborn dataset
Numbering   Data code       Rotational speed (rpm)   Radial force (N)   Load torque (Nm)
A           N15_M07_F10     1500                     1000               0.7
B           N15_M01_F10     1500                     1000               0.1
C           N15_M07_F04     1500                     400                0.7
TABLE 4 code of Paderborn dataset for training and test data for second experiment
[table provided as an image in the original publication]
Evaluation index
In this embodiment, the performance of a method is measured by the classification accuracy ρ, which is defined as follows:

ρ = (1/N)·Σ_{i=1..N} δ(L_S(i), L_GT(i))

where δ(x, y) = 1 when x = y and δ(x, y) = 0 when x ≠ y, L_GT is the label of the fault data, L_S is the prediction result of the classification method, and N is the number of test samples. The higher the value of ρ, the better the classification performance.
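A one-line sketch of this measure, assuming integer class labels, is:

import numpy as np

def classification_accuracy(labels_gt, labels_pred):
    # rho: fraction of test samples whose predicted label L_S equals the true label L_GT
    return np.mean(np.asarray(labels_gt) == np.asarray(labels_pred))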
Experimental results of CWRU dataset
First, we present the experimental results of applying ELM, SVM, CNN (the DAN with the feature domain adaptive network removed), DAN and DAN-R to the CWRU dataset. The results are shown in Table 5: the classification performance of the DAN is superior to the other methods, and its accuracy exceeds 99.4% in all cases. In particular, the DAN outperforms the CNN, demonstrating that minimizing the difference between the training data and the test data makes the classification model trained with the labeled training set well suited to the test data set.
In addition, Table 6 shows the classification results of DAN-R. As can be seen from the results, DAN-R can achieve 100% classification accuracy. That is, the pseudo-labeled test data and the retraining strategy help improve the classification results.
TABLE 5 Experimental results of DAN on CWRU data set
[table provided as an image in the original publication]
TABLE 6 Experimental results of DAN-R on CWRU data set
[table provided as an image in the original publication]
Paderborn data set experimental results
This example performs two different experiments on the Paderborn dataset. The first experiment involves training and test data collected under different working conditions; the second involves training data from artificially damaged bearings and test data from naturally damaged bearings. The specific experimental results are as follows.
1) The experimental results of the first experiment are shown in table 7, and table 7 lists the classification accuracy of DAN and other methods. Compared with other methods, the method of the embodiment has better performance. Table 8 and FIG. 5 show the experimental results of DAN-R. DAN-R performed better than DAN.
2) A second experiment of the Paderborn dataset studied the relationship between artificial and natural fault data. Since it is easier to obtain a large amount of artificial fault data in practice, it is of great importance to study methods that train with artificial fault data and test on natural fault data. Table 9 shows the classification accuracy when the proposed method is compared with other methods. As can be seen from Table 9, DAN-R has better classification performance than other methods.
Experimental results on the Paderborn data set show that both the DAN and the DAN-R can effectively diagnose faults under different environments and have good generalization performance. The experimental result of the DAN-R shows that the retraining strategy is beneficial to improving the learning effect of the DAN.
The proposed method evaluates on several different data sets: fault data under different working conditions, and fault data obtained under natural and artificial conditions. The experimental results prove the effectiveness of the method.
TABLE 7 DAN Classification results on the Paderborn dataset (TABLE 2, TABLE 3)
[table provided as an image in the original publication]
TABLE 8 classification results of DAN-R on the Paderborn dataset (TABLE 2, TABLE 3)
[table provided as an image in the original publication]
TABLE 9 classification results of DAN-R on the Paderborn dataset (TABLE 4)
[table provided as an image in the original publication]
Analysis of Experimental results
To analyze the method more clearly, this example shows feature visualizations obtained with t-Distributed Stochastic Neighbor Embedding (t-SNE).
FIGS. 6(a)-(f) show t-SNE results for the CWRU dataset. The input features of the t-SNE are the outputs of the feature extraction network in the DAN and the DAN-R. It is clear that the DAN-R successfully separates all classes. The confusion matrices for the DAN-R are also shown, with results consistent with Table 6; the number of misclassified samples can be read directly from them.
FIGS. 7(a)-(f) show the confusion matrices when applying the DAN and the DAN-R to the Paderborn dataset (Tables 2 and 3). The confusion matrices vividly show which samples are classified correctly and incorrectly, from which we can conclude that the DAN-R reduces the number of misclassified test samples.
FIGS. 8(a) - (c) are t-SNE plots and confusion matrices for the Paderborn dataset (Table 4). After retraining by using the pseudo-labeled test data, the distance between classes is increased, and the classification precision is improved.
Finally, FIGS. 9(a)-(j) show the classification results of the DAN and the DAN-R on A → C (A for training, C for testing). Fig. 9(a) shows the classification accuracy obtained with different numbers of retraining rounds; clearly, the more retraining rounds, the higher the classification accuracy, so the retraining strategy can be applied repeatedly to obtain a better fault diagnosis effect. Fig. 9(b) shows the features produced by the second convolutional layer of the DAN feature extraction network, and FIGS. 9(c)-(j) show the features produced by the second convolutional layer of the DAN-R after 1 to 8 retraining rounds, respectively. The features before and after the retraining strategy differ greatly, and this difference gives the DAN-R its higher classification precision.
The system relied on by the above comparative embodiment comprises a data acquisition system and a vibration-signal analysis and diagnosis system. The equipment information management module and the vibration data acquisition system provide the necessary equipment information and data, and the fault diagnosis module carries out fault diagnosis on key equipment on a naval vessel according to the equipment composition information and the vibration data.
The specific functions comprise: setting and managing the composition parameters of the tested equipment;
acquiring, amplifying, analog-to-digital converting, displaying in real time and storing 24-channel vibration signals;
displaying up to 24 channels of vibration signals in real time, in both time-domain and frequency-domain form;
controlling the data acquisition system and setting the acquisition parameters through the touch screen;
monitoring the running state of the equipment;
and diagnosing after a fault occurs.
The acquisition system comprises a plurality of vibration sensors and a data acquisition unit, and the specific structure can be as shown in fig. 10.
The vibration signal analysis and diagnosis system specifically comprises:
the device information management module: the device type is first determined, and the devices are classified into rotating devices and non-rotating devices. The non-rotating device determines the characteristic frequency and parameters based on the specific device parameters. And respectively inputting motor parameters, bearing parameters and specific mechanical parameters into the rotating equipment, and automatically calculating the characteristic frequency. And calculating the maximum value and the minimum value of different characteristic frequency points according to the historical measurement records. All information is added and stored in the database, and management and viewing of historical data are supported.
Vibration data acquisition module: in the data acquisition module, acquisition parameters such as the number of acquisition channels, sampling rate, acquisition time and duration are set first. After the acquisition is started, vibration signals of a time domain and a frequency domain can be displayed in real time, and a measurement file (. bin) is automatically stored in a specified directory.
The digital signal analysis module: the module has the main functions of carrying out digital signal processing and graphical display on the acquired vibration data, providing reference for diagnostic experts and storing the processing result in a picture form.
A fault diagnosis module: in the diagnostic module, the equipment name is selected at first, and the corresponding parameters, the characteristic frequency and the value range in the equipment library are transmitted into the module. And then, loading the measured value into a module, and judging the running state, the fault type and the severity of the tested equipment according to historical experience. And the module can determine the wear state and make a prediction of the service life based on the multiple measurements.
The vibration sensors are of type DH112, the diagnosis software runs on a ThinkPad X, and the main control module is an ADLINK (Linghua) PXES-2590.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present disclosure and is not intended to limit the present disclosure, and various modifications and changes may be made to the present disclosure by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.
Although the present disclosure has been described with reference to specific embodiments, it should be understood that the scope of the present disclosure is not limited thereto, and those skilled in the art will appreciate that various modifications and changes can be made without departing from the spirit and scope of the present disclosure.

Claims (10)

1. An intelligent fault diagnosis method for electromechanical equipment is characterized in that: the method comprises the following steps:
acquiring past fault data of a target machine to form training data;
acquiring real-time acquisition data of a target machine to form test data;
constructing a domain self-adaptive network model, training the network model, marking the output of different data, minimizing the difference between training data and test data, and extracting and classifying the characteristics of the training data;
obtaining pseudo labels of the test data by using the trained model, and retraining the whole network model at least once by using the weighted pseudo label test data and the original training data;
and predicting and classifying the test data by using the retrained model to obtain a fault diagnosis result of the machine.
2. The intelligent fault diagnosis method for the electromechanical device as claimed in claim 1, wherein: the domain self-adaptive network model specifically comprises a feature extraction network, a feature domain self-adaptive network and a classification network which are connected in sequence.
3. The intelligent fault diagnosis method for the electromechanical device as claimed in claim 2, wherein: the input of the feature extraction network is a segmented one-dimensional original signal, and the lengths of the convolution kernels of the first two layers of the feature extraction network are more than 10;
or, the feature domain adaptive network is configured to label the output of the feature extraction network, label the feature outputs of the training sample and the test sample as 1 and 0, respectively, and then input the labeled data into two fully-connected layers to minimize the difference between the training data and the test data.
4. The intelligent fault diagnosis method for the electromechanical device as claimed in claim 2, wherein: a specific procedure for minimizing the difference between the training data and the test data is to minimize the loss function of the feature domain adaptive network, i.e. the difference between the outputs of the training data and the test data.
5. The intelligent fault diagnosis method for the electromechanical device as claimed in claim 1, wherein: the specific process of retraining the domain adaptive network model using the predictions on the test data comprises taking the prediction for each test sample as its pseudo label, recording the test data together with the pseudo labels, training the classification network by retraining the domain adaptive network model with the training data set and the pseudo-labeled test data set, and introducing sample weights into the classification loss function.
6. The intelligent fault diagnosis method for the electromechanical device as claimed in claim 5, wherein: the cross entropy loss function of the retrained classification network is as follows:
Loss_cross-entropy = -η·Σ Y·log(Ŷ) - λ·Σ Ỹ_tt·log(Ŷ_tt)

wherein Y and Ŷ are the true labels and the prediction output of the training data, Ỹ_tt and Ŷ_tt are the pseudo labels of the test data and the prediction output of the classifier, and η and λ are the loss function weights of the training data and the test data, respectively, with η ≥ λ.
7. The intelligent fault diagnosis method for the electromechanical device as claimed in claim 1, wherein: the retraining process can be used cyclically, i.e., the test data classification result predicted by the currently retrained domain adaptive network model is used as the input for the next round of training.
8. An intelligent fault diagnosis system for electromechanical equipment is characterized in that: the method comprises the following steps:
the sample data construction module is configured to acquire past fault data of the target machine to form training data; acquiring real-time acquisition data of a target machine to form test data;
a network model construction module configured to construct a domain adaptive network model, train the network model, label the output of different data, minimize the difference between training data and test data, and extract and classify the characteristics of the training data;
the retraining module is configured to obtain pseudo labels of the test data by using the trained model, and retrain the whole network model at least once by using the weighted pseudo label test data and the original training data;
and the result output module is configured to predict and classify the test data by using the retrained model to obtain a fault diagnosis result of the machine.
9. A computer-readable storage medium characterized by: in which a plurality of instructions are stored, said instructions being adapted to be loaded by a processor of a terminal device and to carry out the steps of a method for intelligent fault diagnosis for mechatronic devices according to any one of claims 1 to 7.
10. A terminal device is characterized in that: the system comprises a processor and a computer readable storage medium, wherein the processor is used for realizing instructions; the computer readable storage medium is used for storing a plurality of instructions, which are suitable for being loaded by a processor and executing the steps of the intelligent fault diagnosis method for the electromechanical device, according to any one of claims 1 to 7.
CN201911000874.6A 2019-10-21 2019-10-21 Intelligent fault diagnosis method and system for electromechanical equipment Active CN110728377B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911000874.6A CN110728377B (en) 2019-10-21 2019-10-21 Intelligent fault diagnosis method and system for electromechanical equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911000874.6A CN110728377B (en) 2019-10-21 2019-10-21 Intelligent fault diagnosis method and system for electromechanical equipment

Publications (2)

Publication Number Publication Date
CN110728377A (en) 2020-01-24
CN110728377B (en) 2020-06-09

Family

ID=69220427

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911000874.6A Active CN110728377B (en) 2019-10-21 2019-10-21 Intelligent fault diagnosis method and system for electromechanical equipment

Country Status (1)

Country Link
CN (1) CN110728377B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652320A (en) * 2020-06-10 2020-09-11 创新奇智(上海)科技有限公司 Sample classification method and device, electronic equipment and storage medium
CN111738455A (en) * 2020-06-02 2020-10-02 山东大学 Fault diagnosis method and system based on integration domain self-adaptation
CN112084909A (en) * 2020-08-28 2020-12-15 北京旋极信息技术股份有限公司 Fault diagnosis method, system and computer readable storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6766368B1 (en) * 2000-05-23 2004-07-20 Verizon Laboratories Inc. System and method for providing an internet-based correlation service
CN105738109A (en) * 2016-02-22 2016-07-06 重庆大学 Bearing fault classification diagnosis method based on sparse representation and ensemble learning
CN108446711A (en) * 2018-02-01 2018-08-24 南京邮电大学 A kind of Software Defects Predict Methods based on transfer learning
CN108492258A (en) * 2018-01-17 2018-09-04 天津大学 A kind of radar image denoising method based on generation confrontation network
US20180330205A1 (en) * 2017-05-15 2018-11-15 Siemens Aktiengesellschaft Domain adaptation and fusion using weakly supervised target-irrelevant data
CN109117860A (en) * 2018-06-27 2019-01-01 南京邮电大学 A kind of image classification method based on subspace projection and dictionary learning
CN109165566A (en) * 2018-08-01 2019-01-08 中国计量大学 A kind of recognition of face convolutional neural networks training method based on novel loss function
CN109753998A (en) * 2018-12-20 2019-05-14 山东科技大学 The fault detection method and system, computer program of network are generated based on confrontation type
CN109887047A (en) * 2018-12-28 2019-06-14 浙江工业大学 A kind of signal-image interpretation method based on production confrontation network
CN109947086A (en) * 2019-04-11 2019-06-28 清华大学 Mechanical breakdown migration diagnostic method and system based on confrontation study

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6766368B1 (en) * 2000-05-23 2004-07-20 Verizon Laboratories Inc. System and method for providing an internet-based correlation service
CN105738109A (en) * 2016-02-22 2016-07-06 重庆大学 Bearing fault classification diagnosis method based on sparse representation and ensemble learning
US20180330205A1 (en) * 2017-05-15 2018-11-15 Siemens Aktiengesellschaft Domain adaptation and fusion using weakly supervised target-irrelevant data
CN108492258A (en) * 2018-01-17 2018-09-04 天津大学 A kind of radar image denoising method based on generation confrontation network
CN108446711A (en) * 2018-02-01 2018-08-24 南京邮电大学 A kind of Software Defects Predict Methods based on transfer learning
CN109117860A (en) * 2018-06-27 2019-01-01 南京邮电大学 A kind of image classification method based on subspace projection and dictionary learning
CN109165566A (en) * 2018-08-01 2019-01-08 中国计量大学 A kind of recognition of face convolutional neural networks training method based on novel loss function
CN109753998A (en) * 2018-12-20 2019-05-14 山东科技大学 The fault detection method and system, computer program of network are generated based on confrontation type
CN109887047A (en) * 2018-12-28 2019-06-14 浙江工业大学 A kind of signal-image interpretation method based on production confrontation network
CN109947086A (en) * 2019-04-11 2019-06-28 清华大学 Mechanical breakdown migration diagnostic method and system based on confrontation study

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Long Wen, Liang Gao, Xinyu Li, "A New Deep Transfer Learning Based on Sparse Auto-Encoder for Fault Diagnosis", IEEE *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111738455A (en) * 2020-06-02 2020-10-02 山东大学 Fault diagnosis method and system based on integration domain self-adaptation
CN111738455B (en) * 2020-06-02 2021-05-11 山东大学 Fault diagnosis method and system based on integration domain self-adaptation
CN111652320A (en) * 2020-06-10 2020-09-11 创新奇智(上海)科技有限公司 Sample classification method and device, electronic equipment and storage medium
CN111652320B (en) * 2020-06-10 2022-08-09 创新奇智(上海)科技有限公司 Sample classification method and device, electronic equipment and storage medium
CN112084909A (en) * 2020-08-28 2020-12-15 北京旋极信息技术股份有限公司 Fault diagnosis method, system and computer readable storage medium

Also Published As

Publication number Publication date
CN110728377B (en) 2020-06-09

Similar Documents

Publication Publication Date Title
Zhang et al. Intelligent fault diagnosis under varying working conditions based on domain adaptive convolutional neural networks
CN109186973B (en) Mechanical fault diagnosis method of unsupervised deep learning network
CN110728377B (en) Intelligent fault diagnosis method and system for electromechanical equipment
CN112084974A (en) Multi-label rolling bearing fault diagnosis method based on meta-learning
CN111914883B (en) Spindle bearing state evaluation method and device based on deep fusion network
CN110210381A (en) A kind of adaptive one-dimensional convolutional neural networks intelligent failure diagnosis method of domain separation
CN113865868B (en) Rolling bearing fault diagnosis method based on time-frequency domain expression
CN114295377B (en) CNN-LSTM bearing fault diagnosis method based on genetic algorithm
CN114358124B (en) New fault diagnosis method for rotary machinery based on deep countermeasure convolutional neural network
CN110608884B (en) Rolling bearing state diagnosis method based on self-attention neural network
CN110455512B (en) Rotary mechanical multi-integration fault diagnosis method based on depth self-encoder DAE
Tian et al. Deep learning-based open set multi-source domain adaptation with complementary transferability metric for mechanical fault diagnosis
CN116718377A (en) Bearing fault diagnosis method based on wavelet transformation and depth residual error attention mechanism
CN114608826A (en) Training method, diagnosis method and diagnosis device of bearing fault diagnosis model
CN116894187A (en) Gear box fault diagnosis method based on deep migration learning
Fadli et al. Steel surface defect detection using deep learning
CN115290326A (en) Rolling bearing fault intelligent diagnosis method
CN114429152A (en) Rolling bearing fault diagnosis method based on dynamic index antagonism self-adaption
Deng et al. Remaining useful life prediction of machinery: A new multiscale temporal convolutional network framework
CN113505639B (en) Rotary machine multi-parameter health state assessment method based on TPE-XGBoost
Wang et al. One-stage self-supervised momentum contrastive learning network for open-set cross-domain fault diagnosis
Ren et al. Domain fuzzy generalization networks for semi-supervised intelligent fault diagnosis under unseen working conditions
CN116977708B (en) Bearing intelligent diagnosis method and system based on self-adaptive aggregation visual view
CN113239610A (en) Domain self-adaptive rolling bearing fault diagnosis method based on Wasserstein distance
Techane et al. Rotating machinery prognostics and application of machine learning algorithms: Use of deep learning with similarity index measure for health status prediction

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant