CN115798516A - Migratable end-to-end acoustic signal diagnosis method and system - Google Patents

Migratable end-to-end acoustic signal diagnosis method and system

Info

Publication number
CN115798516A
Authority
CN
China
Prior art keywords
features, feature, fusion, scale, acoustic signal
Prior art date
Legal status
Granted
Application number
CN202310070166.XA
Other languages
Chinese (zh)
Other versions
CN115798516B (en)
Inventor
余永升
章林柯
Current Assignee
Haina Kede Hubei Technology Co ltd
Original Assignee
Haina Kede Hubei Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Haina Kede Hubei Technology Co ltd
Priority to CN202310070166.XA
Publication of CN115798516A
Application granted
Publication of CN115798516B
Legal status: Active

Landscapes

  • Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)

Abstract

The invention provides a migratable end-to-end acoustic signal diagnosis method, which comprises the following steps. S1: construct an end-to-end fault diagnosis model comprising a multi-scale feature extraction unit, a feature fusion unit based on an attention mechanism, and a feature extraction unit based on a residual structure. S2: acquire the acoustic signal generated while the equipment is in a fault state, and extract multi-scale features of the acoustic signal with the multi-scale feature extraction unit. S3: input the multi-scale features into the feature fusion unit based on the attention mechanism, and perform weighted fusion on the multi-scale features to obtain fused features. S4: input the fused features into the feature extraction unit based on the residual structure, and further extract features from them to obtain a fault prediction result. By extracting and weight-fusing multi-scale features, the method can effectively analyze acoustic signals that carry features at many scales in complex environments.

Description

Migratable end-to-end acoustic signal diagnosis method and system
Technical Field
The invention relates to the field of acoustic signal diagnosis, and in particular to a migratable end-to-end acoustic signal diagnosis method and system.
Background
At present, existing fault diagnosis methods based on acoustic signals fall mainly into model-driven and knowledge-driven approaches.
Model-driven methods realize fault diagnosis by combining expert experience models with signal-analysis knowledge. They generally derive fault characteristic parameters from the structural parameters and empirical fault model of the equipment, analyze the feedback signals of the equipment, and judge its running state. Their advantage is that empirical fault models of common parts are mature, so the fault type can be identified by frequency analysis of the vibration and sound signals and matching against characteristic fault frequencies; their disadvantage is that non-standard parts lack empirical fault models, making accurate fault diagnosis difficult.
Knowledge-driven fault diagnosis methods combine data processing with machine learning algorithms and mainly include machine-learning-based and deep-learning-based methods. Compared with traditional fault diagnosis methods, machine-learning-based methods have strong fault identification capability, high accuracy, and better robustness to noise, and they readily support continuous, real-time condition monitoring of equipment.
The diagnostic accuracy of machine-learning-based methods depends mainly on the quality of the feature representation and the classification capability of the fault classifier. An accurate feature representation improves the separability of data samples in the feature space, whereas a weak feature representation causes samples of different classes to overlap in the feature space, degrading classification accuracy.
Deep-learning-based fault diagnosis methods can compensate for the weak feature representation capability of machine-learning methods. A deep learning model can extract richer feature representations from the time-domain and frequency-domain features of the equipment's acoustic signal. Compared with machine-learning methods, this avoids inaccurate diagnoses caused by ambiguous feature representations or improper feature selection and improves the adaptability of the fault diagnosis algorithm. Deep-learning-based methods, however, still need to extract time-frequency features from the collected sound signals before a neural network learns and classifies those features.
In general, conventional acoustic-signal fault diagnosis methods suffer from high detection cost, poor stability, and poor real-time performance.
The above is provided only to assist understanding of the technical solution of the present invention and does not constitute an admission that it is prior art.
Disclosure of Invention
In order to solve the above technical problem, the present invention provides a migratable end-to-end acoustic signal diagnosis method, comprising:
S1: constructing an end-to-end fault diagnosis model, wherein the end-to-end fault diagnosis model comprises: a multi-scale feature extraction unit, a feature fusion unit based on an attention mechanism, and a feature extraction unit based on a residual structure;
S2: acquiring the acoustic signal generated while the equipment is in a fault state, and extracting multi-scale features of the acoustic signal through the multi-scale feature extraction unit;
S3: inputting the multi-scale features into the feature fusion unit based on the attention mechanism, and performing weighted fusion on the multi-scale features to obtain fused features;
S4: inputting the fused features into the feature extraction unit based on the residual structure, and further extracting features from the fused features to obtain a fault prediction result.
Preferably, step S2 specifically comprises:
performing convolution feature extraction on the acoustic signal with one-dimensional convolution layers of different scales in the multi-scale feature extraction unit to obtain the multi-scale features, where the calculation formula is:
$$M^{(s)} = \sum_{i=1}^{N} w(n) \ast F(i) + b(n)$$
where $F$ is the acoustic signal, $M^{(s)}$ is the multi-scale feature, $s$ is the scale index, $w(n)$ is the weight of the $n$-th convolution kernel, $n$ is the index of the convolution kernel, $F(i)$ is the portion of the acoustic signal used in the $i$-th convolution step, $i$ is the convolution step index, $N$ is the total number of convolution steps, and $b(n)$ is the bias of the $n$-th convolution kernel.
Preferably, step S3 specifically comprises:
S31: the feature fusion unit based on the attention mechanism comprises a channel attention unit and a spatial attention unit; the multi-scale features are input into the channel attention unit, and the channel attention feature is obtained by calculation with the formula:
$$M_c = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(M^{(s)})) + \mathrm{MLP}(\mathrm{MaxPool}(M^{(s)}))\big) = \sigma\big(W_1(W_0(M^{(s)}_{\mathrm{avg}})) + W_1(W_0(M^{(s)}_{\mathrm{max}}))\big)$$
where $M_c$ is the channel attention feature, $\sigma$ is the sigmoid function, $M^{(s)}$ is the multi-scale feature, $M^{(s)}_{\mathrm{avg}}$ is the mean of the multi-scale feature, $M^{(s)}_{\mathrm{max}}$ is the maximum of the multi-scale feature, $\mathrm{MLP}(\cdot)$ is a two-layer neural network, $\mathrm{AvgPool}$ denotes mean pooling, $\mathrm{MaxPool}$ denotes maximum pooling, and $W_0$ and $W_1$ are the weights of the shared fully connected layers, shared between the two pooling branches;
S32: the channel attention feature is input into the spatial attention unit, and the spatial attention feature is obtained by calculation with the formula:
$$M_s = \sigma\big(f^{7\times 7}([\mathrm{AvgPool}(M_c);\ \mathrm{MaxPool}(M_c)])\big) = \sigma\big(f^{7\times 7}([M_{c,\mathrm{avg}};\ M_{c,\mathrm{max}}])\big)$$
where $M_s$ is the spatial attention feature, $M_{c,\mathrm{avg}}$ is the mean of the channel attention feature, $M_{c,\mathrm{max}}$ is the maximum of the channel attention feature, and $f^{7\times 7}$ denotes a convolution operation of size 7 × 7;
S33: the spatial attention feature and the channel attention feature are weight-fused through the CBAM attention mechanism to obtain the fused features, with the formula:
$$M_{\mathrm{fuse}} = M_s \otimes \big(M_c \otimes M^{(s)}(F)\big)$$
where $M_s$ is the weight matrix output by the spatial attention unit, $M_c$ is the weight coefficient of the channel attention unit, and $M^{(s)}(F)$ is the multi-scale feature.
Preferably, the feature extraction unit based on the residual structure comprises 7 residual blocks, a normalization layer, and a fully connected layer;
each residual block uses convolution kernels of size 3 × 1 and a LeakyReLU activation function;
the size of the fully connected layer is 256 × 2.
Preferably, step S4 specifically comprises:
S41: inputting the fused features into the feature extraction unit based on the residual structure, and performing feature extraction sequentially through the 7 residual blocks to obtain diagnosis features;
S42: inputting the diagnosis features into the normalization layer for standardization to obtain standardized diagnosis features;
S43: inputting the standardized diagnosis features into the fully connected layer to estimate the fault probability, and outputting the fault prediction result.
A migratable end-to-end acoustic signal diagnosis system, comprising:
a diagnosis model building module, configured to construct an end-to-end fault diagnosis model comprising: a multi-scale feature extraction unit, a feature fusion unit based on an attention mechanism, and a feature extraction unit based on a residual structure;
a feature extraction module, configured to acquire the acoustic signal generated while the equipment is in a fault state and extract multi-scale features of the acoustic signal through the multi-scale feature extraction unit;
a feature fusion module, configured to input the multi-scale features into the feature fusion unit based on the attention mechanism and perform weighted fusion on the multi-scale features to obtain fused features;
and a fault prediction result acquisition module, configured to input the fused features into the feature extraction unit based on the residual structure and further extract features from the fused features to obtain a fault prediction result.
The invention has the following beneficial effects:
1. Because the characteristics of the acoustic signal change when the equipment is in a fault state, the acoustic signal is collected in real time in a non-contact manner, so fault judgment based on the acoustic signal remains non-contact;
2. By extracting and weight-fusing multi-scale features, acoustic signals that carry features at many scales in complex environments can be analyzed effectively, overcoming the poor real-time performance of existing fault diagnosis methods and their inability to balance time and frequency resolution.
Drawings
FIG. 1 is a flow chart of a method according to an embodiment of the present invention;
FIG. 2 is a block diagram of a multi-scale feature extraction unit;
FIG. 3 is a block diagram of a feature fusion unit based on an attention mechanism;
FIG. 4 is a block diagram of a feature extraction unit based on a residual structure;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
Referring to fig. 1, the present invention provides a migratable end-to-end acoustic signal diagnosis method. The overall idea is as follows: because the characteristics of the acoustic signal change when the equipment is in a fault state, the acoustic signal is collected in real time in a non-contact manner; filters of different sizes extract features with different time-frequency trade-offs; finally, feature fusion and further feature extraction are used to predict whether a fault exists, achieving end-to-end real-time detection of the fault type.
the method comprises the following steps:
S1: constructing an end-to-end fault diagnosis model, wherein the end-to-end fault diagnosis model comprises: a multi-scale feature extraction unit, a feature fusion unit based on an attention mechanism, and a feature extraction unit based on a residual structure;
S2: acquiring the acoustic signal generated while the equipment is in a fault state, and extracting multi-scale features of the acoustic signal through the multi-scale feature extraction unit;
S3: inputting the multi-scale features into the feature fusion unit based on the attention mechanism, and performing weighted fusion on the multi-scale features to obtain fused features;
S4: inputting the fused features into the feature extraction unit based on the residual structure, and further extracting features from the fused features to obtain a fault prediction result.
Specifically, the training procedure of the end-to-end fault diagnosis model includes:
collecting acoustic signals generated while the equipment operates normally and acoustic signals generated under different fault states, taking each acoustic signal and its corresponding fault state as a training sample to build a training data set, and splitting the data set into a training set and a test set;
building the end-to-end fault diagnosis model for detecting the current fault condition of the equipment; training and testing the end-to-end fault diagnosis model with the training set and the test set respectively, and, once training and testing are complete, taking the resulting neural network model as the trained end-to-end fault diagnosis model.
By building the training and test sets from acoustic signals collected while the fault state of the equipment is known, and training the end-to-end fault diagnosis model on them, the trained model can accurately predict the corresponding fault category from the sound characteristics of an acoustic signal, as illustrated by the sketch below.
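The following is a minimal sketch of this training procedure, assuming a PyTorch implementation and a dataset of (signal, fault label) pairs; the 80/20 split, batch size of 32, Adam optimizer, and epoch count are illustrative assumptions, not values specified by the patent.

```python
import torch
from torch.utils.data import DataLoader, random_split

def train_model(model, dataset, epochs=50, lr=1e-3, device="cpu"):
    # Split the labelled acoustic signals into a training set and a test set.
    n_train = int(0.8 * len(dataset))
    train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
    test_loader = DataLoader(test_set, batch_size=32)

    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()

    for _ in range(epochs):
        model.train()
        for signals, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(signals.to(device)), labels.to(device))
            loss.backward()
            optimizer.step()

    # Evaluate on the held-out test set before accepting the trained model.
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for signals, labels in test_loader:
            preds = model(signals.to(device)).argmax(dim=1)
            correct += (preds == labels.to(device)).sum().item()
            total += labels.numel()
    return model, correct / max(total, 1)
```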
Further, the training procedure of the end-to-end fault diagnosis model also includes: performing transfer learning on the end-to-end fault diagnosis model when the application scenario changes.
After the application scenario changes, transfer learning adapts the model to the new conditions, so the model is continuously improved, covers different application scenarios, and the fault diagnosis error is reduced; a sketch of one possible fine-tuning scheme follows.
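A minimal sketch of one possible transfer-learning scheme, assuming the model exposes submodules whose parameter names start with the hypothetical prefixes "multi_scale" and "fusion"; which layers to freeze, and the learning rate, are assumed design choices rather than details given in the patent.

```python
import torch
from torch.utils.data import DataLoader

def transfer_to_new_scene(model, new_scene_dataset, epochs=10, lr=1e-4):
    # Freeze the multi-scale extractor and the attention fusion unit and
    # fine-tune the remaining layers on data from the new application scene.
    for name, param in model.named_parameters():
        if name.startswith(("multi_scale", "fusion")):
            param.requires_grad = False

    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()
    loader = DataLoader(new_scene_dataset, batch_size=32, shuffle=True)

    model.train()
    for _ in range(epochs):
        for signals, labels in loader:
            optimizer.zero_grad()
            criterion(model(signals), labels).backward()
            optimizer.step()
    return model
```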
In this embodiment, the structure of the multi-scale feature extraction unit is shown in fig. 2. In step S2, convolution features of the original time-domain acoustic signal are extracted with one-dimensional convolution layers of different sizes; convolution layers of different sizes correspond to filters of different sizes, where features extracted by small filters have higher time resolution and lower frequency resolution, and features extracted by large filters have higher frequency resolution and lower time resolution.
Multi-scale feature extraction inherits the local time-frequency analysis idea of the short-time Fourier transform while avoiding the fixed window that prevents it from balancing time and frequency resolution. It therefore balances time resolution and frequency resolution better and captures information at different time-frequency trade-offs.
Step S2 specifically comprises:
performing convolution feature extraction on the acoustic signal with the one-dimensional convolution layers of different scales in the multi-scale feature extraction unit to obtain the multi-scale features, where the calculation formula is:
$$M^{(s)} = \sum_{i=1}^{N} w(n) \ast F(i) + b(n)$$
where $F$ is the acoustic signal, $M^{(s)}$ is the multi-scale feature, $s$ is the scale index, $w(n)$ is the weight of the $n$-th convolution kernel, $n$ is the index of the convolution kernel, $F(i)$ is the portion of the acoustic signal used in the $i$-th convolution step, $i$ is the convolution step index, $N$ is the total number of convolution steps, and $b(n)$ is the bias of the $n$-th convolution kernel. A sketch of such a unit is given below.
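A minimal sketch of the multi-scale feature extraction unit, assuming PyTorch; the kernel sizes (16, 64, 256) and the number of output channels are illustrative choices, not values given in the patent.

```python
import torch
import torch.nn as nn

class MultiScaleExtractor(nn.Module):
    """One 1-D convolution branch per scale: small kernels favour time
    resolution, large kernels favour frequency resolution."""

    def __init__(self, in_channels=1, out_channels=16, kernel_sizes=(16, 64, 256)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_channels, out_channels, k, padding=k // 2)
            for k in kernel_sizes
        )

    def forward(self, x):                        # x: (batch, 1, signal_length)
        feats = [branch(x) for branch in self.branches]
        # Crop every branch to a common length and stack the scales as channels.
        min_len = min(f.shape[-1] for f in feats)
        return torch.cat([f[..., :min_len] for f in feats], dim=1)
```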
In this embodiment, the structure of the feature fusion unit based on the attention mechanism used in step S3 is shown in fig. 3. The feature fusion unit performs weighted fusion on features of different scales and changes the original feature distribution: it learns a weight distribution over the features of different scales, applies those weights to the original features, enhances useful features, and suppresses useless ones. Like a band-pass filter, it amplifies useful frequency-band components and attenuates useless ones.
Step S3 specifically comprises:
S31: the feature fusion unit based on the attention mechanism comprises a channel attention unit and a spatial attention unit. In the channel attention unit, the input feature map is reduced by maximum pooling and average pooling to two feature maps of size 1 × C, which are fed through a shared fully connected layer; the two outputs are added, passed through a sigmoid activation, and used to weight the original feature map, as sketched below.
The multi-scale features are input into the channel attention unit, and the channel attention feature is obtained by calculation with the formula:
$$M_c = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(M^{(s)})) + \mathrm{MLP}(\mathrm{MaxPool}(M^{(s)}))\big) = \sigma\big(W_1(W_0(M^{(s)}_{\mathrm{avg}})) + W_1(W_0(M^{(s)}_{\mathrm{max}}))\big)$$
where $M_c$ is the channel attention feature, $\sigma$ is the sigmoid function, $M^{(s)}$ is the multi-scale feature, $M^{(s)}_{\mathrm{avg}}$ is the mean of the multi-scale feature, $M^{(s)}_{\mathrm{max}}$ is the maximum of the multi-scale feature, $\mathrm{MLP}(\cdot)$ is a two-layer neural network, $\mathrm{AvgPool}$ denotes mean pooling, $\mathrm{MaxPool}$ denotes maximum pooling, and $W_0$ and $W_1$ are the weights of the shared fully connected layers, shared between the two pooling branches.
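A minimal sketch of the channel attention unit in the standard CBAM formulation, assuming PyTorch; the reduction ratio of 16 and the ReLU between the two shared linear layers W0 and W1 are common CBAM choices assumed here, not values stated in the patent.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Mean and max pooling over the signal axis, a shared two-layer MLP,
    sigmoid, and re-weighting of the input feature map."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(                        # shared W0 / W1
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):                                # x: (batch, channels, length)
        avg = self.mlp(x.mean(dim=-1))                   # AvgPool branch
        mx = self.mlp(x.amax(dim=-1))                    # MaxPool branch
        weights = torch.sigmoid(avg + mx)                # channel weights M_c
        return weights.unsqueeze(-1) * x                 # weight the input features
```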
S32: the spatial attention unit focuses on which positions of the feature map are more important. Taking the feature map output by the channel attention unit as input, it performs average pooling and maximum pooling along the channel dimension, concatenates and convolves the results down to a single channel, and applies a sigmoid; the resulting map weights the input feature map to give the output of the spatial attention unit (see the sketch below).
The channel attention feature is input into the spatial attention unit, and the spatial attention feature is obtained by calculation with the formula:
$$M_s = \sigma\big(f^{7\times 7}([\mathrm{AvgPool}(M_c);\ \mathrm{MaxPool}(M_c)])\big) = \sigma\big(f^{7\times 7}([M_{c,\mathrm{avg}};\ M_{c,\mathrm{max}}])\big)$$
where $M_s$ is the spatial attention feature, $M_{c,\mathrm{avg}}$ is the mean of the channel attention feature, $M_{c,\mathrm{max}}$ is the maximum of the channel attention feature, and $f^{7\times 7}$ denotes a convolution operation of size 7 × 7.
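A minimal sketch of the spatial attention unit, assuming PyTorch; because the feature map is one-dimensional here, the 7 × 7 convolution of the patent is assumed to become a one-dimensional convolution with kernel size 7.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Mean and max over the channel axis, a convolution down to one channel,
    sigmoid, and re-weighting along the signal axis."""

    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv1d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                                # x: (batch, channels, length)
        avg = x.mean(dim=1, keepdim=True)                # mean over channels
        mx = x.amax(dim=1, keepdim=True)                 # max over channels
        weights = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return weights * x                               # M_s applied to the input
```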
S33: the spatial attention feature and the channel attention feature are weight-fused through the CBAM attention mechanism to obtain the fused features, with the formula:
$$M_{\mathrm{fuse}} = M_s \otimes \big(M_c \otimes M^{(s)}(F)\big)$$
where $M_s$ is the weight matrix output by the spatial attention unit, $M_c$ is the weight coefficient of the channel attention unit, and $M^{(s)}(F)$ is the multi-scale feature. A sketch combining the two attention units is given below.
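A minimal sketch of the complete fusion unit, chaining the ChannelAttention and SpatialAttention sketches above in the usual CBAM order (channel weighting first, then spatial weighting), which corresponds to the fusion formula M_s ⊗ (M_c ⊗ M^(s)(F)).

```python
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Weighted fusion of the multi-scale features: channel attention
    followed by spatial attention, reusing the sketches defined above."""

    def __init__(self, channels):
        super().__init__()
        self.channel_att = ChannelAttention(channels)
        self.spatial_att = SpatialAttention()

    def forward(self, multi_scale_features):
        fused = self.channel_att(multi_scale_features)   # M_c ⊗ M^(s)(F)
        fused = self.spatial_att(fused)                  # M_s ⊗ (M_c ⊗ M^(s)(F))
        return fused
```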
In this embodiment, the structure of the feature extraction unit based on the residual structure is shown in fig. 4. This unit further extracts features from, and reduces the dimensionality of, the fused features produced by the preceding operations, and finally outputs the fault prediction result through a fully connected layer.
The feature extraction unit based on the residual structure comprises 7 residual blocks, a normalization layer, and a fully connected layer;
each residual block uses convolution kernels of size 3 × 1 and a LeakyReLU activation function;
the size of the fully connected layer is 256 × 2.
In this embodiment, step S4 specifically comprises:
S41: inputting the fused features into the feature extraction unit based on the residual structure, and performing feature extraction sequentially through the 7 residual blocks to obtain diagnosis features;
S42: inputting the diagnosis features into the normalization layer for standardization to obtain standardized diagnosis features;
S43: inputting the standardized diagnosis features into the fully connected layer to estimate the fault probability, and outputting the fault prediction result. A sketch of this unit follows.
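A minimal sketch of the residual feature extraction unit, assuming PyTorch; the 256 input channels match the 256 × 2 fully connected layer, while the adaptive average pooling, the choice of LayerNorm for the normalization layer, and the LeakyReLU slope are assumptions not fixed by the patent.

```python
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    """A residual block with 3 x 1 convolutions and LeakyReLU activation."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(0.1),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
        )
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        return self.act(x + self.body(x))                # identity shortcut

class ResidualDiagnosisHead(nn.Module):
    """Seven residual blocks, a normalization layer, and a 256 x 2 fully
    connected layer that outputs the fault logits."""

    def __init__(self, channels=256, num_classes=2):
        super().__init__()
        self.blocks = nn.Sequential(*[ResBlock1d(channels) for _ in range(7)])
        self.pool = nn.AdaptiveAvgPool1d(1)              # collapse the signal axis
        self.norm = nn.LayerNorm(channels)               # normalization layer (assumed LayerNorm)
        self.fc = nn.Linear(channels, num_classes)       # the 256 x 2 fully connected layer

    def forward(self, fused):                            # fused: (batch, 256, length)
        feats = self.pool(self.blocks(fused)).squeeze(-1)  # S41: diagnosis features
        feats = self.norm(feats)                            # S42: standardized features
        return self.fc(feats)                               # S43: fault prediction logits
```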
The invention further provides a migratable end-to-end acoustic signal diagnosis system, comprising:
a diagnosis model building module, configured to construct an end-to-end fault diagnosis model comprising: a multi-scale feature extraction unit, a feature fusion unit based on an attention mechanism, and a feature extraction unit based on a residual structure;
a feature extraction module, configured to acquire the acoustic signal generated while the equipment is in a fault state and extract multi-scale features of the acoustic signal through the multi-scale feature extraction unit;
a feature fusion module, configured to input the multi-scale features into the feature fusion unit based on the attention mechanism and perform weighted fusion on the multi-scale features to obtain fused features;
and a fault prediction result acquisition module, configured to input the fused features into the feature extraction unit based on the residual structure and further extract features from the fused features to obtain the fault prediction result.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or system that comprises the element.
The above serial numbers of the embodiments of the present invention are for description only and do not indicate the merits of the embodiments. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not denote any order; these words may be interpreted as names.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (6)

1. A migratable end-to-end acoustic signal diagnosis method, comprising:
S1: constructing an end-to-end fault diagnosis model, wherein the end-to-end fault diagnosis model comprises: a multi-scale feature extraction unit, a feature fusion unit based on an attention mechanism, and a feature extraction unit based on a residual structure;
S2: acquiring the acoustic signal generated while the equipment is in a fault state, and extracting multi-scale features of the acoustic signal through the multi-scale feature extraction unit;
S3: inputting the multi-scale features into the feature fusion unit based on the attention mechanism, and performing weighted fusion on the multi-scale features to obtain fused features;
S4: inputting the fused features into the feature extraction unit based on the residual structure, and further extracting features from the fused features to obtain a fault prediction result.
2. The migratable end-to-end acoustic signal diagnosis method according to claim 1, wherein step S2 specifically comprises:
performing convolution feature extraction on the acoustic signal with one-dimensional convolution layers of different scales in the multi-scale feature extraction unit to obtain the multi-scale features, where the calculation formula is:
$$M^{(s)} = \sum_{i=1}^{N} w(n) \ast F(i) + b(n)$$
where $F$ is the acoustic signal, $M^{(s)}$ is the multi-scale feature, $s$ is the scale index, $w(n)$ is the weight of the $n$-th convolution kernel, $n$ is the index of the convolution kernel, $F(i)$ is the portion of the acoustic signal used in the $i$-th convolution step, $i$ is the convolution step index, $N$ is the total number of convolution steps, and $b(n)$ is the bias of the $n$-th convolution kernel.
3. The migratable end-to-end acoustic signal diagnosis method according to claim 1, wherein step S3 specifically comprises:
S31: the feature fusion unit based on the attention mechanism comprises a channel attention unit and a spatial attention unit; the multi-scale features are input into the channel attention unit, and the channel attention feature is obtained by calculation with the formula:
$$M_c = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(M^{(s)})) + \mathrm{MLP}(\mathrm{MaxPool}(M^{(s)}))\big) = \sigma\big(W_1(W_0(M^{(s)}_{\mathrm{avg}})) + W_1(W_0(M^{(s)}_{\mathrm{max}}))\big)$$
where $M_c$ is the channel attention feature, $\sigma$ is the sigmoid function, $M^{(s)}$ is the multi-scale feature, $M^{(s)}_{\mathrm{avg}}$ is the mean of the multi-scale feature, $M^{(s)}_{\mathrm{max}}$ is the maximum of the multi-scale feature, $\mathrm{MLP}(\cdot)$ is a two-layer neural network, $\mathrm{AvgPool}$ denotes mean pooling, $\mathrm{MaxPool}$ denotes maximum pooling, and $W_0$ and $W_1$ are the weights of the shared fully connected layers, shared between the two pooling branches;
S32: the channel attention feature is input into the spatial attention unit, and the spatial attention feature is obtained by calculation with the formula:
$$M_s = \sigma\big(f^{7\times 7}([\mathrm{AvgPool}(M_c);\ \mathrm{MaxPool}(M_c)])\big) = \sigma\big(f^{7\times 7}([M_{c,\mathrm{avg}};\ M_{c,\mathrm{max}}])\big)$$
where $M_s$ is the spatial attention feature, $M_{c,\mathrm{avg}}$ is the mean of the channel attention feature, $M_{c,\mathrm{max}}$ is the maximum of the channel attention feature, and $f^{7\times 7}$ denotes a convolution operation of size 7 × 7;
S33: the spatial attention feature and the channel attention feature are weight-fused through the CBAM attention mechanism to obtain the fused features, with the formula:
$$M_{\mathrm{fuse}} = M_s \otimes \big(M_c \otimes M^{(s)}(F)\big)$$
where $M_s$ is the weight matrix output by the spatial attention unit, $M_c$ is the weight coefficient of the channel attention unit, and $M^{(s)}(F)$ is the multi-scale feature.
4. The migratable end-to-end acoustic signal diagnosis method according to claim 1, wherein the feature extraction unit based on the residual structure comprises 7 residual blocks, a normalization layer, and a fully connected layer;
each residual block uses convolution kernels of size 3 × 1 and a LeakyReLU activation function;
the size of the fully connected layer is 256 × 2.
5. The migratable end-to-end acoustic signal diagnosis method according to claim 4, wherein step S4 specifically comprises:
S41: inputting the fused features into the feature extraction unit based on the residual structure, and performing feature extraction sequentially through the 7 residual blocks to obtain diagnosis features;
S42: inputting the diagnosis features into the normalization layer for standardization to obtain standardized diagnosis features;
S43: inputting the standardized diagnosis features into the fully connected layer to estimate the fault probability, and outputting the fault prediction result.
6. A migratable end-to-end acoustic signal diagnosis system, comprising:
a diagnosis model building module, configured to construct an end-to-end fault diagnosis model comprising: a multi-scale feature extraction unit, a feature fusion unit based on an attention mechanism, and a feature extraction unit based on a residual structure;
a feature extraction module, configured to acquire the acoustic signal generated while the equipment is in a fault state and extract multi-scale features of the acoustic signal through the multi-scale feature extraction unit;
a feature fusion module, configured to input the multi-scale features into the feature fusion unit based on the attention mechanism and perform weighted fusion on the multi-scale features to obtain fused features;
and a fault prediction result acquisition module, configured to input the fused features into the feature extraction unit based on the residual structure and further extract features from the fused features to obtain a fault prediction result.
CN202310070166.XA 2023-02-07 2023-02-07 Migratable end-to-end acoustic signal diagnosis method and system Active CN115798516B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310070166.XA CN115798516B (en) 2023-02-07 2023-02-07 Migratable end-to-end acoustic signal diagnosis method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310070166.XA CN115798516B (en) 2023-02-07 2023-02-07 Migratable end-to-end acoustic signal diagnosis method and system

Publications (2)

Publication Number Publication Date
CN115798516A (en) 2023-03-14
CN115798516B (en) 2023-04-18

Family

ID=85430164

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310070166.XA Active CN115798516B (en) 2023-02-07 2023-02-07 Migratable end-to-end acoustic signal diagnosis method and system

Country Status (1)

Country Link
CN (1) CN115798516B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116417013A (en) * 2023-06-09 2023-07-11 中国海洋大学 Underwater propeller fault diagnosis method and system
CN116645978A (en) * 2023-06-20 2023-08-25 方心科技股份有限公司 Electric power fault sound class increment learning system and method based on super-computing parallel environment
CN117292716A (en) * 2023-11-24 2023-12-26 国网山东省电力公司济南供电公司 Transformer fault diagnosis method and system based on voiceprint and infrared feature fusion

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112284736A (en) * 2020-10-23 2021-01-29 天津大学 Convolutional neural network fault diagnosis method based on multi-channel attention module
CN113158722A (en) * 2020-12-24 2021-07-23 哈尔滨理工大学 Rotary machine fault diagnosis method based on multi-scale deep neural network
CN113281029A (en) * 2021-06-09 2021-08-20 重庆大学 Rotating machinery fault diagnosis method and system based on multi-scale network structure
CN113569990A (en) * 2021-08-25 2021-10-29 浙江工业大学 Performance equipment fault diagnosis model construction method oriented to strong noise interference environment
CN113673346A (en) * 2021-07-20 2021-11-19 中国矿业大学 Motor vibration data processing and state recognition method based on multi-scale SE-Resnet
CN113822139A (en) * 2021-07-27 2021-12-21 河北工业大学 Equipment fault diagnosis method based on improved 1DCNN-BilSTM
CN114492642A (en) * 2022-01-27 2022-05-13 洛阳中重自动化工程有限责任公司 Mechanical fault online diagnosis method for multi-scale element depth residual shrinkage network
CN115587299A (en) * 2022-09-09 2023-01-10 中国石油大学(华东) Transferable multi-scale rotating machine fault diagnosis method and system
US20230025826A1 (en) * 2021-07-12 2023-01-26 Servicenow, Inc. Anomaly Detection Using Graph Neural Networks

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112284736A (en) * 2020-10-23 2021-01-29 天津大学 Convolutional neural network fault diagnosis method based on multi-channel attention module
CN113158722A (en) * 2020-12-24 2021-07-23 哈尔滨理工大学 Rotary machine fault diagnosis method based on multi-scale deep neural network
CN113281029A (en) * 2021-06-09 2021-08-20 重庆大学 Rotating machinery fault diagnosis method and system based on multi-scale network structure
US20230025826A1 (en) * 2021-07-12 2023-01-26 Servicenow, Inc. Anomaly Detection Using Graph Neural Networks
CN113673346A (en) * 2021-07-20 2021-11-19 中国矿业大学 Motor vibration data processing and state recognition method based on multi-scale SE-Resnet
CN113822139A (en) * 2021-07-27 2021-12-21 河北工业大学 Equipment fault diagnosis method based on improved 1DCNN-BilSTM
CN113569990A (en) * 2021-08-25 2021-10-29 浙江工业大学 Performance equipment fault diagnosis model construction method oriented to strong noise interference environment
CN114492642A (en) * 2022-01-27 2022-05-13 洛阳中重自动化工程有限责任公司 Mechanical fault online diagnosis method for multi-scale element depth residual shrinkage network
CN115587299A (en) * 2022-09-09 2023-01-10 中国石油大学(华东) Transferable multi-scale rotating machine fault diagnosis method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李秋婷: "Research on rolling bearing fault diagnosis methods based on the attention mechanism" *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116417013A (en) * 2023-06-09 2023-07-11 中国海洋大学 Underwater propeller fault diagnosis method and system
CN116417013B (en) * 2023-06-09 2023-08-25 中国海洋大学 Underwater propeller fault diagnosis method and system
CN116645978A (en) * 2023-06-20 2023-08-25 方心科技股份有限公司 Electric power fault sound class increment learning system and method based on super-computing parallel environment
CN116645978B (en) * 2023-06-20 2024-02-02 方心科技股份有限公司 Electric power fault sound class increment learning system and method based on super-computing parallel environment
CN117292716A (en) * 2023-11-24 2023-12-26 国网山东省电力公司济南供电公司 Transformer fault diagnosis method and system based on voiceprint and infrared feature fusion
CN117292716B (en) * 2023-11-24 2024-02-06 国网山东省电力公司济南供电公司 Transformer fault diagnosis method and system based on voiceprint and infrared feature fusion

Also Published As

Publication number Publication date
CN115798516B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN115798516B (en) Migratable end-to-end acoustic signal diagnosis method and system
WO2017024692A1 (en) Method of simulating analog circuit diagnostic fault using single measurement node
CN110334764A (en) Rotating machinery intelligent failure diagnosis method based on integrated depth self-encoding encoder
CN113485302B (en) Vehicle operation process fault diagnosis method and system based on multivariate time sequence data
CN104680541B (en) Remote Sensing Image Quality evaluation method based on phase equalization
EP3767551A1 (en) Inspection system, image recognition system, recognition system, discriminator generation system, and learning data generation device
CN114509266B (en) Bearing health monitoring method based on fault feature fusion
CN115659583A (en) Point switch fault diagnosis method
CN112668526A (en) Bolt group loosening positioning monitoring method based on deep learning and piezoelectric active sensing
CN112364706A (en) Small sample bearing fault diagnosis method based on class imbalance
CN116741148A (en) Voice recognition system based on digital twinning
CN112529177A (en) Vehicle collision detection method and device
Whitehill et al. Whosecough: In-the-wild cougher verification using multitask learning
CN105823634A (en) Bearing damage identification method based on time frequency relevance vector convolution Boltzmann machine
CN106682604B (en) Blurred image detection method based on deep learning
CN116524273A (en) Method, device, equipment and storage medium for detecting draft tube of power station
CN113990303B (en) Environmental sound identification method based on multi-resolution cavity depth separable convolution network
CN115184054B (en) Mechanical equipment semi-supervised fault detection and analysis method, device, terminal and medium
CN115758237A (en) Bearing fault classification method and system based on intelligent inspection robot
CN112069621B (en) Method for predicting residual service life of rolling bearing based on linear reliability index
CN113624466A (en) Steam turbine rotor fault diagnosis method, device, equipment and storage medium
CN113409213A (en) Plunger pump fault signal time-frequency graph noise reduction enhancement method and system
CN114357855A (en) Structural damage identification method and device based on parallel convolution neural network
Huang et al. An accurate prediction algorithm of RUL for bearings: time-frequency analysis based on MRCNN
CN112259126B (en) Robot and method for assisting in identifying autism voice features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant