CN114049901A - Signal extraction and classification method based on sound - Google Patents

Signal extraction and classification method based on sound

Info

Publication number
CN114049901A
CN114049901A (application CN202111325912.2A)
Authority
CN
China
Prior art keywords
motor
sound
fault
rotor
sound signals
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111325912.2A
Other languages
Chinese (zh)
Inventor
李娟
张玉洁
刘馨蔚
荣丽红
宋晓科
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Agricultural University
Original Assignee
Qingdao Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Agricultural University filed Critical Qingdao Agricultural University
Priority to CN202111325912.2A priority Critical patent/CN114049901A/en
Publication of CN114049901A publication Critical patent/CN114049901A/en

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G10L25/30 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Signal Processing (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Testing Of Devices, Machine Parts, Or Other Structures Thereof (AREA)
  • Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)

Abstract

The invention provides a sound-based signal extraction and classification method comprising the following steps: step 1, collecting sound signals of a motor in different states; step 2, preprocessing the collected sound signals; step 3, extracting fault features from the preprocessed sound signals to obtain sound time-frequency spectrograms and construct a data set; step 4, building a CNN model and using the data set to train and test it; step 5, diagnosing and classifying motor rotor broken bar faults with the trained model and outputting the result. The method effectively diagnoses and classifies motor rotor broken bar faults and offers a new research direction for such diagnosis; the effectiveness of the fault diagnosis and classification approach combining the short-time Fourier transform with deep learning is verified, with a classification accuracy of 100%.

Description

Signal extraction and classification method based on sound
Technical Field
The invention belongs to the technical field of sound signal diagnosis and classification, and particularly relates to a sound-based signal extraction and classification method.
Background
The MCSA method, a motor rotor broken bar fault diagnosis method based on stator current signal data, is the most common approach and is also considered the most effective. However, fault information is not confined to the stator current signal: the sound emitted by a smoothly running motor also carries substantial fault information, differing from the current signal only in its form of representation. Sound reflects the operating state and health of the motor, and sound signals are among the easiest data to collect in a non-contact manner, making them a common data source for fault diagnosis research. At present, most rotor broken bar fault diagnosis relies on stator current signals; relatively little research uses sound signals, and no prior work has been found that combines sound signals with deep learning for motor rotor broken bar fault diagnosis.
Disclosure of Invention
The invention provides a signal extraction and classification method based on sound, which takes sound signals generated when a motor operates as a research object.
In order to achieve the purpose, the invention adopts the following technical scheme, and the specific steps are as follows:
step 1, collecting sound signals of a motor in different states;
step 2, preprocessing the collected sound signals;
step 3, extracting fault features by using the sound signals preprocessed in the step 2 to obtain a sound time-frequency spectrogram and construct a data set;
step 4, building a CNN model, inputting the data set built in the step 3 into the model for training and testing;
and 5, diagnosing and classifying the motor rotor broken bar fault by using the trained model and outputting a result.
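Steps 2 and 3 above can be sketched in Python. The sketch below is illustrative only: the sampling rate, STFT window length, and function names are assumptions, not values taken from the patent (which specifies only the 60-second segment length and the use of the short-time Fourier transform):

```python
import numpy as np
from scipy.signal import stft

def build_dataset(recordings, labels, fs, seg_s=60, nperseg=256):
    """Steps 2-3: cut each long recording into equal 60 s segments,
    then turn every segment into a time-frequency spectrogram image."""
    images, targets = [], []
    seg_len = fs * seg_s
    for rec, label in zip(recordings, labels):
        n = len(rec) // seg_len                  # keep whole segments only
        for seg in rec[: n * seg_len].reshape(n, seg_len):
            _, _, Z = stft(seg, fs=fs, nperseg=nperseg)
            images.append(np.abs(Z))             # magnitude spectrogram
            targets.append(label)
    return np.stack(images), np.array(targets)

# Three synthetic 2-minute "recordings": healthy, 1 broken bar, 2 broken bars.
fs = 1000
recordings = [np.random.default_rng(i).normal(size=2 * 60 * fs) for i in range(3)]
X, y = build_dataset(recordings, labels=[0, 1, 2], fs=fs)
# X holds 6 spectrogram images (2 segments per recording), y the class labels.
```

Each spectrogram image in `X` would then be saved or resized as required by the CNN input layer.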
Preferably, the sound signals of the motor in different states in step 1 are collected in two rounds: the first round collects 20 minutes of sound from a healthy motor in steady operation, and the second round collects 20 minutes of sound each from a motor with one broken rotor bar and from a motor with two broken rotor bars.
Preferably, the second round proceeds as follows: the motor is disassembled, the rotor is removed, one rotor conducting bar is drilled through with an electric drill, and the motor is reassembled; after the motor is powered on and running steadily, 20 minutes of sound data for the one-broken-bar fault are collected. The motor is then powered off, allowed to cool, and disassembled again; a second rotor conducting bar is drilled through, the motor is reassembled, and after it is powered on and running steadily, 20 minutes of sound data for the two-broken-bar fault are collected.
Preferably, the step 2 of preprocessing the collected sound signal is divided into two steps:
s1: uniformly converting the format of the collected sound signals of the motor in different states into a wav format;
s2: the whole voice signal is re-sampled at equal time intervals, and the re-sampling time interval is set to 60 seconds.
Preferably, the fault feature extraction in step 3 obtains time-frequency spectrograms of the motor rotor in different fault states using the short-time Fourier transform; specifically, the three sound signals (healthy operation, one broken rotor bar, and two broken rotor bars) are transformed by the short-time Fourier transform to obtain the sound time-frequency spectrograms.
Preferably, the data set constructed in step 3 is randomly divided into a training set and a test set in a 7:3 ratio.
Preferably, the CNN model constructed in step 4 is an AlexNet model.
Compared with the prior art, this fault diagnosis and classification method based on sound signals and deep learning effectively diagnoses and classifies motor rotor broken bar faults and provides a new research direction for such diagnosis. The effectiveness of combining the short-time Fourier transform with deep learning is verified, with a classification accuracy of 100%. Compared with 4 other CNN models, the AlexNet model has the shortest training time for classifying rotor broken bar faults, so the classification is also fast.
Drawings
Fig. 1 is a scene diagram of the broken bar fault being simulated by electric drilling.
Fig. 2 is a diagram of the rotor in different fault states.
Fig. 3 is a diagram of the signal acquisition scene.
Fig. 4 is a time domain image of a sound signal.
Fig. 5 is a frequency domain image of the sound signal.
Fig. 6 is a spectrogram of an acoustic signal.
FIG. 7 is a schematic diagram of a fault diagnosis and classification model structure.
Fig. 8 is a fault signature visualization diagram.
Fig. 9 is a Loss plot.
Fig. 10 is a confusion matrix diagram.
Detailed Description
The invention is further illustrated by the following specific examples.
As shown in fig. 1 to 10, a sound-based signal extraction and classification method includes the steps of:
step 1, collecting sound signals of a motor in different states;
step 2, preprocessing the collected sound signals;
step 3, extracting fault features by using the sound signals preprocessed in the step 2 to obtain a sound time-frequency spectrogram and construct a data set;
step 4, building a CNN (convolutional neural network) model, inputting the data set built in the step 3 into the model for training and testing;
and 5, diagnosing and classifying the motor rotor broken bar fault by using the trained model and outputting a result.
The tests were conducted in the Electrician Laboratory, Room 511, Engineering Building, Qingdao Agricultural University, using a squirrel-cage asynchronous motor of model YX 380M 1-4. Data were acquired from December 17, 2020 to December 25, 2020, with experiments run both day and night. The average ambient temperature of the laboratory during the experiments was 17 °C; the motor temperature ranged from a minimum of 16.8 °C to a maximum of 56.3 °C. The conventional parameters of the YX 380M 1-4 squirrel-cage three-phase asynchronous motor are shown in Table 1.1.
TABLE 1.1 YX 380M 1-4 motor parameters
1.1 Sound Signal acquisition
The sound signals were collected in two rounds. The first round collected the sound of a healthy motor in steady operation; the motor was powered by the three-phase AC supply of the electrician laboratory bench, protected by fuses and grounded. Twenty minutes of sound data were collected from the healthy motor running steadily. The second round collected the sound of a motor with one broken rotor bar and of a motor with two broken rotor bars; before collection, the rotor broken bar faults had to be physically simulated. Simulating a broken bar requires disassembling and reassembling the asynchronous motor and drilling a rotor bar with an electric drill until the bar is severed. Figure 1 shows the rotor being drilled with an electric drill.
Collecting the second-round sound signals requires disassembling and reassembling the motor, which is a fairly involved process: the three-phase asynchronous motor is disassembled, the rotor is removed, one rotor conducting bar is drilled through with an electric drill, and the motor is reassembled; after the motor is powered on and running steadily, 20 minutes of sound data for the one-broken-bar fault are collected. The motor is then powered off, allowed to cool, and disassembled again; a second rotor conducting bar is drilled through, the motor is reassembled, and after it is powered on and running steadily, 20 minutes of sound data for the two-broken-bar fault are collected. The rotor in different fault states is shown in Figure 2.
During signal acquisition it was found that a broken rotor bar not only affects the waveform of the stator current signal, but also produces a directly noticeable change in motor temperature. The temperature effect differs greatly with the number of broken bars: the more bars broken, the faster the motor heats up and the shorter it can run. The acquisition process therefore had to be monitored in real time, and collecting sound signals from a motor with broken rotor bars was more difficult than collecting them from a healthy motor.
The sound signals in the different fault states were collected in the actual laboratory environment and recorded with the voice-recorder app of an Android phone; the collected sound data were stored in mp4 format. During acquisition, a digital oscilloscope was used to observe the voltage waveform, and a thermometer measured the housing temperature of the asynchronous motor in real time, to prevent excessive temperature from affecting motor operation or worsening the fault. Fig. 3 shows the sound signal acquisition setup.
1.2 Fault feature extraction
The sound signals collected from the motor in the different fault states are non-stationary, so a time-frequency analysis method is used to extract their fault features. Specifically, the short-time Fourier transform is adopted to extract the rotor broken bar fault features from the motor sound signals.
1.2.1 Sound Signal preprocessing
Before feature extraction is performed on the sound signal, the data must be preprocessed. The preprocessing consists of two steps.
(1) Audio signal format conversion
The sound signals were collected with the voice-recorder function of an Android phone in the field environment, which stores recordings only in mp4 format. Since converting a sound signal into a spectrogram first requires the wav format, the sound signals collected under the different motor fault states were uniformly converted to wav format.
(2) Equal time interval sound signal resampling
The sound signal collected in the field is a long, continuous time-domain signal, which must be divided into multiple samples to serve as inputs to the model. To this end, the entire sound signal is resampled at equal time intervals, with the resampling interval set to 60 seconds.
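The equal-interval resampling described above amounts to slicing the long recording into fixed-length samples. A minimal NumPy sketch (the 60-second segment length comes from the text; the sampling rate and function name are illustrative assumptions):

```python
import numpy as np

def segment_signal(signal: np.ndarray, sample_rate: int, segment_seconds: int = 60):
    """Slice a long 1-D recording into equal-length segments.

    Trailing samples that do not fill a whole segment are dropped,
    so every sample fed to the model has the same duration.
    """
    seg_len = sample_rate * segment_seconds
    n_segments = len(signal) // seg_len
    return signal[: n_segments * seg_len].reshape(n_segments, seg_len)

# A 2-minute recording at 8 kHz yields 2 segments of 60 s each.
recording = np.zeros(2 * 60 * 8000)
segments = segment_signal(recording, sample_rate=8000)
```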
1.2.2 Sound Signal representation
The sound signals directly obtained by experimental acquisition are time domain signals, and in order to better diagnose and classify faults, frequency domain images of the sound signals are made, and the time domain images and the frequency domain images of the sound signals are respectively shown in fig. 4 and fig. 5.
As can be seen from figs. 4 and 5, the time-domain and frequency-domain images of the motor sound signals in the different rotor broken bar fault states show hardly any visible difference, so the fault features cannot be extracted effectively from them and the broken bar fault cannot be diagnosed directly. The frequency-domain images do show, however, that the sound energy of the motor in the broken bar fault states is concentrated in the region below 1500 Hz.
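The frequency-domain image referred to above is the single-sided magnitude spectrum of the recording. A minimal sketch (a synthetic tone stands in for a motor recording; the sampling rate is an illustrative assumption):

```python
import numpy as np

def magnitude_spectrum(signal, sample_rate):
    """Single-sided magnitude spectrum of a real-valued signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs, spectrum

# Example: a 100 Hz tone shows a single peak at 100 Hz.
sr = 8000
t = np.arange(sr) / sr          # 1 second of audio
tone = np.sin(2 * np.pi * 100 * t)
freqs, spec = magnitude_spectrum(tone, sr)
peak = freqs[np.argmax(spec)]   # frequency of the strongest component
```

Plotting `spec` against `freqs` for the motor recordings would reproduce the kind of frequency-domain image shown in Fig. 5.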
The spectrogram, i.e., the time-frequency representation obtained through a time-frequency transform, reflects both the time-domain and frequency-domain information of the sound signal simultaneously. The spectrogram features of the sound signals differ between fault states, which is the precondition for performing sound identification with image-processing methods. The spectrogram reflects the energy distribution over frequency within each short time segment. In this work, time-frequency spectrograms of the rotor in different fault states are obtained by the short-time Fourier transform: fault features are extracted from the three sound signals (healthy operation, one broken rotor bar, and two broken rotor bars), yielding the sound spectrograms shown in FIG. 6.
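A short-time Fourier transform spectrogram of the kind described above can be computed with SciPy. The sketch below uses a synthetic tone and an assumed window length of 256 samples; the patent does not state its STFT parameters:

```python
import numpy as np
from scipy.signal import stft

sr = 8000
t = np.arange(2 * sr) / sr                 # 2 s of audio
tone = np.sin(2 * np.pi * 1000 * t)        # stand-in for a motor recording

# Short-time Fourier transform: each column of |Zxx| is the spectrum of
# one short window, so |Zxx| as a whole is the time-frequency spectrogram.
f, times, Zxx = stft(tone, fs=sr, nperseg=256)
spectrogram = np.abs(Zxx)

dominant = f[np.argmax(spectrogram.mean(axis=1))]   # strongest frequency bin
```

Rendering `spectrogram` as an image (e.g. with `matplotlib.pyplot.pcolormesh(times, f, spectrogram)`) gives the two-dimensional picture fed to the CNN.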
A data set is constructed from the spectrograms obtained under the different fault states, with the training and test sets split in a 7:3 ratio; the details of the constructed data set are shown in Table 1.2.
TABLE 1.2 Motor rotor broken bar fault data set
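The random 7:3 split of the spectrogram data set can be done with a seeded permutation. A minimal sketch (the sample count and function name are illustrative; only the 7:3 ratio comes from the text):

```python
import numpy as np

def split_dataset(n_samples, train_ratio=0.7, seed=0):
    """Randomly split sample indices into training and test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = round(n_samples * train_ratio)
    return idx[:n_train], idx[n_train:]

# e.g. 60 spectrogram images -> 42 for training, 18 for testing (7:3)
train_idx, test_idx = split_dataset(60)
```

Fixing the seed makes the split reproducible across runs, which matters when comparing the five CNN models on identical data.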
1.3 Fault diagnosis and Classification framework
Deep learning methods are powerful and can be trained on text, images, and sound, but they are particularly strong at learning from images, so image-based classification offers a clear advantage. This work extracts fault features from sound signals and proposes a motor rotor broken bar fault diagnosis and classification method based on the short-time Fourier transform and deep learning. First, the collected sound data undergo storage format conversion and equal-interval resampling; the preprocessed sound signals are then converted into two-dimensional images by the short-time Fourier transform and assembled into a data set; the data set is fed into an AlexNet model for learning, completing the diagnosis and classification of motor rotor broken bar faults; finally, the diagnosis and classification performance of the AlexNet model is compared and analyzed. The structure of the method is shown in FIG. 7.
As shown in fig. 7, the overall diagnosis and classification framework consists of five parts: sound signal acquisition; data preprocessing in two steps (storage format conversion and equal-interval resampling); time-frequency spectrograms obtained by the short-time Fourier transform; CNN model classification; and result analysis. The internal structural parameters of the AlexNet model are shown in Table 1.3.
TABLE 1.3 AlexNet model parameters
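Table 1.3 is provided only as an image in the original publication, so the exact parameters used here are not recoverable; the layer dimensions of the canonical AlexNet, however, follow from the convolution output-size formula floor((n + 2p - k)/s) + 1. A sketch assuming the standard 227x227 AlexNet input (the patent's actual configuration may differ):

```python
def conv_out(n, k, s=1, p=0):
    """Spatial output size of a conv/pool layer: floor((n + 2p - k)/s) + 1."""
    return (n + 2 * p - k) // s + 1

size = 227                               # canonical AlexNet input resolution
size = conv_out(size, k=11, s=4)         # conv1   -> 55
size = conv_out(size, k=3, s=2)          # maxpool -> 27
size = conv_out(size, k=5, p=2)          # conv2   -> 27
size = conv_out(size, k=3, s=2)          # maxpool -> 13
size = conv_out(size, k=3, p=1)          # conv3   -> 13
size = conv_out(size, k=3, p=1)          # conv4   -> 13
size = conv_out(size, k=3, p=1)          # conv5   -> 13
size = conv_out(size, k=3, s=2)          # maxpool -> 6
flattened = 256 * size * size            # features entering the FC layers
```

For the 3-class broken bar problem, only the final fully-connected layer would need its output width changed from 1000 to 3.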
1.4 test results and analysis
1.4.1 feature visualization
The features extracted by the different layers of a CNN determine the final accuracy of the model, and inspecting the visual form of the filter responses is generally considered a suitable way to evaluate model performance, since it gives an intuitive view of every layer from low-level to high-level features. The filters of the first convolutional layer extract color features and directional edges; the filters of the middle layers encode simple textures composed of colors and edges; the final convolutional layer extracts image textures and explicit patterns. Fig. 8 shows the feature visualization of the first convolutional layer and the fully-connected layer of the AlexNet model.
As can be seen from fig. 8, the features of the fully-connected layer are significantly richer than those of the first layer, illustrating the advantages of CNN in feature extraction and classification.
1.4.2 analysis of results
Inputting the motor rotor broken bar fault sound spectrogram data set into an AlexNet model for training, wherein a Loss curve and a confusion matrix obtained after training are respectively shown in fig. 9 and fig. 10.
The obtained Loss curve and confusion matrix show that the fault diagnosis and classification method combining the short-time Fourier transform and deep learning is effective, with a verification accuracy of 100%. To further demonstrate the effectiveness of the method, its diagnosis and classification performance was also compared against 4 other CNN models: VGG16, GoogLeNet, DenseNet201, and SqueezeNet.
To illustrate the speed of model classification, the training time of each model was recorded; Table 1.4 shows the training times under the settings Batch Size = 32 and Max Epochs = 40.
TABLE 1.4 Model classification training times
As can be seen from Table 1.4, the AlexNet model has the shortest training time for classifying the rotor broken bar fault, so its classification is fast. In addition, Table 1.5 shows the performance parameters of the AlexNet model and the other 4 models under different Batch Size and Max Epochs settings.
TABLE 1.5 Model performance parameters
TABLE 1.6 Model multi-classification performance parameters
As can be seen from the classification and multi-classification performance in Tables 1.5 and 1.6, the fault classification of the AlexNet model under different Batch Size and Max Epochs settings is better than that of the other 4 models, with high classification accuracy. Combining Tables 1.4 and 1.5 shows that the AlexNet model is not only highly accurate but also fast, further demonstrating the effectiveness of the proposed sound-signal-based rotor broken bar fault diagnosis and classification method.
The invention provides a sound-based signal extraction and classification method for diagnosing and classifying motor rotor faults. First, the collected sound signals are converted from mp4 to wav format and preprocessed by resampling at equal time intervals; time-domain and frequency-domain images of the sound signals are then produced, and fault features are extracted from the preprocessed signals by the short-time Fourier transform to obtain two-dimensional sound spectrograms, from which a data set is constructed and fed into an AlexNet model for training, completing the diagnosis and classification of rotor broken bar faults. Finally, the classification performance of 4 other CNN models is compared and analyzed. The test results show that the fault diagnosis and classification method based on sound signals and deep learning effectively diagnoses and classifies motor rotor broken bar faults and provides a new research direction for such diagnosis.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Although the present invention has been described with reference to the specific embodiments, it should be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (7)

1. A method for sound-based signal extraction and classification, comprising the steps of:
step 1, collecting sound signals of a motor in different states;
step 2, preprocessing the collected sound signals;
step 3, extracting fault features by using the sound signals preprocessed in the step 2 to obtain a sound time-frequency spectrogram and construct a data set;
step 4, building a CNN model, inputting the data set built in the step 3 into the model for training and testing;
and 5, diagnosing and classifying the motor rotor broken bar fault by using the trained model and outputting a result.
2. A sound-based signal extraction and classification method as claimed in claim 1, characterized in that: the sound signals of the motor in different states in step 1 are collected in two rounds: the first round collects 20 minutes of sound from a healthy motor in steady operation, and the second round collects 20 minutes of sound each from a motor with one broken rotor bar and from a motor with two broken rotor bars.
3. A sound-based signal extraction and classification method as claimed in claim 2, characterized in that: the second round of sound signal acquisition specifically comprises: disassembling the motor, removing the rotor, drilling through one rotor conducting bar with an electric drill, reassembling the motor, and collecting 20 minutes of one-broken-bar fault sound data after the motor is powered on and running steadily; then powering off the motor, letting it cool, disassembling it again, drilling through a second rotor conducting bar, reassembling the motor, and collecting 20 minutes of two-broken-bar fault sound data after the motor is powered on and running steadily.
4. A sound-based signal extraction and classification method as claimed in claim 1, characterized in that: the step 2 of preprocessing the collected sound signals is divided into two steps:
s1: uniformly converting the format of the collected sound signals of the motor in different states into a wav format;
s2: the whole voice signal is re-sampled at equal time intervals, and the re-sampling time interval is set to 60 seconds.
5. A sound-based signal extraction and classification method as claimed in claim 1, characterized in that: the fault feature extraction in step 3 obtains time-frequency spectrograms of the motor rotor in different fault states using the short-time Fourier transform; specifically, the three sound signals (healthy operation, one broken rotor bar, and two broken rotor bars) are transformed by the short-time Fourier transform to obtain the sound time-frequency spectrograms.
6. A sound-based signal extraction and classification method as claimed in claim 1, characterized in that: the data set constructed in step 3 is randomly divided into a training set and a test set in a 7:3 ratio.
7. A sound-based signal extraction and classification method as claimed in claim 1, characterized in that: and 4, the CNN model built in the step 4 is an AlexNet model.
CN202111325912.2A 2021-11-10 2021-11-10 Signal extraction and classification method based on sound Pending CN114049901A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111325912.2A CN114049901A (en) 2021-11-10 2021-11-10 Signal extraction and classification method based on sound

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111325912.2A CN114049901A (en) 2021-11-10 2021-11-10 Signal extraction and classification method based on sound

Publications (1)

Publication Number Publication Date
CN114049901A true CN114049901A (en) 2022-02-15

Family

ID=80208285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111325912.2A Pending CN114049901A (en) 2021-11-10 2021-11-10 Signal extraction and classification method based on sound

Country Status (1)

Country Link
CN (1) CN114049901A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117436023A (en) * 2023-12-18 2024-01-23 深圳市鸿明机电有限公司 Servo motor fault diagnosis method based on convolutional neural network


Similar Documents

Publication Publication Date Title
CN111325095B (en) Intelligent detection method and system for equipment health state based on acoustic wave signals
CN112885372B (en) Intelligent diagnosis method, system, terminal and medium for power equipment fault sound
CN108960339A (en) A kind of electric car induction conductivity method for diagnosing faults based on width study
CN105841961A (en) Bearing fault diagnosis method based on Morlet wavelet transformation and convolutional neural network
CN110808033B (en) Audio classification method based on dual data enhancement strategy
CN110108992B (en) Cable partial discharge fault identification method and system based on improved random forest algorithm
CN103091612B (en) Separation and recognition algorithm for transformer oiled paper insulation multiple partial discharging source signals
CN112857767B (en) Hydro-turbo generator set rotor fault acoustic discrimination method based on convolutional neural network
CN105244038A (en) Ore dressing equipment fault abnormity audio analyzing and identifying method based on HMM
CN111814872B (en) Power equipment environmental noise identification method based on time domain and frequency domain self-similarity
CN108490349A (en) Motor abnormal sound detection method based on Mel frequency cepstral coefficients
CN111150372B (en) Sleep stage staging system combining rapid representation learning and semantic learning
CN109766874A (en) A kind of fan trouble classifying identification method based on deep learning algorithm
CN108154223A (en) Power distribution network operating mode recording sorting technique based on network topology and long timing information
CN114049901A (en) Signal extraction and classification method based on sound
CN114325256A (en) Power equipment partial discharge identification method, system, equipment and storage medium
CN113551765A (en) Sound spectrum analysis and diagnosis method for equipment fault
CN111239597A (en) Method for representing electric life of alternating current contactor based on audio signal characteristics
CN112932489A (en) Transformer substation noise subjective annoyance degree evaluation model establishing method and model establishing system
CN115481657A (en) Wind generating set communication slip ring fault diagnosis method based on electric signals
CN115728612A (en) Transformer discharge fault diagnosis method and device
Phillips et al. Visualization of environmental audio using ribbon plots and acoustic state sequences
CN116453526A (en) Multi-working-condition abnormality monitoring method and device for hydroelectric generating set based on voice recognition
Xiao et al. Adaptive feature extraction based on Stacked Denoising Auto-encoders for asynchronous motor fault diagnosis
Zhao et al. Fault diagnosis of asynchronous induction motor based on BP neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination