CN110867196B - Machine equipment state monitoring system based on deep learning and voice recognition - Google Patents

Machine equipment state monitoring system based on deep learning and voice recognition

Info

Publication number
CN110867196B
CN110867196B
Authority
CN
China
Prior art keywords
module
neural network
network model
sound
machine equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911222026.XA
Other languages
Chinese (zh)
Other versions
CN110867196A (en)
Inventor
刘亚荣
黄昕哲
谢晓兰
刘鑫
于顼顼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Technology
Original Assignee
Guilin University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Technology filed Critical Guilin University of Technology
Priority to CN201911222026.XA priority Critical patent/CN110867196B/en
Publication of CN110867196A publication Critical patent/CN110867196A/en
Application granted granted Critical
Publication of CN110867196B publication Critical patent/CN110867196B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01H - MEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
    • G01H17/00 - Measuring mechanical vibrations or ultrasonic, sonic or infrasonic waves, not provided for in the preceding groups
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01M - TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M13/00 - Testing of machine parts
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01M - TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M99/00 - Subject matter not provided for in other groups of this subclass
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G10L25/30 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Theoretical Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)

Abstract

The invention discloses a machine equipment state monitoring system based on deep learning and voice recognition. The system comprises a training data acquisition module for acquiring sound signals; a manual marking module that labels the sound signals to form a sound sample library; the sound samples are preprocessed, features are extracted, and the data are sent to a preset neural network model for training; a real-time data acquisition module acquires sound signals and feeds them into the trained neural network model; and a state recognition module that, combined with manual experience, comprehensively recognizes and judges the running state of the machine from the sound signals and feeds back and outputs the result. The invention can monitor the running state of machine equipment in real time and send out an alarm signal when the equipment fails or is in a dangerous state, so that an equipment manager is notified to perform maintenance in time, improving working efficiency. Meanwhile, the neural network model is trained with a deep learning algorithm combined with artificial experience, giving the system high recognition accuracy, good safety, high efficiency, and intelligence.

Description

Machine equipment state monitoring system based on deep learning and voice recognition
Technical Field
The invention relates to the technical field of sound signal recognition, and in particular to a machine equipment state monitoring system based on deep learning and sound recognition.
Background
At present, machine equipment used in a factory environment is prone to problems such as wear and aging due to natural factors such as temperature, humidity, and geographical location, as well as human factors. Machine condition monitoring is a very complex process: although much research has been done on machine condition monitoring and fault diagnosis, the many fault types, the occurrence of accidental or random faults, and the complexity of the machines themselves mean that machine condition monitoring and fault diagnosis remain problems worth studying.
According to the feature description and decision method adopted, existing machine equipment state monitoring is mainly developed around fault diagnosis of the machine equipment and can be summarized into two main categories: fault diagnosis methods based on a mathematical model of the system, and model-free fault diagnosis methods. Model-based fault diagnosis estimates the system output by constructing an observer and then compares the estimate with the measured output to obtain fault information. Model-free fault diagnosis methods include methods based on measurable signal processing, fault diagnosis expert systems, fault mode identification, fault trees, artificial neural networks, and the like. However, the existing fault diagnosis technologies and methods have the following problems:
(1) The major machinery used in production or the expensive large units are inconvenient to access or disassemble for inspection when faults occur.
(2) The machine equipment with high safety requirements is difficult to maintain and has high maintenance cost.
(3) Existing methods fall short with respect to production criticality, personal safety, environmental protection, social impact, and similar considerations.
(4) When analyzing and processing data, most diagnostic methods use several independent models to solve a problem; these models must be combined well, and different problems require different conditions to be considered, so the approach has certain limitations.
(5) For the fault diagnosis of machine equipment in complex systems, no remote diagnosis method currently solves the problem completely.
Because machine equipment emits sound when running, and different running states produce different sounds, the invention collects sound data of machine equipment and its key parts through sensors and labels the data manually to form a sound sample library. The samples are then preprocessed, features are extracted, and the data are fed into a preset neural network model, which recognizes and judges the running state of the machine. Data collected in real time likewise undergo preprocessing and feature extraction and are sent into the trained neural network model for sound recognition. Finally, the recognition result is comprehensively judged in combination with manual experience, and the sound signal is re-labeled to form a new sample, so that the sound sample library keeps growing and the recognition rate of the neural network model improves. The invention can monitor the running state of machine equipment in real time, display the running state, and send out an alarm signal when the equipment or its key parts are in a fault or dangerous state, so as to notify an equipment manager to perform maintenance in time, improve working efficiency, and reduce economic loss.
Disclosure of Invention
The invention aims to provide a machine equipment state monitoring system based on deep learning and voice recognition for machine equipment on a factory production line, so as to make up for the shortcomings of traditional monitoring of the operating state and faults of machine equipment.
In order to solve the above problems, the technical solution adopted by the invention is as follows. A machine equipment condition monitoring system based on deep learning and voice recognition comprises: a training data acquisition module, a manual marking module, a sound sample library, preprocessing, feature extraction, a neural network model, a real-time data acquisition module, a state identification module, a recognition result module, a manual experience module, a state display module and an alarm module. The training data acquisition module is connected with the manual marking module; the manual marking module is connected with the sound sample library and the recognition result module respectively; the sound sample library is connected with the preprocessing; the preprocessing is connected with the real-time data acquisition module and the feature extraction respectively; the feature extraction is connected with the neural network model; the neural network model is connected with the state identification module; the state identification module is connected with the recognition result module; and the recognition result module is connected with the manual experience module, the manual marking module, the state display module and the alarm module respectively.
The training data acquisition module adopts a sensor to acquire sound signals of machine equipment and key parts thereof which run on a production line in a factory production environment.
The manual marking module is used by equipment maintenance personnel or machine fault specialists to judge, from the sound signals and according to their own experience, the running states of the machine equipment and its key parts, including whether the equipment operates normally and its degree of aging. Whether it operates normally covers: normal operation and fault; the degree of aging covers: good, medium, and dangerous.
The sound sample library consists of the manually labeled sound signals.
The preprocessing includes filtering, a/D conversion, pre-emphasis, framing windowing, and endpoint detection.
The filtering adopts an FIR filter to filter out non-audio components of the signal, so as to improve the signal-to-noise ratio of the input signal as much as possible.
The a/D conversion is to convert an analog signal into a digital signal.
The pre-emphasis emphasizes the high-frequency part of the signal, enhances the high-frequency resolution of the sound signal, and facilitates the subsequent spectral analysis. A first-order FIR high-pass digital filter is selected for pre-emphasis processing; its transfer function is H(z) = 1 - az^(-1), with 0.9 < a < 1.0.
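As a concrete illustration of this step, the sketch below applies the stated first-order FIR high-pass filter in NumPy; the value a = 0.97 is an illustrative choice within the stated range 0.9 < a < 1.0, not a value fixed by the patent.

```python
import numpy as np

def pre_emphasis(x: np.ndarray, a: float = 0.97) -> np.ndarray:
    """First-order FIR high-pass pre-emphasis: y[n] = x[n] - a * x[n-1]."""
    # Keep the first sample unchanged, then subtract the weighted previous sample.
    return np.append(x[0], x[1:] - a * x[:-1])
```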
The framing and windowing divides the sound signal into short time segments, i.e. frames, and then applies a window to each frame, so as to preserve the short-time stationarity of the sound signal and reduce the Gibbs effect. The frame length is set to 20 ms and the frame shift is 1/3 of the frame length. A Hamming window is used, whose function is w(n) = 0.54 - 0.46 cos(2*pi*n / (N - 1)) for 0 <= n <= N - 1, where N is the window length and equals the frame length.
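A minimal framing-and-windowing sketch in NumPy under the stated parameters (20 ms frames, hop of one third of the frame length, Hamming window); the sampling rate fs is an input the patent does not fix.

```python
import numpy as np

def frame_and_window(x: np.ndarray, fs: int) -> np.ndarray:
    """Split a signal into 20 ms frames (hop = 1/3 frame length) and apply
    a Hamming window w(n) = 0.54 - 0.46*cos(2*pi*n/(N-1)) to each frame."""
    frame_len = int(round(0.020 * fs))     # N: samples per 20 ms frame
    hop = max(1, frame_len // 3)           # frame shift = 1/3 of the frame length
    assert len(x) >= frame_len, "signal shorter than one frame"
    n_frames = 1 + (len(x) - frame_len) // hop
    window = np.hamming(frame_len)
    return np.stack([x[i * hop : i * hop + frame_len] * window
                     for i in range(n_frames)])
```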
The endpoint detection distinguishes the sound signal from background and environmental noise and accurately determines the start point and end point of the sound signal.
The feature extraction extracts the feature parameters of the sound signals; the machine equipment state monitoring system based on deep learning and sound recognition adopts mel-frequency cepstrum coefficients (MFCC) as the feature parameters of the machine equipment sound.
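A minimal MFCC-extraction sketch; the librosa library and the choice of 13 coefficients are assumptions made for illustration, since the patent specifies neither a library nor the number of coefficients.

```python
import librosa
import numpy as np

def extract_mfcc(wav_path: str, n_mfcc: int = 13) -> np.ndarray:
    """Load a recording and return its MFCC matrix of shape (n_mfcc, n_frames)."""
    y, sr = librosa.load(wav_path, sr=None)           # keep the native sampling rate
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
```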
The neural network model adopts a purpose-designed convolutional neural network comprising 4 convolutional layers, 4 pooling layers and 2 fully connected layers, with ReLU as the activation function of the intermediate layers, softmax as the final layer, and batch normalization (Batch Normalization) after each convolutional layer to accelerate training. The optimizer is stochastic gradient descent (Stochastic Gradient Descent, SGD), dropout with a ratio of 0.5 is adopted, cross entropy (Cross Entropy) is used as the loss function, and global average pooling is applied. The sound data after data processing and feature extraction are input into the pre-designed neural network model, and the model is trained. The sound data samples are divided into three parts, a training set, a validation set and a test set, in the proportion 8:1:1, and ten-fold cross validation is performed. The model fits the sound data samples on the training set, and whether it reaches the required standard is judged by whether the recognition rate reaches a set threshold; if not, training continues, and if so, the neural network model is verified on the validation set. The validation set is used to preliminarily evaluate the hyper-parameters and the capacity of the model, again judging whether the recognition rate reaches the set threshold; if not, training continues, and if so, testing is performed. The test set is used to evaluate the generalization capability of the neural network model; if the generalization capability reaches a preset threshold, training ends, otherwise the model is retrained.
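The PyTorch sketch below shows one way such a network could look under the stated constraints (4 convolutional layers with batch normalization and ReLU, 4 pooling layers, global average pooling, 2 fully connected layers with dropout 0.5, SGD optimizer, cross-entropy loss). The channel widths, kernel sizes, learning rate and number of classes are illustrative assumptions; softmax is applied implicitly by the cross-entropy loss rather than as an explicit layer.

```python
import torch
import torch.nn as nn

class MachineSoundCNN(nn.Module):
    """4 conv + 4 pooling + 2 fully connected layers, BatchNorm after each
    convolution, ReLU hidden activations, global average pooling, dropout 0.5."""
    def __init__(self, n_classes: int = 5):
        super().__init__()
        chans = [1, 16, 32, 64, 128]           # channel widths are assumptions
        blocks = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            blocks += [
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2, ceil_mode=True),   # ceil_mode keeps small inputs valid
            ]
        self.features = nn.Sequential(*blocks)
        self.gap = nn.AdaptiveAvgPool2d(1)         # global average pooling
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(chans[-1], 64),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(64, n_classes),              # logits; softmax is inside the loss
        )

    def forward(self, x):                          # x: (batch, 1, n_mfcc, n_frames)
        return self.classifier(self.gap(self.features(x)))

model = MachineSoundCNN(n_classes=5)               # 5 labels is an assumption
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()                  # cross entropy with implicit softmax
```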
The state identification module is used for sending the preprocessed and characteristic extracted real-time sound data into the trained neural network model, and identifying the running states of the machine equipment and key parts thereof through the neural network model.
The recognition result module outputs and displays the result of the state recognition module on one hand, judges the type of the running state on the other hand, and sends information to the alarm module when the running state is 'failure' or the ageing degree is 'dangerous'.
In the artificial experience module, professional equipment maintenance personnel or machine fault experts comprehensively analyze the recognition result and judge whether the recognition result of the neural network model agrees with their own experience. After this comprehensive analysis the result is fed back to the manual marking module, so that the sound sample library keeps growing, that is, the training data of the neural network model increase, further improving the accuracy with which the neural network model recognizes the running state of the machine equipment and its key parts.
The state display module is responsible for displaying the running states identified by the identification result module, including the running states of all the monitored machine equipment and key parts thereof and corresponding positions thereof, and highlighting the machine which has failed or is at high risk and the key parts thereof.
The alarm module is responsible for receiving the alarm signal sent by the identification result module and sending out an alarm so as to inform maintenance personnel to take corresponding measures.
The invention has the following beneficial effects and advantages:
(1) Sensors are used to collect the sound signals of the machine equipment and its key parts during operation, and the sound signals are processed remotely, so machine faults are diagnosed remotely without maintenance personnel approaching or disassembling the equipment for inspection; the system is therefore more intelligent and safer;
(2) The invention can monitor the running state and the aging degree of the machine equipment, can identify whether the machine equipment has a fault, and reduces the economic loss caused by shutdowns due to machine faults;
(3) The neural network is trained on the sound sample library, and while the library is being built, the recognition results are re-labeled in combination with artificial experience to form new sound samples. The sound sample library is thus continuously expanded and the neural network model further trained, so that the designed neural network model becomes more complete and the recognition results more accurate, providing good conditions for the monitoring of machine equipment.
Drawings
Fig. 1 is a block diagram of a machine equipment state monitoring system based on deep learning and voice recognition in the present invention.
Fig. 2 is a block diagram of sound preprocessing used in the present invention.
FIG. 3 is a flow chart of neural network model training in the present invention.
1. A training data acquisition module; 2. a manual marking module; 3. a sound sample library; 4. pretreatment; 401. filtering; 402. a/D conversion; 403. pre-emphasis; 404. framing and windowing; 405. endpoint detection; 5. extracting features; 6. a neural network model; 7. a real-time data acquisition module; 8. a state recognition module; 9. a recognition result module; 10. a manual experience module; 11. a status display module; 12. and an alarm module.
Detailed Description
Examples:
As shown in fig. 1, the machine equipment state monitoring system based on deep learning and voice recognition of the present invention includes: a training data acquisition module 1, a manual marking module 2, a sound sample library 3, preprocessing 4, feature extraction 5, a neural network model 6, a real-time data acquisition module 7, a state recognition module 8, a recognition result module 9, a manual experience module 10, a state display module 11 and an alarm module 12. The training data acquisition module 1 is connected with the manual marking module 2; the manual marking module 2 is connected with the sound sample library 3 and the recognition result module 9 respectively; the sound sample library 3 is connected with the preprocessing 4; the preprocessing 4 is connected with the real-time data acquisition module 7 and the feature extraction 5 respectively; the feature extraction 5 is connected with the neural network model 6; the neural network model 6 is connected with the state recognition module 8; the state recognition module 8 is connected with the recognition result module 9; and the recognition result module 9 is connected with the manual experience module 10, the manual marking module 2, the state display module 11 and the alarm module 12 respectively.
The training data acquisition module 1 adopts a sensor to acquire sound signals of machine equipment and key parts thereof which run on a production line in a factory production environment.
The manual marking module 2 is used by equipment maintenance personnel or machine fault specialists to judge, from the sound signals and according to their own experience, the running states of the machine equipment and its key parts, including whether the equipment runs normally and its degree of aging. Whether it runs normally covers: normal operation and fault; the degree of aging covers: good, medium, and dangerous.
The sound sample library 3 consists of the manually labeled sound signals.
The preprocessing 4 includes filtering 401, a/D conversion 402, pre-emphasis 403, framing windowing 404, and endpoint detection 405.
The filtering 401 adopts an FIR filter to filter non-audio components in the signal, so that the signal-to-noise ratio of the input signal is improved to the maximum extent;
the a/D conversion 402 converts an analog signal into a digital signal;
the pre-emphasis 403 emphasizes the high-frequency part of the signal, enhancing the high-frequency resolution of the sound signal and facilitating the subsequent spectral analysis. A first-order FIR high-pass digital filter is selected for pre-emphasis processing; its transfer function is H(z) = 1 - az^(-1), with 0.9 < a < 1.0;
The framing and windowing 404 divides the sound signal into short time segments, i.e. frames, and then applies a window to each frame, so as to preserve the short-time stationarity of the sound signal and reduce the Gibbs effect. The frame length is set to 20 ms and the frame shift is 1/3 of the frame length. A Hamming window is used, whose function is w(n) = 0.54 - 0.46 cos(2*pi*n / (N - 1)) for 0 <= n <= N - 1, where N is the window length and equals the frame length;
the endpoint detection 405 distinguishes the sound signal from background and environmental noise and accurately determines the start point and the end point of the sound signal.
The feature extraction 5 is mainly used for extracting the feature parameters of the sound signals; mel-frequency cepstrum coefficients are used as the feature parameters of the machine equipment sound.
The neural network model 6 adopts a purpose-designed convolutional neural network comprising 4 convolutional layers, 4 pooling layers and 2 fully connected layers, with ReLU as the activation function of the intermediate layers, softmax as the final layer, and batch normalization (Batch Normalization) after each convolutional layer to accelerate training. The optimizer is stochastic gradient descent (Stochastic Gradient Descent, SGD), dropout with a ratio of 0.5 is adopted, cross entropy (Cross Entropy) is used as the loss function, and global average pooling is applied. The sound data after preprocessing 4 and feature extraction 5 are input into the pre-designed neural network model 6, and the model is trained. The sound data samples are divided into three parts, a training set, a validation set and a test set, in the proportion 8:1:1, and ten-fold cross validation is performed. The model fits the sound data samples on the training set, and whether it reaches the required standard is judged by whether the recognition rate reaches a set threshold; if not, training continues, and if so, the neural network model 6 is verified on the validation set. The validation set is used to preliminarily evaluate the hyper-parameters and the capacity of the model, again judging whether the recognition rate reaches the set threshold; if not, training continues, and if so, testing is performed. The test set is used to evaluate the generalization capability of the neural network model 6; if the generalization capability reaches a preset threshold, training ends, otherwise the model is retrained.
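An illustrative sketch of the 8:1:1 split and the threshold gate described above; scikit-learn and the 0.95 threshold are assumptions, since the patent names neither a library nor a concrete threshold value.

```python
from sklearn.model_selection import train_test_split

ACC_THRESHOLD = 0.95   # illustrative value; the patent only says "a set threshold"

def split_8_1_1(features, labels, seed=0):
    """Divide the labeled sound samples into training/validation/test sets (8:1:1)."""
    x_train, x_rest, y_train, y_rest = train_test_split(
        features, labels, test_size=0.2, random_state=seed)   # 80% train, 20% rest
    x_val, x_test, y_val, y_test = train_test_split(
        x_rest, y_rest, test_size=0.5, random_state=seed)     # split rest 10% / 10%
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)

def meets_standard(recognition_rate: float) -> bool:
    """Gate between the training, validation and test stages."""
    return recognition_rate >= ACC_THRESHOLD
```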
The state recognition module 8 sends the real-time sound data subjected to the pretreatment 4 and the feature extraction 5 into the trained neural network model 6, and recognizes the running states of the machine equipment and key parts thereof through the neural network model 6.
The recognition result module 9 outputs and displays the result of the state recognition module 8 on the one hand, and judges the type of the running state on the other hand, and when the running state is 'failure' or the ageing degree is 'dangerous', the information is sent to the alarm module 12.
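A minimal sketch of this dispatch logic; display.show() and alarm.trigger() are hypothetical interfaces standing in for the state display module 11 and the alarm module 12, which the patent does not specify at the code level.

```python
def report_state(machine_id: str, running_state: str, aging_level: str,
                 display, alarm) -> None:
    """Forward a recognized state to the display and raise an alarm when the
    running state is 'fault' or the aging level is 'dangerous'."""
    display.show(machine_id, running_state, aging_level)        # hypothetical display API
    if running_state == "fault" or aging_level == "dangerous":
        alarm.trigger(machine_id, running_state, aging_level)   # hypothetical alarm API
```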
In the artificial experience module 10, professional equipment maintenance personnel or machine fault experts comprehensively analyze the recognition result and judge whether the recognition result of the neural network model 6 agrees with their own experience. After this comprehensive analysis the result is fed back to the manual marking module 2, so that the sound sample library 3 keeps growing, that is, the training data of the neural network model 6 increase, further improving the accuracy with which the neural network model 6 recognizes the running state of the machine equipment and its key parts.
The state display module 11 is responsible for displaying the operation states identified by the identification result module 9, including the operation states of all the monitored machine equipment and key parts thereof and the corresponding positions thereof, and highlighting the machine which has failed or is at high risk and the key parts thereof.
The alarm module 12 is responsible for receiving the alarm signal sent by the identification result module 9 and sending out an alarm so as to inform maintenance personnel to take corresponding measures.
The working process of machine equipment fault diagnosis based on manual experience and sound recognition comprises the following steps:
(1) First, sound sensors collect the sound signals of the machine and its key parts in the working state, and professional equipment maintenance personnel or machine fault specialists label the sound signals manually according to their own experience. The labeled categories are mainly the running states of the machine equipment and its key parts: whether the equipment operates normally, and its degree of aging. Whether it operates normally covers: normal operation and fault; the degree of aging covers: good, medium, and dangerous. In this way, it can be predicted when, where and what kind of fault will occur in the machine equipment, preparations can be made in advance, accidents can be prevented, and losses avoided or minimized. The manually labeled sound signals then form the sound sample library 3.
(2) Pre-processing 4 and feature extraction 5 are performed on the sound sample library 3, wherein the pre-processing 4 comprises filtering 401, a/D conversion 402, pre-emphasis 403, framing windowing 404 and endpoint detection 405, as shown in fig. 2; the feature extraction 5 uses mel-frequency cepstrum coefficients as feature parameters of the machine equipment sound.
(3) After preprocessing 4, the sound samples are sent into the trained neural network model 6. The training of the neural network model 6 is shown in fig. 3: the data samples are divided into three parts, a training set, a validation set and a test set, in the proportion 8:1:1, and ten-fold cross validation is performed. At each stage it is judged whether the neural network model 6 meets the set threshold requirement; if so, the next validation and test stage is performed, otherwise training continues.
(4) The sensors collect the sound signals of the machine equipment and its key parts in real time; the signals undergo preprocessing 4 and feature extraction 5, and state recognition is performed by the trained neural network model 6. Professional equipment maintenance personnel or machine fault specialists then comprehensively judge the working states of the machine equipment and its key parts according to their own experience and the neural network recognition result. Because data on machine equipment faults are limited in the early stages of operation, it is difficult to train a good neural network model 6 when sample data are scarce, so the result of the state recognition result module 9 may deviate. The real-time data after preprocessing 4 and feature extraction 5 are therefore input into the trained neural network model 6 for state recognition, verified and judged through manual experience, and labeled to form new sound samples that are added to the original sound sample library 3. As the sound sample data keep increasing, the trained neural network model 6 becomes more and more stable and the monitoring results become more accurate.
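A sketch of one real-time monitoring cycle with the human-in-the-loop relabeling described above. It assumes a classifier such as the CNN sketch earlier and a flat label set that merges the two label dimensions (operating status and aging degree); both are illustrative assumptions rather than details fixed by the patent.

```python
from typing import Optional
import numpy as np
import torch

# Assumed flat label set; the patent labels operating status (normal/fault)
# and aging degree (good/medium/dangerous) separately.
STATE_LABELS = ["normal", "fault", "good", "medium", "dangerous"]

def monitor_step(mfcc: np.ndarray, model: torch.nn.Module,
                 expert_label: Optional[str], sample_library: list) -> str:
    """One monitoring cycle: classify an MFCC matrix with the trained model,
    let an expert confirm or correct the result, and append the labeled
    sample so the sound sample library keeps growing."""
    model.eval()
    with torch.no_grad():
        x = torch.from_numpy(mfcc).float().unsqueeze(0).unsqueeze(0)  # (1, 1, n_mfcc, T)
        predicted = STATE_LABELS[int(model(x).argmax(dim=1))]
    final_label = expert_label if expert_label is not None else predicted
    sample_library.append((mfcc, final_label))      # new sample for later retraining
    return final_label
```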
The foregoing is merely illustrative of preferred embodiments of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the scope of the present invention should be included in the scope of the present invention.

Claims (1)

1. The machine equipment state monitoring system based on deep learning and voice recognition is characterized by comprising a training data acquisition module, a manual marking module, a sound sample library, preprocessing, feature extraction, a neural network model, a real-time data acquisition module, a state recognition module, a recognition result module, a manual experience module, a state display module and an alarm module; wherein the training data acquisition module is connected with the manual marking module; the manual marking module is connected with the sound sample library and the recognition result module respectively; the sound sample library is connected with the preprocessing; the preprocessing is connected with the real-time data acquisition module and the feature extraction respectively; the feature extraction is connected with the neural network model; the neural network model is connected with the state recognition module; the state recognition module is connected with the recognition result module; and the recognition result module is connected with the manual experience module, the manual marking module, the state display module and the alarm module respectively;
the training data acquisition module acquires sound signals of machine equipment and key parts of the machine equipment running on a production line in a factory production environment by adopting a sensor;
the manual marking module is used by equipment maintenance personnel or machine fault specialists to judge, from the sound signals and according to their own experience, the running states of the machine equipment and its key parts, including whether the equipment runs normally and its degree of aging; wherein whether it runs normally covers: normal operation and fault; the degree of aging covers: good, medium, and dangerous;
the sound sample library consists of the manually labeled sound signals;
the preprocessing comprises filtering, A/D conversion, pre-emphasis, framing and windowing and endpoint detection;
the filtering adopts an FIR filter to filter out non-audio components of the signal, so as to improve the signal-to-noise ratio of the input signal as much as possible;
the A/D conversion is to convert an analog signal into a digital signal;
the pre-emphasis is to emphasize the high-frequency part of the signal, enhance the high-frequency resolution of the sound signal, and facilitate the subsequent spectral analysis; a first-order FIR high-pass digital filter is selected for pre-emphasis processing, and its transfer function is H(z) = 1 - az^(-1), with 0.9 < a < 1.0;
the framing and windowing is to divide the sound signal into frames and then apply a window to each frame, wherein the frame length is set to 20 ms and the frame shift is 1/3 of the frame length; a Hamming window is used, whose function is w(n) = 0.54 - 0.46 cos(2*pi*n / (N - 1)) for 0 <= n <= N - 1, wherein N is the window length and equals the frame length;
the endpoint detection distinguishes the sound signal from background and environmental noise and accurately determines the start point and end point of the sound signal;
the feature extraction is used for extracting feature parameters of sound signals, and the machine equipment state monitoring system based on deep learning and sound recognition adopts mel frequency cepstrum coefficients as the feature parameters of the sound of the machine equipment;
the neural network model adopts a purpose-designed convolutional neural network comprising 4 convolutional layers, 4 pooling layers and 2 fully connected layers, with ReLU as the activation function of the intermediate layers, softmax as the final layer, and batch normalization (Batch Normalization) after each convolutional layer to accelerate training; the optimizer is stochastic gradient descent (Stochastic Gradient Descent, SGD), dropout with a ratio of 0.5 is adopted, cross entropy (Cross Entropy) is used as the loss function, and global average pooling is applied; the sound data after data processing and feature extraction are input into the pre-designed neural network model, and the model is trained; the sound data samples are divided into three parts, a training set, a validation set and a test set, in the proportion 8:1:1, and ten-fold cross validation is performed; the model fits the sound data samples on the training set, and whether it reaches the required standard is judged by whether the recognition rate reaches a set threshold; if not, training continues, and if so, the neural network model is verified on the validation set; the validation set is used to preliminarily evaluate the hyper-parameters and the capacity of the model, again judging whether the recognition rate reaches the set threshold; if not, training continues, and if so, testing is performed; the test set is used to evaluate the generalization capability of the neural network model; if the generalization capability reaches a preset threshold, training ends, otherwise the model is retrained;
the state identification module is used for sending the real-time sound data subjected to pretreatment and feature extraction into a trained neural network model, and identifying the running states of the machine equipment and key parts thereof through the neural network model;
the recognition result module outputs and displays the result of the state recognition module on one hand, judges the type of the running state on the other hand, and sends information to the alarm module when the running state is 'failed' or the ageing degree is 'dangerous';
in the artificial experience module, professional equipment maintenance personnel or machine fault experts comprehensively analyze the recognition result and judge whether the recognition result of the neural network model agrees with their own experience; after this comprehensive analysis the result is fed back to the manual marking module, so that the sound sample library keeps growing, that is, the training data of the neural network model increase, further improving the accuracy with which the neural network model recognizes the running state of the machine equipment and its key parts;
the state display module is responsible for displaying the running states identified by the identification result module, including the running states of all the monitored machine equipment and key parts thereof and corresponding positions thereof, and highlighting the machine which has failed or is in high risk and the key parts thereof;
the alarm module is responsible for receiving the alarm signal sent by the identification result module and sending out an alarm.
CN201911222026.XA 2019-12-03 2019-12-03 Machine equipment state monitoring system based on deep learning and voice recognition Active CN110867196B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911222026.XA CN110867196B (en) 2019-12-03 2019-12-03 Machine equipment state monitoring system based on deep learning and voice recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911222026.XA CN110867196B (en) 2019-12-03 2019-12-03 Machine equipment state monitoring system based on deep learning and voice recognition

Publications (2)

Publication Number Publication Date
CN110867196A CN110867196A (en) 2020-03-06
CN110867196B true CN110867196B (en) 2024-04-05

Family

ID=69658389

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911222026.XA Active CN110867196B (en) 2019-12-03 2019-12-03 Machine equipment state monitoring system based on deep learning and voice recognition

Country Status (1)

Country Link
CN (1) CN110867196B (en)

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111256814A (en) * 2020-03-13 2020-06-09 天津商业大学 Tower monitoring system and method
CN111413925A (en) * 2020-03-20 2020-07-14 华中科技大学 Machine tool fault prediction method based on sound signals
CN111398965A (en) * 2020-04-09 2020-07-10 电子科技大学 Danger signal monitoring method and system based on intelligent wearable device and wearable device
CN111524523A (en) * 2020-04-26 2020-08-11 中南民族大学 Instrument and equipment state detection system and method based on voiceprint recognition technology
CN111581425A (en) * 2020-04-28 2020-08-25 上海鼎经自动化科技股份有限公司 Equipment sound classification method based on deep learning
CN112733588A (en) * 2020-08-13 2021-04-30 精英数智科技股份有限公司 Machine running state detection method and device and electronic equipment
CN112700793A (en) * 2020-12-24 2021-04-23 国网福建省电力有限公司 Method and system for identifying fault collision of water turbine
CN114764538B (en) * 2020-12-30 2024-04-26 河北云酷科技有限公司 Equipment sound signal mode identification method
CN113178032A (en) * 2021-03-03 2021-07-27 北京迈格威科技有限公司 Video processing method, system and storage medium
CN113129918B (en) * 2021-04-15 2022-05-03 浙江大学 Voice dereverberation method combining beam forming and deep complex U-Net network
CN113298134B (en) * 2021-05-20 2023-07-28 华中科技大学 System and method for remotely and non-contact health monitoring of fan blade based on BPNN
CN113593605B (en) * 2021-07-09 2024-01-26 武汉工程大学 Industrial audio fault monitoring system and method based on deep neural network
CN113657628A (en) * 2021-08-20 2021-11-16 武汉霖汐科技有限公司 Industrial equipment monitoring method and system, electronic equipment and storage medium
CN113852612B (en) * 2021-09-15 2023-06-27 桂林理工大学 Network intrusion detection method based on random forest
CN113988202B (en) * 2021-11-04 2022-08-02 季华实验室 Mechanical arm abnormal vibration detection method based on deep learning
CN114147740B (en) * 2021-12-09 2024-08-09 中科计算技术西部研究院 Robot inspection planning system and method based on environment state
CN114271683A (en) * 2021-12-29 2022-04-05 南京美基森信息技术有限公司 Water dispenser with water level detection function and water level detection method
CN114371649A (en) * 2022-01-10 2022-04-19 江苏大学 Spray flow regulation and control system and method based on convolutional neural network
CN114543983A (en) * 2022-03-29 2022-05-27 阿里云计算有限公司 Vibration signal identification method and device
CN115358369A (en) * 2022-08-15 2022-11-18 云南电网有限责任公司玉溪供电局 Monitoring alarm event identification method based on convolutional neural network model
CN115512688A (en) * 2022-09-02 2022-12-23 广东美云智数科技有限公司 Abnormal sound detection method and device
CN116189349B (en) * 2023-04-28 2023-07-18 深圳黑蚂蚁环保科技有限公司 Remote fault monitoring method and system for self-service printer
CN116434502B (en) * 2023-05-25 2024-06-18 中南大学 Automobile alarm device containing sound absorption piezoelectric aerogel and automobile alarm method
CN116597587A (en) * 2023-05-31 2023-08-15 河南龙宇能源股份有限公司 Underground operation equipment high-risk area invasion early warning method based on audio-visual cooperative recognition
CN116665711B (en) * 2023-07-26 2024-01-12 中国南方电网有限责任公司超高压输电公司广州局 Gas-insulated switchgear on-line monitoring method and device and computer equipment
CN117889943B (en) * 2024-03-13 2024-05-14 浙江维度仪表有限公司 Gas ultrasonic flowmeter inspection method and system based on machine learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180130294A (en) * 2017-05-29 2018-12-07 부경대학교 산학협력단 Method for diagnosing machine fault based on sound
CN109357749A (en) * 2018-09-04 2019-02-19 南京理工大学 A kind of power equipment audio signal analysis method based on DNN algorithm
CN109767785A (en) * 2019-03-06 2019-05-17 河北工业大学 Ambient noise method for identifying and classifying based on convolutional neural networks
CN110335617A (en) * 2019-05-24 2019-10-15 国网新疆电力有限公司乌鲁木齐供电公司 A kind of noise analysis method in substation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180130294A (en) * 2017-05-29 2018-12-07 부경대학교 산학협력단 Method for diagnosing machine fault based on sound
CN109357749A (en) * 2018-09-04 2019-02-19 南京理工大学 A kind of power equipment audio signal analysis method based on DNN algorithm
CN109767785A (en) * 2019-03-06 2019-05-17 河北工业大学 Ambient noise method for identifying and classifying based on convolutional neural networks
CN110335617A (en) * 2019-05-24 2019-10-15 国网新疆电力有限公司乌鲁木齐供电公司 A kind of noise analysis method in substation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邵思羽. Doctoral dissertation. Southeast University, 2019, full text. *

Also Published As

Publication number Publication date
CN110867196A (en) 2020-03-06

Similar Documents

Publication Publication Date Title
CN110867196B (en) Machine equipment state monitoring system based on deep learning and voice recognition
CN110940539B (en) Machine equipment fault diagnosis method based on artificial experience and voice recognition
CN112660745B (en) Intelligent diagnosis method and system for carrier roller fault and readable storage medium
WO2019080367A1 (en) Method for evaluating health status of mechanical device
CN112504673B (en) Carrier roller fault diagnosis method, system and storage medium based on machine learning
CN113657221B (en) Power plant equipment state monitoring method based on intelligent sensing technology
CN112179691B (en) Mechanical equipment running state abnormity detection system and method based on counterstudy strategy
CN116070163B (en) Indoor harmful gas concentration anomaly monitoring data processing method
CN113566948A (en) Fault audio recognition and diagnosis method for robot coal pulverizer
CN110375983B (en) Valve fault real-time diagnosis system and method based on time series analysis
WO2019043600A1 (en) Remaining useful life estimator
CN111508517A (en) Intelligent micro-motor product control method based on noise characteristics
CN113345399A (en) Method for monitoring sound of machine equipment in strong noise environment
CN115424635B (en) Cement plant equipment fault diagnosis method based on sound characteristics
CN116517860A (en) Ventilator fault early warning system based on data analysis
CN107844067A (en) A kind of gate of hydropower station on-line condition monitoring control method and monitoring system
CN113757093A (en) Fault diagnosis method for flash steam compressor unit
CN111752259A (en) Fault identification method and device for gas turbine sensor signal
CN117437933A (en) Feature cluster combination generation type learning-based unsupervised detection method for fault of voiceprint signal of transformer
CN114021620B (en) BP neural network feature extraction-based electric submersible pump fault diagnosis method
CN118067727A (en) Rotor welding defect assessment method
CN110231165B (en) Mechanical equipment fault diagnosis method based on expectation difference constraint confidence network
CN105354830B (en) Controller's fatigue detection method, apparatus and system based on multiple regression model
CN115392109A (en) LSTM multivariable time series anomaly detection method based on generative model
CN112660746A (en) Roller fault diagnosis method and system based on big data technology and storage medium

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant