CN116453526A - Multi-working-condition abnormality monitoring method and device for hydroelectric generating set based on voice recognition


Info

Publication number: CN116453526A
Authority: CN (China)
Prior art keywords: data, feature extraction, time, extraction network, sound
Legal status: Granted
Application number: CN202310451081.6A
Other languages: Chinese (zh)
Other versions: CN116453526B (en)
Inventors: 刘畅, 王宇庭, 黄忠初, 沈阳武, 任家朋, 何立夫, 邝家月, 张宸
Current Assignee: China Three Gorges Corp
Original Assignee: China Three Gorges Corp
Application filed by China Three Gorges Corp
Priority to CN202310451081.6A
Publication of CN116453526A
Application granted
Publication of CN116453526B
Legal status: Active

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 17/00 Speaker identification or verification
    • G10L 17/04 Training, enrolment or model building
    • G10L 17/18 Artificial neural networks; Connectionist approaches
    • G10L 17/26 Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L 25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F03 MACHINES OR ENGINES FOR LIQUIDS; WIND, SPRING, OR WEIGHT MOTORS; PRODUCING MECHANICAL POWER OR A REACTIVE PROPULSIVE THRUST, NOT OTHERWISE PROVIDED FOR
    • F03B MACHINES OR ENGINES FOR LIQUIDS
    • F03B 11/00 Parts or details not provided for in, or of interest apart from, the preceding groups, e.g. wear-protection couplings, between turbine and generator
    • F03B 11/008 Measuring or testing arrangements
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02E REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E 10/00 Energy generation through renewable energy sources
    • Y02E 10/20 Hydro energy

Abstract

The invention provides a method and a device for multi-working-condition abnormality monitoring of a hydroelectric generating set based on voice recognition. The method comprises the following steps: collecting sound data of the water turbine under different working conditions; carrying out mode extraction on the sound data under the different working conditions to obtain mode sequences under the different working conditions and establish a database; training the constructed time-frequency-domain-fused self-coding feature extraction network with the sound data in the database to obtain a trained self-coding feature extraction network; constructing a twin feature extraction network based on the trunk of the trained self-coding feature extraction network; collecting real-time sound signal data of the current working condition and the mode sequence of the corresponding working condition in the database, and inputting both into the twin feature extraction network to obtain the similarity of the two input values; and comparing the similarity with a preset threshold value to judge whether the real-time sound is abnormal. The running state of the water turbine unit is monitored in real time using the acoustic signal; the signal is easy to acquire, no manual work is needed, the cost is low, and the accuracy is high.

Description

Multi-working-condition abnormality monitoring method and device for hydroelectric generating set based on voice recognition
Technical Field
The invention relates to the technical field of hydroelectric generating sets, in particular to a method and a device for monitoring multiple working condition anomalies of a hydroelectric generating set based on voice recognition.
Background
Pumped storage is a relatively mature energy storage mode with a development history of more than a hundred years. It is the most technically mature and economical energy storage mode with the greatest potential for large-scale development, and serves as a green, clean and flexible regulating power supply for the power system. Pumped storage provides peak regulation, frequency regulation, phase modulation, energy storage, system reserve, black start and other functions; it has technical and economic advantages such as large capacity, multiple working conditions, fast response, high reliability and good economy; it plays a fundamental role in ensuring the security of large power grids, promoting the consumption of new energy and improving the performance of the whole system, and is an important component of the energy Internet. Accelerating the development of pumped storage is an urgent requirement for building a new type of power system, an important support for ensuring the safe and stable operation of the power system, and an important guarantee for the large-scale development of renewable energy.
One of the core components of a pumped storage power station is the hydraulic generator. Unlike the turbine of a conventional hydroelectric plant, which rotates in only one direction, the turbine of a pumped storage power station rotates in both directions, with opposite directions of rotation in the generating and pumping (energy storage) states, and it starts and stops extremely frequently, generally at least twice a day; the Dinorwig pumped storage power station in the UK, for example, is designed for 40 start-stops per day. Such excessively frequent state transitions cause great wear on the hydraulic turbine and at the same time raise the requirements on its condition monitoring. Current information-based monitoring technology mainly analyses signals such as vibration, force, temperature, radiation and electromagnetism; in addition, with the improvement of sound acquisition and processing capabilities, some scholars have proposed using sound to monitor the operating conditions and abnormalities of the hydraulic generator. Sound is an important index for analysing the running state of equipment, and the sound signal contains a large amount of vibration information. During normal operation, the sounds generated by the relative motion of the machine body, the fasteners and the various parts differ from state to state, and the sound generated by the hydraulic generator also changes when its operating state changes.
The existing approach to judging the operating condition and abnormalities of a hydraulic generator by sound relies mainly on experienced workers, so real-time online detection cannot be achieved, which runs contrary to the development requirements for power equipment inspection. Manual monitoring of the operating conditions and abnormalities of the hydraulic generator depends excessively on working experience and subjective judgment, so detection accuracy cannot be guaranteed; moreover, the working environment for the staff is relatively harsh, and the economic and time costs are high.
Disclosure of Invention
In view of the above, the invention provides a method and a device for multi-working-condition abnormality monitoring of a hydroelectric generating set based on voice recognition, which solve the problems in the prior art that monitoring of the operating conditions and abnormalities of the hydraulic generator depends on staff experience, leading to high economic and time costs and low accuracy.
In order to achieve the above purpose, the present invention provides the following technical solutions:
in a first aspect, an embodiment of the present invention provides a method for monitoring multiple working conditions abnormality of a hydro-generator set based on voice recognition, including:
collecting sound data of the water turbine under different working conditions;
carrying out mode extraction on the sound data under different working conditions by a clustering method to obtain mode sequences under different working conditions, and establishing a database according to the different mode sequences corresponding to the different working conditions;
training the constructed time-frequency-domain-fused self-coding feature extraction network with the sound data in the database to obtain a trained self-coding feature extraction network;
constructing a twin feature extraction network based on the trunk of the trained self-coding feature extraction network;
collecting real-time sound signal data of the current working condition as a first input value, extracting a mode sequence under the corresponding working condition in a database as a second input value, and inputting the first input value and the second input value into a twin feature extraction network together to obtain the similarity of the two input values, wherein the sampling period of the real-time sound signal data is the same as the length of the corresponding mode sequence in the database;
and comparing the similarity with a preset threshold value, and judging whether the real-time sound is abnormal or not.
According to the multi-working-condition anomaly monitoring method for the hydroelectric generating set based on voice recognition, provided by the embodiment of the invention, based on an artificial intelligence related technology, the running condition of the hydroelectric generating set is monitored in real time by utilizing the voice signal, the normal running of the set is not interfered, the voice acquisition equipment is flexible and convenient to install, the signal is easy to acquire, no manual work is needed, the cost is low, and the accuracy is higher.
Optionally, the different working conditions include steady-state working conditions, transient-state working conditions and transition-state working conditions, and collecting the sound data of the water turbine under the different working conditions comprises the following steps:
under a steady-state or transient-state working condition, collecting a preset number of pieces of sequence data at a preset sampling frequency, the time span of each piece of data being a preset duration, to obtain a sound data set for each steady-state or transient-state working condition;
and under a transition-state working condition, collecting a preset number of pieces of sequence data at a preset sampling frequency, the time span of each piece of data being the duration of the transition state, the duration of the transition state being longer than the preset duration, to obtain a sound data set for each transition-state working condition.
According to the embodiment of the invention, a large amount of sound data are collected in different modes according to different working conditions, so that the collected sound data sets under different working conditions are more representative.
Optionally, performing mode extraction on the sound data under the different working conditions includes:
under steady state or transient state working conditions, calculating weighted average among sound data to obtain a mode sequence of the sound data under the current working conditions;
and under a transition-state working condition, aligning the sound data by using a normalized cross-correlation matching method in a traversing manner, and calculating the weighted average of all the data after alignment to obtain the mode sequence of the sound data under the current working condition.
Optionally, the process of aligning the sound data by using a normalized cross-correlation matching method and a traversal method includes:
Calculating the similarity of the first two groups of data in the sound data by using NCC;
and keeping one group of data unchanged, shifting and traversing the other group of data, and recalculating the similarity, wherein the displacement ranges from 0 to half of the data length, and the two groups of data are considered aligned when the similarity is at its maximum.
According to the embodiment of the invention, the NCC method is utilized to match the time sequence information of the sound under the same working condition, the time sequence alignment is carried out according to the matching result, and then the mode of the sound under the corresponding working condition is obtained through weighted average, so that the obtained mode is more representative.
Optionally, inputting the first input value and the second input value together into the twin feature extraction network comprises:
under steady state or transient state working condition, inputting the collected real-time sound signal data and the data of the mode sequence under the corresponding working condition in the database into a twin feature extraction network;
under a transition-state working condition, calculating the short-time average energy and the short-time average amplitude of the sound signal collected in the current sampling window, locating the position of the current sampling window in the whole transition process, and inputting the corresponding segment of the mode sequence under the corresponding working condition in the database, together with the current window, into the twin feature extraction network.
According to the embodiment of the invention, the signals to be matched are subjected to interval screening by utilizing the short-time average energy and the short-time average amplitude of the acoustic signals, and more accurate position information is obtained through NCC, so that the subsequent data comparison is facilitated.
Optionally, training the constructed time-frequency-domain-fused self-coding feature extraction network with the sound data in the database to obtain the trained self-coding feature extraction network comprises the following steps:
extracting time domain feature vectors and frequency domain feature vectors of sound signals in a database;
splicing the time domain feature vector and the frequency domain feature vector to obtain an input vector;
inputting the input vector into an encoder, and outputting a fused time-frequency domain signal;
inputting the fused time-frequency domain signals into a decoder, and outputting decoding signals;
and calculating a loss function by using the input vector and the decoded signal, and training the self-coding feature extraction network fused with the time domain and the frequency domain to obtain a trained self-coding feature extraction network.
According to the multi-working-condition anomaly monitoring method for the hydroelectric generating set based on voice recognition, disclosed by the embodiment of the invention, the self-coding feature extraction network is trained after the time domain features and the frequency domain features of the voice data are fused, so that the self-coding feature extraction network is more practical, and the collected voice data can be processed and judged more accurately.
Optionally, the process of constructing the twin feature extraction network is:
freezing trained parameters in the self-coding feature extraction network, and training the full-connection layer;
and unfreezing the trained parameters, and training the self-coding feature extraction network as a whole to obtain the twin feature extraction network.
According to the embodiment of the invention, the self-coding network is utilized to learn the feature extraction structure, the parameters of the feature extraction part are fixed after the beneficial parameters are obtained, and the twin network is trained in two stages according to a method of firstly locally and then wholly, so that the twin feature extraction network can obtain the parameters which are more beneficial to judging the abnormality of the water turbine unit, and the learning effect is better.
In a second aspect, an embodiment of the present invention provides a multiple-working-condition anomaly monitoring device for a hydro-generator set based on voice recognition, where the device includes:
the acquisition module is used for acquiring sound data of the water turbine under different working conditions;
the database building module is used for carrying out mode extraction on the sound data under different working conditions through a clustering method to obtain mode sequences under different working conditions, and building a database according to the different mode sequences corresponding to the different working conditions;
the time-frequency domain fusion module is used for training the constructed time-frequency-domain-fused self-coding feature extraction network with the sound data in the database to obtain a trained self-coding feature extraction network;
The network construction module is used for constructing a twin feature extraction network based on the trunk of the trained self-coding feature extraction network;
the computing module is used for collecting real-time sound signal data of the current working condition as a first input value, extracting a mode sequence under the corresponding working condition in the database as a second input value, and inputting the first input value and the second input value into the twin feature extraction network together to obtain the similarity of the two input values, wherein the sampling period of the real-time sound signal data is the same as the length of the corresponding mode sequence in the database;
and the judging module is used for comparing the similarity with a preset threshold value and judging whether the real-time sound is abnormal or not.
The multi-working-condition abnormality monitoring device for the hydroelectric generating set provided by the embodiment of the invention is based on the artificial intelligence correlation technology, utilizes the acoustic signals to monitor the running condition of the hydroelectric generating set in real time, does not interfere the normal running of the set, has flexible and convenient installation of the sound collecting equipment, is easy to obtain the signals, does not need to use manpower, and has low cost and higher accuracy.
In a third aspect, an embodiment of the present invention provides a computer apparatus, including: the system comprises a memory and a processor, wherein the memory and the processor are in communication connection, the memory stores computer instructions, and the processor executes the computer instructions, thereby executing the method in the first aspect or any optional implementation manner of the first aspect.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium storing computer instructions for causing a computer to perform the method of the first aspect, or any one of the alternative embodiments of the first aspect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method for multi-working-condition abnormality monitoring of a hydroelectric generating set based on voice recognition according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a hydraulic turbine set according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a time-frequency domain fused self-coding feature extraction network according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the structure of the Seq2Seq used for extracting time-sequence features according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a twin feature extraction network in a multi-condition anomaly monitoring method for a hydro-generator set based on voice recognition according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a multi-condition anomaly monitoring device for a hydro-generator set based on voice recognition according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be either fixedly connected, detachably connected, or integrally connected, for example; can be mechanically or electrically connected; the two components can be directly connected or indirectly connected through an intermediate medium, or can be communicated inside the two components, or can be connected wirelessly or in a wired way. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
The technical features of the different embodiments of the invention described below may be combined with one another as long as they do not conflict with one another.
Example 1
The embodiment of the invention provides a multi-working-condition anomaly monitoring method for a hydroelectric generating set based on voice recognition, which comprises the following steps of:
step S1: and collecting sound data of the water turbine under different working conditions.
In one embodiment, the operating conditions of the water turbine include: five steady-state working conditions of the pumped storage station turbine, namely power generation, power generation phase modulation, water pumping, water pumping phase modulation and shutdown; and three transient-state working conditions (states in which the water turbine is neither generating nor pumping), namely standby, idle running and no-load. In total there are more than ten different transition states between the steady states and the transient states, as shown in fig. 2. In both the steady states and the transient states the water turbine is operating in a relatively stable state; the sound it emits is relatively stable, and its time-series and frequency-domain waveforms show relatively fixed patterns. The periodicity of the acoustic signal is not considered in this case, and the acoustic signal is regarded as unchanged. However, the acoustic signals generated in different steady states still differ, and are related to the running state, load and power of the unit. A transition state is affected by instantaneous changes in the rotating speed, load and the like of the water turbine, so the acoustic signal also changes, and the time taken by different transition states differs: the shortest transition, such as that between the idle-running and no-load states, can be completed within 10 seconds, while the longest transitions, such as from standby to pumping phase modulation or from pumping phase modulation back to standby, take more than 200 seconds. The embodiment of the invention can judge whether an abnormality occurs by monitoring the acoustic signals generated in the steady and transient states, and can also rapidly detect and feed back abnormalities in the transition states during conversion between the steady and transient states.
In practical applications, an acoustic sensor (such as a sound pickup) can be used to collect the sound of the water turbine in different states. The hydraulic generator consists of a stator, a rotor, a frame, guide bearings, brakes and so on; to prevent vibration of the water turbine, a top cover is generally installed above it to provide fixation. In the embodiment of the invention, the acoustic sensor is mounted on the top cover, close to the water wheel, so that the sound data can be collected more clearly. The sampling frequency range of the acoustic sensor is 20 Hz to 20 kHz and its sensitivity is 39 dB, which basically covers the sound frequencies generated by the water turbine; its signal-to-noise ratio is 60 dB, which greatly reduces the influence of noise. The way sound data are collected differs between working conditions:
Under a steady-state or transient-state working condition, a preset number of pieces of sequence data are collected at a preset sampling frequency, the time span of each piece being a preset duration, to obtain a sound data set for each steady-state or transient-state working condition. By way of example, 100 pieces of data are collected under each working condition; the data are in sequence form, the sampling frequency is set to 400 Hz, and the time span of each piece is 3 s, so each piece consists of the sound data of 1200 sampling points and each working condition forms a sound data set of dimension 100 x 1200.
Under a transition-state working condition, a preset number of pieces of sequence data are collected at a preset sampling frequency, the time span of each piece being the duration of the transition state, which is longer than the preset duration, so that a sound data set is obtained for each transition-state working condition. In the embodiment of the invention, a large amount of sound data is collected in different ways for different working conditions, so that the collected sound data sets under the different working conditions are more representative. The transition-state working conditions also include emergency states (for example the "black start" state: after the whole system has shut down because of a fault and is completely "black", generating units with self-starting capability are started, without assistance from other networks, to drive units without self-starting capability, gradually expanding the recovery range of the system until the whole system is restored). The sampling frequency is kept at 400 Hz, all the data of each operation are saved as one sequence, and 100 pieces of data are collected in total; the sequence length is determined by the duration of the transition state. For example, if the duration of the transition between the idle-running and no-load states is 10 s, each data sequence consists of the sound information of 4000 sampling points, forming a sound data set of dimension 100 x 4000.
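For illustration only, the following is a minimal sketch of how such sound data sets might be organized in memory; it assumes NumPy, and everything other than the figures stated above (400 Hz, 3 s windows, 100 records per condition) is a hypothetical naming choice rather than part of the original disclosure.

```python
import numpy as np

FS = 400            # sampling frequency in Hz, as stated above
STEADY_SPAN_S = 3   # time span of each steady/transient-state record, in seconds

def build_steady_state_dataset(records):
    """Stack e.g. 100 records of 3 s each into a 100 x 1200 array.

    `records` is assumed to be an iterable of 1-D arrays of raw
    microphone samples, one array per acquisition."""
    n_points = FS * STEADY_SPAN_S                       # 1200 sampling points per record
    data = np.stack([np.asarray(r)[:n_points] for r in records])
    assert data.shape == (len(records), n_points)
    return data

def build_transition_dataset(records, duration_s):
    """Transition-state records keep the full transition duration,
    e.g. 10 s -> 4000 points, giving a 100 x 4000 array."""
    n_points = FS * duration_s
    return np.stack([np.asarray(r)[:n_points] for r in records])
```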
Step S2: and carrying out mode extraction on the sound data under different working conditions by using a clustering method to obtain mode sequences under different working conditions, and establishing a database according to the different mode sequences corresponding to the different working conditions.
Specifically, in an embodiment, performing pattern extraction on sound data under different working conditions includes:
and under the steady state or transient state working condition, calculating the weighted average of the sound data to obtain the mode sequence of the sound data under the current working condition. Because the running mode of the water turbine is unchanged under the steady state and the transient state, the state is relatively fixed, the collected sound signals are more prone to be a group of signals with smaller variation, the theoretical period is smaller, and the sound signals are only related to the rotating speed and can be ignored, so that the sound mode under the current working condition can be obtained by using weighted averaging among sound data.
The transition state, however, is not a stable process; the start of data acquisition is difficult to determine exactly, so the data acquired at different times are misaligned in time. The sound data are therefore aligned by means of NCC (normalized cross-correlation matching) in a traversing manner, and after alignment the weighted average of all the data is calculated to obtain the mode sequence of the sound data under the current working condition. The method specifically comprises the following steps: calculating the similarity of the first two groups of data in the sound data by using NCC; keeping one group of data unchanged, shifting and traversing the other group, and recalculating the similarity, the displacement ranging from 0 to half of the data length, the two groups of data being regarded as aligned when the similarity is at its maximum. For example, according to step S1, a data set with a sample number of 100 is collected under each transition-state working condition. The first two groups of data f(x, y) and t(x, y) in the data set are taken and their similarity is calculated with NCC: NCC first calculates the mean and variance of the two distributions and then the statistics of the differences between the current values and the means, so as to determine the similarity of the two sequences; in the NCC formula, n represents the data dimension of each sample, σ represents the sample variance, and μ represents the sample mean.
Then the displacement k(x, y) is made to range from 0 to half of the data length; the NCC is recalculated on the overlapping part of f(x, y) and t(x, y) and its value recorded; k is traversed in turn, and the value of k corresponding to the maximum NCC is taken. At this point t(x, y) shifted by k is aligned with the starting point of f(x, y). The first 10 groups of data are taken and aligned with f(x, y) in turn, and a weighted average of the 10 aligned groups gives the mode sequence M of the first 10 groups. All remaining data in the sound data set of the current working condition are aligned with the sequence M by the same method, and after all data are aligned an overall weighted average gives the mode sequence under the current working condition.
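As a hedged illustration of the alignment procedure described above, a NumPy sketch follows. It uses the standard zero-mean, variance-normalized cross-correlation and the shift-and-traverse alignment; the function names, the uniform weighting of the average and the exact normalization are assumptions rather than the patent's own formulation.

```python
import numpy as np

def ncc(f, t):
    """Normalized cross-correlation of two equal-length 1-D sequences:
    mean-centred, variance-normalized inner product (assumed standard form)."""
    f = np.asarray(f, dtype=float)
    t = np.asarray(t, dtype=float)
    n = f.size
    return np.sum((f - f.mean()) * (t - t.mean())) / (n * f.std() * t.std())

def align(reference, candidate):
    """Shift `candidate` by k = 0 .. len/2, recomputing NCC on the
    overlapping part, and return the shift with the highest NCC."""
    best_k, best_score = 0, -np.inf
    for k in range(len(candidate) // 2 + 1):
        score = ncc(reference[: len(reference) - k], candidate[k:])
        if score > best_score:
            best_k, best_score = k, score
    return best_k, best_score

def pattern_sequence(records):
    """Align every record to the first one and take a (uniform) weighted
    average over the overlapping region as the mode/pattern sequence."""
    ref = np.asarray(records[0], dtype=float)
    shifts = [align(ref, np.asarray(r, dtype=float))[0] for r in records]
    length = min(len(r) - k for r, k in zip(records, shifts))
    aligned = np.stack([np.asarray(r, dtype=float)[k:k + length]
                        for r, k in zip(records, shifts)])
    return aligned.mean(axis=0)
```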
According to the embodiment of the invention, the NCC method is utilized to match the time sequence information of the sound under the same working condition, the time sequence alignment is carried out according to the matching result, and then the mode of the sound under the corresponding working condition is obtained through weighted average, so that the obtained mode is more representative.
Illustratively, the data format in the database is {id: data}, where id is an identifier of a particular working condition of a particular unit and data is the mode sequence of the sound data. As the operating time of the water turbine increases, its various structures wear and age, and the sound signals change gradually; the collected data are therefore supplemented at certain intervals so that the mode sequences of the various working conditions in the database are updated.
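Purely as an illustration of the {id: data} format and the periodic updating described above, the following sketch shows one way such a pattern database might be stored and refreshed; the key scheme and the blending factor are assumptions and are not part of the original disclosure.

```python
import numpy as np

# keys identify a unit and a working condition, e.g. "unit03_pumping" (hypothetical);
# values are the 1-D pattern (mode) sequences extracted above
pattern_db: dict[str, np.ndarray] = {}

def register_pattern(unit_id: str, condition: str, pattern: np.ndarray) -> None:
    pattern_db[f"{unit_id}_{condition}"] = pattern

def refresh_pattern(unit_id: str, condition: str, new_pattern: np.ndarray,
                    weight: float = 0.2) -> None:
    """Blend a freshly extracted pattern into the stored one so that the
    database tracks slow drift caused by wear and ageing (weight assumed)."""
    key = f"{unit_id}_{condition}"
    old = pattern_db.get(key)
    pattern_db[key] = new_pattern if old is None else (1 - weight) * old + weight * new_pattern
```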
Step S3: training the built time-frequency domain fused self-coding feature extraction network through sound data in a database to obtain a trained self-coding feature extraction network.
Specifically, in an embodiment, the sound signal contains both time-domain and frequency-domain information, and the frequency-domain information is the more important when characterizing the sound signal. In the embodiment of the invention, the time-domain and frequency-domain information of the sound data in the database therefore needs to be fused by the time-frequency-domain-fused self-coding feature network. As shown in fig. 3, the process of training the constructed time-frequency-domain-fused self-coding feature extraction network with the sound data in the database comprises the following steps:
step S31: extracting time domain feature vectors and frequency domain feature vectors of sound signals in a database; illustratively, one aspect utilizes a recurrent neural network to process time domain information: input X is sound time series data with dimension [1, 1200 ]. The timing of the sound signal mainly comprises amplitude, which can be understood as the loudness of the sound. And because the operation of the water turbine is a continuous action, the possibility of pulse-like mutation in time sequence is low, and the invention uses the coding part of the Seq2Seq structure as an extraction network of time sequence characteristics. The structure of the Seq2Seq is shown in fig. 4, and the Seq2Seq combines the current input and the input at the moment before the current moment through the hidden node h of the middle layer, so as to obtain the output y. The dimension of the feature F1 of the time domain feature extraction branch output is [1, 128]. The time series signal of the sound only represents part of the characteristics of the sound, and the spectrogram of the sound contains more information. On the other hand, the convolution network is utilized to process the frequency domain information: mel-frequency coefficients are an effective means of processing sound frequency domain information. The mel-frequency cepstral coefficient is an algorithm invented by simulating the human auditory system, and is used for decorrelating information of sound in a frequency domain, so that the problem of masking sounds with different frequencies is solved, and meanwhile, a compression algorithm is added, so that partial redundant noise information is filtered. The MFCC (Mel Frequency Cepstrum Coefficient, mel frequency cepstral coefficient) is processed as follows:
First, a Fourier transform is performed on the time-series signal x to obtain the frequency-domain signal, X(m) = Σ_{n=0}^{N-1} x(n)·e^{-j2πnm/N}, where N is set to 512, m denotes the frequency index and n the index of the sampling points.
A bank of triangular filters (the mel filter bank used for the mel-frequency cepstral coefficients) is then set up, and a filtering operation is performed on the spectrum signal.
Next the energy spectrum of the filtered spectrum is calculated, where S(i, m) denotes the energy of the sound information in the m-th frequency band, M0 is the number of triangular filters, and i is the index of the i-th frame.
The filtered data are then processed with a discrete cosine transform (DCT) to output the features. In the DCT, Y denotes the dimension of the features output after the transform and is set to 15 in the embodiment of the invention, and j is the corresponding index. First-order and second-order differences are appended to the DCT output, yielding an output of dimension [1, 48].
Finally, a convolution operation with a one-dimensional convolution kernel is applied to the MFCC output features; the number of layers is set to one, and the dimension of the output F2 is [1, 64].
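The following sketch illustrates one plausible implementation of this frequency-domain branch; it assumes the librosa and PyTorch libraries, and the hop length, number of mel bands, convolution kernel size and time pooling are assumptions, so the intermediate dimensions only approximate the [1, 48] and [1, 64] figures stated above.

```python
import librosa
import numpy as np
import torch
import torch.nn as nn

def frequency_branch(x: np.ndarray, sr: int = 400) -> torch.Tensor:
    """Frequency-domain branch: MFCC (plus 1st/2nd order deltas) followed by a
    single 1-D convolution, returning a 64-dimensional feature F2."""
    mfcc = librosa.feature.mfcc(y=x.astype(float), sr=sr, n_mfcc=15,
                                n_fft=512, hop_length=64, n_mels=40)
    d1 = librosa.feature.delta(mfcc, order=1)
    d2 = librosa.feature.delta(mfcc, order=2)
    feats = np.concatenate([mfcc, d1, d2], axis=0)                   # (45, T)
    feats_t = torch.tensor(feats, dtype=torch.float32).unsqueeze(0)  # (1, 45, T)
    conv = nn.Conv1d(in_channels=feats.shape[0], out_channels=64, kernel_size=3)
    out = torch.relu(conv(feats_t))                                  # (1, 64, T-2)
    return out.mean(dim=-1)                                          # (1, 64), pooled over time
```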
Step S32: splicing the time-domain feature vector and the frequency-domain feature vector to obtain the input vector. Illustratively, the two feature vectors F1 and F2 are concatenated by a Concat operation to obtain an input vector F of dimension [1, 192].
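A sketch of the time-domain branch and of this concatenation step follows; it assumes PyTorch and uses a GRU as the encoding part of the Seq2Seq structure, and the chunking of the 1200-point input into time steps is an assumption rather than the patent's exact design.

```python
import torch
import torch.nn as nn

class TimeDomainEncoder(nn.Module):
    """Encoder part of a Seq2Seq structure: a GRU whose final hidden
    state serves as the 128-dimensional time-domain feature F1."""
    def __init__(self, hidden: int = 128, chunk: int = 8):
        super().__init__()
        self.chunk = chunk                       # samples fed per time step (assumed)
        self.gru = nn.GRU(input_size=chunk, hidden_size=hidden, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (1, 1200)
        steps = x.view(1, -1, self.chunk)                  # (1, 150, 8)
        _, h = self.gru(steps)                             # h: (1, 1, 128)
        return h.squeeze(0)                                # (1, 128)

def fuse(f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
    """Concat of F1 (1, 128) and F2 (1, 64) gives the (1, 192) input vector F."""
    return torch.cat([f1, f2], dim=-1)
```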
Step S33: inputting the input vector F into the encoder and outputting the fused time-frequency-domain signal; the purpose of the encoder is to fuse the time-domain and frequency-domain signals, the fused code being taken as the output of the encoder.
Step S34: inputting the fused time-frequency domain signals into a decoder, and outputting decoding signals; illustratively, the encoder and decoder sections use a symmetrical fully-connected neural network having two hidden layers, 1024 and 256 dimensions, respectively, with the intermediate encoded output being a 32-dimensional vector and the activation function used being a ReLU activation function.
Step S35: calculating a loss function from the input vector and the decoded signal, and training the time-frequency-domain-fused self-coding feature extraction network to obtain the trained self-coding feature extraction network. Illustratively, a twin feature extraction network can directly identify anomalies between two acoustic signals, but the learning capability of a twin network is weak: on the one hand, because the two branches share weights, back-propagation learning is difficult; on the other hand, the output dimension of a twin network is relatively low, and changes in its loss function can hardly support the learning of a complex feature extraction structure. In the back-propagation process, the direct connection from the encoder to the loss is not considered. The loss function uses a cross-entropy loss computed between the input vector and the decoded signal.
The epochs parameter was set to 2, the learning rate to 1e-4, and the network was trained for 150 epochs with the Adam optimizer.
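The PyTorch sketch below illustrates the symmetric fully connected auto-encoder and its training loop under the settings stated above (hidden layers of 1024 and 256, a 32-dimensional code, ReLU activations, Adam with a learning rate of 1e-4, 150 epochs); the data pipeline is assumed, and a mean-squared reconstruction loss is used here as a stand-in for the cross-entropy loss named in the text.

```python
import torch
import torch.nn as nn

class TimeFreqAutoEncoder(nn.Module):
    """Symmetric fully connected auto-encoder: 192 -> 1024 -> 256 -> 32
    (encoder) and 32 -> 256 -> 1024 -> 192 (decoder), ReLU activations."""
    def __init__(self, in_dim: int = 192, code_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 256), nn.ReLU(),
            nn.Linear(256, code_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 256), nn.ReLU(),
            nn.Linear(256, 1024), nn.ReLU(),
            nn.Linear(1024, in_dim),
        )

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(f))

def train_autoencoder(model: TimeFreqAutoEncoder, loader, epochs: int = 150):
    """Reconstruction training with Adam, lr = 1e-4; MSE stands in for the
    cross-entropy reconstruction loss mentioned in the text (an assumption)."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for f in loader:                  # f: (batch, 192) fused feature vectors
            opt.zero_grad()
            loss = loss_fn(model(f), f)
            loss.backward()
            opt.step()
```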
According to the multi-working-condition anomaly monitoring method for the hydroelectric generating set based on voice recognition, disclosed by the embodiment of the invention, the self-coding feature extraction network is trained after the time domain features and the frequency domain features of the voice data are fused, so that the self-coding feature extraction network is more practical, and the collected voice data can be processed and judged more accurately.
Step S4: constructing a twin feature extraction network based on the trunk of the trained self-coding feature extraction network. Illustratively, the Backbone of the time-frequency-domain-fused self-coding feature extraction network is taken as the trunk feature extraction network of the twin network used for anomaly detection, as shown in fig. 5. The twin network has two roles: it evaluates the similarity between the two inputs through a neural network structure, and it outputs the probability that X and Y are similar. Because the output of a twin network is relatively simple, it is generally difficult for it to support a complex feature extraction structure, so directly training a twin network for voice recognition hardly converges.
Specifically, in one embodiment, the process of constructing the twin feature extraction network is:
The trained parameters in the self-coding feature extraction network are frozen and the fully connected layer is trained. Illustratively, after the time-frequency-domain-fused self-coding feature extraction network has been trained, the parameters of the Backbone part can already extract the time-domain and frequency-domain features of the sound; when training the twin network, the already-trained Backbone parameters are therefore frozen first and the subsequent FC (fully connected) layers are trained. The trained parameters are then unfrozen and the network is trained as a whole to obtain the twin feature extraction network; illustratively, the Backbone parameters are unfrozen and the whole network is trained. The FC uses a two-layer fully connected structure, and the similarity comparison of the X and Y inputs uses a Euclidean distance measurement function. The training hyperparameters of the two stages are the same: the epochs parameter is set to 2, the learning rate to 1e-4, and the network is trained for 100 epochs with the Adam optimizer; these values are given by way of example only and not by way of limitation.
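A sketch of the two-stage twin-network training described above follows (PyTorch assumed): the pre-trained encoder is shared by both branches, its parameters are first frozen while the fully connected head is trained, and then everything is unfrozen for joint training. The head sizes and the contrastive-style use of the Euclidean distance are assumptions, not the patent's stated loss.

```python
import torch
import torch.nn as nn

class TwinNetwork(nn.Module):
    """Siamese network: both inputs pass through the shared (pre-trained)
    encoder backbone, a two-layer FC head maps the codes to embeddings,
    and the Euclidean distance between embeddings measures similarity."""
    def __init__(self, backbone: nn.Module, code_dim: int = 32, emb_dim: int = 16):
        super().__init__()
        self.backbone = backbone                 # encoder taken from the auto-encoder
        self.head = nn.Sequential(
            nn.Linear(code_dim, 64), nn.ReLU(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        ex = self.head(self.backbone(x))
        ey = self.head(self.backbone(y))
        return torch.norm(ex - ey, dim=-1)       # Euclidean distance per pair

def train_two_stage(model: TwinNetwork, loader, epochs: int = 100):
    """Stage 1: freeze the backbone, train the FC head.
    Stage 2: unfreeze everything and train the whole network."""
    for stage in (1, 2):
        for p in model.backbone.parameters():
            p.requires_grad = (stage == 2)
        params = [p for p in model.parameters() if p.requires_grad]
        opt = torch.optim.Adam(params, lr=1e-4)
        for _ in range(epochs):
            for x, y, label in loader:           # label: 1 = same pattern, 0 = different
                dist = model(x, y)
                # simple contrastive-style loss (an assumption, not the patent's own)
                loss = (label * dist.pow(2) +
                        (1 - label) * torch.clamp(1.0 - dist, min=0).pow(2)).mean()
                opt.zero_grad()
                loss.backward()
                opt.step()
```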
According to the embodiment of the invention, the self-coding network is utilized to learn the feature extraction structure, the parameters of the feature extraction part are fixed after the beneficial parameters are obtained, and the twin network is trained in two stages according to a method of firstly locally and then wholly, so that the twin feature extraction network can obtain the parameters which are more beneficial to judging the abnormality of the water turbine unit, and the learning effect is better.
Step S5: the method comprises the steps of collecting real-time sound signal data of the current working condition as a first input value, extracting a mode sequence under the corresponding working condition in a database as a second input value, and inputting the first input value and the second input value into a twin feature extraction network together to obtain the similarity of the two input values, wherein the sampling period of the real-time sound signal data is identical to the length of the corresponding mode sequence in the database. Illustratively, real-time sound signal data of the current working condition is collected, and a state parameter, namely an id value is determined by an upper control system, and different algorithms are executed according to the id value.
Specifically, in an embodiment, inputting the first input value and the second input value together into the twin feature extraction network includes:
Under a steady-state or transient-state working condition, the collected real-time sound signal data and the data of the mode sequence under the corresponding working condition in the database are input into the twin feature extraction network. When the water turbine unit is in a steady or transient state, the state does not change and the signal tends to be stable, so data of a suitable window size are selected; the window is the length of the data segment selected for each monitoring sample, and real-time monitoring is achieved as long as the execution time of the subsequent algorithm is shorter than the window time. In a specific embodiment, the monitoring window size is designed to be 3 s, that is, 1200 sampling points form one piece of data to be detected, and the sampling period is the same as the length of the corresponding mode sequence in the database. The currently collected data and the corresponding data from the database are input into the time-frequency-domain-fused self-coding network for feature extraction.
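As an illustration of this steady-state/transient-state monitoring step, the sketch below feeds the current 3 s window and the stored pattern of the same working condition through the twin network and flags an anomaly by a threshold on its output; the feature-extraction helper and the distance-based decision rule are assumptions, not the patent's exact interface.

```python
import numpy as np
import torch

def monitor_steady_window(window: np.ndarray, pattern: np.ndarray,
                          twin: "TwinNetwork", extract_features,
                          threshold: float) -> bool:
    """Return True if the current window is judged abnormal.

    `window` and `pattern` are 1200-sample (3 s at 400 Hz) sequences of the
    same working condition; `extract_features` is assumed to produce the
    fused (1, 192) time-frequency feature vector described above."""
    with torch.no_grad():
        fx = extract_features(window)           # (1, 192)
        fy = extract_features(pattern)          # (1, 192)
        distance = twin(fx, fy).item()
    return distance > threshold                 # larger distance -> abnormal (assumed)
```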
Under a transition-state working condition, the short-time average energy and the short-time average amplitude of the sound signal collected in the current sampling window are calculated, the position of the current sampling window in the whole transition process is located, and the corresponding segment of the mode sequence under the corresponding working condition in the database is fed, together with the current window, into the twin feature extraction network. A smaller window should be selected in this case, which facilitates real-time monitoring as well as the subsequent positioning of the acoustic signal in time. In one embodiment, the monitoring window size is likewise 3 s, i.e. 1200 sampling points. In a transition-state process the acoustic signal changes continuously, and the corresponding mode signal sequence in the database contains the change information of the acoustic signal over the whole course of the transition; before judging whether the signal in the current window is abnormal, the position of the current window within the whole transition process must therefore be located. The positioning is done as follows:
The short-time average energy E(i) and the short-time average amplitude M(i) of the sound signal within the window are calculated, where y denotes the sound time-series signal and L the length of the window.
E(i) and M(i) for each stage of the mode sequence are calculated with the same window size. When NCC is used to compare the correlation of two sequences, it mainly reflects their trend of change: if the values of sequence A are scaled by the same proportion relative to sequence B at every moment, the NCC correlation of the two sequences is still extremely high, so NCC cannot by itself distinguish two sequences that change steadily in this way. The purpose of the present step is to locate, within the mode sequence, the position where the currently collected data should lie. Since the mode sequence represents the overall change process between two states, the absolute intensity of the signal changes while the relative values at adjacent moments still tend toward stable values; therefore, before matching with NCC, the change interval in which the currently collected values are likely to lie is located first, and NCC is then used for further positioning within that interval. E(i) is used to set upper and lower bounds on the signal energy, M(i) is used to set upper and lower bounds on the signal amplitude, and the union of the two bounds is taken as the matching interval for NCC. Within the matching interval, the NCC algorithm performs precise positioning, which gives the start time t of the current sampling moment in the mode sequence. The sequence of one window length starting at t is taken as the matching sequence and input into the time-frequency-domain-fused self-coding network for feature extraction. If either of the following two situations occurs, the current signal is considered not to match the mode sequence and a larger abnormality is likely: the short-time energy magnitude and average energy cannot be found anywhere in the mode sequence; or the NCC matching degree is smaller than a certain threshold, in which case positioning is considered to have failed. Based on practical experience and the settings of the various hyperparameters in the current scenario, the NCC threshold is set to 0.8; if the matching degree is greater than 0.8, positioning is considered successful.
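For the transition-state positioning described above, a NumPy sketch follows: it screens candidate positions by short-time average energy and amplitude, refines the position with NCC, and reports a positioning failure below the 0.8 threshold. The margin used for the energy/amplitude bounds, the hop step and the use of window means are assumptions not stated in the original disclosure.

```python
import numpy as np

def _ncc(a: np.ndarray, b: np.ndarray) -> float:
    a = a.astype(float)
    b = b.astype(float)
    return float(np.sum((a - a.mean()) * (b - b.mean())) / (a.size * a.std() * b.std()))

def short_time_energy(y: np.ndarray) -> float:
    return float(np.mean(y.astype(float) ** 2))      # "average" energy over the window

def short_time_amplitude(y: np.ndarray) -> float:
    return float(np.mean(np.abs(y.astype(float))))   # "average" amplitude over the window

def locate_in_pattern(window: np.ndarray, pattern: np.ndarray,
                      hop: int = 40, margin: float = 0.2, ncc_threshold: float = 0.8):
    """Locate the start index of `window` inside the transition-state `pattern`.

    Step 1: keep candidate start positions whose short-time energy or amplitude
    lies within +/- `margin` of the window's values (union of the two bounds).
    Step 2: refine with NCC; positioning fails if the best NCC is below 0.8."""
    L = len(window)
    e_w, m_w = short_time_energy(window), short_time_amplitude(window)
    candidates = []
    for start in range(0, len(pattern) - L + 1, hop):
        seg = pattern[start:start + L]
        if (abs(short_time_energy(seg) - e_w) <= margin * e_w or
                abs(short_time_amplitude(seg) - m_w) <= margin * m_w):
            candidates.append(start)
    if not candidates:
        return None                                   # no interval found -> likely anomaly
    best_start, best_score = None, -np.inf
    for start in candidates:
        score = _ncc(window, pattern[start:start + L])
        if score > best_score:
            best_start, best_score = start, score
    return best_start if best_score >= ncc_threshold else None
```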
According to the embodiment of the invention, the signals to be matched are subjected to interval screening by utilizing the short-time average energy and the short-time average amplitude of the acoustic signals, and more accurate position information is obtained through NCC, so that the subsequent data comparison is facilitated.
Step S6: comparing the similarity with a preset threshold value and judging whether the real-time sound is abnormal. In practical use, the decision threshold applied to the output of the twin network can be adaptively adjusted according to the pumped storage power station's requirements on the false-detection rate and the missed-detection rate for anomalies. If the output exceeds the threshold value, the current sound is considered abnormal and the result is fed back to the station personnel for the next operation; otherwise the current sound is judged to be normal.
According to the voice recognition-based multi-working-condition anomaly monitoring method for the hydroelectric generating set, provided by the embodiment of the invention, the running condition of the hydroelectric generating set is monitored in real time by utilizing the voice signal based on the artificial intelligence correlation technology, the normal running of the set is not interfered, the voice acquisition equipment is flexible and convenient to install, the signal is easy to acquire, the cost is low, and the accuracy is higher.
Example 2
The embodiment of the invention provides a device for monitoring multi-working-condition abnormality of a hydroelectric generating set based on voice recognition, which is shown in fig. 6 and comprises the following components:
The acquisition module 1 is used for acquiring sound data of the water turbine under different working conditions; details refer to the related description of step S1 in the above method embodiment, and will not be described herein.
The database building module 2 is used for carrying out mode extraction on sound data under different working conditions through a clustering method to obtain mode sequences under different working conditions, and building a database according to different mode sequences corresponding to different working conditions; for details, refer to the related description of step S2 in the above method embodiment, and no further description is given here.
The time-frequency domain fusion module 3 is used for training the constructed time-frequency-domain-fused self-coding feature extraction network with the sound data in the database to obtain a trained self-coding feature extraction network; for details, refer to the related description of step S3 in the above method embodiment, and no further description is given here.
A network construction module 4, configured to construct a twin feature extraction network based on a trunk of the trained self-coding feature extraction network; for details, see the description of step S4 in the above method embodiment, and the details are not repeated here.
The computing module 5 is used for collecting real-time sound signal data of the current working condition as a first input value, extracting a mode sequence under the corresponding working condition in the database as a second input value, and inputting the first input value and the second input value into the twin feature extraction network together to obtain the similarity of the two input values, wherein the sampling period of the real-time sound signal data is the same as the length of the corresponding mode sequence in the database; for details, see the description of step S5 in the above method embodiment, and the details are not repeated here.
And the judging module 6 is used for comparing the similarity with a preset threshold value and judging whether the real-time sound is abnormal or not. For details, see the description of step S6 in the above method embodiment, and the details are not repeated here.
The multi-working-condition abnormality monitoring device for the hydroelectric generating set provided by the embodiment of the invention is based on the artificial intelligence correlation technology, utilizes the acoustic signals to monitor the running condition of the hydroelectric generating set in real time, does not interfere the normal running of the set, has flexible and convenient installation of the sound collecting equipment, is easy to acquire the signals, and has low cost and higher accuracy.
Example 3
Fig. 7 shows a schematic structural diagram of a computer device according to an embodiment of the present invention, including: a processor 901 and a memory 902, wherein the processor 901 and the memory 902 may be connected by a bus or otherwise, for example in fig. 7.
The processor 901 may be a central processing unit (Central Processing Unit, CPU). The processor 901 may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or a combination thereof.
The memory 902 is used as a non-transitory computer readable storage medium for storing non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the methods in the method embodiments described above. The processor 901 executes various functional applications of the processor and data processing, i.e., implements the methods in the above-described method embodiments, by running non-transitory software programs, instructions, and modules stored in the memory 902.
The memory 902 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, at least one application program required for a function; the storage data area may store data created by the processor 901, and the like. In addition, the memory 902 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 902 optionally includes memory remotely located relative to processor 901, which may be connected to processor 901 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory 902 that, when executed by the processor 901, perform the methods of the method embodiments described above.
The specific details of the computer device may be correspondingly understood by referring to the corresponding related descriptions and effects in the above method embodiments, which are not repeated herein.
Those skilled in the art will appreciate that all or part of the methods in the above embodiments may be implemented by a computer program instructing relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the steps of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD), or a solid state drive (SSD); the storage medium may also comprise a combination of memories of the kind described above.
Although embodiments of the present invention have been described in connection with the accompanying drawings, various modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the invention, and such modifications and variations are within the scope of the invention as defined by the appended claims.

Claims (10)

1. A hydroelectric generating set multi-working condition abnormality monitoring method based on voice recognition is characterized by comprising the following steps:
collecting sound data of the water turbine under different working conditions;
carrying out mode extraction on the sound data under different working conditions to obtain mode sequences under different working conditions, and establishing a database according to the different mode sequences corresponding to the different working conditions;
training the built time-frequency domain fused self-coding feature extraction network through sound data in a database to obtain a trained self-coding feature extraction network;
constructing a twin feature extraction network based on the trunk of the trained self-coding feature extraction network;
collecting real-time sound signal data of the current working condition as a first input value, extracting a mode sequence under the corresponding working condition in a database as a second input value, and inputting the first input value and the second input value into a twin feature extraction network together to obtain the similarity of the two input values, wherein the sampling period of the real-time sound signal data is the same as the length of the corresponding mode sequence in the database;
and comparing the similarity with a preset threshold value, and judging whether the real-time sound is abnormal or not.
2. The voice recognition-based multi-working-condition abnormality monitoring method for a hydroelectric generating set according to claim 1, wherein the different working conditions include steady-state, transient-state and transition-state working conditions, and the process of acquiring the sound data of the water turbine under the different working conditions comprises the following steps:
under a steady-state or transient-state working condition, collecting a preset number of pieces of sequence data at a preset sampling frequency, wherein the time span of each piece of data is a preset time, to obtain a sound data set under each steady-state or transient-state working condition;
and under a transition-state working condition, collecting a preset number of pieces of sequence data at a preset sampling frequency, wherein the time span of each piece of data is the duration of the transition state, and the duration of the transition state is longer than the preset time, to obtain a sound data set under each transition-state working condition.
3. The voice recognition-based multi-working-condition abnormality monitoring method for a hydroelectric generating set according to claim 2, wherein the mode extraction of the sound data under the different working conditions comprises the following steps:
under a steady-state or transient-state working condition, calculating a weighted average of the sound data to obtain a mode sequence of the sound data under the current working condition;
and under a transition-state working condition, aligning the sound data by using a normalized cross-correlation matching method and a traversal method, and calculating a weighted average of all the aligned data to obtain a mode sequence of the sound data under the current working condition.
4. The voice recognition-based multi-working-condition abnormality monitoring method for a hydroelectric generating set according to claim 3, wherein the process of aligning the sound data by using the normalized cross-correlation matching method and the traversal method comprises the following steps:
calculating the similarity of the first two groups of data in the sound data by using normalized cross-correlation (NCC);
and keeping one group of data unchanged, shifting and traversing the other group of data, and recalculating the similarity, wherein the displacement ranges from 0 to half of the data length, and the two groups of data are regarded as aligned when the similarity is maximum.
5. The voice recognition-based multi-working-condition abnormality monitoring method for a hydroelectric generating set according to claim 2, wherein the step of inputting the first input value and the second input value together into the twin feature extraction network comprises:
under a steady-state or transient-state working condition, inputting the collected real-time sound signal data and the data of the mode sequence under the corresponding working condition in the database into the twin feature extraction network;
and under a transition-state working condition, calculating the short-time average energy and the short-time average amplitude of the sound signals collected in the current sampling window, locating the position of the current sampling window in the whole transition process, and inputting, into the twin feature extraction network, the data of the mode sequence in the database that corresponds to the current sampling window under the corresponding working condition.
6. The voice recognition-based multi-working-condition abnormality monitoring method for a hydroelectric generating set according to claim 1, wherein the process of training the built time-frequency domain fused self-coding feature extraction network through the sound data in the database to obtain the trained self-coding feature extraction network comprises the following steps:
extracting time domain feature vectors and frequency domain feature vectors of the sound signals in the database;
splicing the time domain feature vector and the frequency domain feature vector to obtain an input vector;
inputting the input vector into an encoder, and outputting a fused time-frequency domain signal;
inputting the fused time-frequency domain signals into a decoder, and outputting decoding signals;
and calculating a loss function by using the input vector and the decoded signal, and training the time-frequency domain fused self-coding feature extraction network to obtain the trained self-coding feature extraction network.
7. The voice recognition-based multi-working-condition abnormality monitoring method for a hydroelectric generating set according to claim 6, wherein the process of constructing the twin feature extraction network comprises the following steps:
freezing the trained parameters in the self-coding feature extraction network, and training the fully connected layer;
and unfreezing the trained parameters, and performing overall training on the self-coding feature extraction network to obtain the twin feature extraction network.
8. A hydroelectric generating set multi-condition anomaly monitoring device based on voice recognition, which is characterized by comprising:
the acquisition module is used for acquiring sound data of the water turbine under different working conditions;
The database building module is used for carrying out mode extraction on the sound data under different working conditions to obtain mode sequences under different working conditions, and building a database according to different mode sequences corresponding to different working conditions;
the time-frequency domain fusion module is used for training the built time-frequency domain fused self-coding feature extraction network through the sound data in the database to obtain a trained self-coding feature extraction network;
the network construction module is used for constructing a twin feature extraction network based on the trunk of the trained self-coding feature extraction network;
the computing module is used for collecting real-time sound signal data of the current working condition as a first input value, extracting a mode sequence under the corresponding working condition in the database as a second input value, and inputting the first input value and the second input value into the twin feature extraction network together to obtain the similarity of the two input values, wherein the sampling period of the real-time sound signal data is the same as the length of the corresponding mode sequence in the database;
and the judging module is used for comparing the similarity with a preset threshold value and judging whether the real-time sound is abnormal.
9. A computer device, comprising: a memory and a processor in communication with each other, the memory having stored therein computer instructions, the processor executing the computer instructions to perform the method of any of claims 1-7.
10. A computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-7.
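The following non-limiting Python sketches are added for ease of reading and do not form part of the claims. The first sketch illustrates the alignment and averaging recited in claims 3 and 4: the displacement range of 0 to half the data length follows the claim wording, while the uniform weights, the length handling and all identifiers are assumptions.

```python
import numpy as np


def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation of two equal-length sequences."""
    a0, b0 = a - a.mean(), b - b.mean()
    return float(np.dot(a0, b0) / (np.linalg.norm(a0) * np.linalg.norm(b0) + 1e-12))


def best_shift(ref: np.ndarray, sig: np.ndarray) -> int:
    """Traverse displacements from 0 to half the data length and keep the one
    with the highest similarity (claim 4)."""
    n = len(ref)
    scores = [ncc(ref[: n - s], sig[s:]) for s in range(n // 2)]
    return int(np.argmax(scores))


def transition_mode_sequence(recordings, weights=None) -> np.ndarray:
    """Align every recording to the first one, then take a weighted average
    (claim 3, transition-state case). Uniform weights are assumed here."""
    ref = recordings[0]
    n = len(ref)
    aligned = [ref]
    for sig in recordings[1:]:
        s = best_shift(ref, sig)
        aligned.append(np.resize(sig[s:], n))  # crude length fix, for illustration only
    w = np.ones(len(aligned)) if weights is None else np.asarray(weights, dtype=float)
    return np.average(np.stack(aligned), axis=0, weights=w)
```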
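The second sketch illustrates the transition-state handling of claim 5: the short-time average energy and short-time average amplitude of the current sampling window are computed and used to locate the window within the stored transition-state mode sequence; the nearest-statistics matching criterion and the half-window hop are assumptions.

```python
import numpy as np


def short_time_stats(x: np.ndarray):
    """Short-time average energy and short-time average amplitude of a window."""
    return float(np.mean(x ** 2)), float(np.mean(np.abs(x)))


def locate_window(window: np.ndarray, transition_mode: np.ndarray) -> int:
    """Return the start index of the mode-sequence segment whose short-time
    statistics are closest to those of the current sampling window (claim 5)."""
    n = len(window)
    e_w, a_w = short_time_stats(window)
    best_pos, best_dist = 0, float("inf")
    for start in range(0, len(transition_mode) - n + 1, max(n // 2, 1)):
        e_m, a_m = short_time_stats(transition_mode[start:start + n])
        dist = abs(e_m - e_w) + abs(a_m - a_w)
        if dist < best_dist:
            best_pos, best_dist = start, dist
    return best_pos  # this segment is then paired with the window for the twin network
```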
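The third sketch illustrates the time-frequency domain fused self-coding feature extraction network of claim 6; the particular hand-crafted time-domain statistics, the FFT magnitudes, the layer sizes and the MSE reconstruction loss are assumptions made for exposition.

```python
import numpy as np
import torch
import torch.nn as nn


def time_freq_vector(sound: np.ndarray, n_freq: int = 64) -> torch.Tensor:
    """Concatenate a time-domain feature vector with a frequency-domain one (claim 6)."""
    t_feat = np.array([sound.mean(), sound.std(), np.abs(sound).mean(), sound.max(), sound.min()])
    f_feat = np.abs(np.fft.rfft(sound))[:n_freq]
    return torch.tensor(np.concatenate([t_feat, f_feat]), dtype=torch.float32)


class FusedAutoencoder(nn.Module):
    """Encoder outputs the fused time-frequency signal; decoder reconstructs the input."""

    def __init__(self, in_dim: int, latent: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, in_dim))

    def forward(self, x):
        z = self.encoder(x)        # fused time-frequency domain signal
        return self.decoder(z), z  # decoded signal and latent code


def train_autoencoder(vectors: torch.Tensor, epochs: int = 100) -> FusedAutoencoder:
    """Train with a reconstruction loss between the input vector and the decoded signal."""
    model = FusedAutoencoder(vectors.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        recon, _ = model(vectors)
        loss = loss_fn(recon, vectors)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```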
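The last sketch illustrates the two-stage construction of the twin feature extraction network recited in claim 7: the trained encoder parameters are frozen while a fully connected head is trained, then the whole network is unfrozen and trained overall; the pair loss, head dimensions and training-loop details are assumptions.

```python
import torch
import torch.nn as nn


class TwinFeatureNet(nn.Module):
    """Two weight-sharing branches built on the trained self-coding backbone."""

    def __init__(self, encoder: nn.Module, latent: int = 16):
        super().__init__()
        self.encoder = encoder  # trunk taken from the trained autoencoder
        self.head = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(), nn.Linear(32, 16))

    def embed(self, x):
        return self.head(self.encoder(x))

    def forward(self, x1, x2):
        return nn.functional.cosine_similarity(self.embed(x1), self.embed(x2), dim=-1)


def build_twin(encoder, x1, x2, labels, head_epochs=50, finetune_epochs=20):
    """labels: 1.0 for pairs from the same working condition, 0.0 otherwise (assumed)."""
    net = TwinFeatureNet(encoder)
    loss_fn = nn.MSELoss()

    for p in net.encoder.parameters():          # step 1: freeze the trained parameters
        p.requires_grad = False
    opt = torch.optim.Adam(net.head.parameters(), lr=1e-3)
    for _ in range(head_epochs):                # train only the fully connected layer
        loss = loss_fn(net(x1, x2), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()

    for p in net.encoder.parameters():          # step 2: unfreeze and train overall
        p.requires_grad = True
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)
    for _ in range(finetune_epochs):
        loss = loss_fn(net(x1, x2), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net
```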
CN202310451081.6A 2023-04-24 2023-04-24 Multi-working-condition abnormality monitoring method and device for hydroelectric generating set based on voice recognition Active CN116453526B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310451081.6A CN116453526B (en) 2023-04-24 2023-04-24 Multi-working-condition abnormality monitoring method and device for hydroelectric generating set based on voice recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310451081.6A CN116453526B (en) 2023-04-24 2023-04-24 Multi-working-condition abnormality monitoring method and device for hydroelectric generating set based on voice recognition

Publications (2)

Publication Number Publication Date
CN116453526A true CN116453526A (en) 2023-07-18
CN116453526B CN116453526B (en) 2024-03-08

Family

ID=87121846

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310451081.6A Active CN116453526B (en) 2023-04-24 2023-04-24 Multi-working-condition abnormality monitoring method and device for hydroelectric generating set based on voice recognition

Country Status (1)

Country Link
CN (1) CN116453526B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117116291A (en) * 2023-08-22 2023-11-24 昆明理工大学 Sound signal processing method of sand-containing water flow impulse turbine

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107610692A (en) * 2017-09-22 2018-01-19 杭州电子科技大学 The sound identification method of self-encoding encoder multiple features fusion is stacked based on neutral net
CN112857767A (en) * 2021-01-18 2021-05-28 中国长江三峡集团有限公司 Hydro-turbo generator set rotor fault acoustic discrimination method based on convolutional neural network
CN113792597A (en) * 2021-08-10 2021-12-14 广东省科学院智能制造研究所 Mechanical equipment abnormal sound detection method based on self-supervision feature extraction
CN113837000A (en) * 2021-08-16 2021-12-24 天津大学 Small sample fault diagnosis method based on task sequencing meta-learning
US20220084333A1 (en) * 2020-09-15 2022-03-17 Deere & Company Sound analysis to identify a damaged component in a work machine
CN114333773A (en) * 2021-12-10 2022-04-12 重庆邮电大学 Industrial scene abnormal sound detection and identification method based on self-encoder
CN115293212A (en) * 2022-08-15 2022-11-04 西安欧亚学院 Equipment running state monitoring method based on audio perception and digital twins
CN115376554A (en) * 2022-07-21 2022-11-22 桂林电子科技大学 Abnormal sound detection method for domain transfer self-supervision machine
CN115954017A (en) * 2022-12-01 2023-04-11 中国人民解放军陆军炮兵防空兵学院 HHT-based engine small sample sound abnormal fault identification method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HUANZHUO WU et al.: "You Only Hear Once: Lightweight In-Network AI Design for Multi-object Anomaly Detection", 2022 IEEE 21ST MEDITERRANEAN ELECTROTECHNICAL CONFERENCE (MELECON) *
WU Xiaolong et al.: "Application of sparse DNN with multi-kernel structure in bearing diagnosis", Machinery Design & Manufacture, no. 02 *

Also Published As

Publication number Publication date
CN116453526B (en) 2024-03-08

Similar Documents

Publication Publication Date Title
CN109357749A (en) A kind of power equipment audio signal analysis method based on DNN algorithm
JP7199608B2 (en) Methods and apparatus for inspecting wind turbine blades, and equipment and storage media therefor
Zhang et al. Fault diagnosis and prognosis using wavelet packet decomposition, Fourier transform and artificial neural network
CN112885372B (en) Intelligent diagnosis method, system, terminal and medium for power equipment fault sound
CN116453526B (en) Multi-working-condition abnormality monitoring method and device for hydroelectric generating set based on voice recognition
CN103176128A (en) Method and system for forcasting state of wind generating set and diagnosing intelligent fault
CN112857767B (en) Hydro-turbo generator set rotor fault acoustic discrimination method based on convolutional neural network
CN111814872B (en) Power equipment environmental noise identification method based on time domain and frequency domain self-similarity
KR101345598B1 (en) Method and system for condition monitoring of wind turbines
CN112700793A (en) Method and system for identifying fault collision of water turbine
CN105547730A (en) Fault detection system of water-wheel generator set
CN113470694A (en) Remote listening monitoring method, device and system for hydraulic turbine set
Wang et al. Coupled hidden Markov fusion of multichannel fast spectral coherence features for intelligent fault diagnosis of rolling element bearings
CN105352541B (en) A kind of transformer station high-voltage side bus auxiliary monitoring system and its monitoring method based on power network disaster prevention disaster reduction system
CN114492196A (en) Fault rapid detection method and system based on normal wave energy ratio theory
Dang et al. Cochlear filter cepstral coefficients of acoustic signals for mechanical faults identification of power transformer
CN114997749B (en) Intelligent scheduling method and system for power personnel
CN114708885A (en) Fan fault early warning method based on sound signals
CN114320773A (en) Wind turbine generator fault early warning method based on power curve analysis and neural network
CN115406630A (en) Method for detecting faults of wind driven generator blades through passive acoustic signals based on machine learning
Li et al. Unsupervised Anomalous Sound Detection for Machine Condition Monitoring Using Temporal Modulation Features on Gammatone Auditory Filterbank.
Ye et al. Power plant production equipment abnormal sound perception method research based on machine hearing
CN113919525A (en) Power station fan state early warning method, system and application thereof
CN220791410U (en) Wind power cabin monitoring device based on sound collection data
Li et al. Fault Diagnosis for Single-phase Grounding in Distribution Network Based on Hilbert-Huang Transform and Siamese Convolution Neural Network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant