CN116092518A - Wind driven generator blade state identification method, device and storage medium - Google Patents

Wind driven generator blade state identification method, device and storage medium

Info

Publication number
CN116092518A
CN116092518A
Authority
CN
China
Prior art keywords: audio signal, wind driven generator, frame, processing
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310105480.7A
Other languages
Chinese (zh)
Inventor
李荣学
梁晓东
别克扎提·巴合提
张辉
尹俊宇
谢鸿
夏明福
孙永旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Lianzhi Monitoring Technology Co ltd
Original Assignee
Hunan Lianzhi Monitoring Technology Co ltd
Application filed by Hunan Lianzhi Monitoring Technology Co ltd filed Critical Hunan Lianzhi Monitoring Technology Co ltd
Priority to CN202310105480.7A
Publication of CN116092518A

Classifications

    • F03D 17/00 — Monitoring or testing of wind motors, e.g. diagnostics
    • G06N 3/084 — Neural network learning methods; backpropagation, e.g. using gradient descent
    • G10L 19/02 — Audio analysis-synthesis for redundancy reduction using spectral analysis, e.g. transform or subband vocoders
    • G10L 19/26 — Pre-filtering or post-filtering using predictive techniques
    • G10L 21/0208 — Speech enhancement; noise filtering
    • G10L 25/21 — Analysis with the extracted parameters being power information
    • G10L 25/24 — Analysis with the extracted parameters being the cepstrum
    • G10L 25/30 — Analysis techniques using neural networks
    • G10L 25/48 — Speech or voice analysis specially adapted for particular use
    • Y02E 10/72 — Wind turbines with rotation axis in wind direction

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Mechanical Engineering (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Combustion & Propulsion (AREA)
  • Chemical & Material Sciences (AREA)
  • Sustainable Energy (AREA)
  • Sustainable Development (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Quality & Reliability (AREA)
  • Wind Motors (AREA)

Abstract

The invention provides a wind driven generator blade state identification method, device and storage medium. The method sequentially performs pre-emphasis processing, wiener filtering processing, framing processing, windowing processing and short-time energy processing on an original audio signal to obtain a cleaned audio signal; Mel frequency cepstrum coefficients are then extracted from the audio signal as the audio signal features; the audio signal features are input into a neural network for wind driven generator blade state identification and classification training to obtain a blade state identification neural network model, and blade state identification is realized based on this model. The wiener filtering processing and the short-time energy calculation effectively reduce and eliminate the background wind noise in the blade acoustic signal of the wind driven generator, while the framing processing and the windowing processing eliminate fine noise.

Description

Wind driven generator blade state identification method, device and storage medium
Technical Field
The invention relates to the technical field of wind power monitoring and signal processing, and in particular to a method, device and storage medium for identifying the blade state of a wind driven generator.
Background
Wind driven generator blades work at high altitude under all-weather conditions and bear large loads in a harsh operating environment: wind, sun, rain, lightning and corrosive media erode them constantly, which greatly shortens their service life. Blade replacement is expensive — replacing a single blade at an onshore wind farm costs more than one million yuan, and a single aerial repair generally costs tens of thousands to hundreds of thousands of yuan. Therefore, to control the blade replacement frequency and to ensure that blades are replaced in time for safety, it is necessary to monitor the state of the blades.
In the prior art, blade monitoring mainly relies on strain gauges, vibration sensors, fiber Bragg grating sensors and the like mounted on the wind driven generator blade, most of which monitor changes in the stress pattern of the blade. However, such monitoring equipment must be installed inside the blade before it leaves the factory, its survival rate is low and its failure rate is high; once the equipment is damaged it is inconvenient to replace, so the later operation and maintenance cost is also high. There are also products that monitor tower strikes, blade damage and the like, such as the lidar sensors commonly used by wind driven generator manufacturers; this method can only monitor the blade tip position, and in a high-wind turbulent environment the blade deforms slightly — i.e. the blade tip deviates from its expected rotation trajectory — so the monitoring result is inaccurate. More common still is manual monitoring by wind farm maintenance personnel, including visual inspection and listening by ear; such methods rely on personal experience, lack scalability, and judge the blade state inaccurately.
In view of the foregoing, there is a strong need for a method, apparatus and storage medium for identifying the blade status of a wind turbine to solve the problems in the prior art.
Disclosure of Invention
The invention aims to provide a method, device and storage medium for identifying the blade state of a wind driven generator. The specific technical scheme is as follows:
a wind driven generator blade state identification method comprises the following steps:
step S1: original audio signal preprocessing, namely collecting the original audio signal of a wind driven generator and sequentially performing pre-emphasis processing, wiener filtering processing, framing processing, windowing processing and short-time energy processing on the original audio signal to obtain an audio signal;
step S2: audio signal feature extraction, namely extracting the Mel frequency cepstrum coefficients of the audio signal preprocessed in step S1 to obtain the audio signal features;
step S3: wind driven generator blade state identification, namely repeating step S1 and step S2 under different states of the wind driven generator to obtain a plurality of audio signal features, inputting all the audio signal features into a neural network for blade state identification and classification training to obtain a wind driven generator blade state identification neural network model, and realizing blade state identification based on this model.
Preferably, in step S1, the pre-emphasis processing is to process the original audio signal with a high-pass filter to obtain a pre-emphasis audio signal H(k), where the expression of the pre-emphasis audio signal is as follows:
H(k)=a*H(k-1)+a*[s(k)-s(k-1)];
wherein a is a pre-emphasis coefficient, with value range 0.9 < a < 1; s(k) represents the original audio signal of the current frame; s(k-1) represents the original audio signal of the previous frame; H(k-1) represents the filtered audio signal of the previous frame, where k = 0, 1, 2, ….
Preferably, in step S1, the wiener filtering process is to perform wiener filtering on the pre-emphasis audio signal by using a wiener filter to obtain a preliminary noise reduction signal Ŝ(k), whose expression is as follows:
Ŝ(k) = E[|S(k)|^2] / (E[|S(k)|^2] + λ_d(k)) * H(k);
wherein E represents the mathematical expectation; λ_d(k) represents the noise power spectrum of the k-th frequency point.
Preferably, in step S1, the specific procedure of the framing process is as follows:
dividing the preliminary noise reduction signal into a plurality of short-time audio segments, taking each audio segment as an analysis frame, taking a frame length of 10-20 ms for the analysis frame, and taking half of the frame length as the frame shift length.
Preferably, in step S1, the windowing process brings each analysis frame into a window function, the value of which is set to 0 outside the window; the expression of the window function is as follows:
ω(n) = 0.54 - 0.46*cos(2πn/(N-1)), 0 ≤ n ≤ N-1; ω(n) = 0 otherwise;
wherein N represents the length of the short-time fourier transform; N-1 represents the length of the Hamming window, i.e. the frame length; ω(n) represents the window function.
Preferably, in step S1, the short-time energy processing is to calculate the short-time energy of the audio signal of the current frame; if the short-time energy is lower than the set threshold, the current frame is regarded as a mute frame. The short-time energy is expressed as follows:
E(i) = Σ_{n=0}^{N-1} [y_i(n)]^2;
wherein y_i(n) represents the i-th frame of the windowed audio signal obtained after processing by the windowing function ω(n).
Preferably, the i-th frame y_i(n) of the windowed audio signal obtained after processing by the windowing function ω(n) is expressed as follows:
y_i(n) = ω(n) * x((i-1)*inc + n), 0 ≤ n ≤ N-1, 1 ≤ i ≤ fn;
wherein x represents the preliminary noise reduction signal, inc is the frame shift length, and fn is the total frame number after framing.
Preferably, in step S3, the neural network model is trained by combining the standard back propagation algorithm with dropout.
In addition, the invention also provides a computer device, comprising:
a memory storing a computer program; and
a processor which, when executing the computer program, implements the wind driven generator blade state identification method described above.
In addition, the invention also provides a computer readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the wind driven generator blade state identification method described above.
The technical scheme of the invention has the following beneficial effects:
(1) The invention preprocesses the original audio signal, combining wiener filtering processing with short-time energy calculation, which effectively reduces and eliminates the background wind noise of the wind driven generator blade acoustic signal; the preprocessing also frames and windows the audio signal, eliminating fine noise. The preprocessing sequentially performs pre-emphasis processing, wiener filtering processing, framing processing, windowing processing and short-time energy processing to obtain a clean audio signal.
(2) The invention performs Mel frequency cepstrum coefficient feature extraction on the audio signal, effectively extracting the useful information contained in the aerodynamic noise generated by the wind driven generator blade and obtaining features that can characterize blade faults.
(3) From the neural network's diagnosis of wind driven generator blade faults, operation and maintenance personnel can learn the health state of the blades in a timely and effective manner. The diagnosis result gives guidance for blade inspection and repair, thereby saving the maintenance and management costs of the wind farm.
In addition to the objects, features and advantages described above, the present invention has other objects, features and advantages. The present invention will be described in further detail with reference to the drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention. In the drawings:
FIG. 1 is a flow chart of the steps of a method for identifying blade condition in a wind turbine in accordance with a preferred embodiment of the present invention;
FIG. 2 is a graph of the relationship between the value of the Mel scale after conversion to frequency and the sampling frequency;
FIG. 3 is a neural network framework in a preferred embodiment of the present invention;
fig. 4 is a neural network training flow diagram in a preferred embodiment of the invention.
Detailed Description
Embodiments of the invention are described in detail below with reference to the attached drawings, but the invention can be implemented in a number of different ways, which are defined and covered by the claims.
Examples:
referring to fig. 1, the embodiment discloses a method for identifying a blade state of a wind driven generator, which includes the steps of:
step S1: original audio signal preprocessing, namely collecting the original audio signal of a wind driven generator with a high-precision sound pickup and sequentially performing pre-emphasis processing, wiener filtering processing, framing processing, windowing processing and short-time energy processing on the original audio signal to obtain an audio signal;
step S2: audio signal feature extraction, namely extracting the Mel Frequency Cepstrum Coefficients (MFCC) of the audio signal preprocessed in step S1 to obtain the audio signal features;
step S3: wind driven generator blade state identification, namely repeating step S1 and step S2 under different states of the wind driven generator to obtain a plurality of audio signal features, inputting all the audio signal features into a neural network for blade state identification and classification training to obtain a wind driven generator blade state identification neural network model, and realizing blade state identification based on this model.
Specifically, in step S1, the pre-emphasis processing is to process the original audio signal with a high-pass filter to obtain a pre-emphasis audio signal H(k); its purpose is to boost the high-frequency portion of the original audio signal so that its spectrum becomes flatter. The expression of the pre-emphasis audio signal is as follows:
H(k)=a*H(k-1)+a*[s(k)-s(k-1)];
wherein a is a pre-emphasis coefficient, with value range 0.9 < a < 1; s(k) represents the original audio signal of the current frame; s(k-1) represents the original audio signal of the previous frame; H(k-1) represents the filtered audio signal of the previous frame, where k = 0, 1, 2, ….
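For illustration only (not part of the patent), the recursive pre-emphasis expression above can be sketched in Python; the function name and the choice a = 0.9 are assumptions within the stated range 0.9 < a < 1:

```python
def pre_emphasis(s, a=0.9):
    """Recursive pre-emphasis per the expression above:
    H(k) = a*H(k-1) + a*(s(k) - s(k-1)), with H(-1) = s(-1) = 0 assumed."""
    out = []
    prev_h = prev_s = 0.0
    for sample in s:
        cur = a * prev_h + a * (sample - prev_s)
        out.append(cur)
        prev_h, prev_s = cur, sample
    return out
```

A constant (step) input such as [1.0, 1.0, 1.0] produces a decaying output, showing that the filter suppresses the low-frequency (constant) component while passing the initial change — the high-frequency boost described above.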
When the wind driven generator operates, the heat generated by each device continuously raises the temperature in the nacelle, affecting equipment operation and safety, so the nacelle must be cooled in time. Most wind driven generators cool by heat exchange, bringing the air inside and outside the nacelle into indirect contact; this heat exchange is mainly completed by a high-power cooling fan installed at the bottom of the tower, which produces a large amount of continuous noise when the aerodynamic audio signal of the blades is collected. In addition, other environmental noise, such as wind noise and man-made noise, exists during operation of the wind driven generator.
Specifically, in step S1, the wiener filtering process is to perform wiener filtering on the pre-emphasis audio signal by using a wiener filter to obtain a preliminary noise reduction signal Ŝ(k). Wiener filtering performs preliminary noise reduction on the pre-emphasis audio signal, yielding a purer preliminary noise reduction signal, whose expression is as follows:
Ŝ(k) = E[|S(k)|^2] / (E[|S(k)|^2] + λ_d(k)) * H(k);
wherein E represents the mathematical expectation; λ_d(k) represents the noise power spectrum of the k-th frequency point.
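As an illustrative sketch (the patent's exact expression is not reproduced here), a textbook per-bin Wiener gain built from the two quantities named above — the signal power and the noise power spectrum λ_d(k) — looks like this; the function name and inputs are assumptions:

```python
def wiener_gain(signal_psd, noise_psd):
    """Per-frequency-bin Wiener gain G(k) = P_s(k) / (P_s(k) + P_n(k)).
    Multiplying the noisy spectrum by G(k) suppresses bins dominated by noise."""
    return [ps / (ps + pn) if (ps + pn) > 0.0 else 0.0
            for ps, pn in zip(signal_psd, noise_psd)]
```

Bins where the estimated signal power dominates the noise power keep a gain near 1, while noise-dominated bins are attenuated toward 0.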
Because the aerodynamic audio signal generated by the wind driven generator blade under environmental influences (variable wind speed, variable blade rotation speed) is a non-stationary time-varying signal whose generation process is closely related to environmental changes, it can nevertheless be regarded as stationary over short time intervals. To apply the processing methods and theory of stationary processes to short-time processing of the blade's aerodynamic audio signal, this embodiment frames the preliminary noise reduction signal, i.e. divides it into several short-time audio segments, each called an analysis frame. Processing each analysis frame of the preliminary noise reduction signal is then equivalent to processing a continuous signal with fixed characteristics. Analysis frames may be contiguous or overlapping.
Specifically, the framing process in step S1 is as follows:
In this embodiment the preferred frame length is 10-20 ms per frame. To avoid spectral leakage of the signal at the window boundary, adjacent frames overlap when the frame is shifted. Generally, half of the frame length is taken as the frame shift, i.e. each time the signal is advanced by half a frame before the next frame is taken, which avoids excessive feature changes between frames.
Specifically, in step S1, the windowing process brings each analysis frame into a window function, the value of which is set to 0 outside the window; windowing avoids spectrum leakage. The expression of the window function is as follows:
ω(n) = 0.54 - 0.46*cos(2πn/(N-1)), 0 ≤ n ≤ N-1; ω(n) = 0 otherwise;
wherein N represents the length of the short-time fourier transform; N-1 represents the length of the Hamming window, i.e. the frame length; ω(n) represents the window function.
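The Hamming window above can be generated directly with the standard library; this sketch (function names are illustrative) also shows applying it to one analysis frame:

```python
import math

def hamming(N):
    """Hamming window w(n) = 0.54 - 0.46*cos(2*pi*n/(N-1)), 0 <= n <= N-1;
    outside the window the value is taken as 0."""
    return [0.54 - 0.46 * math.cos(2.0 * math.pi * n / (N - 1)) for n in range(N)]

def window_frame(frame):
    """Multiply one analysis frame sample-wise by a Hamming window of equal length."""
    w = hamming(len(frame))
    return [wi * xi for wi, xi in zip(w, frame)]
```

The window equals 0.08 at both edges and 1.0 at the center, tapering each frame smoothly to reduce spectral leakage.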
Specifically, in step S1, the short-time energy processing calculates the short-time energy of the current frame audio signal. The short-time energy represents the volume, i.e. the magnitude of the sound amplitude; according to its value, fine noise in the aerodynamic audio signal of the wind driven generator blade can be filtered out to extract purer aerodynamic audio data. If the short-time energy is lower than the set threshold, the current frame is regarded as a mute frame. The expressions are as follows:
E(i) = Σ_{n=0}^{N-1} [y_i(n)]^2, i = 1, 2, …, fn;
y_i(n) = ω(n) * x((i-1)*inc + n), 0 ≤ n ≤ N-1;
wherein inc is the frame shift length; fn is the total frame number after framing; y_i(n) represents the i-th frame of the windowed audio signal obtained after processing by the windowing function ω(n); x represents the preliminary noise reduction signal.
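Short-time energy and mute-frame removal can be sketched as follows (the threshold value used below is illustrative, not taken from the patent):

```python
def short_time_energy(frame):
    """E_i: sum of squared samples of one windowed frame y_i(n)."""
    return sum(v * v for v in frame)

def drop_silent_frames(frames, threshold):
    """Discard frames whose short-time energy is below the threshold;
    such frames are treated as mute frames."""
    return [f for f in frames if short_time_energy(f) >= threshold]
```

Frames dominated by low-amplitude background residue fall under the threshold and are dropped, leaving the louder aerodynamic content for feature extraction.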
Further, in step S2, this embodiment extracts the Mel Frequency Cepstrum Coefficients (MFCC) of the audio signal of the wind driven generator blade preprocessed in step S1 and uses them as the signal features (MFCC features). The specific steps are as follows:
First, a fourier transform (FFT) is performed on the audio signal, transforming the original time domain signal into a frequency domain signal to obtain the frequency spectrum of each frame of the audio signal and, from it, the amplitude spectrum data X(p):
X(p) = |Σ_{n=0}^{N-1} x_n * e^(-j2πnp/N)|, 1 ≤ p ≤ N;
wherein x_n represents the audio signal being transformed, p is the frequency index, and j is the imaginary unit.
Specifically, p is defined as follows:
p = ⌊(N+1) * f_m / f_s⌋;
wherein f_m represents the value after conversion of the Mel scale to frequency and f_s is the sampling frequency.
It should be noted that the p values finally obtained form a 1×28 matrix, whose values range from 1 to N (the frame length, i.e. the FFT point number), as in the expression of the amplitude spectrum data X(p). The relationship between f_m, the value after conversion of the mel scale into frequency, and the sampling frequency f_s is shown in FIG. 2.
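The mel-to-frequency relationship plotted in FIG. 2 follows the standard mel-scale formulas; a sketch of the conversion in both directions (function names are assumptions):

```python
import math

def hz_to_mel(f):
    """Standard mel scale: m = 2595 * log10(1 + f/700)."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Inverse conversion: f_m = 700 * (10**(m/2595) - 1)."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
```

The curve is roughly linear below about 1 kHz and logarithmic above, which is why equally spaced mel points concentrate filters at low frequencies.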
Secondly, a group of M (preferably 20-40) triangular filters is arranged, the amplitude spectrum obtained in the previous step is filtered, and the energy of the signal passing through each filter is calculated, with the calculation formula:
s(m) = Σ_{k=1}^{N} |X(k)|^2 * H_m(k), 1 ≤ m ≤ M;
wherein M represents the number of filters, k indexes the frequency points, and H_m(k) is the frequency response of the m-th triangular mel filter.
Thirdly, a Discrete Cosine Transform (DCT) is performed on the filter output values of the second step, and the M-order Mel-scale Cepstrum parameters are solved to obtain the MFCC features:
C(i, n) = Σ_{m=1}^{M} log H(i, m) * cos(πn(m-0.5)/M);
wherein H represents the two-dimensional array of mel filter output values, i indicates the frame index (i takes values 1-301 in this embodiment), and n denotes the n-th column of the i-th frame (n takes values 1-26 in this embodiment).
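The third step can be sketched as a plain DCT over the log mel-filter energies of one frame (taking the log before the DCT is the conventional MFCC choice; the function name and coefficient count are illustrative):

```python
import math

def mfcc_from_log_energies(log_e, n_coeffs):
    """DCT of a frame's log mel-filter energies:
    C(n) = sum over m of log_e[m] * cos(pi*n*(m+0.5)/M), n = 0..n_coeffs-1."""
    M = len(log_e)
    return [sum(log_e[m] * math.cos(math.pi * n * (m + 0.5) / M)
                for m in range(M))
            for n in range(n_coeffs)]
```

For a flat filterbank output only the 0th coefficient is nonzero, which matches the DCT's role of decorrelating and compacting the spectral envelope into a few cepstral coefficients.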
Specifically, in step S3, the MFCC features of the aerodynamic audio signals generated during operation of the wind driven generator blades in x different states are extracted (in this embodiment x = 2: normal and whistling), and the z-dimensional feature vectors they contain (in this embodiment z = 4) are input into the neural network for state identification and classification. After repeated learning, the neural network converges to the expected output, completing its training, so that it can identify the different states of the wind driven generator blade.
Further, as shown in fig. 3, the neural network constructed in this embodiment consists of 1 input layer, 4 hidden layers and 1 output layer. With too few hidden layers the network cannot learn the mapping between input and output well, but as the number of hidden layers increases the network structure becomes complex and modeling capability drops. Experiments show that the network performs best when the number of hidden layers is set to 4. The number of nodes per layer is 128-1024-1024-1024-1024-32 in sequence: 128 input nodes, 1024 nodes in each hidden layer, and 32 output nodes. The input layer takes the 32×4-dimensional MFCC feature vector, and each output node represents the masking value of one of the 32 frequency channels of the MFCC filter bank for one frame. The hidden layers use the ReLU activation function, which improves the generalization ability of the network and avoids the vanishing-gradient problem during training. The output layer uses the nonlinear Sigmoid function.
The neural network model is trained by combining the standard Back Propagation algorithm (BP) with dropout, as detailed in FIG. 4; the dropout rate is 0.2. The weights and biases of the network in the BP algorithm are optimized with an algorithm combining adaptive stochastic gradient descent and a momentum term. In this embodiment the number of iterations is set to 20; the momentum coefficient is 0.5 for the first 5 iterations and 0.9 for the remaining iterations. The minimum mean square error function is the cost function of the network.
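A forward-pass sketch of the described 128-1024-1024-1024-1024-32 topology with ReLU hidden layers, a Sigmoid output layer and inverted dropout at rate 0.2 (the weight initialization, NumPy usage and all names are illustrative assumptions; the BP/momentum training loop itself is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the embodiment: 128 inputs, four 1024-node hidden layers, 32 outputs.
SIZES = [128, 1024, 1024, 1024, 1024, 32]
weights = [rng.normal(0.0, 0.01, (m, n)) for m, n in zip(SIZES[:-1], SIZES[1:])]
biases = [np.zeros(n) for n in SIZES[1:]]

def forward(x, train=False, drop_rate=0.2):
    """ReLU hidden layers with optional inverted dropout; Sigmoid output layer."""
    h = np.asarray(x, dtype=float)
    last = len(weights) - 1
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = h @ W + b
        if i < last:
            h = np.maximum(z, 0.0)            # ReLU activation on hidden layers
            if train:                          # dropout applied only during training
                keep = rng.random(h.shape) >= drop_rate
                h = h * keep / (1.0 - drop_rate)
        else:
            h = 1.0 / (1.0 + np.exp(-z))       # Sigmoid on the output layer
    return h
```

Inverted dropout scales the kept activations by 1/(1-0.2) during training so that no rescaling is needed at inference time, when dropout is disabled.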
In addition, the embodiment also discloses a computer device, comprising:
a memory storing a computer program; and
a processor which, when executing the computer program, implements the wind driven generator blade state identification method described above.
In addition, the embodiment also discloses a computer readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the wind driven generator blade state identification method described above.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method for identifying the blade state of a wind driven generator, characterized by comprising the following steps:
step S1: preprocessing an original audio signal, namely collecting the original audio signal of a wind driven generator and sequentially performing pre-emphasis processing, wiener filtering processing, framing processing, windowing processing and short-time energy processing on the original audio signal to obtain an audio signal;
step S2: extracting audio signal characteristics, namely extracting Mel frequency cepstrum coefficients of the audio signal preprocessed in the step S1 to obtain the audio signal characteristics;
step S3: and (3) identifying the blade state of the wind driven generator, namely repeating the step (S1) and the step (S2) under different states of the wind driven generator to obtain a plurality of audio signal characteristics, inputting all the audio signal characteristics into a neural network to perform wind driven generator blade state identification and classification training to obtain a wind driven generator blade state identification neural network model, and realizing wind driven generator blade state identification based on the wind driven generator blade state identification neural network model.
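The MFCC extraction of step S2 can be sketched with the common mel filter-bank plus DCT construction, using the 32 filter channels mentioned in the description. The sample rate, frame length and function names below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular mel-spaced filters over 0..sr/2 (standard construction)."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc_frame(frame, sr, n_filters=32):
    """MFCCs of one analysis frame: power spectrum -> 32 mel channels -> log -> DCT."""
    spec = np.abs(np.fft.rfft(frame)) ** 2
    energies = mel_filterbank(n_filters, len(frame), sr) @ spec
    log_e = np.log(energies + 1e-10)   # guard against log(0) on empty channels
    n = np.arange(n_filters)
    basis = np.cos(np.pi / n_filters * (n[None, :] + 0.5) * n[:, None])  # DCT-II
    return basis @ log_e

# 25 ms of a 440 Hz tone at an assumed 16 kHz sample rate.
coeffs = mfcc_frame(np.sin(2 * np.pi * 440.0 * np.arange(400) / 16000.0), 16000)
```

Stacking such 32-coefficient vectors over 4 consecutive frames would give the 32×4-dimensional input feature described for the network.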
2. The method according to claim 1, wherein in step S1, the pre-emphasis processing is to process the original audio signal with a high-pass filter to obtain a pre-emphasis audio signal H (k), where the pre-emphasis audio signal has the following expression:
H(k)=a*H(k-1)+a*[s(k)-s(k-1)];
wherein a is the pre-emphasis coefficient, with value range 0.9 < a < 1; s(k) represents the current sample of the original audio signal; s(k-1) represents the previous sample of the original audio signal; H(k-1) represents the filtered audio signal at the previous sample, where k = 0, 1, ….
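The recurrence of claim 2 transcribes directly into NumPy. The initial condition H(0) = 0 and the choice a = 0.97 (one admissible value in the claimed range 0.9 < a < 1) are assumptions for the sketch:

```python
import numpy as np

def pre_emphasis(s, a=0.97):
    """Claim 2 recurrence: H(k) = a*H(k-1) + a*(s(k) - s(k-1)),
    with H(0) = 0 and 0.9 < a < 1 (a = 0.97 is an illustrative choice)."""
    h = np.zeros(len(s))
    for k in range(1, len(s)):
        h[k] = a * h[k - 1] + a * (s[k] - s[k - 1])
    return h
```

As expected of a high-pass filter, a constant signal is suppressed entirely, while a step produces a decaying transient.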
3. The method according to claim 2, wherein in step S1, the wiener filtering process applies a wiener filter to the pre-emphasis audio signal to obtain a preliminary noise reduction signal, whose expression (reproduced only as an equation image in the original publication) is stated in terms of the following quantities:

wherein E represents a mathematical expectation function; λ_d(k) represents the noise power spectrum of the k-th frequency band point.
4. A method for identifying a blade state of a wind turbine according to claim 3, wherein in step S1, the framing process is specifically as follows:

dividing the preliminary noise reduction signal into a plurality of short-time audio segments, taking each audio segment as an analysis frame, with a frame length of 10-20 ms and a frame shift of half the frame length.
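The framing rule of claim 4 (10-20 ms frames, frame shift of half the frame length) can be sketched as follows; the 20 ms default, the 16 kHz sample rate and the function name are illustrative choices:

```python
import numpy as np

def frame_signal(x, sr, frame_ms=20):
    """Claim 4: split the signal into 10-20 ms analysis frames with a frame
    shift of half the frame length (50 % overlap). 20 ms is one admissible choice."""
    frame_len = int(sr * frame_ms / 1000)
    inc = frame_len // 2                    # frame shift = half the frame length
    n_frames = 1 + (len(x) - frame_len) // inc
    return np.stack([x[i * inc : i * inc + frame_len] for i in range(n_frames)])

frames = frame_signal(np.arange(16000.0), 16000)    # 1 s of dummy samples
```

At 16 kHz a 20 ms frame holds 320 samples, so each new frame starts 160 samples after the previous one.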
5. The method according to claim 4, wherein in step S1, the windowing process multiplies each analysis frame by a window function, with values outside the window set to 0, and the expression of the window function is as follows:

ω(n) = 0.54 - 0.46·cos(2πn/(N-1)), 0 ≤ n ≤ N-1; ω(n) = 0 otherwise;

wherein N represents the length of the short-time Fourier transform; N-1 represents the order of the Hamming window, whose length equals the frame length; ω(n) represents the window function.
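Since claim 5 names a Hamming window but gives its formula only as an image in the original filing, the sketch below assumes the standard Hamming coefficients 0.54/0.46:

```python
import numpy as np

def hamming_window(N):
    """Hamming window named in claim 5, using the standard coefficients:
    w(n) = 0.54 - 0.46*cos(2*pi*n / (N - 1)) for 0 <= n <= N-1, and 0 outside."""
    n = np.arange(N)
    return 0.54 - 0.46 * np.cos(2.0 * np.pi * n / (N - 1))
```

The window is symmetric, equals 0.08 at its edges and peaks at 1.0 in the centre, tapering each frame to reduce spectral leakage before the short-time Fourier transform.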
6. The method according to claim 5, wherein in step S1, the short-time energy processing calculates the short-time energy of the current frame of the audio signal, and if the short-time energy is lower than a set threshold, the current frame is regarded as a silent frame, wherein the short-time energy is expressed as follows:

E(i) = Σ_{n=0}^{N-1} [y_i(n)]^2;

wherein y_i(n) represents the i-th frame of the windowed audio signal obtained after processing by the windowing function ω(n).
7. The method for recognizing the blade state of a wind turbine according to claim 6, wherein the expression of the i-th frame y_i(n) of the windowed audio signal obtained after processing by the windowing function ω(n) is as follows:

y_i(n) = ω(n)·x((i-1)·inc + n), n = 0, 1, …, N-1, i = 1, 2, …, fn;

wherein x(·) denotes the preliminary noise reduction signal being framed, inc is the frame shift length, and fn is the total number of frames after framing.
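Claims 6 and 7 together (windowing each frame, then thresholding the short-time energy to flag silent frames) can be sketched as follows; the 1e-3 threshold and the two test frames are illustrative assumptions:

```python
import numpy as np

def short_time_energy(frames, window):
    """Claims 6-7: y_i(n) = w(n) * frame_i(n), then E(i) = sum_n y_i(n)^2."""
    y = frames * window                  # apply the window to every frame
    return np.sum(y ** 2, axis=1)

test_frames = np.vstack([
    np.zeros(320),                                            # a silent frame
    np.sin(2 * np.pi * 440.0 * np.arange(320) / 16000.0),     # a tonal frame
])
energy = short_time_energy(test_frames, np.hamming(320))
is_silent = energy < 1e-3                # threshold value is an assumption
```

The all-zero frame is flagged as silent while the tonal frame, with energy far above the threshold, is kept for feature extraction.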
8. The method for recognizing the blade state of a wind turbine according to claim 7, wherein in step S3, the neural network model is trained by combining a standard back propagation algorithm with the dropout method.
9. A computer device, comprising:
a memory: the memory stores a computer program;
a processor: the processor, when executing the computer program, implements the method for identifying the blade state of a wind turbine according to any one of claims 1-8.
10. A computer readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, implements a method for identifying a wind turbine blade state according to any of claims 1-8.
CN202310105480.7A 2023-02-13 2023-02-13 Wind driven generator blade state identification method, device and storage medium Pending CN116092518A (en)


Publications (1)

Publication Number Publication Date
CN116092518A true CN116092518A (en) 2023-05-09

Family

ID=86213854



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination