CN116524273A - Method, device, equipment and storage medium for detecting draft tube of power station


Info

Publication number
CN116524273A
CN116524273A (application CN202310531512.XA)
Authority
CN
China
Prior art keywords
data set
draft tube
module
weight
sound signal
Prior art date
Legal status
Pending
Application number
CN202310531512.XA
Other languages
Chinese (zh)
Inventor
唐振宇
陈嵩
张玺
赵栋栋
张义
何维
卢回忆
刘豪睿
刘加
曹宏
刘德广
Current Assignee
Beijing Huacong Zhijia Technology Co ltd
Sichuan Huaneng Taipingyi Hydropower Co Ltd
Original Assignee
Beijing Huacong Zhijia Technology Co ltd
Sichuan Huaneng Taipingyi Hydropower Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Huacong Zhijia Technology Co ltd and Sichuan Huaneng Taipingyi Hydropower Co Ltd
Priority to CN202310531512.XA
Publication of CN116524273A
Legal status: Pending


Classifications

    • G06V 10/764: Image or video recognition or understanding using pattern recognition or machine learning; classification, e.g. of video objects
    • G06F 18/213: Pattern recognition; feature extraction, e.g. by transforming the feature space; summarisation; mappings, e.g. subspace methods
    • G06F 18/24: Pattern recognition; classification techniques
    • G06N 3/0455: Neural networks; auto-encoder networks; encoder-decoder networks
    • G06N 3/0464: Neural networks; convolutional networks [CNN, ConvNet]
    • G06N 3/048: Neural networks; activation functions
    • G06N 3/08: Neural networks; learning methods
    • G06V 10/44: Extraction of image or video features; local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/82: Image or video recognition or understanding using pattern recognition or machine learning; using neural networks
    • G10L 25/30: Speech or voice analysis techniques characterised by the analysis technique; using neural networks
    • G10L 25/48: Speech or voice analysis techniques specially adapted for particular use
    • G06F 2218/08: Aspects of pattern recognition specially adapted for signal processing; feature extraction
    • G06F 2218/12: Aspects of pattern recognition specially adapted for signal processing; classification; matching


Abstract

The invention relates to the technical field of power station draft tube detection, and in particular to a power station draft tube detection method, device, equipment and storage medium. The method comprises: acquiring a draft tube sound signal to obtain a sound signal data set; performing analog-to-digital conversion on the sound signal data set to obtain a digital signal data set; extracting characteristic images from the digital signal data set to obtain a characteristic image data set; performing a convolution operation on the characteristic image data set to obtain a convolution data set; linearly processing the convolution data set with a multi-head attention module and screening implicit features with a sparse self-attention module to obtain a weight image data set; and training a pre-training model with the weight image data set to obtain a trained draft tube detection model. The trained model accurately predicts changes such as local tearing and peeling of the draft tube metal shell, cracks in the rib plates of the draft tube access door and pressure pulsation, thereby realizing accurate detection of hydropower station draft tube defects and effectively avoiding accidents.

Description

Method, device, equipment and storage medium for detecting draft tube of power station
Technical Field
The invention relates to the technical field of power station draft tube detection, in particular to a power station draft tube detection method, a device, equipment and a storage medium.
Background
A hydroelectric generating set is a complex and specialized rotating machine, and as installed capacities keep increasing, the generating equipment becomes ever more complex. When the operating condition deviates from the optimal region, the circumferential component of the flow velocity at the runner-blade outlet increases; after this flow enters the draft tube, a pronounced circulation forms inside it and develops into a vacuum-core vortex rope rotating in the draft tube. The eccentric vortex rope causes hydraulic instability in the turbine flow passage and pressure pulsation; in severe cases it leads to unit vibration and output swing, damages unit components, and threatens the safe, stable and economic operation of the unit. Draft tube pressure pulsation is therefore an important index of the operational stability of the unit. Extracting, from the collected signals, fault information that characterizes the state of the draft tube vortex rope makes it possible to better understand the operating state and fault development trend of the unit and to complete condition assessment and fault diagnosis of the draft tube vortex rope of the turbine unit.
The draft tube and the tailwater access door are among the important structures of a hydropower plant, and in recent years defects in them have caused a number of accidents. Timely inspection of such defects is therefore an important operational requirement. Current power station draft tube detection predicts draft tube pressure pulsation from the operating parameters of the prototype machine and uses it to judge the stability of the unit's operating region. However, because experimental conditions differ from actual operating conditions, this approach cannot accurately judge whether operation is normal under real conditions, and it only determines the draft tube pressure pulsation amplitude. In actual operation the defects of interest include local tearing and peeling of the draft tube metal shell, draft tube cavitation erosion, an eccentric vortex rope, cracks in the rib plates of the draft tube access door, and changes in pressure pulsation, and judging these defects would otherwise require a large number of different sensors. A method is therefore needed that can accurately judge the actual operating condition of the power station draft tube and detect the operating states of the draft tube and its access door in a timely manner.
Disclosure of Invention
The invention aims to provide a method, device, equipment and storage medium for detecting a power station draft tube, so as to solve the problem that conventional power stations cannot accurately judge whether the draft tube is operating normally or determine the actual operating condition of the draft tube access door.
In order to solve the technical problems, the invention provides a method for detecting a draft tube of a power station, which comprises the following steps:
acquiring a draft tube sound signal to obtain a sound signal data set;
the sound signal data set is processed through analog-to-digital conversion to obtain a digital signal data set;
extracting a characteristic image based on the digital signal data set to obtain a characteristic image data set;
performing convolution operation on the characteristic image data set to obtain a convolution data set;
after the convolution data set is subjected to linear processing by utilizing a multi-head attention module, implicit characteristics are screened by a sparse self-attention module, and a weight image data set is obtained;
training the pre-training model by using the weight image data set to obtain a trained draft tube detection model;
and detecting the draft tube of the power station by using the draft tube detection model after training to obtain the state of the draft tube.
Preferably, training the pre-training model with the weight image data set to obtain the trained draft tube detection model comprises:
calculating the error with the ArcFace loss function, using the first vector output by the network, namely the classification vector, for image classification, and repeatedly iterating the pre-training model by an inter-class object separation method until the loss function converges, to obtain the trained model.
Preferably, the ArcFace loss function is derived from the Softmax loss:
L_{softmax} = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{W_{y_i}^{T}z_i+b_{y_i}}}{\sum_{j=1}^{n}e^{W_{j}^{T}z_i+b_j}}
wherein N is the number of samples, n is the number of sample classes, W is the weight matrix, b is the bias vector, and z is the feature vector;
letting b_j = 0, W_j^{T}z_i = \|W_j\|\,\|z_i\|\cos\theta_j, \|W_j\| = 1 and \|z_i\| = s gives:
L = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{s\cos\theta_{y_i}}}{\sum_{j=1}^{n}e^{s\cos\theta_j}}
the final loss function is obtained by separating the inter-class objects and adding the angular margin m:
L_{ArcFace} = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{s\cos(\theta_{y_i}+m)}}{e^{s\cos(\theta_{y_i}+m)}+\sum_{j\neq y_i}e^{s\cos\theta_j}}
wherein L_{ArcFace} is the loss function.
Preferably, processing the sound signal data set by analog-to-digital conversion to obtain the digital signal data set comprises:
performing a conversion operation on the sound signal data set by Fourier transform and filtering, converting the original audio data into picture data to obtain the digital signal data set, calculated as:
X = \{X_i\}
the log-mel spectrum of the signal X is computed as the frame sequence (\psi_1, \ldots, \psi_T), wherein \psi_t \in R^F, and F and T are the number of mel filters and the number of time frames, respectively.
Preferably, the linear processing of the convolution data set with the multi-head attention module comprises:
performing linear processing on the convolution data set to obtain a query matrix, a key matrix and a value matrix, calculated as:
Q = z_p W^Q,  K = z_p W^K,  V = z_p W^V
wherein Q is the query matrix, K is the key matrix, V is the value matrix, z_p \in R^{(N+1)\times D} is the module input, and W^Q, W^K, W^V \in R^{D\times D};
calculating an attention weight matrix from the query matrix, the key matrix and the value matrix, calculated as:
A = \mathrm{softmax}\left(\frac{Q K^{T}}{\sqrt{D}}\right)
wherein the element A_{ij} expresses the correlation between the i-th and j-th features, T denotes the transpose, and \sqrt{D} is the scaling factor;
multiplying the attention weight matrix by the value matrix to obtain each single-head self-attention output, splicing the single-head outputs, applying a residual connection and layer normalization, and obtaining the output of the multi-layer perceptron, calculated as:
\mathrm{MSA}(z_p) = \mathrm{Concat}(\mathrm{SSA}_1(z_p), \ldots, \mathrm{SSA}_{N_h}(z_p)) W_{out} + b_{out}
\mathrm{MLP}(z'_p) = \mathrm{ReLU}(z'_p W_1 + b_1)\cdot W_2 + b_2
wherein W_{out} \in R^{D\times D} and W_1, W_2 \in R^{D\times D} are weights, b_{out}, b_1, b_2 \in R^{(N+1)\times D} are biases, SSA is standard self-attention, Concat denotes splicing, ReLU is the activation function of the first fully connected layer, MSA(z_p) is the multi-head attention output and MLP(z'_p) is the multi-layer perceptron output;
if z_{p-1} is the input of the p-th coding module, then:
z'_p = \mathrm{LN}(\mathrm{MSA}(z_{p-1}) + z_{p-1})
z_p = \mathrm{LN}(\mathrm{MLP}(z'_p) + z'_p)
wherein LN is layer normalization.
Preferably, the screening of the implicit features by the sparse self-attention module to obtain the weight image data set comprises:
screening the implicit features input to the last coding layer by using the weights learned by the first L coding layers;
normalizing the weight values and computing a weighted sum with the attention map to obtain the final attention weight, wherein α is the weight matrix and A_attn is the final attention weight;
and screening the hidden feature corresponding to the maximum weight, splicing it with the classification vector, adding the merged feature to the output of the ResNet module, and taking the result as the input of the last coding layer to obtain the weight image data set.
Preferably, acquiring the draft tube sound signal to obtain the sound signal data set comprises:
acquiring the draft tube sound signal with a sound acquisition unit, and processing the draft tube sound signal by signal conditioning and impedance transformation to obtain the sound signal data set.
The invention also provides a device for detecting the draft tube of the power station, which comprises:
the data acquisition module is used for acquiring the sound signal of the draft tube to obtain a sound signal data set;
the signal conversion module is used for obtaining a digital signal data set through analog-to-digital conversion processing of the sound signal data set;
the feature extraction module is used for extracting a feature image based on the digital signal data set to obtain a feature image data set;
the convolution module is used for carrying out convolution operation on the characteristic image data set to obtain a convolution data set;
the weight module is used for linearly processing the convolution data set by utilizing the multi-head attention module, and screening hidden features by utilizing the sparse self-attention module to obtain a weight image data set;
the training module is used for training the pre-training model by using the weight image data set to obtain a trained draft tube detection model;
and the prediction module is used for detecting the draft tube of the power station by using the draft tube detection model after training, so as to obtain the state of the draft tube.
The invention also provides a power station draft tube detection device, comprising:
a memory for storing a computer program;
and the processor is used for realizing the steps of the power station draft tube detection method when executing the computer program.
The invention also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program realizes the steps of the method for detecting the draft tube of the power station when being executed by a processor.
According to the power station draft tube detection method provided by the invention, the draft tube sound signal is acquired and converted into a digital signal, a Transformer-network model is trained on the digital signal, and the trained model is used to perform prediction on the draft tube. Changes such as local tearing and peeling of the draft tube metal shell, draft tube cavitation erosion, an eccentric vortex rope, cracks in the rib plates of the draft tube access door and pressure pulsation are accurately predicted, accurate detection of hydropower station draft tube defects is realized, and accidents are effectively avoided.
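By way of non-limiting illustration only, the overall pipeline of steps S101 to S107 could be assembled as in the following Python/PyTorch sketch; the class name, layer sizes, patch size and number of output defect classes are assumptions of this illustration and are not fixed by the invention, and the ResNet branch, sparse self-attention screening and ArcFace training loss described below are omitted for brevity.

```python
import torch
import torch.nn as nn

class DraftTubeDetector(nn.Module):
    """Log-mel 'picture' -> convolutional patch embedding -> Transformer
    encoder -> draft tube state logits.  Illustrative sketch only."""
    def __init__(self, embed_dim=384, n_heads=6, mlp_dim=1536, depth=12, n_classes=5):
        super().__init__()
        # convolution module: each log-mel picture becomes a sequence of N tokens
        self.conv = nn.Sequential(
            nn.Conv2d(1, embed_dim, kernel_size=16, stride=16),
            nn.Flatten(2),
        )
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))  # classification vector
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=n_heads,
                                           dim_feedforward=mlp_dim,
                                           activation="relu", batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(embed_dim, n_classes)   # assumed set of draft tube states

    def forward(self, pictures):                      # (B, 1, P, F) log-mel pictures
        tokens = self.conv(pictures).transpose(1, 2)  # (B, N, D)
        cls = self.cls_token.expand(tokens.size(0), -1, -1)
        z = self.encoder(torch.cat([cls, tokens], dim=1))
        return self.head(z[:, 0])                     # first (classification) vector

# usage: logits = DraftTubeDetector()(torch.randn(2, 1, 64, 128))
```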
Drawings
The foregoing and/or additional aspects and advantages of the invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of a first embodiment of a method for detecting a draft tube of a power station according to the present invention;
FIG. 2 is a structural diagram of the Transformer coding module;
fig. 3 is a block diagram of a device for detecting a draft tube of a power station according to an embodiment of the present invention.
Detailed Description
The invention provides a method, a device, equipment and a storage medium for detecting a power station draft tube, which train a Transformer-network model using draft tube sound signals, realize accurate detection of hydropower station draft tube defects and effectively avoid accidents.
In order to better understand the aspects of the present invention, the present invention will be described in further detail with reference to the accompanying drawings and detailed description. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, fig. 1 is a flowchart of a first embodiment of a method for detecting a draft tube of a power station according to the present invention; the specific operation steps are as follows:
step S101: acquiring a draft tube sound signal to obtain a sound signal data set;
A draft tube sound signal is acquired with a sound acquisition unit, and the draft tube sound signal is processed by signal conditioning and impedance transformation to obtain the sound signal data set.
Step S102: the sound signal data set is processed through analog-to-digital conversion to obtain a digital signal data set;
A conversion operation is performed on the sound signal data set by Fourier transform and filtering, converting the original audio data into picture data to obtain the digital signal data set, calculated as:
X = \{X_i\}
The log-mel spectrum of the signal X is computed as the frame sequence (\psi_1, \ldots, \psi_T), wherein \psi_t \in R^F, and F and T are the number of mel filters and the number of time frames, respectively.
Step S103: extracting a characteristic image based on the digital signal data set to obtain a characteristic image data set;
step S104: performing convolution operation on the characteristic image data set to obtain a convolution data set;
step S105: after the convolution data set is subjected to linear processing by utilizing a multi-head attention module, implicit characteristics are screened by a sparse self-attention module, and a weight image data set is obtained;
The convolution data set is linearly processed to obtain a query matrix, a key matrix and a value matrix:
Q = z_p W^Q,  K = z_p W^K,  V = z_p W^V
wherein Q is the query matrix, K is the key matrix, V is the value matrix, z_p \in R^{(N+1)\times D} is the module input, and W^Q, W^K, W^V \in R^{D\times D}.
The attention weight matrix is calculated from the query matrix, the key matrix and the value matrix:
A = \mathrm{softmax}\left(\frac{Q K^{T}}{\sqrt{D}}\right)
wherein the element A_{ij} expresses the correlation between the i-th and j-th features, T denotes the transpose, and \sqrt{D} is the scaling factor.
The attention weight matrix is multiplied by the value matrix to obtain each single-head self-attention output; the single-head outputs are spliced, passed through a residual connection and layer normalization, and the output of the multi-layer perceptron is obtained:
\mathrm{MSA}(z_p) = \mathrm{Concat}(\mathrm{SSA}_1(z_p), \ldots, \mathrm{SSA}_{N_h}(z_p)) W_{out} + b_{out}
\mathrm{MLP}(z'_p) = \mathrm{ReLU}(z'_p W_1 + b_1)\cdot W_2 + b_2
wherein W_{out} \in R^{D\times D} and W_1, W_2 \in R^{D\times D} are weights, b_{out}, b_1, b_2 \in R^{(N+1)\times D} are biases, SSA is standard self-attention, Concat denotes splicing, ReLU is the activation function of the first fully connected layer, MSA(z_p) is the multi-head attention output and MLP(z'_p) is the multi-layer perceptron output.
If z_{p-1} is the input of the p-th coding module, then:
z'_p = \mathrm{LN}(\mathrm{MSA}(z_{p-1}) + z_{p-1})
z_p = \mathrm{LN}(\mathrm{MLP}(z'_p) + z'_p)
wherein LN is layer normalization.
The implicit features input to the last coding layer are screened by using the weights learned by the first L coding layers. The weight values are normalized and then summed, weighted by the attention map, to obtain the final attention weight, wherein α is the weight matrix and A_attn is the final attention weight. The hidden feature corresponding to the maximum weight is screened, spliced with the classification vector, and the merged feature is added to the output of the ResNet module as the input of the last coding layer, giving the weight image data set.
Step S106: training the pre-training model by using the weight image data set to obtain a trained draft tube detection model;
The error is calculated with the ArcFace loss function; the first vector output by the network, namely the classification vector, is used for image classification; and the pre-training model is repeatedly iterated by an inter-class object separation method until the loss function converges, giving the trained model.
The ArcFace loss function is derived from the Softmax loss:
L_{softmax} = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{W_{y_i}^{T}z_i+b_{y_i}}}{\sum_{j=1}^{n}e^{W_{j}^{T}z_i+b_j}}
wherein N is the number of samples, n is the number of sample classes, W is the weight matrix, b is the bias vector, and z is the feature vector;
letting b_j = 0, W_j^{T}z_i = \|W_j\|\,\|z_i\|\cos\theta_j, \|W_j\| = 1 and \|z_i\| = s gives:
L = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{s\cos\theta_{y_i}}}{\sum_{j=1}^{n}e^{s\cos\theta_j}}
the final loss function is obtained by separating the inter-class objects and adding the angular margin m:
L_{ArcFace} = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{s\cos(\theta_{y_i}+m)}}{e^{s\cos(\theta_{y_i}+m)}+\sum_{j\neq y_i}e^{s\cos\theta_j}}
wherein L_{ArcFace} is the loss function.
Step S107: and detecting the draft tube of the power station by using the draft tube detection model after training to obtain the state of the draft tube.
This embodiment provides a power station draft tube detection method that designs a draft tube detection model from audio data and a Transformer model, trains the Transformer model on power station draft tube audio data, and detects the draft tube with the trained model. It simplifies the existing draft tube detection procedure, reduces detection cost, realizes accurate judgement of the actual operating state of the draft tube, provides a cost-saving and effective detection method for power station draft tube detection, and avoids accidents.
Based on the above embodiments, the present embodiment describes the power station draft tube detection method in more detail, as shown in fig. 2; fig. 2 is a structural diagram of the Transformer coding module provided in the present embodiment, specifically as follows:
the sound collection units of many sets distribute inside the draft tube, and the tail-water enters the people's door, and each collection unit of set comprises the adapter, signal processing circuit, and signal processing circuit accomplishes the function and includes: signal conditioning: impedance transformation, amplification and filtering; analog-to-digital conversion and digital signal processing: A/D conversion, ARM signal processing, the signal is packed by ARM chip to finish the final TCP/IP protocol, the network relays and finish the transmission of the packed data, multiple sound acquisition units are synchronous in time, the data server receives the storage and calculation of the packed data.
The log-mel spectrum of the signal X = \{X_i\} is calculated as the frame sequence (\psi_1, \ldots, \psi_T), wherein \psi_t \in R^F, and F and T are the number of mel filters and the number of time frames, respectively.
Feature pictures are extracted using the following data as input: \Psi = (\psi_1, \ldots, \psi_P) \in R^{P\times F}.
With a window shift of L frames between adjacent pictures, the signal X yields N = \lfloor (T-P)/L \rfloor + 1 input-picture matrices.
The log-mel transformation is performed on frames of 1024 sampling points (64 ms of data) with a 50% window overlap, F = 128, P = 64 and L = 4. For 10 s of input data at a sampling rate of 16 kHz, T = 311 and N = 62, so 62 pictures can be generated.
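By way of illustration, the log-mel picture extraction with the parameters just quoted could be sketched as follows; the use of librosa and the helper name are implementation assumptions, not part of the specification.

```python
import numpy as np
import librosa

def logmel_pictures(audio, sr=16000, n_fft=1024, hop=512, n_mels=128,
                    frames_per_picture=64, picture_hop=4):
    """Turn a raw draft-tube recording into a stack of log-mel 'pictures'.

    n_fft=1024 gives 64 ms frames at 16 kHz and hop=512 the 50% overlap;
    n_mels, frames_per_picture and picture_hop correspond to F, P and L above.
    """
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_fft=n_fft,
                                         hop_length=hop, n_mels=n_mels,
                                         center=False)
    logmel = librosa.power_to_db(mel).T          # shape (T, F) = (frames, mel bins)
    pictures = [logmel[i:i + frames_per_picture]
                for i in range(0, logmel.shape[0] - frames_per_picture + 1,
                               picture_hop)]
    return np.stack(pictures)                    # shape (N, P, F)

# For 10 s of audio at 16 kHz this gives T = 311 frames and N = 62 pictures,
# matching the figures quoted in the embodiment.
```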
After the picture is generated, the preprocessing of the data is completed, and the subsequent work becomes picture identification.
As shown in Table 1, the encoder of Vision Transformer is formed by stacking 12 coding modules with the same structure and outputs e(\psi_t) = e_t \in R^{S}, where e is the feature vector extracted by the Transformer.
TABLE 1
Network layers L = 12; hidden size D = 384; MLP size = 1536; attention heads N_h = 6
Finally, arcFace is used to improve the differentiation of cosine distances.
After the convolution module operates, the data becomes a map of shape (N+1)×D; the combined output of the ResNet module, the Transformer encoder and the multi-head attention is likewise a map of shape (N+1)×D, and the output of the fully connected layer is a vector of shape 1×D.
the encoding module includes multi-head self-attention (MSA) and multi-layer perceptron (MLP). The multi-head self-attention module consists of N pieces ofSingle-head standard self-attention unit (SSA). For a single-head standard self-attention unit, input z is input first p ∈R (N+1)×D The input is subjected to linear transformation to obtain a query matrix Q, a key matrix K and a value matrix V.
Wherein the method comprises the steps ofW Q ,W K ,W V ∈R D×D After Q, K, V is obtained, the calculation formula of the attention weight matrix a is as follows:
element A in matrix A ij Representing the correlation between the ith feature and the jth feature, the greater the value the stronger the correlation,is a scaling factor. The attention weight matrix A point multiplied by the value matrix V to obtain the output z 'of the single-head self-attention unit'
Different single-head self-attention units learn relevant features in independent feature subspaces without interference, and finally the multi-head self-attention module outputs the signals to the single-head self-attention unitsAnd splicing the output results, and obtaining the output of the module through linear transformation. The output is equal to z p Residual connections are made and the layer normalized (layer normalization, LN) is used as input to the next multi-layer perceptron module.
Wherein W is out ∈R D×D Weight, b out ∈R (N+1)×D For bias, SSA is standard self-attention and concat represents stitching. The multi-layer perceptron module consists of two full-connection layers, wherein the activation function of the first full-connection layer is ReLU, and the second full-connection layer does not use the activation function, and the calculation formula is as follows:
MLP(z′ p )=ReLU(z′ p W 1 +b 1 )·W 2 +b 2
wherein W is 1 ,W 2 ,∈R D×D Weight, b 1 ,b 2 ∈R (N+1)×D Is biased;
if z p-1 The output of the coding module is shown as follows, which is the input of the p-th coding module:
z′ p =LN(MSA(z p-1 )+z p-1 )
z p =LN(MLP(z′ p )+z′ p )
z p as the output of the last module and as the input of the next module.
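The coding module described by the formulas above can be written compactly as the following illustrative PyTorch sketch (dimensions taken from Table 1: D = 384, N_h = 6, MLP size 1536); it is a re-implementation for the reader's convenience, not the applicant's code, and it also returns the attention map A so that the sparse self-attention screening described next can reuse it.

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One coding module: z'_p = LN(MSA(z_{p-1}) + z_{p-1}), z_p = LN(MLP(z'_p) + z'_p)."""
    def __init__(self, dim=384, heads=6, mlp_dim=1536):
        super().__init__()
        self.heads, self.scale = heads, (dim // heads) ** -0.5
        self.qkv = nn.Linear(dim, 3 * dim)     # W^Q, W^K, W^V
        self.proj = nn.Linear(dim, dim)        # W_out, b_out
        self.mlp = nn.Sequential(nn.Linear(dim, mlp_dim), nn.ReLU(),
                                 nn.Linear(mlp_dim, dim))
        self.ln1, self.ln2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, z):                      # z: (B, N+1, D)
        B, N, D = z.shape
        q, k, v = self.qkv(z).chunk(3, dim=-1)
        # split into N_h single-head standard self-attention (SSA) units
        q, k, v = (t.view(B, N, self.heads, -1).transpose(1, 2) for t in (q, k, v))
        A = torch.softmax((q @ k.transpose(-2, -1)) * self.scale, dim=-1)  # softmax(QK^T/sqrt(d))
        out = (A @ v).transpose(1, 2).reshape(B, N, D)                     # concatenate the heads
        z1 = self.ln1(self.proj(out) + z)      # residual connection + layer normalization
        return self.ln2(self.mlp(z1) + z1), A  # z_p and the attention map
```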
In the Vision Transformer model, to automatically learn the relationships between elements in a sequence, a sparse self-attention module is used.
Input data: the input to the sparse self-attention module is the output data of the encoder module, with each data point represented as a vector.
If the Vision Transformer network comprises L coding modules, the sparse self-attention module screens the implicit features that are input to the last coding layer by using the attention weights learned by the first L coding layers.
The weight values learned by these layers are normalized and then summed, weighted by the attention map, to obtain the final attention weight A_attn.
Here α is the weight matrix. Using the weights in A_attn that correspond to the classification vector, the hidden feature with the maximum weight is screened from among the N_h self-attention heads. Finally, the screened implicit feature is spliced with the classification vector and added to the output of the ResNet module as the input of the last coding module.
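The published text does not reproduce the exact formulas of this screening step, so the following Python sketch shows only one plausible reading of it: the class-token attention of the earlier coding layers (for example the maps returned by the EncoderBlock sketch above) is averaged, normalized with a softmax and averaged over the heads to give A_attn, the hidden feature with the largest weight is selected, spliced with the classification vector and combined with the ResNet output. The aggregation rule and the 2D-to-D projection are assumptions of this illustration.

```python
import torch
import torch.nn as nn

class SparseScreen(nn.Module):
    """Illustrative reading of the sparse self-attention screening step."""
    def __init__(self, dim=384):
        super().__init__()
        self.fuse = nn.Linear(2 * dim, dim)   # maps [class vector ; screened feature] back to D

    def forward(self, attn_maps, z, resnet_feat):
        # attn_maps: per-layer maps (B, H, N+1, N+1); z: (B, N+1, D); resnet_feat: (B, D)
        alpha = torch.stack([a[:, :, 0, 1:] for a in attn_maps]).mean(0)   # class-token attention
        a_attn = torch.softmax(alpha, dim=-1).mean(1)                      # (B, N) final weights
        idx = a_attn.argmax(dim=-1)                                        # hidden feature with max weight
        picked = z[torch.arange(z.size(0)), idx + 1]                       # +1 skips the class token
        fused = self.fuse(torch.cat([z[:, 0], picked], dim=-1))            # splice with classification vector
        return fused + resnet_feat                                         # add the ResNet branch output
```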
As in Vision Transformer, the first vector of the network output, namely the classification vector, is used for image classification. The loss function of the network is the ArcFace loss L_{ArcFace}.
ArcFace is an improvement on the traditional Softmax loss. The complete Softmax loss is:
L_{softmax} = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{W_{y_i}^{T}z_i+b_{y_i}}}{\sum_{j=1}^{n}e^{W_{j}^{T}z_i+b_j}}
N is the number of samples, n is the number of sample classes, W is the weight matrix, b is the bias vector (W and b together constitute the fully connected layer applied to the feature vector), and z is the feature vector.
Let b_j = 0, W_j^{T}z_i = \|W_j\|\,\|z_i\|\cos\theta_j, \|W_j\| = 1 and \|z_i\| = s, so that:
L = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{s\cos\theta_{y_i}}}{\sum_{j=1}^{n}e^{s\cos\theta_j}}
To compact the intra-class objects and separate the inter-class objects, the angular margin m is added, yielding the final form of ArcFace:
L_{ArcFace} = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{s\cos(\theta_{y_i}+m)}}{e^{s\cos(\theta_{y_i}+m)}+\sum_{j\neq y_i}e^{s\cos\theta_j}}
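An illustrative PyTorch implementation of this loss is sketched below; the scale s and margin m values are common defaults from the ArcFace literature, not values fixed by the specification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceLoss(nn.Module):
    """ArcFace: normalize weights and features (||W_j||=1, ||z_i||=s), drop the
    bias, and add the angular margin m to the target angle before the softmax
    cross-entropy."""
    def __init__(self, in_features, n_classes, s=30.0, m=0.5):
        super().__init__()
        self.W = nn.Parameter(torch.randn(n_classes, in_features))
        self.s, self.m = s, m

    def forward(self, z, target):
        cos = F.linear(F.normalize(z), F.normalize(self.W))       # cos(theta_j)
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        one_hot = F.one_hot(target, cos.size(1)).bool()
        # add the margin m only on the ground-truth class angle
        logits = torch.where(one_hot, torch.cos(theta + self.m), cos) * self.s
        return F.cross_entropy(logits, target)

# usage: loss = ArcFaceLoss(384, 5)(features, labels)
```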
According to the power station draft tube detection method provided by this embodiment of the invention, the draft tube sound signal is acquired and converted into a digital signal, a Transformer-network model is trained on the digital signal, and the trained model is used to perform prediction on the draft tube. Changes such as local tearing and peeling of the draft tube metal shell, draft tube cavitation erosion, an eccentric vortex rope, cracks in the rib plates of the draft tube access door and pressure pulsation are accurately predicted. The existing draft tube detection procedure is simplified, the detection cost is reduced, the actual operating state of the draft tube is accurately judged, and a cost-saving and effective detection method is provided for power station draft tube detection.
Referring to fig. 3, fig. 3 is a block diagram of a draft tube detection device of a power station according to an embodiment of the present invention; the specific apparatus may include:
the data acquisition module 100 is used for acquiring a draft tube sound signal to obtain a sound signal data set;
the signal conversion module 200 is used for performing analog-to-digital conversion on the sound signal data set to obtain a digital signal data set;
the feature extraction module 300 is used for extracting characteristic images from the digital signal data set to obtain a characteristic image data set;
the convolution module 400 is used for performing a convolution operation on the characteristic image data set to obtain a convolution data set;
the weight module 500 is used for linearly processing the convolution data set with the multi-head attention module and screening implicit features with the sparse self-attention module to obtain a weight image data set;
the training module 600 is used for training the pre-training model with the weight image data set to obtain a trained draft tube detection model;
and the prediction module 700 is used for detecting the power station draft tube with the trained draft tube detection model to obtain the state of the draft tube.
A power plant draft tube detection apparatus according to this embodiment is used to implement a power plant draft tube detection method as described above, and thus, the foregoing description of an embodiment of a power plant draft tube detection method in a power plant draft tube detection apparatus may be seen in the example portions of a power plant draft tube detection method, for example, the data acquisition module 100, the signal conversion module 200, the feature extraction module 300, the convolution module 400, the weighting module 500, the training module 600, and the prediction module 700, which are used to implement steps S101, S102, S103, S104, S105, S106, and S107 in a power plant draft tube detection method, respectively, so that the detailed description thereof will be omitted herein.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and additional implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order from that shown or discussed, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present invention.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, so that the same or similar parts between the embodiments are referred to each other. For the device disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section.
The method, the device, the equipment and the storage medium for detecting the draft tube of the power station provided by the invention are described in detail. The principles and embodiments of the present invention have been described herein with reference to specific examples, the description of which is intended only to facilitate an understanding of the method of the present invention and its core ideas. It should be noted that it will be apparent to those skilled in the art that various modifications and adaptations of the invention can be made without departing from the principles of the invention and these modifications and adaptations are intended to be within the scope of the invention as defined in the following claims.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.

Claims (10)

1. A method for detecting a draft tube of a power station, comprising:
acquiring a draft tube sound signal to obtain a sound signal data set;
the sound signal data set is processed through analog-to-digital conversion to obtain a digital signal data set;
extracting a characteristic image based on the digital signal data set to obtain a characteristic image data set;
performing convolution operation on the characteristic image data set to obtain a convolution data set;
after the convolution data set is subjected to linear processing by utilizing a multi-head attention module, implicit characteristics are screened by a sparse self-attention module, and a weight image data set is obtained;
training the pre-training model by using the weight image data set to obtain a trained draft tube detection model;
and detecting the draft tube of the power station by using the draft tube detection model after training to obtain the state of the draft tube.
2. The method for detecting a draft tube of a power station according to claim 1, wherein said training a pre-training model using said weighted image dataset includes:
calculating the error with the ArcFace loss function, using the first vector output by the network, namely the classification vector, for image classification, and repeatedly iterating the pre-training model by an inter-class object separation method until the loss function converges, to obtain the trained model.
3. The method for detecting a draft tube of a power station according to claim 2, wherein the ArcFace loss function is derived from the Softmax loss:
L_{softmax} = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{W_{y_i}^{T}z_i+b_{y_i}}}{\sum_{j=1}^{n}e^{W_{j}^{T}z_i+b_j}}
wherein N is the number of samples, n is the number of sample classes, W is the weight matrix, b is the bias vector, and z is the feature vector;
letting b_j = 0, W_j^{T}z_i = \|W_j\|\,\|z_i\|\cos\theta_j, \|W_j\| = 1 and \|z_i\| = s gives:
L = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{s\cos\theta_{y_i}}}{\sum_{j=1}^{n}e^{s\cos\theta_j}}
the final loss function is obtained by separating the inter-class objects and adding the angular margin m:
L_{ArcFace} = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{s\cos(\theta_{y_i}+m)}}{e^{s\cos(\theta_{y_i}+m)}+\sum_{j\neq y_i}e^{s\cos\theta_j}}
wherein L_{ArcFace} is the loss function.
4. The method of draft tube inspection of claim 1, wherein said subjecting said acoustic signal data set to analog to digital conversion processing to obtain a digital signal data set includes:
performing a conversion operation on the sound signal data set by Fourier transform and filtering, converting the original audio data into picture data to obtain the digital signal data set, calculated as:
X = \{X_i\}
the log-mel spectrum of the signal X is computed as the frame sequence (\psi_1, \ldots, \psi_T), wherein \psi_t \in R^F, and F and T are the number of mel filters and the number of time frames, respectively.
5. The power plant draft tube inspection method according to claim 1, wherein said linearly processing said convolved data set with a multi-headed attention module includes:
performing linear processing on the convolution data set to obtain a query matrix, a key matrix and a value matrix, calculated as:
Q = z_p W^Q,  K = z_p W^K,  V = z_p W^V
wherein Q is the query matrix, K is the key matrix, V is the value matrix, z_p \in R^{(N+1)\times D} is the module input, and W^Q, W^K, W^V \in R^{D\times D};
calculating an attention weight matrix from the query matrix, the key matrix and the value matrix, calculated as:
A = \mathrm{softmax}\left(\frac{Q K^{T}}{\sqrt{D}}\right)
wherein the element A_{ij} expresses the correlation between the i-th and j-th features, T denotes the transpose, and \sqrt{D} is the scaling factor;
multiplying the attention weight matrix by the value matrix to obtain each single-head self-attention output, splicing the single-head outputs, applying a residual connection and layer normalization, and obtaining the output of the multi-layer perceptron, calculated as:
\mathrm{MSA}(z_p) = \mathrm{Concat}(\mathrm{SSA}_1(z_p), \ldots, \mathrm{SSA}_{N_h}(z_p)) W_{out} + b_{out}
\mathrm{MLP}(z'_p) = \mathrm{ReLU}(z'_p W_1 + b_1)\cdot W_2 + b_2
wherein W_{out} \in R^{D\times D} and W_1, W_2 \in R^{D\times D} are weights, b_{out}, b_1, b_2 \in R^{(N+1)\times D} are biases, SSA is standard self-attention, Concat denotes splicing, ReLU is the activation function of the first fully connected layer, MSA(z_p) is the multi-head attention output and MLP(z'_p) is the multi-layer perceptron output;
if z_{p-1} is the input of the p-th coding module, then:
z'_p = \mathrm{LN}(\mathrm{MSA}(z_{p-1}) + z_{p-1})
z_p = \mathrm{LN}(\mathrm{MLP}(z'_p) + z'_p)
wherein LN is layer normalization.
6. The method for detecting a draft tube of a power station according to claim 5, wherein said screening implicit features by a sparse self-attention module to obtain a weighted image dataset includes:
screening the implicit features input to the last coding layer by using the weights learned by the first L coding layers;
normalizing the weight values and computing a weighted sum with the attention map to obtain the final attention weight, wherein α is the weight matrix and A_attn is the final attention weight;
and screening the hidden feature corresponding to the maximum weight, splicing it with the classification vector, adding the merged feature to the output of the ResNet module, and taking the result as the input of the last coding layer to obtain the weight image data set.
7. The method of claim 1, wherein the acquiring the draft tube sound signal to obtain the sound signal data set comprises:
acquiring the draft tube sound signal with a sound acquisition unit, and processing the draft tube sound signal by signal conditioning and impedance transformation to obtain the sound signal data set.
8. A power plant draft tube inspection device, comprising:
the data acquisition module is used for acquiring the sound signal of the draft tube to obtain a sound signal data set;
the signal conversion module is used for obtaining a digital signal data set through analog-to-digital conversion processing of the sound signal data set;
the feature extraction module is used for extracting a feature image based on the digital signal data set to obtain a feature image data set;
the convolution module is used for carrying out convolution operation on the characteristic image data set to obtain a convolution data set;
the weight module is used for linearly processing the convolution data set by utilizing the multi-head attention module, and screening hidden features by utilizing the sparse self-attention module to obtain a weight image data set;
the training module is used for training the pre-training model by using the weight image data set to obtain a trained draft tube detection model;
and the prediction module is used for detecting the draft tube of the power station by using the draft tube detection model after training, so as to obtain the state of the draft tube.
9. A power plant draft tube inspection apparatus, comprising:
a memory for storing a computer program;
a processor for carrying out the steps of a power plant draft tube detection method according to any one of claims 1 to 7 when said computer program is executed.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of a power plant draft tube detection method according to any one of claims 1 to 7.
CN202310531512.XA 2023-05-11 2023-05-11 Method, device, equipment and storage medium for detecting draft tube of power station Pending CN116524273A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310531512.XA CN116524273A (en) 2023-05-11 2023-05-11 Method, device, equipment and storage medium for detecting draft tube of power station

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310531512.XA CN116524273A (en) 2023-05-11 2023-05-11 Method, device, equipment and storage medium for detecting draft tube of power station

Publications (1)

Publication Number Publication Date
CN116524273A true CN116524273A (en) 2023-08-01

Family

ID=87391972

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310531512.XA Pending CN116524273A (en) 2023-05-11 2023-05-11 Method, device, equipment and storage medium for detecting draft tube of power station

Country Status (1)

Country Link
CN (1) CN116524273A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117237359A (en) * 2023-11-15 2023-12-15 天津市恒一机电科技有限公司 Conveyor belt tearing detection method and device, storage medium and electronic equipment
CN117237359B (en) * 2023-11-15 2024-02-20 天津市恒一机电科技有限公司 Conveyor belt tearing detection method and device, storage medium and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination