CN114400019A - Model generation method, abnormality detection device, and electronic apparatus - Google Patents

Model generation method, abnormality detection device, and electronic apparatus

Info

Publication number
CN114400019A
CN114400019A
Authority
CN
China
Prior art keywords: audio, detection model, anomaly detection, trained, network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111666960.8A
Other languages
Chinese (zh)
Inventor
于洪伟
李亚桐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Voiceai Technologies Co ltd
Original Assignee
Voiceai Technologies Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Voiceai Technologies Co ltd filed Critical Voiceai Technologies Co ltd
Priority to CN202111666960.8A priority Critical patent/CN114400019A/en
Publication of CN114400019A publication Critical patent/CN114400019A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G10L25/30 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/044 - Recurrent networks, e.g. Hopfield networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The embodiments of the present application disclose a model generation method, an anomaly detection method and apparatus, and an electronic device. The method includes: acquiring a training data set, where the training data set includes first audio features of a plurality of pieces of audio information of a target device, and the plurality of pieces of audio information include normal audio information and abnormal audio information; training a generator network to be trained with the training data set, so that the converged generator network serves as an initial anomaly detection model; and adjusting the initial anomaly detection model through a discriminator network, so that the adjusted generator network serves as a target anomaly detection model. With this method, the first audio feature of a device to be detected can be input into the target anomaly detection model to perform anomaly detection on the device, which saves manpower and improves anomaly detection efficiency.

Description

Model generation method, abnormality detection device, and electronic apparatus
Technical Field
The present application relates to the field of computer technologies, and in particular, to a model generation method, an anomaly detection apparatus, and an electronic device.
Background
With the ever-increasing demand for electric power, power systems place increasingly strict requirements on the economy and reliability of power operation equipment. Long-term operation under load and the influence of the natural environment (air temperature, air pressure, humidity, contamination, and the like) cause aging and wear of power operation equipment, so that its performance and reliability gradually decline and potential safety hazards arise. It is therefore necessary to monitor and detect the operating state of power operation equipment.
However, existing detection methods for power operation equipment rely on manual inspection, which is inefficient.
Disclosure of Invention
In view of the above problems, the present application proposes a model generation method, an anomaly detection apparatus, an electronic device, and a storage medium to improve on the above problems.
In a first aspect, the present application provides a model generation method applied to an electronic device, the method including: acquiring a training data set, wherein the training data set comprises respective first audio features of a plurality of pieces of audio information of a target device, and the plurality of pieces of audio information comprise normal audio information and abnormal audio information; training a generator network to be trained through the training data set to take the converged generator network to be trained as an initial anomaly detection model; and adjusting the initial anomaly detection model through a discriminator network to take the adjusted generator network as a target anomaly detection model.
In a second aspect, the present application provides an anomaly detection method applied to an electronic device, the method including: acquiring audio to be detected; performing framing, windowing, and fast Fourier transform on the audio to be detected to obtain a first audio feature corresponding to the audio to be detected, where the first audio feature is a spectrogram corresponding to the audio to be detected; and inputting the first audio feature into the target anomaly detection model obtained by the above method, and obtaining a detection result output by the target anomaly detection model.
In a third aspect, the present application provides a model generation apparatus, operable on an electronic device, the apparatus including: the data set acquisition unit is used for acquiring a training data set, wherein the training data set comprises first audio features of a plurality of pieces of audio information of the target equipment, and the plurality of pieces of audio information comprise normal audio information and abnormal audio information; an initial anomaly detection model obtaining unit, configured to train a generator network to be trained through the training data set, so as to use the converged generator network to be trained as an initial anomaly detection model; and the target anomaly detection model acquisition unit is used for adjusting the initial anomaly detection model through the discriminator network so as to take the adjusted generator network as a target anomaly detection model.
In a fourth aspect, the present application provides an anomaly detection apparatus, operable on an electronic device, the apparatus including: a to-be-detected audio acquisition unit, configured to acquire audio to be detected; a first audio feature acquisition unit, configured to perform framing, windowing, and fast Fourier transform on the audio to be detected to obtain a first audio feature corresponding to the audio to be detected, where the first audio feature is a spectrogram corresponding to the audio to be detected; and a detection result acquisition unit, configured to input the first audio feature into the target anomaly detection model obtained by the above method and acquire a detection result output by the target anomaly detection model.
In a fifth aspect, the present application provides an electronic device comprising one or more processors and a memory; one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the methods described above.
In a sixth aspect, the present application provides a computer-readable storage medium having program code stored therein, wherein the program code, when executed, performs the method described above.
After a training data set including the first audio features of normal audio information and abnormal audio information is acquired, the generator network to be trained is trained with the training data set so that the converged generator network serves as an initial anomaly detection model, and the initial anomaly detection model is then adjusted through a discriminator network so that the adjusted generator network serves as a target anomaly detection model. In this way, once the target anomaly detection model has been trained on the first audio features of normal and abnormal audio information, anomaly detection on a device to be detected only requires feeding the first audio feature of that device into the target anomaly detection model, which saves manpower and improves anomaly detection efficiency. In addition, because the initial anomaly detection model is adjusted through the discriminator network, the target anomaly detection model performs well even when abnormal audio training data are scarce, that is, it can distinguish normal audio from abnormal audio with high accuracy.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating a model generation method according to an embodiment of the present application;
FIG. 2 is a flowchart of step S110 of FIG. 1 according to an embodiment of the present application;
fig. 3 shows a schematic diagram of a flow of a method of acquiring an audio data set proposed by the present application;
FIG. 4 is a schematic diagram illustrating a flow of a method for obtaining a first audio feature proposed in the present application;
FIG. 5 shows a schematic diagram of a generator network to be trained proposed by the present application;
FIG. 6 is a flow chart illustrating a method of model generation according to another embodiment of the present application;
FIG. 7 is a flow chart illustrating a method of model generation according to yet another embodiment of the present application;
FIG. 8 is a schematic diagram of an anomaly detection model to be trained according to the present application;
FIG. 9 is a flow chart illustrating a method for anomaly detection as set forth in an embodiment of the present application;
fig. 10 is a block diagram illustrating a structure of a model generation apparatus according to an embodiment of the present application;
fig. 11 is a block diagram showing a structure of an abnormality detection apparatus according to an embodiment of the present application;
fig. 12 is a block diagram illustrating an electronic device according to the present application;
fig. 13 shows a storage unit for storing or carrying program code for implementing the methods according to the embodiments of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
With the ever-increasing demand for electric power, power systems place increasingly strict requirements on the economy and reliability of power operation equipment. Long-term operation under load and the influence of the natural environment (air temperature, air pressure, humidity, contamination, and the like) cause aging and wear of power operation equipment, so that its performance and reliability gradually decline and potential safety hazards arise. It is therefore necessary to monitor and detect the operating state of power operation equipment.
In research on anomaly detection for power operation equipment, the inventors found that traditional anomaly detection methods for such equipment require manual inspection and are inefficient, while anomaly detection methods based on deep neural networks require a large amount of abnormal data to train the network parameters. In an actual production environment, however, abnormal data are scarce, so the performance of the deep neural network is poor.
Therefore, the inventors propose the model generation method, anomaly detection method and apparatus, and electronic device of the present application. After a training data set including the first audio features of normal audio information and abnormal audio information is acquired, the generator network to be trained is trained with the training data set so that the converged generator network serves as an initial anomaly detection model, and the initial anomaly detection model is then adjusted through a discriminator network so that the adjusted generator network serves as a target anomaly detection model. In this way, once the target anomaly detection model has been trained on the first audio features of normal and abnormal audio information, anomaly detection on a device to be detected only requires feeding the first audio feature of that device into the target anomaly detection model, which saves manpower and improves anomaly detection efficiency. In addition, because the initial anomaly detection model is adjusted through the discriminator network, the target anomaly detection model performs well even when abnormal audio training data are scarce, that is, it can distinguish normal audio from abnormal audio with high accuracy.
Referring to fig. 1, a model generation method provided by the present application is applied to an electronic device, and the method includes:
s110: a training data set is obtained, wherein the training data set comprises first audio features of a plurality of pieces of audio information of a target device, and the plurality of pieces of audio information comprise normal audio information and abnormal audio information.
In this embodiment, the target device may be a device selected for anomaly detection, and may be power operation equipment such as a generator, a motor, or a transformer.
As shown in fig. 2, the acquiring of the training data set includes:
s111: a plurality of audio information of a target device is acquired.
In one implementation, the sound that the target device produces during operation due to its internal structure or hardware condition may be sampled multiple times by an audio acquisition device (for example, a recorder), and the sampled audio segments are used as the plurality of pieces of audio information of the target device. The plurality of pieces of audio information may include normal audio information (audio recorded while the target device operates normally) and abnormal audio information (audio recorded while the target device is abnormal, for example, the sound of a transformer in an abnormal state such as overcurrent caused by an external short circuit or overload caused by a load exceeding the rated capacity for a long time). For example, when the target device is a transformer, the audio acquisition device may sample several audio segments during normal operation and during abnormal operation of the transformer to obtain a plurality of pieces of audio information with a sampling rate of 16 kHz and a sampling precision of 16 bits.
It should be noted that the sampling rate and the sampling precision are generally determined by the hardware of the audio acquisition device. A higher sampling rate means more sampling points are acquired per second; for example, at a sampling rate of 16 kHz the audio acquisition device acquires 16000 sampling points per second. A higher sampling precision means each sampling point can represent a wider range of values; for example, at a sampling precision of 16 bits the data of each sampling point ranges from -32768 (-2^(16-1)) to +32767 (2^(16-1)-1).
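For illustration only (this snippet is not part of the application, and the variable names are ours), the relationship between the sampling rate, the sampling precision, and the representable value range described above can be checked with a few lines of Python:

```python
# Illustrative check of the figures above; names and values are assumptions for the example.
sampling_rate = 16_000                      # 16 kHz -> 16000 sampling points per second
sampling_bits = 16                          # 16-bit sampling precision

value_min = -(2 ** (sampling_bits - 1))     # -32768
value_max = 2 ** (sampling_bits - 1) - 1    # +32767

print(sampling_rate, value_min, value_max)  # 16000 -32768 32767
```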
As shown in fig. 3, after the audio collecting device collects a plurality of audio information of the target device, the plurality of audio information of the target device may be stored and labeled, so as to use the labeled plurality of audio information as an audio data set. For example, the tag of the captured normal audio information may be set to 1, and the tag of the captured abnormal audio information may be set to 0.
It should be noted that, there are various ways of labeling the multiple pieces of audio information of the target device, and the labeling may be manual labeling, automatic labeling by using a classification model, or manual calibration after automatic labeling by using a classification model.
S112: and performing framing, windowing and fast Fourier transform on the audio information to obtain a spectrogram corresponding to the audio information.
In one implementation, the audio data set containing the plurality of pieces of audio information of the target device may be preprocessed and the result stored as a training data set, where the training data set includes the spectrogram corresponding to each piece of audio information; in this way the audio information is converted into image information that can be input into a deep neural network for training. As shown in fig. 4, the preprocessing may include framing, windowing, and fast Fourier transform: the labeled audio is first divided into frames and windowed, and a fast Fourier transform is then performed on each window to obtain the spectrogram corresponding to the labeled audio. For example, suppose each audio segment collected by the audio acquisition device lasts 1 s with a sampling rate of 16 kHz, the frame length is 25 ms, the frame shift is 10 ms, and the window is a Hanning window. Each piece of audio information then corresponds to 16000 sampling points, and each frame corresponds to 400 sampling points. With the window length equal to the frame length, multiplying each frame by the Hanning window function yields 400 windowed sampling points, and performing a 512-point fast Fourier transform on the windowed sampling points yields the 512 spectral lines corresponding to that frame. After each calculation, the Hanning window is shifted backward by 10 ms (160 sampling points) and the next frame is processed, until all frames have been processed and the spectrogram corresponding to each piece of audio information is obtained.
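The following Python sketch (ours, not the application's code) illustrates the preprocessing described above with the example parameters: 1 s of 16 kHz audio, a 25 ms frame length, a 10 ms frame shift, a Hanning window, and a 512-point FFT. It uses the one-sided FFT, so each frame yields 257 spectral bins, consistent with the spectrogram size given later in the description.

```python
import numpy as np

def spectrogram(signal, sr=16000, frame_len_s=0.025, frame_shift_s=0.010, n_fft=512):
    frame_len = int(sr * frame_len_s)        # 400 sampling points per 25 ms frame
    frame_shift = int(sr * frame_shift_s)    # 160 sampling points per 10 ms shift
    window = np.hanning(frame_len)           # Hanning window, same length as a frame
    n_frames = (len(signal) - frame_len) // frame_shift + 1
    spec = np.empty((n_fft // 2 + 1, n_frames))
    for i in range(n_frames):
        frame = signal[i * frame_shift:i * frame_shift + frame_len] * window
        spec[:, i] = np.abs(np.fft.rfft(frame, n=n_fft))  # 257 spectral bins per frame
    return spec

audio = np.random.randn(16000)               # stand-in for 1 s of recorded device audio
print(spectrogram(audio).shape)              # (257, 98)
```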
In another implementation, the audio data set containing the plurality of pieces of audio information of the target device may be processed in real time through framing, windowing, and fast Fourier transform to obtain the spectrogram corresponding to each piece of audio information, so that the audio information is converted into image information and fed into the deep neural network for training in real time.
It should be noted that the frame length, the frame shift, the window function, and the number of fast Fourier transform points may be chosen according to actual requirements. For example, considering both the real-time performance of the anomaly detection model and the resolution of the audio information, the number of fast Fourier transform points may be set to 512, which extracts rich audio information while keeping the model fast.
Furthermore, it should be noted that in actual production, abnormal situations of power operation equipment are rare, so the training data set may contain more spectrograms corresponding to normal audio than spectrograms corresponding to abnormal audio; the training data set in this embodiment of the application may therefore be unbalanced.
S113: and taking the spectrogram as a first audio feature of the audio information.
As one way, the plurality of spectrogram patterns of the target device may be used as the first audio features of the plurality of audio information of the target device.
S120: training the generator network to be trained through the training data set so as to take the converged generator network to be trained as an initial anomaly detection model.
The generator network (Generator, G) to be trained may be configured to generate, based on the first audio feature, a second audio feature that conforms to the distribution of the first audio feature, and to perform feature extraction on the second audio feature. In this embodiment, the generator network to be trained may include a first audio feature reconstruction network and a feature extraction network, where the first audio feature reconstruction network may include a first encoder, an LSTM (Long Short-Term Memory network), and a decoder, and the feature extraction network may include a second encoder. In one implementation, the first audio feature is input into the first encoder and encoded (Coding) through a nonlinear transformation to obtain a low-dimensional feature of the first audio feature. The encoded feature (the low-dimensional feature of the first audio feature) is then input into the LSTM. Because the LSTM is good at extracting temporal information and the audio features in this application are time-dependent, the LSTM can further process the encoded feature to obtain a more effective latent representation, so that the generator network to be trained learns the characteristics of normal audio and abnormal audio in the time dimension respectively, which improves its ability to distinguish them. The latent representation is then input into the decoder and decoded (Decoding) through an inverse mapping to reconstruct the first audio feature; the reconstructed first audio feature may be called the second audio feature and has the same size as the first audio feature. After the second audio feature is obtained, it may be input into the second encoder for further feature extraction and dimension reduction.
Optionally, the generator network to be trained may further include a fully connected layer followed by a softmax activation function that outputs the anomaly detection result for the audio information corresponding to the first audio feature. By training the generator network to be trained with the training data set containing the plurality of first audio features, a converged generator network is obtained, and this converged generator network can be used as the initial anomaly detection model.
Optionally, to reduce the loss of feature information, residual connections may be used between intermediate features of the same size in the first encoder and the decoder of the generator network to be trained. For example, as shown in fig. 5, in the first encoder a 3 × 3 convolution is applied to the first audio feature to obtain an intermediate feature; in the decoder, a 3 × 3 deconvolution is applied to the output of the LSTM to obtain an output feature, which is added to the encoder's intermediate feature of the same size to obtain the decoder's intermediate feature.
It should be noted that the first encoder and the second encoder may have the same structure. Illustratively, as shown in fig. 5, the first encoder and the second encoder of the generator network to be trained may respectively include 3 × 3 two-dimensional convolutional layers, the decoder may include 3 × 3 two-dimensional deconvolution layers, and the generator network to be trained may further include one LSTM layer and one fully connected layer.
Furthermore, it should be noted that the network layer depths and the parameters (e.g., convolutional layer size, deconvolution layer size, etc.) corresponding to each layer of the network of encoders, decoders, LSTM layers, and fully-connected layers in the generator network to be trained can be flexibly set according to different target devices, first audio feature sizes, etc. Optionally, in order to improve the performance of the model, an attention mechanism may be introduced in the generator network to be trained.
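As a concrete illustration of the structure described above, the following PyTorch sketch shows one possible reading of the generator: a first encoder, an LSTM over the time axis, a decoder with a residual connection back to the encoder feature, and a second encoder. It is a minimal sketch under our own assumptions (channel count, averaging of the frequency axis before the LSTM, omission of the fully connected layer), not the application's exact network.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.enc1 = nn.Conv2d(1, channels, kernel_size=3, padding=1)          # first encoder
        self.lstm = nn.LSTM(input_size=channels, hidden_size=channels, batch_first=True)
        self.dec = nn.ConvTranspose2d(channels, 1, kernel_size=3, padding=1)  # decoder
        self.enc2 = nn.Conv2d(1, channels, kernel_size=3, padding=1)          # second encoder

    def forward(self, x):                          # x: (batch, 1, freq, time) spectrogram
        z1 = torch.relu(self.enc1(x))              # low-dimensional feature of the first audio feature
        b, c, f, t = z1.shape
        seq = z1.mean(dim=2).permute(0, 2, 1)      # collapse frequency, giving (batch, time, channels)
        latent, _ = self.lstm(seq)                 # latent representation along the time dimension
        latent = latent.permute(0, 2, 1).unsqueeze(2).expand(b, c, f, t)
        x_rec = self.dec(latent + z1)              # residual connection, then reconstruct G(x)
        z2 = torch.relu(self.enc2(x_rec))          # second encoder re-embeds the reconstruction
        return x_rec, z1, z2

gen = Generator()
x = torch.randn(2, 1, 257, 198)                    # a batch of two 257 x 198 spectrograms
x_rec, z1, z2 = gen(x)
print(x_rec.shape)                                 # torch.Size([2, 1, 257, 198])
```

Here z1 and z2 stand for the two encoder outputs that the first loss function below compares.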
S130: and adjusting the initial anomaly detection model through a discriminator network to take the adjusted generator network as a target anomaly detection model.
A discriminator network (Discriminator, D) may be used to discriminate whether the distributions of the first audio feature and the second audio feature in the initial anomaly detection model are consistent. In this embodiment of the application, the discriminator network may include a third encoder; optionally, the third encoder may have the same network structure as the first encoder or the second encoder.
In one implementation, the training objective of the generator network in the initial anomaly detection model is to generate a second audio feature that conforms to the (real) distribution of the first audio feature, while the training objective of the discriminator network is to correctly judge whether the distributions of the first audio feature and the second audio feature are consistent. The network parameters (for example, weights) of the initial anomaly detection model are therefore adjusted through the discriminator network, so that in the process of competing against the discriminator network the generator network learns the data distributions of the first audio features corresponding to the normal audio information and the abnormal audio information respectively. The adjusted generator network can then be used as the target anomaly detection model to perform anomaly detection on input audio features.
In another implementation, whether to adjust the initial anomaly detection model through the discriminator network may be decided according to the proportion of abnormal audio information in the training data set: if the proportion of abnormal audio information is smaller than a threshold, the initial anomaly detection model is adjusted through the discriminator network, and the adjusted generator network is used as the target anomaly detection model. For example, suppose the threshold is A and the proportion of abnormal audio information in the training data set is B. If B is smaller than A, the training samples of abnormal audio information are insufficient, and the initial anomaly detection model may fail to learn the data distribution of the first audio features corresponding to the abnormal audio information, so it may not accurately distinguish normal audio from abnormal audio. In this case, the network parameters (for example, weights) of the initial anomaly detection model may be adjusted through the discriminator network, so that the generator network in the initial anomaly detection model learns, while competing against the discriminator network, the data distributions of the first audio features corresponding to the normal audio information and the abnormal audio information respectively; the adjusted generator network can then be used as the target anomaly detection model to perform anomaly detection on input audio features.
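Purely as an illustration of this decision rule (the threshold value is an assumption of ours, and the labels follow the 1-for-normal, 0-for-abnormal convention mentioned earlier):

```python
def needs_adversarial_adjustment(labels, threshold=0.1):
    """Return True when the abnormal-audio proportion is below the threshold."""
    labels = list(labels)                      # labels: 1 = normal audio, 0 = abnormal audio
    abnormal_ratio = labels.count(0) / len(labels)
    return abnormal_ratio < threshold

print(needs_adversarial_adjustment([1] * 95 + [0] * 5))  # True -> adjust through the discriminator
```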
In the model generation method provided by this embodiment, after a training data set including the first audio features of normal audio information and abnormal audio information is acquired, the generator network to be trained is trained with the training data set so that the converged generator network serves as an initial anomaly detection model, and the initial anomaly detection model is then adjusted through a discriminator network so that the adjusted generator network serves as a target anomaly detection model. In this way, once the target anomaly detection model has been trained on the first audio features of normal and abnormal audio information, anomaly detection on a device to be detected only requires feeding the first audio feature of that device into the target anomaly detection model, which saves manpower and improves anomaly detection efficiency. In addition, because the initial anomaly detection model is adjusted through the discriminator network, the target anomaly detection model performs well even when abnormal audio training data are scarce, that is, it can distinguish normal audio from abnormal audio with high accuracy.
Referring to fig. 6, a model generation method provided by the present application is applied to an electronic device, and the method includes:
s210: a training data set is obtained, wherein the training data set comprises first audio features of a plurality of pieces of audio information of a target device, and the plurality of pieces of audio information comprise normal audio information and abnormal audio information.
S220: and inputting the training data set into the generator network to be trained to obtain the output of the generator network to be trained.
As one mode, the first audio features corresponding to the normal audio information and the abnormal audio information may be input to the generator network to be trained, so as to obtain the output of the generator network to be trained. In this manner, there may be a plurality of normal audio information and a plurality of abnormal audio information, each of the normal or abnormal audio information corresponding to a first audio characteristic.
S230: training the generator network to be trained based on the output, a first loss function and a second loss function to take the converged generator network to be trained as an initial anomaly detection model, wherein the first loss function is an absolute value of a difference between the first encoder output result and the second encoder output result, the second loss function is an absolute value of a difference between the first audio feature and a second audio feature, and the second audio feature is an output result of the decoder.
In this embodiment of the application, the first loss function may be used to minimize the distance between the output feature of the first encoder and the output feature of the second encoder in the generator network (Generator, G) to be trained, so that the generator network learns the distributions of the encoded features corresponding to the normal audio information and the abnormal audio information respectively. In one implementation, the first loss function may be the absolute value of the difference between the first encoder output result and the second encoder output result, and it may be calculated as follows:
Loss_g1 = ‖z1 - z2‖
where z1 denotes the output result of the first encoder and z2 denotes the output result of the second encoder.
Furthermore, in the embodiment of the present application, the second loss function may be used to minimize a distance between the first audio feature and the second audio feature in the Generator network to be trained (Generator, G), so that the Generator network to be trained may learn a distribution of texture features corresponding to each of the normal audio information and the abnormal audio information. By one approach, the second loss function may be an absolute value of a difference between the first audio feature and the decoder output result, and the second loss function is calculated as follows:
Loss_g2 = ‖x - G(x)‖
where x denotes the first audio feature and G(x) denotes the output result of the decoder.
As one way, a weighted sum of the first loss function and the second loss function may be used as a loss function of the generator network to be trained, and the generator network to be trained is trained based on an output of the generator network to be trained and the loss function of the generator network to be trained, so as to use the converged generator network to be trained as an initial anomaly detection model. The calculation formula of the loss function of the generator network to be trained is as follows:
Loss_G = x·Loss_g1 + y·Loss_g2
the sum of x and y is 1, and the values of x and y may be set based on experience or trained as trainable parameters of the generator network to be trained.
S240: and adjusting the initial anomaly detection model through a discriminator network to take the adjusted generator network as a target anomaly detection model.
According to the model generation method provided by this embodiment, once the target anomaly detection model has been trained on the first audio features of normal and abnormal audio information, anomaly detection on a device to be detected only requires feeding the first audio feature of that device into the target anomaly detection model, which saves manpower and improves anomaly detection efficiency. In addition, because the initial anomaly detection model is adjusted through the discriminator network, the target anomaly detection model performs well even when abnormal audio training data are scarce, that is, it can distinguish normal audio from abnormal audio with high accuracy. Furthermore, in this embodiment the generator network to be trained is trained with its output, the first loss function, and the second loss function, so that the converged generator network serves as the initial anomaly detection model; the initial anomaly detection model thereby learns the feature distributions corresponding to the normal audio information and the abnormal audio information respectively, which improves its ability to distinguish normal audio from abnormal audio.
Referring to fig. 7, a model generation method provided by the present application is applied to an electronic device, and the method includes:
s310: a training data set is obtained, wherein the training data set comprises first audio features of a plurality of pieces of audio information of a target device, and the plurality of pieces of audio information comprise normal audio information and abnormal audio information.
S320: training the generator network to be trained through the training data set so as to take the converged generator network to be trained as an initial anomaly detection model.
S330: and acquiring an anomaly detection model to be trained, wherein the anomaly detection model to be trained comprises the initial anomaly detection model and the third encoder.
In one implementation, as shown in fig. 8, the anomaly detection model to be trained may include a first encoder, a decoder, a second encoder, and a third encoder, where the third encoder may serve as the discriminator network (Discriminator, D), and the inputs of the third encoder may be the first audio feature and the output of the decoder (the second audio feature).
Alternatively, the first encoder, the second encoder, and the third encoder may have the same structure.
S340: and inputting the training data set into the anomaly detection model to be trained to obtain the output of the anomaly detection model to be trained.
As one mode, the plurality of first audio features corresponding to the normal audio information and the abnormal audio information may be input into the abnormal detection model to be trained, so as to obtain an output of the abnormal detection model to be trained.
S350: and adjusting the anomaly detection model to be trained based on the output, the first loss function, the second loss function and a third loss function to obtain a convergent anomaly detection model to be trained, wherein the third loss function is an absolute value of a difference between a third audio feature and a fourth audio feature, the third audio feature is a third encoder output result corresponding to the first audio feature, and the fourth audio feature is a third encoder output result corresponding to the second audio feature.
In this embodiment, the third loss function may be used to minimize the distance between the discriminator network output feature corresponding to the first audio feature and the discriminator network output feature corresponding to the second audio feature, so that the anomaly detection model to be trained learns features that can deceive the discriminator network (Discriminator, D), that is, so that the discriminator network cannot tell whether the second audio feature is a generated feature. In one implementation, the third loss function may be the absolute value of the difference between the third audio feature and the fourth audio feature, and it may be calculated as follows:
Loss_d = ‖D(x) - D(G(x))‖
where D(x) denotes the third audio feature, namely the output result obtained by inputting the first audio feature into the discriminator network, and D(G(x)) denotes the fourth audio feature, namely the output result obtained by inputting the second audio feature into the discriminator network.
In one implementation, a weighted sum of the first loss function, the second loss function, and the third loss function may be used as the loss function of the anomaly detection model to be trained, and the anomaly detection model to be trained is trained based on its output and this loss function to obtain the converged anomaly detection model to be trained. The loss function of the anomaly detection model to be trained is calculated as follows:
Loss = x·Loss_g1 + y·Loss_g2 + z·Loss_d
the sum of x, y and z is 1, and the values of x, y and z may be set based on experience or trained as trainable parameters of the anomaly detection model to be trained.
S360: and taking a generator network in the converged anomaly detection model to be trained as a target anomaly detection model.
According to the model generation method provided by this embodiment, once the target anomaly detection model has been trained on the first audio features of normal and abnormal audio information, anomaly detection on a device to be detected only requires feeding the first audio feature of that device into the target anomaly detection model, which saves manpower and improves anomaly detection efficiency. In addition, because the initial anomaly detection model is adjusted through the discriminator network, the target anomaly detection model performs well even when abnormal audio training data are scarce, that is, it can distinguish normal audio from abnormal audio with high accuracy. Furthermore, in this embodiment the discriminator network judges whether the audio features produced by the generator network in the anomaly detection model to be trained are real or generated (the first audio feature is labeled real and the second audio feature is labeled generated), which improves the generator network's ability to capture normal audio features and enlarges the difference between the extracted features of normal audio and abnormal audio, thereby further improving the performance of the generator network in the anomaly detection model to be trained, that is, the performance of the target anomaly detection model.
Referring to fig. 9, an abnormality detection method provided by the present application is applied to an electronic device, and the method includes:
s410: and acquiring the audio to be detected.
The audio to be detected may be the sound produced by power operation equipment (a generator, a motor, a transformer, etc.) during operation due to its internal structure or hardware condition. In one implementation, the audio to be detected may be acquired periodically by the audio acquisition device, so that the power operation equipment can be monitored in real time, anomalies can be found and repaired promptly, and potential safety hazards are avoided. For example, the audio to be detected may be acquired by the audio acquisition device every 2 s.
S420: and performing framing, windowing and fast Fourier transform on the audio to be detected to obtain a first audio characteristic corresponding to the audio to be detected, wherein the first audio characteristic is a spectrogram corresponding to the audio to be detected.
In this embodiment of the application, the spectrogram may be a two-dimensional image whose size is related to the number of fast Fourier transform points and to the number of frames obtained after framing and windowing the audio to be detected. The first dimension of the spectrogram is given by: number of fast Fourier transform points / 2 + 1, where the 1 corresponds to the direct-current component. The second dimension of the spectrogram is given by: (sampling rate × audio duration - sampling rate × frame length) / (sampling rate × frame shift) + 1, where the frame length and frame shift are in seconds. For example, with a sampling rate of 16 kHz, a frame length of 25 ms, a frame shift of 10 ms, and 512 fast Fourier transform points, a 257 × 198 spectrogram is obtained for audio to be detected with a duration of 2 s.
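The two size formulas can be checked directly; this snippet (illustration only) reproduces the 257 × 198 example:

```python
def spectrogram_size(sr=16000, duration_s=2.0, frame_len_s=0.025, frame_shift_s=0.010, n_fft=512):
    freq_bins = n_fft // 2 + 1                                           # FFT points / 2 + 1 (incl. DC)
    frames = int((sr * duration_s - sr * frame_len_s) // (sr * frame_shift_s)) + 1
    return freq_bins, frames

print(spectrogram_size())  # (257, 198)
```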
S430: and inputting the first audio features into a target abnormity detection model, and acquiring a detection result output by the target abnormity detection model.
In one implementation, the spectrogram corresponding to the audio to be detected is input into the target anomaly detection model, which outputs whether the audio to be detected is abnormal. If the audio is abnormal, the power operation equipment corresponding to the audio to be detected is abnormal and needs fault troubleshooting; if the audio is normal, the power operation equipment corresponding to the audio to be detected is in a normal working state.
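A hedged end-to-end sketch of S410-S430, assuming a trained model with the (reconstruction, z1, z2) interface of the generator sketch above; the thresholded encoder-feature distance used as the decision rule is our illustrative reading, since the application does not fix a specific output rule here.

```python
import torch

def detect_anomaly(model, spec, threshold=0.5):
    """spec: a (257, 198) spectrogram of 2 s of audio to be detected, computed as in S420."""
    x = torch.as_tensor(spec, dtype=torch.float32)[None, None]   # (1, 1, freq, time)
    with torch.no_grad():
        x_rec, z1, z2 = model(x)
        score = torch.mean(torch.abs(z1 - z2)).item()            # anomaly score
    return "abnormal" if score > threshold else "normal"
```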
According to the anomaly detection method provided by this embodiment, in the process of performing anomaly detection on a device to be detected, the first audio feature of the device can be input into the target anomaly detection model to detect anomalies in the device, which saves manpower and improves anomaly detection efficiency.
Referring to fig. 10, a model generating apparatus 600 provided by the present application, operating on an electronic device, includes:
the data set obtaining unit 610 is configured to obtain a training data set, where the training data set includes first audio features of a plurality of pieces of audio information of a target device, and the plurality of pieces of audio information include normal audio information and abnormal audio information.
An initial anomaly detection model obtaining unit 620, configured to train the generator network to be trained through the training data set, so as to use the converged generator network to be trained as an initial anomaly detection model.
A target anomaly detection model obtaining unit 630, configured to adjust the initial anomaly detection model through the network of discriminators, so as to use the adjusted generator network as a target anomaly detection model.
As one way, the data set obtaining unit 610 is specifically configured to obtain a plurality of audio information of the target device; performing framing, windowing and fast Fourier transform on the audio information to obtain a spectrogram corresponding to the audio information; and taking the spectrogram as a first audio feature of the audio information.
As a mode, the generator network includes a first audio feature reconstruction network and a feature extraction network, where the first audio feature reconstruction network includes a first encoder, an LSTM, and a decoder, the feature extraction network includes a second encoder, and the initial anomaly detection model obtaining unit 620 is specifically configured to input the training data set into the generator network to be trained, so as to obtain an output of the generator network to be trained; training the generator network to be trained based on the output, a first loss function and a second loss function to obtain an initial anomaly detection model, wherein the first loss function is an absolute value of a difference between the first encoder output result and the second encoder output result, the second loss function is an absolute value of a difference between the first audio feature and the second audio feature, and the second audio feature is an output result of the decoder.
As one mode, the discriminator network includes a third encoder, and the target anomaly detection model obtaining unit 630 is specifically configured to obtain an anomaly detection model to be trained, where the anomaly detection model to be trained includes the initial anomaly detection model and the third encoder; inputting the training data set into the anomaly detection model to be trained to obtain the output of the anomaly detection model to be trained; adjusting the anomaly detection model to be trained based on the output, the first loss function, the second loss function and a third loss function to obtain a converged anomaly detection model to be trained, wherein the third loss function is an absolute value of a difference between a third audio feature and a fourth audio feature, the third audio feature is a third encoder output result corresponding to the first audio feature, and the fourth audio feature is a third encoder output result corresponding to the second audio feature; and taking a generator network in the converged anomaly detection model to be trained as a target anomaly detection model.
Optionally, the first encoder, the second encoder and the third encoder have the same structure.
Referring to fig. 11, an abnormality detection apparatus 800 provided by the present application is operated in an electronic device, where the apparatus 800 includes:
and a detected audio obtaining unit 810, configured to obtain an audio to be detected.
A first audio characteristic obtaining unit 820, configured to perform framing, windowing, and fast fourier transform on the audio to be detected to obtain a first audio characteristic corresponding to the audio to be detected, where the first audio characteristic is a spectrogram corresponding to the audio to be detected.
The detection result obtaining unit 830 is configured to input the first audio feature into a target anomaly detection model, and obtain a detection result output by the target anomaly detection model.
As one way, the detected audio obtaining unit 810 is specifically configured to periodically obtain the audio to be detected.
An electronic device provided by the present application will be described below with reference to fig. 12.
Referring to fig. 12, based on the model generation method, the anomaly detection method and the apparatus, another electronic device 100 capable of executing the model generation method and the anomaly detection method is further provided in the embodiment of the present application. The electronic device 100 includes one or more processors 102, only one of which is shown, and a memory 104, coupled to each other. The memory 104 stores programs that can execute the content of the foregoing embodiments, and the processor 102 can execute the programs stored in the memory 104.
Processor 102 may include one or more processing cores, among other things. The processor 102 interfaces with various components throughout the electronic device 100 using various interfaces and circuitry to perform various functions of the electronic device 100 and process data by executing or executing instructions, programs, code sets, or instruction sets stored in the memory 104 and invoking data stored in the memory 104. Alternatively, the processor 102 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 102 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. Wherein, the CPU mainly processes an operating system, a user interface, an application program and the like; the modem is used to handle wireless communications. It is understood that the modem may not be integrated into the processor 102, but may be implemented by a communication chip.
The memory 104 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 104 may be used to store instructions, programs, code sets, or instruction sets. The memory 104 may include a stored program area and a stored data area, where the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the method embodiments described herein, and the like. The stored data area may store data created by the electronic device 100 in use, such as a phonebook, audio and video data, and chat log data.
Referring to fig. 13, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable storage medium 1000 has stored therein program code that can be called by a processor to execute the methods described in the above-described method embodiments.
The computer-readable storage medium 1000 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 1000 includes a non-volatile computer-readable storage medium. The computer-readable storage medium 1000 has storage space for program code 1010 for performing any of the method steps described above. The program code can be read from or written to one or more computer program products. The program code 1010 may be compressed, for example, in a suitable form.
In summary, according to the model generation method, anomaly detection method and apparatus, and electronic device provided by the present application, after a training data set including the first audio features of normal audio information and abnormal audio information is acquired, the generator network to be trained is trained with the training data set so that the converged generator network serves as an initial anomaly detection model, and the initial anomaly detection model is then adjusted through a discriminator network so that the adjusted generator network serves as a target anomaly detection model. In this way, once the target anomaly detection model has been trained on the first audio features of normal and abnormal audio information, anomaly detection on a device to be detected only requires feeding the first audio feature of that device into the target anomaly detection model, which saves manpower and improves anomaly detection efficiency. In addition, because the initial anomaly detection model is adjusted through the discriminator network, the target anomaly detection model performs well even when abnormal audio training data are scarce, that is, it can distinguish normal audio from abnormal audio with high accuracy.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications and replacements do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (11)

1. A model generation method applied to an electronic device, the method comprising:
acquiring a training data set, wherein the training data set comprises respective first audio features of a plurality of pieces of audio information of a target device, and the plurality of pieces of audio information comprise normal audio information and abnormal audio information;
training a generator network to be trained through the training data set to take the converged generator network to be trained as an initial anomaly detection model;
and adjusting the initial anomaly detection model through a discriminator network to take the adjusted generator network as a target anomaly detection model.
2. The method of claim 1, wherein the generator network to be trained comprises a first audio feature reconstruction network and a feature extraction network, wherein the first audio feature reconstruction network comprises a first encoder, an LSTM, and a decoder, and wherein the feature extraction network comprises a second encoder;
training the generator network to be trained through the training data set to take the converged generator network to be trained as an initial anomaly detection model, including:
inputting the training data set into the generator network to be trained to obtain the output of the generator network to be trained;
training the generator network to be trained based on the output, a first loss function and a second loss function to take the converged generator network to be trained as the initial anomaly detection model, wherein the first loss function is an absolute value of a difference between an output result of the first encoder and an output result of the second encoder, the second loss function is an absolute value of a difference between the first audio feature and a second audio feature, and the second audio feature is an output result of the decoder.
3. The method of claim 2, wherein the discriminator network comprises a third encoder, and wherein the adjusting the initial anomaly detection model through the discriminator network to take the adjusted generator network as a target anomaly detection model comprises:
acquiring an anomaly detection model to be trained, wherein the anomaly detection model to be trained comprises the initial anomaly detection model and the third encoder;
inputting the training data set into the anomaly detection model to be trained to obtain the output of the anomaly detection model to be trained;
adjusting the anomaly detection model to be trained based on the output, the first loss function, the second loss function and a third loss function to obtain a converged anomaly detection model to be trained, wherein the third loss function is an absolute value of a difference between a third audio feature and a fourth audio feature, the third audio feature is a third encoder output result corresponding to the first audio feature, and the fourth audio feature is a third encoder output result corresponding to the second audio feature;
and taking a generator network in the converged anomaly detection model to be trained as a target anomaly detection model.
4. The method of claim 3, wherein the first encoder, the second encoder, and the third encoder are identical in structure.
5. The method of claim 1, wherein the acquiring a training data set, the training data set comprising respective first audio features of a plurality of pieces of audio information of a target device, and the plurality of pieces of audio information comprising normal audio information and abnormal audio information, comprises:
acquiring a plurality of pieces of audio information of the target device;
performing framing, windowing and fast Fourier transform on the audio information to obtain a spectrogram corresponding to the audio information;
and taking the spectrogram as a first audio feature of the audio information.
6. An abnormality detection method applied to an electronic device, the method comprising:
acquiring audio to be detected;
performing framing, windowing and fast Fourier transform on the audio to be detected to obtain a first audio feature corresponding to the audio to be detected, wherein the first audio feature is a spectrogram corresponding to the audio to be detected;
inputting the first audio feature into a target anomaly detection model obtained by the method of any one of claims 1 to 5, and obtaining a detection result output by the target anomaly detection model.
7. The method according to claim 6, wherein the acquiring audio to be detected comprises:
acquiring the audio to be detected periodically.
8. An apparatus for model generation, operable on an electronic device, the apparatus comprising:
the data set acquisition unit is used for acquiring a training data set, wherein the training data set comprises respective first audio features of a plurality of pieces of audio information of a target device, and the plurality of pieces of audio information comprise normal audio information and abnormal audio information;
an initial anomaly detection model obtaining unit, configured to train a generator network to be trained through the training data set, so as to use the converged generator network to be trained as an initial anomaly detection model;
and the target anomaly detection model acquisition unit is used for adjusting the initial anomaly detection model through the discriminator network so as to take the adjusted generator network as a target anomaly detection model.
9. An anomaly detection apparatus, operable on an electronic device, the apparatus comprising:
the to-be-detected audio acquisition unit is used for acquiring audio to be detected;
the first audio feature acquisition unit is used for performing framing, windowing and fast Fourier transform on the audio to be detected to obtain a first audio feature corresponding to the audio to be detected, wherein the first audio feature is a spectrogram corresponding to the audio to be detected;
a detection result obtaining unit, configured to input the first audio feature into the target anomaly detection model obtained by the method according to any one of claims 1 to 5, and obtain a detection result output by the target anomaly detection model.
10. An electronic device comprising one or more processors and memory;
and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the method of any one of claims 1 to 7.
11. A computer-readable storage medium having program code stored therein, wherein the method of any one of claims 1 to 7 is performed when the program code is run.
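The feature-extraction front end recited in claims 5, 6 and 9 (framing, windowing and fast Fourier transform yielding a spectrogram) and the detection step of claim 6 can be sketched as follows. This is a small NumPy illustration under stated assumptions: the frame length, hop size, Hann window, zero-padding of short signals, the reconstruction-error score and the decision threshold are not fixed by the claims, and model stands in for a trained target anomaly detection model assumed to return a reconstruction of its input.

import numpy as np

def spectrogram(signal, frame_len=1024, hop=512):
    # Framing and windowing, then an FFT per frame; the magnitude spectrogram is the first audio feature.
    window = np.hanning(frame_len)
    if len(signal) < frame_len:
        signal = np.pad(signal, (0, frame_len - len(signal)))  # zero-pad very short recordings
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))  # shape: (n_frames, frame_len // 2 + 1)

def detect(audio, model, threshold):
    feat = spectrogram(audio)                      # first audio feature of the audio to be detected
    recon = model(feat)                            # assumed: the model returns a reconstructed spectrogram
    score = float(np.mean(np.abs(feat - recon)))   # reconstruction error used as the anomaly score
    return "abnormal" if score > threshold else "normal"

For periodic monitoring as in claim 7, detect would simply be invoked on audio captured at fixed intervals from the device under test.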
CN202111666960.8A 2021-12-31 2021-12-31 Model generation method, abnormality detection device, and electronic apparatus Pending CN114400019A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111666960.8A CN114400019A (en) 2021-12-31 2021-12-31 Model generation method, abnormality detection device, and electronic apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111666960.8A CN114400019A (en) 2021-12-31 2021-12-31 Model generation method, abnormality detection device, and electronic apparatus

Publications (1)

Publication Number Publication Date
CN114400019A true CN114400019A (en) 2022-04-26

Family

ID=81229740

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111666960.8A Pending CN114400019A (en) 2021-12-31 2021-12-31 Model generation method, abnormality detection device, and electronic apparatus

Country Status (1)

Country Link
CN (1) CN114400019A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114399005A (en) * 2022-03-10 2022-04-26 深圳市声扬科技有限公司 Training method, device, equipment and storage medium of living body detection model
CN114399005B (en) * 2022-03-10 2022-07-12 深圳市声扬科技有限公司 Training method, device, equipment and storage medium of living body detection model
CN115426282A (en) * 2022-07-29 2022-12-02 苏州浪潮智能科技有限公司 Voltage abnormality detection method, system, electronic device, and storage medium
CN115426282B (en) * 2022-07-29 2023-08-18 苏州浪潮智能科技有限公司 Voltage abnormality detection method, system, electronic device and storage medium
CN115288994A (en) * 2022-08-03 2022-11-04 西安安森智能仪器股份有限公司 Compressor abnormal state detection method based on improved DCGAN
CN115288994B (en) * 2022-08-03 2024-01-19 西安安森智能仪器股份有限公司 Improved DCGAN-based compressor abnormal state detection method
CN115565525A (en) * 2022-12-06 2023-01-03 四川大学华西医院 Audio anomaly detection method and device, electronic equipment and storage medium
CN117951606A (en) * 2024-03-27 2024-04-30 国网山东省电力公司梁山县供电公司 Power equipment fault diagnosis method, system, equipment and storage medium
CN117951606B (en) * 2024-03-27 2024-07-19 国网山东省电力公司梁山县供电公司 Power equipment fault diagnosis method, system, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN114400019A (en) Model generation method, abnormality detection device, and electronic apparatus
US20230123117A1 (en) Method and Apparatus for Inspecting Wind Turbine Blade, And Device And Storage Medium Thereof
CN112289341A (en) Sound abnormity identification method and system for transformer substation equipment
CN108023597B (en) Numerical control system reliability data compression method
WO2000017856A1 (en) Method and apparatus for detecting voice activity in a speech signal
CN111680665B (en) Motor mechanical fault diagnosis method adopting current signals based on data driving
CN117235557A (en) Electrical equipment fault rapid diagnosis method based on big data analysis
CN110265065A (en) A kind of method and speech terminals detection system constructing speech detection model
CN104995673A (en) Frame error concealment
CN110147739A (en) A kind of Reactor Fault recognition methods based on Multifractal Analysis
CN114151293B (en) Fault early warning method, system, equipment and storage medium of fan variable pitch system
CN116318172A (en) Design simulation software data self-adaptive compression method
CN116386612A (en) Training method of voice detection model, voice detection method, device and equipment
CN115376526A (en) Power equipment fault detection method and system based on voiceprint recognition
CN114157023B (en) Distribution transformer early warning information acquisition method
CN106463141B (en) Audio signal circuit sectionalizer and encoder
CN114004996A (en) Abnormal sound detection method, abnormal sound detection device, electronic equipment and medium
CN115328661B (en) Computing power balance execution method and chip based on voice and image characteristics
Hatzipantelis et al. The use of hidden Markov models for condition monitoring electrical machines
CN112560674A (en) Method and system for detecting quality of sound signal
CN116580716B (en) Audio encoding method, device, storage medium and computer equipment
CN112114215A (en) Transformer aging evaluation method and system based on error back propagation algorithm
CN117636909B (en) Data processing method, device, equipment and computer readable storage medium
CN114664314B (en) Beidou short message voice transmission method and device
CN115653816A (en) Fault monitoring method, device, equipment and medium for water turbine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination