CN116573508B - High-resolution elevator fault identification method, device and related medium

Publication number: CN116573508B (application CN202310859577.7A; earlier publication CN116573508A)
Original language: Chinese (zh)
Inventors: 钟桂生, 袁戟
Assignees: Wuhan Wanrui Digital Operation Co ltd; Shenzhen Wanwuyun Technology Co ltd
Legal status: Active (granted)
Prior art keywords: elevator, voiceprint, data, fault, voiceprint data

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66B ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B5/00 Applications of checking, fault-correcting, or safety devices in elevators
        • B66B5/0006 Monitoring devices or performance analysers
            • B66B5/0012 Devices monitoring the users of the elevator system
            • B66B5/0018 Devices monitoring the operating condition of the elevator system
                • B66B5/0025 Devices monitoring the operating condition of the elevator system for maintenance or repair
                • B66B5/0031 Devices monitoring the operating condition of the elevator system for safety reasons
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
        • Y02B50/00 Energy efficient technologies in elevators, escalators and moving walkways, e.g. energy saving or recuperation technologies

Abstract

The invention discloses a high-resolution elevator fault identification method, a device and a related medium, wherein the method comprises the following steps: acquiring historical elevator voiceprint data with continuous frames, and marking the historical elevator voiceprint data; performing voiceprint enhancement processing on the marked historical elevator voiceprint data through an improved demucsV3 algorithm to obtain target voiceprint data; performing feature extraction on the target voiceprint data with an unsupervised audio pre-training model to obtain corresponding voiceprint characterization data; performing specialized fine-tuning training on the voiceprint characterization data by combining a convolutional neural network and a gated recurrent unit network, and outputting a fault probability score for each frame of the voiceprint characterization data to construct an elevator fault recognition model; performing fault identification on specified elevator voiceprint data with the elevator fault recognition model, and outputting its fault probability score; and carrying out monitoring and alarm management according to the fault probability score. The invention can improve the efficiency and precision of elevator fault detection and reduce elevator operation and maintenance costs.

Description

High-resolution elevator fault identification method, device and related medium
Technical Field
The invention relates to the technical field of computer software, in particular to a high-resolution elevator fault identification method, a high-resolution elevator fault identification device and a related medium.
Background
As high-rise buildings proliferate, elevator installations play an important role in cross-floor scheduling within them. The elevator installation itself is highly integrated and automated: it mainly comprises a traction system, a guiding system, a weight balancing system, an electric traction system, an electric control system, a safety protection system, car equipment and the like. Through the collaborative work of multiple control systems and devices, the elevator completes the transport and scheduling of personnel and goods among floors, greatly improving travel efficiency while providing convenient and comfortable travel conditions for specific groups such as disabled and elderly persons.
However, behind the dispatch service provided by the elevator installation lies a periodic elevator maintenance cost. In addition, under the influence of many factors such as high-frequency elevator use, careless riding behavior of passengers, wear of elevator equipment, and complex environments, emergencies caused by elevator faults occur from time to time and pose a serious threat to the life and property safety of passengers. Therefore, to address elevator fault early warning, fault diagnosis, maintenance work, and related problems, a method that can accurately predict elevator faults is needed to improve the operation and maintenance efficiency of the elevator system and reduce its operation and maintenance cost.
Disclosure of Invention
The embodiment of the invention provides a high-resolution elevator fault identification method, a high-resolution elevator fault identification device, computer equipment and a storage medium, aiming at improving the elevator fault detection efficiency and accuracy and reducing the elevator operation and maintenance cost.
In a first aspect, an embodiment of the present invention provides a method for identifying a fault of a high-resolution elevator, including:
acquiring historical elevator voiceprint data with continuous frames, and marking the historical elevator voiceprint data;
performing voiceprint enhancement processing on the marked historical elevator voiceprint data through an improved demucsV3 algorithm to obtain target voiceprint data;
performing feature extraction on the target voiceprint data by adopting an unsupervised audio pre-training model to obtain corresponding voiceprint characterization data;
combining a convolutional neural network and a gated recurrent unit (GRU) network to perform specialized fine-tuning training on the voiceprint characterization data, and outputting fault probability scores for each frame of the voiceprint characterization data so as to construct an elevator fault recognition model;
performing fault recognition on the appointed elevator voiceprint data by using the elevator fault recognition model, and outputting fault probability scores of the appointed elevator voiceprint data;
and comparing the fault probability score of the designated elevator voiceprint data with a preset alarm threshold, and performing monitoring and alarm management on the corresponding elevator according to the comparison result.
In a second aspect, an embodiment of the present invention provides a high resolution elevator fault recognition apparatus, including:
the voiceprint marking unit is used for acquiring historical elevator voiceprint data and marking the historical elevator voiceprint data;
the voiceprint enhancement unit is used for carrying out voiceprint enhancement processing on the marked historical elevator voiceprint data through an improved demucsV3 algorithm to obtain target voiceprint data;
the characterization extraction unit is used for carrying out feature extraction on the target voiceprint data by adopting an unsupervised audio pre-training model to obtain corresponding voiceprint characterization data;
the model building unit is used for performing specialized fine-tuning training on the voiceprint characterization data by combining a convolutional neural network and a gated recurrent unit (GRU) network, and outputting fault probability scores for each frame of the voiceprint characterization data so as to build an elevator fault recognition model;
the fault recognition unit is used for carrying out fault recognition on the appointed elevator voiceprint data by utilizing the elevator fault recognition model and outputting the fault probability score of the appointed elevator voiceprint data;
the fault analysis unit is used for comparing the fault probability score of the designated elevator voiceprint data with a preset alarm threshold, and performing monitoring and alarm management on the corresponding elevator according to the comparison result.
In a third aspect, an embodiment of the present invention provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the high resolution elevator fault identification method according to the first aspect when the computer program is executed.
In a fourth aspect, embodiments of the present invention provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the high resolution elevator fault identification method according to the first aspect.
To reduce the probability of elevator accidents and the cost of elevator maintenance, the embodiment of the invention first collects elevator operation voiceprint data using industrial stethoscope equipment or other means, then performs voiceprint enhancement, feature extraction, prediction output, and other operations on the voiceprint data, achieving the goal of predicting the probability of fault occurrence with a high-resolution recognition algorithm. Intervention measures such as early warning or alarming can then be taken in time according to the prediction output, thereby improving the safety and reliability of elevator dispatching operation while reducing the cost of elevator maintenance.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a high-resolution elevator fault recognition method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of fault time difference in a fault recognition method of a high-resolution elevator according to an embodiment of the present invention;
fig. 3 is a schematic diagram of visualization of learning targets in a high-resolution elevator fault recognition method according to an embodiment of the present invention;
fig. 4 is a diagram of a DemucsV3 network structure in a method for identifying a high resolution elevator fault according to an embodiment of the present invention;
fig. 5 is a network structure diagram of a coding layer of a DemucsV3 network in the high resolution elevator fault identification method according to the embodiment of the invention;
fig. 6 is a Wav2Vec model frame diagram in a high resolution elevator fault recognition method according to an embodiment of the present invention;
fig. 7 is a diagram of an encoder network structure of a Wav2Vec model in a high resolution elevator fault recognition method according to an embodiment of the present invention;
Fig. 8 is a context network structure diagram of a Wav2Vec model in a high resolution elevator fault recognition method according to an embodiment of the present invention;
fig. 9 is a diagram of a fine tuning network in a high resolution elevator fault recognition method according to an embodiment of the present invention;
fig. 10 is a visual diagram of analysis of results in a high-resolution elevator fault recognition method according to an embodiment of the present invention;
fig. 11 is a schematic block diagram of a high-resolution elevator fault recognition device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Referring to fig. 1, fig. 1 is a schematic flow chart of a high-resolution elevator fault recognition method according to an embodiment of the present invention, which specifically includes: steps S101-S106.
S101, acquiring historical elevator voiceprint data with continuous frames, and marking the historical elevator voiceprint data;
S102, performing voiceprint enhancement processing on the marked historical elevator voiceprint data through an improved demucsV3 algorithm to obtain target voiceprint data;
S103, performing feature extraction on the target voiceprint data by adopting an unsupervised audio pre-training model to obtain corresponding voiceprint characterization data;
S104, combining a convolutional neural network with a gated recurrent unit (GRU) network to perform specialized fine-tuning training on the voiceprint characterization data, and outputting fault probability scores for each frame of the voiceprint characterization data so as to construct an elevator fault recognition model;
S105, performing fault recognition on the specified elevator voiceprint data by using the elevator fault recognition model, and outputting the fault probability score of the specified elevator voiceprint data;
S106, comparing the fault probability score of the specified elevator voiceprint data with a preset alarm threshold, and performing monitoring and alarm management on the corresponding elevator according to the comparison result.
In this embodiment, historical elevator voiceprint data is first acquired and marked; voiceprint enhancement is then performed on it through the improved demucsV3 algorithm, voiceprint characterization data is extracted from the enhanced data by an unsupervised audio pre-training model, and a convolutional neural network combined with a gated recurrent unit network performs classification prediction to obtain the corresponding fault probability scores. The high-resolution elevator fault recognition model is constructed through this process. Elevator voiceprint data can then be recognized and predicted with the model, the elevator's state analyzed according to the recognition and prediction results, and whether corresponding measures such as early warning or alarming are needed judged accordingly, thereby improving the safety and reliability of elevator dispatching operation while reducing the cost of elevator maintenance.
In an embodiment, the S101 includes:
setting the historical elevator voiceprint data as a voiceprint data set, and fault-marking the historical elevator voiceprint data according to whether it is fault data, to obtain a labeled metadata set; wherein the label of fault data is 1 and the label of non-fault data is 0;

framing the voiceprint data set based on the frame length and frame shift of the historical elevator voiceprint data to obtain short-time voiceprint data with frame number T1;

traversing the metadata set and generating a one-dimensional vector of length T2 filled with the value 0; wherein T1 = T2;

acquiring the timestamps of elevator faults from the metadata set, and calculating the time difference $\Delta t_i$ between the center of each frame in the short-time voiceprint data and the elevator fault timestamp closest to it;

converting the time difference $\Delta t_i$ into the model learning target for labeling according to the following formula, finally obtaining the labeled historical elevator voiceprint data:

$$Y_i = \max\left(0,\ 1 - \frac{|\Delta t_i|}{c \cdot h}\right)$$

where $Y_i$ is the learning target of frame $i$, the adjacent frame number $c$ is a hyperparameter, $i$ is the index of the frame, and $h$ is the frame shift.
In this embodiment, monitoring is performed around several key devices and safety components of the elevator installation. Specifically, industrial stethoscopes are installed on key elevator devices such as the traction machine, speed limiter, guide shoes, guide rails, and car top wheels; the stethoscope collects voiceprint data of elevator operation in real time and transmits it to the model for training or inference. Voiceprint marking refers to fault-marking the voiceprint data collected by the stethoscope. In general, voiceprint data labels take the values '0' and '1', representing the good and fault states respectively. Voiceprint data in the 'good' state needs no further marking, while data in the 'fault' state requires further operations, including counting the number of elevator faults in the voiceprint data and the timestamps at which each fault starts and ends; this tag data serves as the metadata of the elevator voiceprint.
To meet the high-resolution requirement of the recognition algorithm for elevator fault early warning, fault diagnosis, and maintenance, the metadata of fault samples must be processed further. The YOLO (You Only Look Once) algorithm in the computer vision field divides an image into a number of grids and then predicts the distance between each grid and the detection target. Inspired by YOLO, this embodiment takes the fault probability of consecutive times (frames) as the learning target, analyzes the variation trend of the fault probability over time, and judges the fault trend to finally obtain the elevator fault prediction result.
In particular, with the voiceprint data set X and the metadata set M, the model training input data set X_f and the model learning target data set Y_f are constructed. First, traverse the voiceprint data set X and the metadata set M and perform the following operations:

For X: frame X based on the frame length n and frame shift h to generate short-time voiceprint data x with frame number T.

For M: generate a one-dimensional vector y of length T filled with the value 0.

If the label in the metadata M is '0', i.e. the "good" state: return y directly.

Otherwise, i.e. the "failure" state: acquire from the metadata M the timestamps at which elevator faults occur, and calculate the time difference between the center of each frame of x and its nearest elevator fault timestamp, denoted $\Delta t_i$, where the subscript i is the index of the frame and negative and positive values indicate that the frame lies before and after the fault timestamp, respectively; in fig. 2, the broken line marks the elevator fault occurrence time. The time difference $\Delta t_i$ is then converted into the model learning target using the following formula:

$$Y_i = \max\left(0,\ 1 - \frac{|\Delta t_i|}{c \cdot h}\right)$$

where the adjacent frame number c is a hyperparameter that determines the sharpness of the learning target's variation trend: the larger the hyperparameter, the smoother the trend of the model learning target, and conversely the steeper it is. Fig. 3 visualizes the model learning targets for an example value of c: the left side of fig. 3 shows the case where the fault occurs exactly between two adjacent frames, and the right side the case where the fault occurs to the right of a frame center.

Then, the values at the corresponding positions of y are modified to $Y_i$.

Finally, the x and y of each traversal pass are stored in X_f and Y_f respectively and returned, until the traversal is complete.
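The labeling procedure above can be sketched in a few lines of NumPy. The triangular mapping max(0, 1 − |Δt|/(c·h)) is an assumption reconstructed from the description (target 1 at the fault timestamp, decaying over c adjacent frames, larger c giving a smoother trend); the timestamps and frame shift below are illustrative, not values from the patent.

```python
import numpy as np

def make_learning_targets(num_frames, frame_shift, fault_times, c=2):
    """Soft per-frame fault targets: 1.0 at a fault timestamp, decaying
    linearly to 0 over c adjacent frames (assumed triangular mapping)."""
    y = np.zeros(num_frames)
    if not fault_times:                 # 'good' state: all-zero target
        return y
    centers = (np.arange(num_frames) + 0.5) * frame_shift  # frame centers (s)
    for i, center in enumerate(centers):
        dt = min(abs(center - ft) for ft in fault_times)   # nearest fault
        y[i] = max(0.0, 1.0 - dt / (c * frame_shift))
    return y

# One fault at t = 2.25 s, frame shift h = 0.5 s, c = 2 adjacent frames
targets = make_learning_targets(num_frames=10, frame_shift=0.5,
                                fault_times=[2.25], c=2)
```

The resulting vector peaks at the frame containing the fault and ramps down linearly on both sides, matching the triangular shapes sketched in fig. 3.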
In one embodiment, the step S102 includes:
the marked historical elevator voiceprint data is input to a time domain module in an improved demucsV3 algorithm;
performing first feature extraction on input historical elevator voiceprint data in a time dimension through a 5-layer first coding layer in the time domain module, and performing first feature reconstruction on the extracted first features through a 5-layer first decoding layer in the time domain module;
Performing short-time Fourier transform on the marked historical elevator voiceprint data, and inputting the transformed data to a frequency domain module in an improved demucsV3 algorithm;
performing second feature extraction on the transformed data in the frequency dimension through 5 layers of second coding layers in the frequency domain module, and performing second feature reconstruction on the extracted second features through 5 layers of second decoding layers in the frequency domain module;
the sharing layer is utilized to share and integrate information of the time domain module and the frequency domain module;
and performing the inverse short-time Fourier transform on the output result of the frequency domain module, then adding the inverse-transformed data to the output result of the time domain module to obtain the voiceprint-enhanced target voiceprint data.
In this embodiment, since the elevator apparatus is highly integrated and its operation relies on the cooperation of multiple devices and systems, voiceprint data collected by the industrial stethoscope often contains noise; the noise may come from passengers speaking, car prompt tones, or the operating sound of other non-diagnosed devices. Therefore, to avoid noise degrading model accuracy, this embodiment adopts voiceprint enhancement as a preprocessing step before model training.
Specifically, the voiceprint enhancement algorithm used in this embodiment improves on demucsV3 to better fit the characteristics of elevator operation voiceprints and the deployability of the algorithm. The demucsV3 model has a symmetrical U-shaped structure consisting mainly of a time domain module, a frequency domain module, and several sharing layers; the improved model structure is shown in fig. 4. The time domain module (prefixed with 'T', right side) takes the raw audio as input and consists of 5 encoding layers (TEncoder) and 5 decoding layers (TDecoder): the former extracts features from the raw audio in the time dimension, the latter reconstructs the extracted features, and the sharing layers between them share and integrate information from both the time and frequency domains. The frequency domain module (prefixed with 'Z', left side) takes the spectrogram after the short-time Fourier transform (STFT) as input, where the frame length n and frame shift h remain globally consistent. It likewise consists of 5 encoding layers (ZEncoder) and 5 decoding layers (ZDecoder), but differs from the time domain module in that it performs feature extraction and reconstruction in the frequency dimension, and the extracted features flow through the sharing layers to be shared with the time domain encoding module.
Finally, the output of the frequency domain decoding block is subjected to Inverse Short Time Fourier Transform (ISTFT), and added to the output result of the time domain decoding block (TDecoder 1) as the output of the model, i.e., the enhanced version of the original elevator voiceprint.
The short-time Fourier transform is calculated as:

$$X(m, k) = \sum_{n=0}^{N-1} x(n + m h)\, w(n)\, e^{-j 2\pi k n / N}$$

The inverse short-time Fourier transform is calculated as:

$$x(n) = \frac{\sum_{m} w(n - m h)\, \dfrac{1}{N} \sum_{k=0}^{N-1} X(m, k)\, e^{j 2\pi k (n - m h) / N}}{\sum_{m} w^{2}(n - m h)}$$

where $x(n)$ is the original signal, representing the input time-domain signal; $n$ is the time index; $X(m, k)$ is the two-dimensional function defined over the time domain $m$ and frequency domain $k$; $w$ is the window function; $e$ is the natural constant; $N$ is the window length; $m$ is the index of the time window; $k$ is the discrete frequency value; $j$ is the imaginary unit; and $h$ is the frame shift.
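As a concrete illustration of the transform pair (using SciPy's reference implementation rather than any code from the patent; the sample rate, tone frequency, and window parameters are arbitrary), a round trip through the STFT and inverse STFT reconstructs the input signal up to numerical error:

```python
import numpy as np
from scipy.signal import stft, istft

fs = 8000                        # illustrative sample rate (Hz)
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)  # 1 s test tone standing in for voiceprint audio

# Forward STFT: window length N=256, frame shift h=128 (50% overlap, Hann window)
f, tau, Z = stft(x, fs=fs, nperseg=256, noverlap=128)
# Z has shape (frequency bins, time windows): the 2-D function X(m, k)

# Inverse STFT reconstructs the time-domain signal by weighted overlap-add
_, x_rec = istft(Z, fs=fs, nperseg=256, noverlap=128)
```

The 50% overlap with a Hann window satisfies the overlap-add constraint, which is what makes the reconstruction in the denominator of the ISTFT formula well defined.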
In an embodiment, the step S102 further includes:
setting residual modules for the first coding layer and the second coding layer respectively, and reducing the number of model parameters through the residual modules;
the improved demucsV3 algorithm was evaluated using the signal-to-noise ratio as an evaluation index according to the following formula:
wherein SDR represents the signal-to-noise ratio,representing the energy or power of a real sound source, +.>Representing the energy or power of the system output sound source, < >>Representing time index,/->Representing a constant.
In this embodiment, considering the deployability of the algorithm and to improve the real-time performance of the model in practical applications, the internal structure of the encoding and decoding layers is simplified to some extent, reducing the model's parameter count while preserving high resolution and inference efficiency. As shown in fig. 5, a residual module is placed in each encoding layer; its interior mainly consists of several convolution layers, a bidirectional long short-term memory network, a regularization function, and an activation function.
In addition, this embodiment selects the Signal-to-Distortion Ratio (SDR) as the evaluation index of the voiceprint enhancement system. The SDR measure is widely applied in sound source separation, noise reduction, and similar systems; the larger its value, the better the system's noise reduction performance.
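A minimal sketch of the SDR metric as defined above; the small constant ε is kept purely for numerical stability, and the test signals are synthetic stand-ins for clean and denoised voiceprints:

```python
import numpy as np

def sdr(reference, estimate, eps=1e-12):
    """Signal-to-distortion ratio in dB: energy of the true source over
    the energy of the residual error. Higher means better enhancement."""
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    num = np.sum(reference ** 2)
    den = np.sum((reference - estimate) ** 2) + eps
    return 10.0 * np.log10(num / den)

clean = np.sin(np.linspace(0, 20 * np.pi, 1000))
noisy = clean + 0.1 * np.random.default_rng(0).standard_normal(1000)
```

A perfect estimate yields a very large SDR (the denominator collapses to ε), while residual noise pulls it down, which is why the metric is suitable for ranking enhancement systems.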
In an embodiment, the unsupervised audio pre-training model is a Wav2Vec model;
the step S103 includes:
embedding the target voiceprint data into a shallow feature space by using an encoder network of a Wav2Vec model, and mapping the target voiceprint data into feature vectors;
and carrying out deep feature representation on the feature vector by using a context network in the Wav2Vec model to obtain the voiceprint characterization data.
In practical application scenarios, due to differences in elevator brand and technology, wear condition, service life, and so on, traditional acoustic features such as the Mel spectrum and Mel-frequency cepstral coefficients (MFCC) cannot extract elevator voiceprint features well, so a network structure with efficient elevator voiceprint feature extraction is needed. In addition, considering that elevator operation voiceprints differ from the human hearing structure (human hearing is not linearly related to sound frequency), the voiceprint-enhanced audio data undergoes a pre-emphasis process to compensate for the attenuation of high-frequency components during transmission. Therefore, this embodiment selects the unsupervised audio pre-training model Wav2Vec for elevator voiceprint feature extraction. The key idea of Wav2Vec is to train globally on a large amount of labeled or unlabeled data and then fine-tune on a specific data set to improve downstream task performance; experiments have proved this pre-training approach particularly effective. In this embodiment, the Wav2Vec model is pre-trained on a large amount of elevator voiceprint data and further fine-tuned on the specific diagnosed safety components to improve the accuracy of elevator fault identification.
The Wav2Vec model takes the raw audio signal as input and predicts audio at future times based on historical audio information and the currently input audio information. The model framework mainly consists of an encoder network and a context network, as shown in fig. 6. The encoder network (denoted $f$, see fig. 7) embeds the audio signal into a shallow feature space, mapping each signal segment $x_i$ to a feature vector $z_i$. The context network (denoted $g$, see fig. 8) combines the encoded vectors of multiple time steps to obtain a deep feature representation, namely:

$$c_i = g(z_i, z_{i-1}, \ldots, z_{i-v})$$

where $v$ is the receptive field.
In addition, the Wav2Vec model adopts the self-supervised learning approach of contrastive predictive coding (Contrastive Predictive Coding, CPC), learning semantically rich audio representations by contrasting the original audio segments with their context segments in the time dimension and the content dimension.
The Wav2Vec training objective is to distinguish the true future sample (positive sample) from a number of negative samples (distractor samples) drawn at random from a proposal distribution; the contrastive loss function for step size k is calculated as follows:
L_k = − Σ_{i=1}^{T−k} ( log σ(z_{i+k}ᵀ h_k(c_i)) + λ E_{z̃∼p_n} [ log σ(−z̃ᵀ h_k(c_i)) ] )
wherein z denotes a feature vector output by the encoder network, c denotes a feature vector output by the context network, z̃ denotes a negative sample, λ denotes the number of negative samples, T is the length of the input audio sequence, p_n denotes the probability distribution of the negative samples, and h_k denotes an affine function with step size k, calculated as follows:
h_k(c_i) = W_k c_i + b_k
wherein W_k and b_k are network parameters to be trained; σ(z_{i+k}ᵀ h_k(c_i)) expresses the probability of z_{i+k} being the positive sample, with σ the sigmoid function, calculated as follows:
σ(x) = 1 / (1 + e^{−x})
Adding the L_k corresponding to all step sizes k gives the final loss function L:
L = Σ_{k=1}^{K} L_k
where K represents the maximum step size.
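The loss above can be exercised with a small, self-contained sketch; the latent vectors, affine parameters, number of negatives and step sizes below are random placeholders, and negatives are drawn uniformly from the same sequence (one common choice for p_n):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def wav2vec_step_loss(z, c, k, W_k, b_k, n_neg, rng):
    # Contrastive loss for step size k: distinguish the true future latent
    # z_{i+k} from n_neg negatives drawn uniformly from the same sequence.
    T, d = z.shape
    loss = 0.0
    for i in range(T - k):
        h = W_k @ c[i] + b_k                     # affine prediction h_k(c_i)
        pos = np.log(sigmoid(z[i + k] @ h))      # true-sample term
        neg_idx = rng.integers(0, T, size=n_neg)
        neg = np.log(sigmoid(-(z[neg_idx] @ h))).mean()  # distractor term
        loss -= pos + n_neg * neg                # lambda = number of negatives
    return loss

rng = np.random.default_rng(0)
T, d = 20, 8
z = rng.standard_normal((T, d))                  # encoder outputs (toy data)
c = rng.standard_normal((T, d))                  # context outputs (toy data)
W = rng.standard_normal((d, d)) * 0.1
b = np.zeros(d)

# Final loss: sum L_k over step sizes k = 1..K (here K = 3).
total = sum(wav2vec_step_loss(z, c, k, W, b, n_neg=5, rng=rng) for k in range(1, 4))
print(total)
```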
In an embodiment, the step S104 includes:
inputting the voiceprint characterization data into the convolutional neural network; after passing through the convolutional neural network and the gated recurrent unit network, the gated recurrent unit network outputs a predicted value for each frame of the voiceprint characterization data;
performing specialized processing on the predicted values using the loss function L_ft according to the following formula:
L_ft = (1/T) · Σ_{t=1}^{T} l_bce(G(t), P(t))
wherein T represents the total number of frames of the voiceprint characterization data, G(t) represents the true value of the t-th frame, P(t) represents the predicted value of the t-th frame, and l_bce represents the binary cross-entropy loss function.
In this embodiment, after voiceprint characterization is completed, fine-tuning (Fine-tuning) on the specific task can significantly improve the recognition accuracy for elevator faults. Concretely, the network structure is fine-tuned on top of the pre-trained model; the newly added network layers are shown in fig. 9. The added structure mainly consists of a CNN (Convolutional Neural Network) and a GRU (Gated Recurrent Unit) network, and the CNN-GRU network comprises several convolutional layers, pooling layers, gating layers and the like.
The fine-tuning training is performed on the data set of the diagnosed elevator component: the CNN-GRU network takes the output of the Wav2Vec model as input, further generalizes the voiceprint representation through the CNN module and the GRU module, and finally outputs a score for each frame representing the probability that the frame is faulty. The loss function of the fine-tuned model L_ft is therefore defined as follows:
L_ft = (1/T) · Σ_{t=1}^{T} l_bce(G(t), P(t))
wherein T represents the total number of frames of the input audio, G(t) represents the true value of the t-th frame, P(t) represents the model prediction for the t-th frame, and l_bce represents the binary cross-entropy loss function.
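A minimal numerical sketch of such a per-frame scoring head follows; it is not the patented CNN-GRU (the single convolution layer, hidden width and random weights are illustrative only), but it shows the flow from Wav2Vec features through a convolution and a GRU recurrence to a per-frame fault probability scored with binary cross-entropy:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    z = sigmoid(Wz @ x + Uz @ h)               # update gate
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde

def frame_scores(feats, params):
    # "CNN" part: a single dense projection with ReLU over each frame.
    Wc, *gru_params, w_out = params
    conv = np.maximum(feats @ Wc, 0.0)          # (T, d_h)
    h = np.zeros(Wc.shape[1])
    scores = []
    for x in conv:                              # "GRU" part: recur over frames
        h = gru_step(x, h, *gru_params)
        scores.append(sigmoid(w_out @ h))       # per-frame fault probability
    return np.array(scores)

def bce_loss(g, p, eps=1e-9):
    # L_ft = (1/T) * sum_t l_bce(G(t), P(t))
    return -np.mean(g * np.log(p + eps) + (1 - g) * np.log(1 - p + eps))

rng = np.random.default_rng(1)
d_in, d_h, T = 8, 6, 30
feats = rng.standard_normal((T, d_in))                      # Wav2Vec output (toy)
params = [rng.standard_normal((d_in, d_h)) * 0.3] + \
         [rng.standard_normal((d_h, d_h)) * 0.3 for _ in range(6)] + \
         [rng.standard_normal(d_h)]
p = frame_scores(feats, params)
g = (rng.random(T) > 0.9).astype(float)                     # sparse fault labels
print(p.shape, bce_loss(g, p))
```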
In one embodiment, when the specified elevator voiceprint data is identified in step S105, the process is mainly divided into two steps: model identification and result reasoning. The reasoning process is described below taking the guide shoe as the example elevator component. The model identification step performs, on the new guide shoe voiceprint data, the series of operations of voiceprint enhancement, voiceprint characterization and prediction output: noise reduction of the voiceprint data is completed through the voiceprint enhancement model specialized for the guide shoe, yielding clean guide shoe voiceprint data; this is then passed through the model fine-tuned on the guide shoe data set to obtain the score of each frame. The result reasoning step further processes the model output to obtain the inference result: for each frame i output by the model, the average of the scores of all 2w + 1 adjacent frames is taken as the final score S_i of the frame, calculated as follows:
S_i = (1 / (2w + 1)) · Σ_{j=i−w}^{i+w} P(j)
wherein P(j) is the model output score of the j-th frame, and w is the number of adjacent frames on each side.
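The adjacent-frame averaging can be sketched directly; the boundary handling (shrinking the window at the sequence edges) is an assumption, since the text does not specify it:

```python
import numpy as np

def smooth_scores(scores, w):
    # Final score of frame i = mean of the raw model scores over the
    # 2w + 1 adjacent frames [i - w, i + w], clipped at the boundaries.
    n = len(scores)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        out[i] = scores[lo:hi].mean()
    return out

raw = np.array([0.1, 0.1, 0.9, 0.1, 0.1])   # one isolated score spike
sm = smooth_scores(raw, w=1)
print(sm)
```

The averaging damps single-frame spikes, so an alarm needs sustained high scores rather than one noisy frame.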
In one embodiment, the step S106 includes:
if the failure probability scores of all frames in the designated elevator voiceprint data do not exceed a preset first alarm threshold value, continuing to monitor the running state of the corresponding elevator;
if the failure probability scores of the continuous a frames in the designated elevator voiceprint data exceed a preset second alarm threshold value, generating a first-level failure alarm; wherein a > 1;
if the failure probability score of any frame in the designated elevator voiceprint data exceeds a preset third alarm threshold value, generating a second-level failure alarm; the second level is larger than the first level, and the preset first alarm threshold value is smaller than the preset second alarm threshold value and smaller than the preset third alarm threshold value.
In this embodiment, the result output by the elevator fault recognition model is analyzed to determine the running state of the elevator, and further to determine whether a relevant emergency measure needs to be taken. As shown in fig. 10, for the input elevator voiceprint data, the elevator fault recognition model outputs the predicted final score value of each frame. If the score value of any frame exceeds the alarm threshold Q_ala (the alarm line in fig. 10, i.e. the preset third alarm threshold), a "fault alarm" is triggered; if the score values of a consecutive frames exceed the early-warning threshold Q_war (the warning line in fig. 10, i.e. the preset second alarm threshold), a "fault warning" is triggered; if no score exceeds the safety threshold Q_sec (the safety line in fig. 10, i.e. the preset first alarm threshold), monitoring of the running state of the elevator continues. As can be seen from the figure, Q_ala > Q_war > Q_sec, and the four parameters (Q_ala, Q_war, Q_sec and a) can be set according to the actual condition of the monitored elevator.
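The three-threshold decision rule can be sketched as follows (the threshold values and the "continue monitoring" fallback for scores that never reach the warning line are illustrative):

```python
def classify_elevator_state(scores, q_sec, q_war, q_ala, a):
    # Q_ala > Q_war > Q_sec: any single frame over Q_ala -> fault alarm
    # (second level); a consecutive frames over Q_war -> fault warning
    # (first level); otherwise keep monitoring the running state.
    if any(s > q_ala for s in scores):
        return "fault alarm"
    run = 0
    for s in scores:
        run = run + 1 if s > q_war else 0
        if run >= a:
            return "fault warning"
    return "continue monitoring"

print(classify_elevator_state([0.2, 0.95, 0.2], 0.3, 0.6, 0.9, a=3))  # -> fault alarm
```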
Fig. 11 is a schematic block diagram of a high-resolution elevator fault recognition device 1100 according to an embodiment of the present invention, where the device 1100 includes:
the voiceprint annotation unit 1101 is configured to obtain historical elevator voiceprint data, and annotate the historical elevator voiceprint data;
the voiceprint enhancement unit 1102 is configured to perform voiceprint enhancement processing on the marked historical elevator voiceprint data through an improved DemucsV3 algorithm to obtain target voiceprint data;
a characterization extraction unit 1103, configured to perform feature extraction on the target voiceprint data by using an unsupervised audio pre-training model, so as to obtain corresponding voiceprint characterization data;
The model building unit 1104 is used for performing specialized fine tuning training on the voiceprint characterization data by combining a convolutional neural network and a gate cycle unit network, and outputting a fault probability score of each frame of the voiceprint characterization data so as to build an elevator fault recognition model;
a fault recognition unit 1105, configured to perform fault recognition on the specified elevator voiceprint data by using the elevator fault recognition model, and output a fault probability score of the specified elevator voiceprint data;
the fault analysis unit 1106 is configured to compare the fault probability score of the specified elevator voiceprint data with the preset alarm thresholds, and to perform monitoring and alarm management on the corresponding elevator according to the comparison result.
In an embodiment, the voiceprint annotation unit 1101 includes:
the data set setting unit is used for setting the historical elevator voiceprint data into a voiceprint data set, and performing fault marking on the historical elevator voiceprint data according to whether the historical elevator voiceprint data is fault data or not to obtain a metadata set with a label; wherein the label of the fault data is 1, and the label of the non-fault data is 0;
the data set framing unit is used for framing the voiceprint data set based on the frame length and frame shift of the historical elevator voiceprint data to obtain short-time voiceprint data with T1 frames;
a data set traversing unit for traversing the metadata set and generating a one-dimensional vector of length T2 filled with the value 0; wherein T1 = T2;
a time stamp obtaining unit, configured to obtain the time stamps of elevator faults from the metadata set, and to calculate the time difference Δ_i between the center of each frame in the short-time voiceprint data and the time stamp of the elevator fault closest to that center;
a learning labeling unit for transforming the time difference Δ_i according to the following formula as the model learning target, and performing learning and labeling to finally obtain the labeled historical elevator voiceprint data:
wherein g(Δ_i) represents the function transforming the time difference Δ_i, the adjacent-frame number n denotes a hyper-parameter, i denotes the index of the frame, and h denotes the frame shift.
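The labeling pipeline above (framing, per-frame centers, time difference to the nearest fault timestamp) can be sketched as follows; since the exact form of the transformation g(Δ_i) is not reproduced in the text, the sketch substitutes an assumed indicator that marks frames whose center lies within n frame shifts of a fault timestamp — function and parameter names are illustrative only:

```python
import numpy as np

def frame_labels(num_frames, frame_len, hop, fault_times, n):
    # Center time of frame i, with frame shift h = hop and index i from 0.
    centers = np.arange(num_frames) * hop + frame_len / 2
    labels = np.zeros(num_frames)
    for i, center in enumerate(centers):
        delta = np.min(np.abs(fault_times - center))   # Δ_i to nearest fault
        # Assumed stand-in for g(Δ_i): frames whose center lies within
        # n frame shifts of a fault timestamp are marked positive (label 1).
        if delta <= n * hop:
            labels[i] = 1.0
    return labels

labels = frame_labels(num_frames=10, frame_len=0.025, hop=0.010,
                      fault_times=np.array([0.05]), n=2)
print(labels)
```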
In an embodiment, the voiceprint enhancement unit 1102 includes:
the time domain input unit is used for inputting the marked historical elevator voiceprint data to a time domain module in the improved demucsV3 algorithm;
the first feature extraction unit is used for extracting first features of the input historical elevator voiceprint data in the time dimension through a 5-layer first coding layer in the time domain module, and reconstructing the first features by using a 5-layer first decoding layer in the time domain module;
The data transformation unit is used for carrying out short-time Fourier transformation on the marked historical elevator voiceprint data and inputting the transformed data to a frequency domain module in an improved demucsV3 algorithm;
the second feature extraction unit is used for carrying out second feature extraction on the transformed data in the frequency dimension through 5 layers of second coding layers in the frequency domain module and carrying out second feature reconstruction on the extracted second features through 5 layers of second decoding layers in the frequency domain module;
the sharing integration unit is used for sharing and integrating information of the time domain module and the frequency domain module by utilizing a sharing layer;
and the result adding unit is used for carrying out short-time Fourier inverse transformation on the output result of the frequency domain module, and adding the data after inverse transformation with the output result of the time domain module to obtain target voiceprint data with enhanced voiceprint.
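The dual-branch structure described above (a time-domain module, plus a frequency-domain module wrapped in a short-time Fourier transform and its inverse, with the two outputs added) can be illustrated with a toy stand-in in which both branches are identity-like placeholders; this shows only the data flow, not the actual DemucsV3 encoders, decoders or shared layer:

```python
import numpy as np

def enhance_hybrid(x):
    # Toy stand-in for the dual-branch idea: process in the time domain and
    # in the frequency domain (FT -> processing -> inverse FT), then add.
    # The real encoder/decoder stacks are replaced by scalar placeholders.
    time_out = 0.5 * x                           # "time domain module"
    spec = np.fft.rfft(x)                        # Fourier-transform stand-in
    spec = 0.5 * spec                            # "frequency domain module"
    freq_out = np.fft.irfft(spec, n=x.size)      # inverse transform
    return time_out + freq_out                   # add the two branch outputs

x = np.sin(np.linspace(0.0, 4 * np.pi, 256))
y = enhance_hybrid(x)
print(np.allclose(y, x))                         # -> True
```

With identity-like branches the signal passes through unchanged, which makes the add-the-two-outputs wiring easy to verify; the real model replaces each placeholder with learned encoder/decoder stacks.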
In an embodiment, the voiceprint enhancement unit 1102 further includes:
the residual setting unit is used for setting residual modules for the first coding layer and the second coding layer respectively and reducing the number of model parameters through the residual modules;
the index evaluation unit is used for evaluating the improved DemucsV3 algorithm by taking the signal-to-distortion ratio as an evaluation index according to the following formula:
SDR = 10 · log10( Σ_v s(v)² / ( Σ_v ( s(v) − ŝ(v) )² + ε ) )
wherein SDR represents the signal-to-distortion ratio, s(v) represents the energy or power of the real sound source, ŝ(v) represents the energy or power of the sound source output by the system, v represents the time index, and ε represents a constant.
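A direct implementation of this evaluation metric, assuming the formula takes the ratio of real-source energy to residual energy (the 50 Hz tone and noise level below are synthetic test data, not measurements):

```python
import numpy as np

def sdr(s_true, s_est, eps=1e-9):
    # 10 * log10( energy of the real source / residual energy ), with eps
    # guarding against division by zero, as in the evaluation formula above.
    num = np.sum(s_true ** 2)
    den = np.sum((s_true - s_est) ** 2) + eps
    return 10.0 * np.log10(num / den)

t = np.linspace(0.0, 1.0, 8000)
clean = np.sin(2 * np.pi * 50.0 * t)        # stand-in "real sound source"
noisy = clean + 0.1 * np.random.default_rng(0).standard_normal(t.size)
score = sdr(clean, noisy)                   # higher = better enhancement
print(round(score, 2))
```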
In an embodiment, the unsupervised audio pre-training model is a Wav2Vec model;
the feature extraction unit 1103 includes:
the data embedding unit is used for embedding the target voiceprint data into a shallow characteristic space by utilizing an encoder network of the Wav2Vec model and mapping the target voiceprint data into a characteristic vector;
and the deep representation unit is used for carrying out deep feature representation on the feature vector by using a context network in the Wav2Vec model to obtain the voiceprint representation data.
In an embodiment, the model building unit 1104 includes:
the prediction output unit is used for inputting the voiceprint representation data into a convolutional neural network, and outputting a predicted value of the voiceprint representation data through the gate cycle unit network after passing through the convolutional neural network and the gate cycle unit network;
a specialized processing unit for performing specialized processing on the predicted values using the loss function L_ft according to the following formula:
L_ft = (1/T) · Σ_{t=1}^{T} l_bce(G(t), P(t))
wherein T represents the total number of frames of the voiceprint characterization data, G(t) represents the true value of the t-th frame, P(t) represents the predicted value of the t-th frame, and l_bce represents the binary cross-entropy loss function.
In one embodiment, the fault analysis unit 1106 includes:
the operation monitoring unit is used for continuously monitoring the operation state of the corresponding elevator if the failure probability scores of all frames in the designated elevator voiceprint data do not exceed a preset first alarm threshold value;
the first alarm unit is used for generating a first-level fault alarm if the fault probability scores of the continuous a frames in the designated elevator voiceprint data exceed a preset second alarm threshold value; wherein a > 1;
the second alarm unit is used for generating a second-level fault alarm if the fault probability score of any frame in the designated elevator voiceprint data exceeds a preset third alarm threshold value; the second level is larger than the first level, and the preset first alarm threshold value is smaller than the preset second alarm threshold value and smaller than the preset third alarm threshold value.
Since the embodiments of the apparatus portion and the embodiments of the method portion correspond to each other, the embodiments of the apparatus portion are referred to the description of the embodiments of the method portion, and are not repeated herein.
The embodiment of the present invention also provides a computer readable storage medium having a computer program stored thereon, which when executed can implement the steps provided in the above embodiment. The storage medium may include: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The embodiment of the application also provides a computer device, which can comprise a memory and a processor, wherein the memory stores a computer program, and the processor can realize the steps provided by the embodiment when calling the computer program in the memory. Of course, the computer device may also include various network interfaces, power supplies, and the like.
In the description, each embodiment is described in a progressive manner, and each embodiment is mainly described by the differences from other embodiments, so that the same similar parts among the embodiments are mutually referred. For the system disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section. It should be noted that it will be apparent to those skilled in the art that various modifications and adaptations of the application can be made without departing from the principles of the application and these modifications and adaptations are intended to be within the scope of the application as defined in the following claims.
It should also be noted that in this specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.

Claims (9)

1. A high resolution elevator fault identification method, comprising:
acquiring historical elevator voiceprint data with continuous frames, and marking the historical elevator voiceprint data;
performing voiceprint enhancement processing on the marked historical elevator voiceprint data through an improved demucsV3 algorithm to obtain target voiceprint data;
performing feature extraction on the target voiceprint data by adopting an unsupervised audio pre-training model to obtain corresponding voiceprint characterization data;
combining a convolutional neural network and a gate cycle unit network to perform specialized fine tuning training on the voiceprint characterization data, and outputting fault probability scores of each frame of the voiceprint characterization data so as to construct an elevator fault recognition model;
performing fault recognition on the appointed elevator voiceprint data by using the elevator fault recognition model, and outputting fault probability scores of the appointed elevator voiceprint data;
comparing the fault probability score of the designated elevator voiceprint data with the preset alarm thresholds, and performing monitoring and alarm management on the corresponding elevator according to the comparison result;
the voice print enhancement processing is carried out on the marked historical elevator voice print data through the improved demucsV3 algorithm to obtain target voice print data, and the voice print enhancement processing comprises the following steps:
inputting the marked historical elevator voiceprint data to a time domain module in the improved DemucsV3 algorithm;
performing first feature extraction on input historical elevator voiceprint data in a time dimension through a 5-layer first coding layer in the time domain module, and performing first feature reconstruction on the extracted first features through a 5-layer first decoding layer in the time domain module;
performing short-time Fourier transform on the marked historical elevator voiceprint data, and inputting the transformed data to a frequency domain module in an improved demucsV3 algorithm;
performing second feature extraction on the transformed data in the frequency dimension through 5 layers of second coding layers in the frequency domain module, and performing second feature reconstruction on the extracted second features through 5 layers of second decoding layers in the frequency domain module;
the sharing layer is utilized to share and integrate information of the time domain module and the frequency domain module;
and carrying out short-time Fourier inverse transformation on the output result of the frequency domain module, and adding the data after inverse transformation with the output result of the time domain module to obtain target voiceprint data with enhanced voiceprint.
2. The method of claim 1, wherein the acquiring and labeling historical elevator voiceprint data having consecutive frames comprises:
Setting the historical elevator voiceprint data as a voiceprint data set, and performing fault marking on the historical elevator voiceprint data according to whether the historical elevator voiceprint data is fault data or not to obtain a metadata set with a label; wherein the label of the fault data is 1, and the label of the non-fault data is 0;
framing the voiceprint data set based on the frame length and frame shift of the historical elevator voiceprint data to obtain short-time voiceprint data with T1 frames;
traversing the metadata set and generating a one-dimensional vector of length T2 filled with the value 0; wherein T1 = T2;
acquiring the time stamp of the elevator fault from the metadata set, and calculating the time difference δ_i between the center of each frame in the short-time voiceprint data and the time stamp of the elevator fault closest to that center;
transforming the time difference δ_i according to the following formula as the model learning target, and performing learning and labeling to finally obtain the labeled historical elevator voiceprint data:
wherein g(δ_i) represents the function transforming the time difference δ_i, the adjacent-frame number n denotes a hyper-parameter, i denotes the index of the frame, and h denotes the frame shift.
3. The method for identifying a high-resolution elevator fault according to claim 1, wherein the voiceprint enhancement processing is performed on the marked historical elevator voiceprint data by using a modified DemucsV3 algorithm to obtain target voiceprint data, and the method further comprises:
Setting residual modules for the first coding layer and the second coding layer respectively, and reducing the number of model parameters through the residual modules;
evaluating the improved DemucsV3 algorithm by taking the signal-to-distortion ratio as an evaluation index according to the following formula:
SDR = 10 · log10( Σ_v s(v)² / ( Σ_v ( s(v) − ŝ(v) )² + ε ) )
wherein SDR represents the signal-to-distortion ratio, s(v) represents the energy or power of the real sound source, ŝ(v) represents the energy or power of the sound source output by the system, v represents the time index, and ε represents a constant.
4. The high resolution elevator fault identification method of claim 1, wherein the unsupervised audio pre-training model is a Wav2Vec model;
the feature extraction is performed on the target voiceprint data by adopting an unsupervised audio pre-training model to obtain corresponding voiceprint representation data, and the feature extraction comprises the following steps:
embedding the target voiceprint data into a shallow feature space by using an encoder network of a Wav2Vec model, and mapping the target voiceprint data into feature vectors;
and carrying out deep feature representation on the feature vector by using a context network in the Wav2Vec model to obtain the voiceprint characterization data.
5. The method for identifying high-resolution elevator faults according to claim 1, wherein the step of combining a convolutional neural network with a gate cycle unit network to perform specialized fine-tuning training on the voiceprint characterization data and outputting a fault probability score of each frame of the voiceprint characterization data so as to construct an elevator fault identification model comprises the steps of:
Inputting the voiceprint representation data into a convolutional neural network, and outputting predicted values of the voiceprint representation data by the gate cycle unit network after passing through the convolutional neural network and the gate cycle unit network;
performing specialized processing on the predicted values using the loss function L_ft according to the following formula:
L_ft = (1/T) · Σ_{t=1}^{T} l_bce(G(t), P(t))
wherein T represents the total number of frames of the voiceprint characterization data, G(t) represents the true value of the t-th frame, P(t) represents the predicted value of the t-th frame, and l_bce represents the binary cross-entropy loss function.
6. The method for identifying a high-resolution elevator fault according to claim 1, wherein comparing the fault probability score of the designated elevator voiceprint data with a preset alarm threshold, and monitoring and alarm management are performed on the elevator corresponding to the comparison result, and the method comprises:
if the failure probability scores of all frames in the designated elevator voiceprint data do not exceed a preset first alarm threshold value, continuing to monitor the running state of the corresponding elevator;
if the failure probability scores of the continuous a frames in the designated elevator voiceprint data exceed a preset second alarm threshold value, generating a first-level failure alarm; wherein a > 1;
if the failure probability score of any frame in the designated elevator voiceprint data exceeds a preset third alarm threshold value, generating a second-level failure alarm; the second level is larger than the first level, and the preset first alarm threshold value is smaller than the preset second alarm threshold value and smaller than the preset third alarm threshold value.
7. A high resolution elevator fault identification device, comprising:
the voiceprint marking unit is used for acquiring historical elevator voiceprint data and marking the historical elevator voiceprint data;
the voiceprint enhancement unit is used for carrying out voiceprint enhancement processing on the marked historical elevator voiceprint data through an improved demucsV3 algorithm to obtain target voiceprint data;
the characterization extraction unit is used for carrying out feature extraction on the target voiceprint data by adopting an unsupervised audio pre-training model to obtain corresponding voiceprint characterization data;
the model building unit is used for carrying out specialized fine tuning training on the voiceprint characterization data by combining a convolutional neural network and a door circulation unit network, and outputting fault probability scores of each frame of the voiceprint characterization data so as to build an elevator fault recognition model;
the fault recognition unit is used for carrying out fault recognition on the appointed elevator voiceprint data by utilizing the elevator fault recognition model and outputting the fault probability score of the appointed elevator voiceprint data;
the fault analysis unit is used for comparing the fault probability score of the designated elevator voiceprint data with the preset alarm thresholds, and performing monitoring and alarm management on the corresponding elevator according to the comparison result;
The voiceprint enhancement unit includes:
the time domain input unit is used for inputting the marked historical elevator voiceprint data to a time domain module in the improved demucsV3 algorithm;
the first feature extraction unit is used for extracting first features of the input historical elevator voiceprint data in the time dimension through a 5-layer first coding layer in the time domain module, and reconstructing the first features by using a 5-layer first decoding layer in the time domain module;
the data transformation unit is used for carrying out short-time Fourier transformation on the marked historical elevator voiceprint data and inputting the transformed data to a frequency domain module in an improved demucsV3 algorithm;
the second feature extraction unit is used for carrying out second feature extraction on the transformed data in the frequency dimension through 5 layers of second coding layers in the frequency domain module and carrying out second feature reconstruction on the extracted second features through 5 layers of second decoding layers in the frequency domain module;
the sharing integration unit is used for sharing and integrating information of the time domain module and the frequency domain module by utilizing a sharing layer;
and the result adding unit is used for carrying out short-time Fourier inverse transformation on the output result of the frequency domain module, and adding the data after inverse transformation with the output result of the time domain module to obtain target voiceprint data with enhanced voiceprint.
8. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the high resolution elevator fault identification method according to any one of claims 1 to 6 when the computer program is executed.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the high-resolution elevator fault recognition method according to any one of claims 1 to 6.
CN202310859577.7A 2023-07-13 2023-07-13 High-resolution elevator fault identification method, device and related medium Active CN116573508B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310859577.7A CN116573508B (en) 2023-07-13 2023-07-13 High-resolution elevator fault identification method, device and related medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310859577.7A CN116573508B (en) 2023-07-13 2023-07-13 High-resolution elevator fault identification method, device and related medium

Publications (2)

Publication Number Publication Date
CN116573508A CN116573508A (en) 2023-08-11
CN116573508B true CN116573508B (en) 2023-10-10

Family

ID=87536420

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310859577.7A Active CN116573508B (en) 2023-07-13 2023-07-13 High-resolution elevator fault identification method, device and related medium

Country Status (1)

Country Link
CN (1) CN116573508B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111524520A (en) * 2020-04-22 2020-08-11 星际(重庆)智能装备技术研究院有限公司 Voiceprint recognition method based on error reverse propagation neural network
CN112897262A (en) * 2021-02-26 2021-06-04 浙江理工大学 Elevator running state evaluation system and method based on sound feature extraction
CN114220439A (en) * 2021-12-24 2022-03-22 北京金山云网络技术有限公司 Method, device, system, equipment and medium for acquiring voiceprint recognition model
WO2022081678A1 (en) * 2020-10-15 2022-04-21 Dolby Laboratories Licensing Corporation Frame-level permutation invariant training for source separation
CN115146670A (en) * 2022-05-30 2022-10-04 西安交通大学 Radio frequency fingerprint identification method and system based on data enhancement and comparison learning


Also Published As

Publication number Publication date
CN116573508A (en) 2023-08-11

Similar Documents

Publication Publication Date Title
CN110161343B (en) Non-invasive real-time dynamic monitoring method for external powered device of intelligent train
WO2016004774A1 (en) Rail transportation fault diagnosis method and system based on time series analysis
CN110057584B (en) Degradation monitoring method for locomotive traction motor bearing
Soualhi et al. Prognosis of bearing failures using hidden Markov models and the adaptive neuro-fuzzy inference system
CN104819846B (en) Rolling bearing acoustic signal fault diagnosis method based on short-time Fourier transform and sparse stacked autoencoders
CN110764493A (en) PHM application system, method and storage medium suitable for high-speed railway
Sun et al. Fault diagnosis for train plug door using weighted fractional wavelet packet decomposition energy entropy
CN111046583A (en) Switch machine fault diagnosis method based on DTW algorithm and ResNet network
WO2023138581A1 (en) Method and apparatus for detecting polygonal fault of wheel set of rail transit locomotive
Du et al. Risk evaluation of bogie system based on extension theory and entropy weight method
Wang et al. Ensemble decision approach with dislocated time–frequency representation and pre-trained CNN for fault diagnosis of railway vehicle gearboxes under variable conditions
Liu et al. Deep learning based identification and uncertainty analysis of metro train induced ground-borne vibration
CN114034481A (en) Fault diagnosis system and method for rolling mill gearbox
CN116573508B (en) High-resolution elevator fault identification method, device and related medium
CN113707175B (en) Acoustic event detection system based on feature decomposition classifier and adaptive post-processing
CN115137374A (en) Sleep stage oriented electroencephalogram interpretability analysis method and related equipment
JP7417342B2 (en) Method for monitoring vibrations in train cabins, construction and application method of vibration signal feature library
CN107782548B (en) Rail vehicle part detection system
CN113610188A (en) Method and device for identifying pantograph-catenary contact force abnormalities
CN109580260A (en) Sub-health diagnosis method for a rail vehicle door system
CN113987905A (en) Escalator braking force intelligent diagnosis system based on deep belief network
Zhao [Retracted] Fault Diagnosis Method for Wind Power Equipment Based on Hidden Markov Model
CN113086798B (en) Elevator fault detection method based on gated cyclic network and typical correlation analysis
Vincent et al. A Cognitive Rail Track Breakage Detection System Using Artificial Neural Network
CN112434979A (en) Health assessment method for turnout system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant