CN113392888A - Rail transit traction motor fault identification method, storage medium and electronic equipment - Google Patents

Rail transit traction motor fault identification method, storage medium and electronic equipment

Info

Publication number
CN113392888A
CN113392888A (application CN202110624266.3A)
Authority
CN
China
Prior art keywords
signal
imf
substep
boltzmann machine
traction motor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110624266.3A
Other languages
Chinese (zh)
Inventor
吴凯
廖晓斌
盛建科
刘湘
詹柏青
曾进辉
兰征
何东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University of Technology
Guangdong Fullde Electronics Co Ltd
Zhuzhou Fullde Rail Transit Research Institute Co Ltd
Original Assignee
Hunan University of Technology
Guangdong Fullde Electronics Co Ltd
Zhuzhou Fullde Rail Transit Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University of Technology, Guangdong Fullde Electronics Co Ltd, Zhuzhou Fullde Rail Transit Research Institute Co Ltd filed Critical Hunan University of Technology
Priority to CN202110624266.3A priority Critical patent/CN113392888A/en
Publication of CN113392888A publication Critical patent/CN113392888A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/061 Physical realisation, i.e. hardware implementation using biological neurons, e.g. biological neurons connected to an integrated circuit
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Neurology (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Test And Diagnosis Of Digital Computers (AREA)

Abstract

The invention relates to a rail transit traction motor fault identification method, electronic equipment and a storage medium. The method comprises the following steps: S1, adding white noise to an original detection signal x(t) of a rail transit traction motor to obtain a signal s(t); S2, decomposing the signal s(t) with the EMD algorithm to obtain IMF components; S3, applying the Hilbert transform to the mean of the obtained IMF components to obtain the instantaneous amplitude and instantaneous frequency, and extracting a disturbance signal from them; and S4, identifying the disturbance signal with a deep belief network. The method captures disturbance information effectively, enables fast, real-time identification of rail transit traction motor faults, and achieves a high recognition rate and high positioning accuracy.

Description

Rail transit traction motor fault identification method, storage medium and electronic equipment
Technical Field
The invention relates to a rail transit traction motor, in particular to a rail transit traction motor fault identification method based on ensemble empirical mode decomposition and a deep belief network, electronic equipment and a storage medium.
Background
Rail transit has become part of everyday life, from high-speed trains for long-distance travel to subways within cities. The traction system, however, is the heart of a rail vehicle, and its failure can cause catastrophic accidents, so regular maintenance is required to ensure vehicle safety. To improve the safety of railway traction systems, a great deal of research has been devoted to monitoring the condition of vehicle equipment. Since nearly half of all common motor faults are caused by the bearings, early detection of bearing faults in traction motors is of paramount importance.
At present, research on electrified-railway traction power supply systems, both in China and abroad, focuses mainly on combining signal processing with pattern recognition. These methods, however, depend on expert experience for fault diagnosis and on signal-processing techniques to extract fault features; the resulting feature vectors carry a certain subjectivity, and the overall diagnosis time is greatly prolonged.
Deep learning is a recent branch of machine learning. Multi-layer networks build abstract models between layers of neurons, learn the connection weights, and extract high-level, low-dimensional, effective feature representations, mining the deep information in large volumes of data and easing the complexity of operating in high-dimensional data spaces. In recent years, deep learning theory and algorithms have advanced especially rapidly, with successful applications in aviation, medicine, electric power, transportation and other fields.
Disclosure of Invention
In order to identify the fault type of the rail transit traction motor, the invention provides a rail transit traction motor fault identification method, electronic equipment and a computer readable storage medium, which are high in speed, strong in real-time performance and high in positioning precision.
To achieve the object, according to one aspect of the present invention, there is provided a rail transit traction motor fault identification method, including:
s1, adding white noise to an original detection signal x (t) of a rail transit traction motor to obtain a signal s (t);
s2, decomposing a signal s (t) by using an EMD decomposition algorithm to obtain an IMF component;
s3, carrying out Hilbert transformation on the obtained mean value of the IMF components to obtain an instantaneous amplitude value and an instantaneous frequency, and extracting a disturbance signal from the instantaneous amplitude value and the instantaneous frequency;
and S4, identifying the disturbance signal by using the deep belief network.
Further, the step S2 further includes:
let the extremum envelope function of the signal s(t) be

f(t) = [u(t) + v(t)] / 2

wherein u(t) and v(t) are the upper and lower envelopes of s(t), respectively;
obtaining the mean value e_1 of the extremum envelope function f(t), and calculating the difference c_1 = s(t) − e_1 between the signal s(t) and e_1;
if c_1 satisfies the IMF condition, marking it as the first intrinsic mode function (IMF) component; otherwise, taking c_1 as the new signal to be sifted and executing c_1 = s(t) − e_1 again until c_1 meets the IMF condition;
separating the first IMF component c_1 from the signal s(t) to obtain r_1 = s(t) − c_1;
taking the separated r_1 as the new decomposition signal s(t), re-executing step S2, and repeating the above steps to separate further IMF components until the n-th component r_n is a monotonic function, at which point the execution ends.
Further, the envelopes of the maxima and minima of the signal s(t) are constructed by cubic spline interpolation.
Further, the standard deviation coefficient SD is used as a criterion for judging whether the IMF condition is satisfied.
Further, the standard deviation coefficient is

SD = Σ_{t=0}^{T} [ |c_{i−1}(t) − c_i(t)|² / c_{i−1}²(t) ]

wherein i is the number of decomposition layers.
Further, when the standard deviation coefficient SD is less than a threshold ε_1, the IMF condition is considered to be satisfied.
Further, the value of ε_1 is between 0.2 and 0.3.
Further, the step S4 further includes:
substep S41, initializing the number N of neuron layers, the number of neurons, the number M of data samples fed in each training pass, the number of iterations P for training the deep belief network, the number of iterations P' for training each restricted Boltzmann machine in the deep belief network, the current iteration count T of the deep belief network training, and the current iteration count T' of the restricted Boltzmann machine training;
substep S42, obtaining normal samples and fault samples;
substep S43, training the first restricted Boltzmann machine in the deep belief network: the training data are assigned to the visible layer v^(0), and the probability that they turn each hidden-layer neuron on is calculated:

P(h_j^(0) = 1 | v^(0)) = σ(c_j + Σ_i W_ij v_i^(0))

wherein σ(x) = 1/(1 + e^(−x)), the superscript distinguishes different vectors, and the subscript j represents the dimension;
substep S44, drawing a sample from the calculated probability distribution:

h^(0) ~ P(h^(0) | v^(0));
substep S45, using h^(0) to reconstruct the visible layer and simultaneously drawing a sample v^(1);
substep S46, calculating the probability that each hidden-layer neuron is turned on after reconstruction:

P(h_j^(1) = 1 | v^(1)) = σ(c_j + Σ_i W_ij v_i^(1));
substep S47, updating the weights W, b, c, wherein:

W_ij = W_ij + P(h_j^(0) = 1 | v^(0)) v_i^(0) − P(h_j^(1) = 1 | v^(1)) v_i^(1)

b_j = b_j + v_j^(0) − v_j^(1)

c_j = c_j + P(h_j^(0) = 1 | v^(0)) − P(h_j^(1) = 1 | v^(1));
substep S48, judging whether T' = P'; if not, setting T' = T' + 1 and returning to substep S43; if yes, continuing to the next step;
substep S49, fixing the weight and bias coefficients of the first restricted Boltzmann machine, and using its final output as the input of the second restricted Boltzmann machine;
substep S410, repeating substeps S43 to S48 to train the second restricted Boltzmann machine;
substep S411, calculating the output Y_i of the second restricted Boltzmann machine;
substep S412, calculating the output error MSE:

MSE = (1/M) Σ_{i=1}^{M} (Y_i − Ŷ_i)²

wherein Ŷ_i is the corresponding expected output;
substep S413, optimizing the error function by gradient descent, then back-propagating it through the layers for parameter fine-tuning;
substep S414, judging whether T = P; if not, setting T = T + 1 and returning to substep S411; if yes, continuing to the next step;
and substep S415, feeding the localized sample data into the trained model for classification and identification.
In accordance with another aspect of the present invention, there is provided an electronic apparatus, wherein the electronic apparatus includes:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the method.
According to another aspect of the present invention, there is provided a computer readable storage medium, wherein the computer readable storage medium stores one or more programs which, when executed by a processor, implement the method.
According to the method, white noise is added to the original detection signal; the optimized signal is decomposed with the EMD algorithm to obtain IMF components; the Hilbert transform is applied to the mean of the obtained IMFs; and a deep belief network then performs disturbance identification. Disturbance information is thereby captured more effectively, rail transit traction motor faults can be identified quickly and in real time, and both the recognition rate and the positioning accuracy are high.
The above description is only an overview of the technical solutions of the present invention. To make its technical means, objects, features and advantages clearer and easier to understand, embodiments of the invention are described below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like elements throughout the drawings.
In the drawings:
FIG. 1 is a flow chart of a rail transit traction motor fault identification method of the present invention;
FIG. 2 is a process of adding white noise to an original detection signal in accordance with the present invention;
FIG. 3 is a flow chart of the present invention for obtaining IMF components using the EMD decomposition algorithm;
FIG. 4 is a flow chart of deep belief network training in the present invention;
FIG. 5 is a schematic structural diagram of an electronic device according to the present invention;
fig. 6 is a schematic structural diagram of a computer-readable storage medium according to the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The embodiment is implemented based on an electronic device, such as a computer device, where the electronic device includes a memory, a processor, and a computer program stored in the memory and running on the processor, and when the processor executes the program, as shown in fig. 1 to 4, the method for identifying a fault of a rail transit traction motor includes steps S1 to S4.
S1, white noise is added to an original detection signal x (t) of the rail transit traction motor.
Specifically, white noise is added to the original detection signal x(t) of the rail transit traction motor to obtain the signal s(t). This optimizes x(t), compensates for its lack of a time scale, makes the signal smoother, and suppresses mode mixing.
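Step S1 can be sketched as follows. This is an illustrative reading only: the Gaussian noise model, the noise amplitude (here chosen relative to the signal's own standard deviation) and the random seed are assumptions, since the patent does not fix them.

```python
import numpy as np

def add_white_noise(x, noise_ratio=0.2, seed=0):
    """Step S1 (sketch): add zero-mean white Gaussian noise to the raw
    detection signal x(t) to obtain the noise-assisted signal s(t).
    noise_ratio scales the noise standard deviation relative to std(x);
    its value is an assumption, not taken from the patent."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, noise_ratio * np.std(x), size=x.shape)
    return x + noise

# small demonstration on a synthetic stand-in for x(t)
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
x = np.sin(2 * np.pi * 50 * t)   # raw detection signal x(t)
s = add_white_noise(x)           # optimized signal s(t)
```

In a real EEMD-style pipeline this step would typically be repeated over an ensemble of noise realizations; a single realization is shown here for brevity.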
And S2, decomposing the optimized signal s (t) by using an EMD decomposition algorithm to obtain an IMF component.
Specifically, the extreme value distribution of the signal s (t) is calculated, the envelope curve of the maximum value and the minimum value is constructed by a cubic spline interpolation method, and the extreme value envelope function of the signal s (t) is set as f (t), so that the method comprises the following steps:
Figure BDA0003101495590000051
in equation (1), u (t) and v (t) are the upper and lower envelopes of signal s (t), respectively.
The mean value e_1 of the extremum envelope function f(t) is obtained, and the difference between the signal s(t) and e_1 is denoted c_1; then:

c_1 = s(t) − e_1 (2)

c_1 is then analyzed: if c_1 satisfies the IMF condition, it is marked as the first IMF component c_1; if not, c_1 is taken as the new signal to be sifted and formula (2) is executed again until c_1 satisfies the IMF condition.
The first IMF component c_1 is separated from the signal s(t), and r_1 is obtained as:

r_1 = s(t) − c_1 (3)
The separated r_1 is taken as the new decomposition signal s(t), step S2 is re-executed, and the above steps are repeated to separate further IMF components until the n-th component r_n is a monotonic function, at which point the execution ends. The reconstructed signal is thereby

s(t) = Σ_{i=1}^{n} c_i(t) + r_n(t) (4)

In equation (4), r_n represents the residual component and the c_i represent the frequency components of the signal from low to high.
The above procedure of separating IMFs from the raw detection signal x(t) is called "sifting". In practice, however, the envelope mean m_1 is rarely exactly zero, so a standard deviation coefficient is introduced as the criterion for judging whether the IMF condition is satisfied. The standard deviation coefficient SD is expressed as follows:

SD = Σ_{t=0}^{T} [ |c_{i−1}(t) − c_i(t)|² / c_{i−1}²(t) ] < ε_1 (5)

In equation (5), ε_1 usually takes a value between 0.2 and 0.3 and i is the number of decomposition layers; when the standard deviation coefficient satisfies equation (5), the intrinsic mode components obtained by the decomposition are considered to meet the requirement.
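The sifting procedure of step S2, with cubic-spline envelopes and the SD stopping criterion of formula (5), can be sketched as below. This is an interpretation, not the patent's reference implementation: the minimum extremum count, the iteration cap and the small divisor guard are assumptions added to keep the sketch runnable.

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def envelope_mean(s, t):
    """f(t) = (u(t) + v(t)) / 2 with cubic-spline envelopes u(t), v(t).
    Returns None when there are too few extrema to fit the splines
    (the residual is then treated as monotonic)."""
    maxima = argrelextrema(s, np.greater)[0]
    minima = argrelextrema(s, np.less)[0]
    if len(maxima) < 4 or len(minima) < 4:  # threshold is an assumption
        return None
    u = CubicSpline(t[maxima], s[maxima])(t)  # upper envelope u(t)
    v = CubicSpline(t[minima], s[minima])(t)  # lower envelope v(t)
    return 0.5 * (u + v)

def sift_imf(s, t, eps=0.25, max_iter=50):
    """Sift one IMF: repeat c = s - e1 until SD < eps (0.2-0.3 per the
    patent). max_iter is a safety cap added for this sketch."""
    c_prev = s
    for _ in range(max_iter):
        e1 = envelope_mean(c_prev, t)
        if e1 is None:
            break
        c = c_prev - e1
        # SD criterion of formula (5); 1e-12 guards against division by 0
        sd = float(np.sum((c_prev - c) ** 2 / (c_prev ** 2 + 1e-12)))
        if sd < eps:
            return c
        c_prev = c
    return c_prev

def emd(s, t, max_imfs=8):
    """Step S2 (sketch): sift IMFs and subtract them (r = s - c)
    until the residual has too few extrema."""
    imfs, r = [], s.copy()
    for _ in range(max_imfs):
        if envelope_mean(r, t) is None:
            break
        c = sift_imf(r, t)
        imfs.append(c)
        r = r - c
    return imfs, r

# demonstration on a two-tone signal
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
s = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 25 * t)
imfs, r = emd(s, t)
```

By construction the components satisfy formula (4): the sum of the extracted IMFs plus the residual reproduces the input signal.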
S3, the Hilbert transform is applied to the mean of the obtained IMF components; the specific formula is:

H[c(t)] = (1/π) ∫ c(τ) / (t − τ) dτ

wherein the integral is taken in the Cauchy principal value sense. The analytic signal can then be expressed as:

z(t) = c(t) + jH[c(t)] = a(t) e^{jφ(t)}

The amplitude function and the phase function are, respectively:

a(t) = √( c²(t) + H²[c(t)] ) and φ(t) = arctan( H[c(t)] / c(t) )

The instantaneous frequency is:

ω(t) = dφ(t) / dt

The instantaneous amplitude and instantaneous frequency are obtained from the above formulas, and the disturbance signal is extracted from them to characterize the occurrence of the disturbance.
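Step S3 maps directly onto SciPy's FFT-based analytic-signal routine. A minimal sketch follows; the sampling rate fs and the 50 Hz test tone are illustrative choices, not values from the patent.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_features(c, fs):
    """Step S3 (sketch): scipy.signal.hilbert returns the analytic
    signal z(t) = c(t) + j*H[c(t)]. Its magnitude is the instantaneous
    amplitude a(t); the derivative of its unwrapped phase gives the
    instantaneous frequency in Hz."""
    z = hilbert(c)                              # analytic signal z(t)
    a = np.abs(z)                               # a(t) = sqrt(c^2 + H[c]^2)
    phi = np.unwrap(np.angle(z))                # phase phi(t)
    f_inst = np.diff(phi) / (2 * np.pi) * fs    # f(t) = (1/2pi) dphi/dt
    return a, f_inst

# a pure 50 Hz tone should yield ~unit amplitude and ~50 Hz frequency
fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
c = np.sin(2 * np.pi * 50 * t)
a, f_inst = instantaneous_features(c, fs)
```

In the method described here these features would be computed for the (mean of the) IMF components, and the disturbance signal extracted from where a(t) and f(t) deviate from their nominal values.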
S4, recognizing the disturbance signal by using a deep belief network, which specifically comprises the following steps:
substep S41, initializing the number N of neuron layers, the number of neurons, the number M of data samples fed in each training pass, the number of iterations P for training the deep belief network, the number of iterations P' for training each restricted Boltzmann machine in the deep belief network, the current iteration count T of the deep belief network training, and the current iteration count T' of the restricted Boltzmann machine training;
substep S42, obtaining normal samples and fault samples;
substep S43, training the first restricted Boltzmann machine in the deep belief network: the training data are assigned to the visible layer v^(0), and the probability that they turn each hidden-layer neuron on is calculated:

P(h_j^(0) = 1 | v^(0)) = σ(c_j + Σ_i W_ij v_i^(0))

wherein σ(x) = 1/(1 + e^(−x)), the superscript distinguishes different vectors, and the subscript j represents the dimension;
substep S44, drawing a sample from the calculated probability distribution:

h^(0) ~ P(h^(0) | v^(0));
substep S45, using h^(0) to reconstruct the visible layer and simultaneously drawing a sample v^(1);
substep S46, calculating the probability that each hidden-layer neuron is turned on after reconstruction:

P(h_j^(1) = 1 | v^(1)) = σ(c_j + Σ_i W_ij v_i^(1));
substep S47, updating the weights W, b, c, wherein:

W_ij = W_ij + P(h_j^(0) = 1 | v^(0)) v_i^(0) − P(h_j^(1) = 1 | v^(1)) v_i^(1)

b_j = b_j + v_j^(0) − v_j^(1)

c_j = c_j + P(h_j^(0) = 1 | v^(0)) − P(h_j^(1) = 1 | v^(1));
substep S48, judging whether T' = P'; if not, setting T' = T' + 1 and returning to substep S43; if yes, continuing to the next step;
substep S49, fixing the weight and bias coefficients of the first restricted Boltzmann machine, and using its final output as the input of the second restricted Boltzmann machine;
substep S410, repeating substeps S43 to S48 to train the second restricted Boltzmann machine;
substep S411, calculating the output Y_i of the second restricted Boltzmann machine;
substep S412, calculating the output error MSE:

MSE = (1/M) Σ_{i=1}^{M} (Y_i − Ŷ_i)²

wherein Ŷ_i is the corresponding expected output;
substep S413, optimizing the error function by gradient descent, then back-propagating it through the layers for parameter fine-tuning;
substep S414, judging whether T = P; if not, setting T = T + 1 and returning to substep S411; if yes, continuing to the next step;
and substep S415, feeding the localized sample data into the trained model for classification and identification.
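Substeps S43 to S48 describe one round of contrastive-divergence (CD-1) training for a single restricted Boltzmann machine. A compact sketch follows. The layer sizes, learning rate eta and Gaussian weight initialization are assumptions; the update rules shown in the text carry no explicit learning rate (i.e. eta = 1), and the full method would additionally stack a second RBM and apply the gradient-descent fine-tuning of substeps S49 to S415.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """One restricted Boltzmann machine trained with CD-1, following
    substeps S43-S48. eta and the weight initialization are assumptions
    made for this sketch."""
    def __init__(self, n_visible, n_hidden, eta=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.normal(size=(n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible-layer bias
        self.c = np.zeros(n_hidden)    # hidden-layer bias
        self.eta = eta
        self.rng = rng

    def hidden_probs(self, v):
        # S43/S46: P(h_j = 1 | v) = sigmoid(c_j + sum_i W_ij v_i)
        return sigmoid(self.c + v @ self.W)

    def cd1_step(self, v0):
        p_h0 = self.hidden_probs(v0)                       # S43
        h0 = (self.rng.random(p_h0.shape) < p_h0) * 1.0    # S44: sample h
        v1 = sigmoid(self.b + h0 @ self.W.T)               # S45: reconstruct
        p_h1 = self.hidden_probs(v1)                       # S46
        # S47: parameter updates (eta factored in)
        self.W += self.eta * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
        self.b += self.eta * (v0 - v1)
        self.c += self.eta * (p_h0 - p_h1)
        return v1

def train_rbm(rbm, data, epochs=5):
    # S48: iterate the CD-1 update over the training data P' times
    for _ in range(epochs):
        for v0 in data:
            rbm.cd1_step(v0)
    return rbm

# toy binary stand-ins for the normal/fault feature samples of S42
rng = np.random.default_rng(1)
data = (rng.random((20, 6)) < 0.5) * 1.0
rbm = RBM(n_visible=6, n_hidden=4)
train_rbm(rbm, data, epochs=5)
```

To build the deep belief network of the text, the trained RBM's hidden activations would be frozen and fed as input to a second RBM (substeps S49 and S410), with a supervised fine-tuning pass on top.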
It should be noted that:
the method used in this embodiment can be converted into program steps and apparatuses that can be stored in a computer storage medium, and the program steps and apparatuses are implemented by means of calling and executing by a controller, wherein the apparatuses should be understood as functional modules implemented by a computer program.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may be used with the teachings herein. The required structure for constructing such a device will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
For example, fig. 5 shows a schematic structural diagram of an electronic device according to an embodiment of the invention. The electronic device conventionally comprises a processor 31 and a memory 32 arranged to store computer-executable instructions (program code). The memory 32 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. The memory 32 has a storage space 33 storing program code 34 for performing any of the method steps in the embodiments. For example, the storage space 33 for the program code may comprise respective program codes 34 for implementing respective steps in the above method. The program code can be read from or written to one or more computer program products. These computer program products comprise a program code carrier such as a hard disk, a Compact Disc (CD), a memory card or a floppy disk. Such a computer program product is typically a computer readable storage medium such as described in fig. 6. The computer readable storage medium may have memory segments, memory spaces, etc. arranged similarly to the memory 32 in the electronic device of fig. 5. The program code may be compressed, for example, in a suitable form. In general, the memory unit stores program code 41 for performing the steps of the method according to the invention, i.e. program code readable by a processor such as 31, which when run by an electronic device causes the electronic device to perform the individual steps of the method described above.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.

Claims (10)

1. The rail transit traction motor fault identification method is characterized by comprising the following steps:
s1, adding white noise to an original detection signal x (t) of a rail transit traction motor to obtain a signal s (t);
s2, decomposing a signal s (t) by using an EMD decomposition algorithm to obtain an IMF component;
s3, carrying out Hilbert transformation on the obtained mean value of the IMF components to obtain an instantaneous amplitude value and an instantaneous frequency, and extracting a disturbance signal from the instantaneous amplitude value and the instantaneous frequency;
and S4, identifying the disturbance signal by using the deep belief network.
2. The method of claim 1, wherein the step S2 further comprises:
let the extremum envelope function of the signal s (t) be
f(t) = [u(t) + v(t)] / 2
Wherein u (t) and v (t) are the upper and lower envelopes, respectively, of signal s (t);
obtaining the mean value e_1 of the extremum envelope function f(t), and calculating the difference c_1 = s(t) − e_1 between the signal s(t) and e_1;
if c_1 satisfies the IMF condition, marking it as the first intrinsic mode IMF component; otherwise, taking c_1 as the new signal to be sifted and executing c_1 = s(t) − e_1 again until c_1 meets the IMF condition;
the first IMF component c1Separated from the signal s (t) to obtain r1=s(t)-c1
taking the separated r_1 as the new decomposition signal s(t), re-executing step S2, and repeating the above steps to separate further IMF components until the n-th component r_n is a monotonic function, at which point the execution ends.
3. The method of claim 2, wherein the envelope of maxima and minima of the signal s (t) is constructed by cubic spline interpolation.
4. The method of claim 2, wherein the criterion for judging whether the IMF condition is satisfied is a standard deviation coefficient SD.
5. The method of claim 4, wherein the standard deviation coefficient is

SD = Σ_{t=0}^{T} [ |c_{i−1}(t) − c_i(t)|² / c_{i−1}²(t) ]
Wherein i is the number of decomposition layers.
6. The method of claim 5, wherein the IMF condition is deemed satisfied when the standard deviation coefficient SD is less than ε_1.
7. The method of claim 6, wherein the value of ε_1 is between 0.2 and 0.3.
8. The method of claim 1, wherein the step S4 further comprises:
substep S41, initializing the number N of neuron layers, the number of neurons, the number M of data samples fed in each training pass, the number of iterations P for training the deep belief network, the number of iterations P' for training each restricted Boltzmann machine in the deep belief network, the current iteration count T of the deep belief network training, and the current iteration count T' of the restricted Boltzmann machine training;
substep S42, obtaining normal samples and fault samples;
substep S43, training the first restricted Boltzmann machine in the deep belief network: the training data are assigned to the visible layer v^(0), and the probability that they turn each hidden-layer neuron on is calculated:

P(h_j^(0) = 1 | v^(0)) = σ(c_j + Σ_i W_ij v_i^(0))

wherein σ(x) = 1/(1 + e^(−x)), the superscript distinguishes different vectors, and the subscript j represents the dimension;
substep S44, extracting a sample from the calculated probability distribution:

h_j(0) ~ Bernoulli(P(h_j(0) = 1 | v(0)))
substep S45, reconstructing the visible layer with h(0) and simultaneously extracting a sample v(1);
substep S46, calculating the probability that the hidden-layer neurons are turned on after reconstruction:
P(h_j(1) = 1 | v(1)) = σ(c_j + Σ_i W_ij v_i(1))
substep S47, updating the weight matrix W and the bias vectors b and c, wherein:
W_ij = W_ij + P(h_j(0) = 1 | v(0)) v_i(0) - P(h_j(1) = 1 | v(1)) v_i(1)
b_j = b_j + v_j(0) - v_j(1)
c_j = c_j + P(h_j(0) = 1 | v(0)) - P(h_j(1) = 1 | v(1))
substep S48, determining whether T' = P'; if not, setting T' = T' + 1 and returning to substep S43; if yes, continuing to the next substep;
substep S49, fixing the weight coefficients and bias coefficients of the first restricted Boltzmann machine, and using its final output as the input of the second restricted Boltzmann machine;
substep S410, repeating substeps S43 to S48 to train the second restricted Boltzmann machine;
substep S411, calculating the output Y_i of the second restricted Boltzmann machine;
substep S412, calculating the output error MSE:

MSE = (1/n) Σ_i (Y_i - T_i)^2,

wherein T_i denotes the target output;
substep S413, optimizing the error function by the gradient descent method, and then back-propagating it to each layer for parameter fine-tuning;
substep S414, determining whether T = P; if not, setting T = T + 1 and returning to substep S411; if yes, continuing to the next substep;
substep S415, sending the localized sample data into the trained model for classification and identification.
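Substeps S43-S410 above amount to one-step contrastive divergence (CD-1) followed by greedy layer-wise stacking. A minimal numpy sketch under stated assumptions: the learning rate `lr`, the batch-mean form of the updates, and the class and function names are all additions not present in the claim, which writes the W, b, c updates without a step size.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, rng):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible-layer bias
        self.c = np.zeros(n_hidden)    # hidden-layer bias
        self.rng = rng

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.c)      # P(h_j = 1 | v), substep S43

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b)    # P(v_i = 1 | h)

    def cd1_step(self, v0, lr=0.1):
        ph0 = self.hidden_probs(v0)                          # S43
        h0 = (self.rng.random(ph0.shape) < ph0) * 1.0        # S44: sample h(0)
        v1 = self.visible_probs(h0)                          # S45: reconstruct
        ph1 = self.hidden_probs(v1)                          # S46
        # S47: CD-1 updates, batch-averaged (learning rate is an assumption)
        self.W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
        self.b += lr * np.mean(v0 - v1, axis=0)
        self.c += lr * np.mean(ph0 - ph1, axis=0)

def train_stack(data, sizes, epochs=20, seed=0):
    """Greedy pretraining: S43-S48 per machine, S49-S410 to stack them."""
    rng = np.random.default_rng(seed)
    rbms, x = [], data
    for n_hidden in sizes:
        rbm = RBM(x.shape[1], n_hidden, rng)
        for _ in range(epochs):                   # plays the role of P'
            rbm.cd1_step(x)
        rbms.append(rbm)
        x = rbm.hidden_probs(x)                   # S49: output feeds next machine
    return rbms, x
```

Each trained machine is frozen and its hidden activations become the next machine's visible data, exactly the hand-off described in substep S49.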
9. An electronic device, wherein the electronic device comprises:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform a method according to any one of claims 1 to 8.
10. A storage medium, wherein the storage medium stores one or more programs which, when executed by a processor, implement the method of any one of claims 1-8.
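The fine-tuning stage of claim 8 (substeps S411-S415) can be illustrated with a self-contained sketch: a feature matrix stands in for the second machine's output Y_i, the MSE of a sigmoid output layer plays the role of the error function, and gradient descent adjusts that layer. The target vector `y`, the learning rate, and the single-output-layer simplification are assumptions, not the claimed architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mse(y_pred, y_true):
    """Output error of substep S412."""
    return np.mean((y_pred - y_true) ** 2)

def finetune(X, y, iters=500, lr=0.5, seed=0):
    """Gradient descent on the MSE of a sigmoid output layer (S413-S414)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0, 0.1, size=X.shape[1])
    b = 0.0
    for _ in range(iters):                     # plays the role of P
        p = sigmoid(X @ w + b)
        grad = (p - y) * p * (1 - p)           # dMSE/dz up to a constant factor
        w -= lr * X.T @ grad / len(X)
        b -= lr * np.mean(grad)
    return w, b

def classify(X, w, b):
    """S415: send sample data through the trained model for identification."""
    return (sigmoid(X @ w + b) > 0.5).astype(int)
```

On separable normal/fault features this drives the MSE well below the 0.25 of a constant 0.5 output, which is the behavior the iterate-until-T-equals-P loop of substep S414 relies on.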
CN202110624266.3A 2021-06-04 2021-06-04 Rail transit traction motor fault identification method, storage medium and electronic equipment Pending CN113392888A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110624266.3A CN113392888A (en) 2021-06-04 2021-06-04 Rail transit traction motor fault identification method, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN113392888A true CN113392888A (en) 2021-09-14

Family

ID=77618395

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110624266.3A Pending CN113392888A (en) 2021-06-04 2021-06-04 Rail transit traction motor fault identification method, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113392888A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114690038A (en) * 2022-06-01 2022-07-01 华中科技大学 Motor fault identification method and system based on neural network and storage medium
CN115221982A (en) * 2022-09-21 2022-10-21 石家庄铁道大学 Traction power supply operation and maintenance method and device, terminal and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108256556A (en) * 2017-12-22 2018-07-06 上海电机学院 Wind-driven generator group wheel box method for diagnosing faults based on depth belief network
CN111610394A (en) * 2020-05-20 2020-09-01 湘潭大学 Electric energy quality disturbance positioning and identifying method for electrified railway traction power supply system
US20200293627A1 (en) * 2019-03-13 2020-09-17 General Electric Company Method and apparatus for composite load calibration for a power system



Similar Documents

Publication Publication Date Title
Shao et al. Generative adversarial networks for data augmentation in machine fault diagnosis
Yan et al. Deep order-wavelet convolutional variational autoencoder for fault identification of rolling bearing under fluctuating speed conditions
CN106682688B (en) Particle swarm optimization-based stacked noise reduction self-coding network bearing fault diagnosis method
Liang et al. Convolutional Recurrent Neural Network for Fault Diagnosis of High‐Speed Train Bogie
CN112885372B (en) Intelligent diagnosis method, system, terminal and medium for power equipment fault sound
CN113392888A (en) Rail transit traction motor fault identification method, storage medium and electronic equipment
Duong et al. Non-mutually exclusive deep neural network classifier for combined modes of bearing fault diagnosis
CN115114848B (en) Three-phase asynchronous motor fault diagnosis method and system based on hybrid CNN-LSTM
CN116451150A (en) Equipment fault diagnosis method based on semi-supervised small sample
CN110991472B (en) Method for diagnosing minor faults of high-speed train traction system
CN114662386A (en) Bearing fault diagnosis method and system
CN110501172A (en) A kind of rail vehicle wheel condition recognition methods based on axle box vibration
CN115219197A (en) Bearing fault intelligent diagnosis method, system, medium, equipment and terminal
Hong et al. Supervised-learning-based intelligent fault diagnosis for mechanical equipment
Chabib et al. DeepCurvMRI: Deep convolutional curvelet transform-based MRI approach for early detection of Alzheimer’s disease
Li et al. A Fault-Diagnosis Method for Railway Turnout Systems Based on Improved Autoencoder and Data Augmentation
Zhang et al. Multi-sensor graph transfer network for health assessment of high-speed rail suspension systems
CN114331214A (en) Domain-adaptive bearing voiceprint fault diagnosis method and system based on reinforcement learning
Zeng et al. Rail break prediction and cause analysis using imbalanced in-service train data
Chen et al. Railway switch fault diagnosis based on Multi-heads Channel Self Attention, Residual Connection and Deep CNN
Wang et al. Using vehicle interior noise classification for monitoring urban rail transit infrastructure
CN117828531A (en) Bearing fault diagnosis method based on multi-sensor multi-scale feature fusion
Han et al. Generative adversarial network-based fault diagnosis model for railway point machine in sustainable railway transportation
CN114997749B (en) Intelligent scheduling method and system for power personnel
CN116580292A (en) Track structure state detection method, computer equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination