CN108062572A - Hydropower unit fault diagnosis method and system based on a DdAE deep learning model - Google Patents

Hydropower unit fault diagnosis method and system based on a DdAE deep learning model

Info

Publication number
CN108062572A
Authority
CN
China
Prior art keywords
ddae
network
models
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711461876.6A
Other languages
Chinese (zh)
Other versions
CN108062572B (en)
Inventor
李超顺
陈昊
邹雯
赖昕杰
陈新彪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology
Priority to CN201711461876.6A
Publication of CN108062572A
Application granted
Publication of CN108062572B
Active legal status
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Complex Calculations (AREA)

Abstract

The present invention relates to the technical field of hydropower unit fault diagnosis, and in particular to a hydropower unit fault diagnosis method and system based on a DdAE (deep denoising autoencoder) deep learning model. Built on analysis of the raw vibration data of a hydropower unit, the invention employs a deep learning feature extraction method based on a multilayer neural network model, requiring no complicated manual processing or hand-crafted feature extraction, and tunes the structural parameters of the DdAE with an artificial fish swarm algorithm (AFSA) based on random search, achieving the goal of strategy optimization. A distributed representation of the raw data is obtained by the deep denoising autoencoder model, and the reconstructed features are fed into a Softmax regression model to determine the operating condition and fault type of the hydropower unit. Analysis of network test results shows that the method can be effectively applied to the fault diagnosis of hydropower units.

Description

Hydropower unit fault diagnosis method and system based on a DdAE deep learning model
Technical field
The invention belongs to the technical field of hydropower unit fault diagnosis, and more particularly to a hydropower unit fault diagnosis method and system based on a DdAE deep learning model.
Background technology
In a hydroelectric power system, the turbine-generator units are the most critical equipment. Whether they operate safely and reliably directly determines whether the power station can safely and economically supply reliable electricity to every economic sector of the nation and to people's daily lives, and it also bears directly on the safety of the power station itself. Continuously improving and optimizing the condition monitoring and fault diagnosis system not only raises the economic and social benefits of the power station, but also advances China's fault diagnosis technology for large hydropower stations. With the steady advance of science and technology, in particular the development of signal processing, knowledge engineering, and computational intelligence, the fault diagnosis of turbine-generator units is evolving from manual diagnosis to intelligent diagnosis, from offline diagnosis to online diagnosis, and gradually from on-site diagnosis to remote diagnosis.
Conventional machine learning and signal processing techniques explore shallow learning structures containing only a single layer of nonlinear transformation. A common trait of shallow models is the single, simple structure that transforms the original input signal into the feature space of a particular problem. Typical shallow learning structures include the traditional hidden Markov model (HMM), conditional random fields (CRFs), the maximum entropy model (MaxEnt), the support vector machine (SVM), kernel regression, and the multi-layer perceptron (MLP) with a single hidden layer. For example, an SVM is a shallow linear separation model containing one feature-transformation layer or none. The limitation of shallow structures is that, given limited samples and computing units, their capacity to represent complicated functions is limited, so their generalization ability on complex classification problems is necessarily restricted.
The vibration of a hydropower unit differs from that of ordinary power machinery. Besides the vibration of the unit's own rotating or stationary parts, one must also consider the influence on the system and its components of the electromagnetic forces acting on the generator and of the hydraulic pressure acting on the flow-passage parts of the turbine. When the unit is running, fluid, mechanical, and electromagnetic effects interact; the vibration of a hydropower unit is therefore a coupled electrical-mechanical-fluid vibration. According to experience accumulated at power stations, the causes of unit vibration can be divided into mechanical, hydraulic, electrical, noise, and other factors. At present, the main methods studied and applied for the diagnosis of hydropower unit vibration faults are the fault tree method, the fuzzy diagnosis method, wavelet analysis, and neural networks.
The fault tree diagnosis method builds on the classical support vector machine (C-SVM): by integrating fuzzy clustering techniques with the support vector machine algorithm, it constructs a multi-level binary tree classifier suitable for fault diagnosis. Its shortcomings are that unforeseen faults cannot be diagnosed, and that the diagnostic result depends heavily on the correctness and completeness of the fault tree information. Moreover, hydropower unit fault diagnosis is generally a multi-fault classification problem, while the support vector machine is a typical binary classifier whose computation becomes heavy when used for multi-class classification.
The fuzzy diagnosis method uses the membership functions of set theory and the concept of a fuzzy relation matrix to resolve the uncertain relationships between faults and their symptoms. Its disadvantage is that, for a complicated diagnostic system, establishing correct fuzzy rules and membership functions is extremely difficult and labor-intensive.
Wavelet analysis can solve many problems that are intractable for the Fourier transform. It has good localization ability in both the time and frequency domains, can focus on arbitrary details of a signal, has a strong capability to recognize abrupt signal changes, and can effectively remove noise and extract useful signals. However, the wavelet basis in wavelet analysis is hard to choose; it is generally difficult to select a basis that fully meets the requirements, and the effectiveness of the wavelet transform cannot be guaranteed in high dimensions.
Neural network diagnosis methods generally use shallow networks, such as the extreme learning machine (ELM) and the radial basis function network (RBF). Such shallow networks usually must be combined with other signal processing techniques and with intelligent algorithms for parameter optimization, and serve only as the final classifier in the fault diagnosis pipeline. Their final diagnostic performance depends on the preceding signal processing, i.e. the signal feature extraction work.
Summary of the invention
The object of the present invention is to provide a hydropower unit fault diagnosis method and system based on a DdAE deep learning model, so as to solve the problem of the cumbersome manual work brought by massive monitoring data, and to improve the accuracy and stability of the fault diagnosis system.
The technical solution adopted by the present invention is:
In a first aspect, a hydropower unit fault diagnosis method based on a DdAE deep learning model is provided, the method comprising the following steps:
Step (1): raw data preprocessing.
The raw hydropower unit vibration data are taken as the input sample set x and first normalized, so that the processed data are distributed between -1 and 1 in proportion to the distribution of the raw data, yielding a new input sample set x'. The normalized sample set x' is then divided into k data blocks; considering the periodicity of hydropower unit vibration faults, the grouping is done on whole periods so as not to destroy fault information. Selecting n of the k data blocks and combining them forms one training input for the neural network model, so that C(k, n) = k!/(n!(k-n)!) training samples can be obtained, which increases the reusability of the finite data; every training sample is also equivalent to having undergone a denoising treatment. The specific value of k is determined by the actual conditions of the collected data.
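As an illustration of the grouping in step (1), the following Python sketch (the function name and toy signal are assumed for illustration, not taken from the patent) splits a normalized signal into k period-aligned blocks and enumerates the C(k, n) combinations of n blocks that serve as training samples:

```python
from itertools import combinations

def make_training_samples(x_norm, k, n):
    """Split the normalized signal into k blocks and return every
    combination of n blocks as one flattened training sample."""
    block_len = len(x_norm) // k
    blocks = [x_norm[i * block_len:(i + 1) * block_len] for i in range(k)]
    return [sum(group, []) for group in combinations(blocks, n)]

signal = [float(i) for i in range(12)]       # toy normalized signal
samples = make_training_samples(signal, k=4, n=2)
print(len(samples))  # C(4, 2) = 6 training samples
```

With k = 4 and n = 2 this yields C(4, 2) = 6 overlapping training samples from a single record, which is the data-reuse effect described above.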
Step (2): unsupervised training based on deep learning.
A DdAE network model is established and trained with the training samples. The DdAE network model is trained by unsupervised greedy layer-wise training with dropout added, yielding the connection weights of the DdAE network model. This process is an unsupervised feature extraction process: greedy layer-wise training provides a lossless guarantee for feature extraction and converges at the fastest rate, and the initialized connection weights of each training run guide the features in different directions, guaranteeing diversity in feature selection.
Step (3): supervised training based on the Softmax regression model and BP.
The unsupervised feature extraction performed by the DdAE network model in step (2) yields a set of reconstructed feature vectors. The Softmax regression model is chosen as the classification method for hydropower unit faults, handling the multi-class classification of the unit under various faults. The reconstructed feature vectors pass through one fully connected combination layer, producing a linear combination of features that serves as the input of the Softmax model, which computes the probability of occurrence of each fault. The connection weights of the feature-combination network are corrected by minimizing the error function, and the connection weights of the whole network are fine-tuned with gradient descent and the back-propagation algorithm.
Step (4): structural parameter optimization based on AFSA.
This is an adaptive structure adjustment process that contains both the unsupervised training based on deep learning and the supervised training based on the Softmax regression model and BP described above. The structural parameters of the DdAE and the hyperparameters in the error function are taken as the target parameters of the AFSA (artificial fish swarm algorithm), and the error function of the model output is taken as the objective function of the AFSA. The hyperparameters of the whole model undergo random-search optimization by the AFSA: each iteration step of each artificial fish in the AFSA optimization is one parameter optimization run of a DdAE network model, and the artificial fish with the best position finally yields the optimal model.
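The AFSA involves prey, swarm, and follow behaviours; as a greatly simplified sketch of the random-search idea behind this step, the following Python code implements only a prey-style local random search over a toy objective standing in for the DdAE validation error (all names and parameter values are illustrative assumptions, not from the patent):

```python
import random

def afsa_minimize(objective, dim, bounds, n_fish=5, visual=0.3,
                  try_number=3, iters=30, seed=0):
    """Simplified artificial-fish-swarm-style search: each fish tries
    up to try_number random positions within its visual range and
    moves when the objective improves (prey behaviour only)."""
    rng = random.Random(seed)
    lo, hi = bounds
    fish = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_fish)]
    best = min(fish, key=objective)[:]
    for _ in range(iters):
        for f in fish:
            for _ in range(try_number):
                cand = [min(hi, max(lo, xi + rng.uniform(-visual, visual)))
                        for xi in f]
                if objective(cand) < objective(f):
                    f[:] = cand
                    break
        best = min(fish + [best], key=objective)[:]
    return best

# toy objective standing in for the DdAE validation error
best = afsa_minimize(lambda v: sum(x * x for x in v), dim=2, bounds=(-1.0, 1.0))
print(best)
```

In the patent's setting, the position vector would hold the DdAE structural parameters and loss hyperparameters, and evaluating the objective would mean one full DdAE training and validation run.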
In a second aspect, the present invention also provides a hydropower unit fault diagnosis system based on a DdAE deep learning model. The system comprises a training data processing module, a neural network model training module, a reconstructed feature vector generation module, and a fault probability computation module, connected in sequence; specifically:
the training data processing module obtains the data set and extracts n data blocks from it as the training data of the DdAE network model;
the neural network model training module establishes the DdAE network model, trains it with the training data, and obtains the connection weights of the DdAE network model;
the reconstructed feature vector generation module obtains a set of reconstructed feature vectors from the DdAE network model composed of the connection weights;
the fault probability computation module passes the reconstructed feature vectors through one fully connected combination layer to obtain a linear combination of features, which serves as the input of the Softmax model to compute the probability of occurrence of each fault.
Compared with the prior art, the advantages of the present invention are:
(1) Unlike traditional time-frequency domain signal feature extraction methods, and in order to break free of the dependence on signal processing techniques and diagnostic experience, a deep autoencoder feature learning method is proposed that automatically and efficiently learns useful fault features from the measured vibration signal.
(2) In a preferred embodiment of the present invention, to eliminate the influence of ambient noise and improve feature learning ability, the influence of noise is reduced by a "break up and regroup" operation on the data in the preprocessing of the data set, and a new deep autoencoder loss function is designed with maximum correntropy in the unsupervised training process. This avoids the influence of unknown noise well and effectively improves the anti-noise ability of the deep learning model, giving it higher accuracy and stronger stability in fault diagnosis.
(3) In a preferred embodiment of the present invention, to avoid the overfitting problem that frequently occurs in deep learning models, the hidden layer neurons of each autoencoder AE are subjected to the dropout operation during unsupervised learning, solving the overfitting problem of parameter optimization; and a parameter regularization term is added in the objective function design of the AFSA method, solving the overfitting problem of structure optimization.
(4) In a preferred embodiment of the present invention, the rectified linear unit (ReLU) function is chosen as the excitation function, avoiding the problems of gradient vanishing and gradient explosion during BP.
(5) In a preferred embodiment of the present invention, the artificial fish swarm algorithm (AFSA) is applied to the hyperparameter tuning of deep learning, reducing the manual work of hyperparameter selection, so that the intelligent learning ability of the whole method is enhanced, and with it the generalization ability on many similar problems.
(6) In a preferred embodiment of the present invention, the training of the deep learning hydropower unit fault diagnosis model is divided into two processes: a parameter optimization process and a strategy optimization process, which together converge to the optimum faster and more accurately. The parameter optimization process includes the unsupervised training process and the supervised training process: the unsupervised process uses greedy layer-wise optimization to provide good pre-training parameters for the BP of the supervised process, and the two training processes cooperate to improve the diagnostic precision of the model.
Description of the drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required in the embodiments of the present invention are briefly described below. Obviously, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of a hydropower unit fault diagnosis method based on a DdAE deep learning model provided by an embodiment of the present invention;
Fig. 2 is a structural diagram of an autoencoder provided by an embodiment of the present invention;
Fig. 3 is a structural diagram of a DdAE provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of a greedy layer-wise training model provided by an embodiment of the present invention;
Fig. 5 is a flow diagram of the AFSA provided by an embodiment of the present invention;
Fig. 6 is a flowchart of a hydropower unit fault diagnosis method based on a DdAE deep learning model provided by an embodiment of the present invention;
Fig. 7 is an architecture diagram of a hydropower unit fault diagnosis system based on a DdAE deep learning model provided by an embodiment of the present invention.
Detailed description of the embodiments
In order to make the purpose, technical solution, and advantages of the present invention clearer, the present invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it. In addition, the technical features involved in the various embodiments of the present invention described below can be combined with each other as long as they do not conflict.
Since 2006, deep learning, proposed by the Canadian professor Hinton, has become an emerging field of machine learning. Professor Hinton published a paper in Science proposing a deep learning model, the deep belief network (DBN), a probabilistic generative model containing multiple layers of stochastic hidden variables, with undirected symmetric connections between the top two layers and top-down directed connections between the lower layers.
Deep learning allows computational models with multiple processing layers to learn representations of data at many levels of abstraction. These methods have brought significant improvements in many areas, including state-of-the-art speech recognition, visual object recognition, object detection, and many other fields such as drug discovery and genomics. Deep learning can discover intricate structure in big data, using the back-propagation (BP) algorithm to drive this discovery process. The BP algorithm guides the machine in how to use the error from the previous layer to change this layer's internal parameters, which are then used to compute the layer's representation. Deep convolutional networks have brought breakthroughs in processing images, video, speech, and audio, while recurrent networks have shone in processing sequential data such as text and speech.
The development of deep learning technology has had a broad influence on the field of signal and information processing and will continue to influence machine learning and other key areas of artificial intelligence, among which fault classification and diagnosis is an important field of development.
Embodiment 1:
The fault diagnosis method flow of the embodiment of the present invention, based on a deep denoising autoencoder (DdAE) deep learning model, is shown in Fig. 1 and includes the following steps.
In step 201, the data set is obtained, and n data blocks are extracted from it as the training data of the DdAE network model.
Taking turbine-generator units as an example, in step 201 the raw hydropower unit vibration data are taken as the input sample set x and normalized before being used as training data, so that the processed data are distributed between -1 and 1 in proportion to the distribution of the raw data, yielding a new input sample set x'. The normalized sample set x' is then divided into k data blocks; considering the periodicity of hydropower unit vibration faults, the grouping is done on whole periods so as not to destroy fault information. Selecting n of the k data blocks and combining them forms one training input for the neural network model, so that C(k, n) = k!/(n!(k-n)!) training samples can be obtained, which increases the reusability of the finite data; every training sample is also equivalent to having undergone a denoising treatment. The specific value of k is determined by the actual conditions of the collected data.
The normalization can be implemented as follows: the maximum and minimum values in the sample set are denoted X_max and X_min; every datum in the sample set is then mapped by the formula x' = 2(x - X_min)/(X_max - X_min) - 1, and the results form the normalized sample set x' of the hydropower unit vibration data sample set x.
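A minimal Python sketch of this normalization, assuming the mapping of the sample set onto [-1, 1] described above:

```python
def normalize(x):
    """Scale a sample set to [-1, 1]: x' = 2*(x - Xmin)/(Xmax - Xmin) - 1."""
    x_min, x_max = min(x), max(x)
    return [2.0 * (v - x_min) / (x_max - x_min) - 1.0 for v in x]

x = [3.0, 5.0, 9.0, 11.0]
print(normalize(x))  # [-1.0, -0.5, 0.5, 1.0]
```

The extreme values map exactly to -1 and 1, and all intermediate values keep their relative proportions, as required by step 201.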
In step 202, the DdAE network model is established and trained with the training data, obtaining the connection weights of the DdAE network model.
The DdAE network model is composed of at least two AEs stacked so that the hidden layer of the previous AE serves as the input layer of the next AE. The value of each hidden layer node of each AE is computed by feeding the weighted linear sum of the input layer node values into an excitation function; i.e. the input-to-output mapping of the hidden layer passes through an excitation function. In a deep neural network, the first layer is called the input layer, the last layer the output layer, and the layers in between hidden layers; the functional relation between the input and output of a hidden layer node is called the excitation function. Here, the excitation function may be the rectified linear unit (ReLU) function.
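The stacking described here can be sketched as a plain NumPy forward pass, where each hidden layer applies ReLU to a weighted linear sum of the previous layer's values (the weights below are random placeholders for illustration, not trained ones):

```python
import numpy as np

def relu(x):
    """Rectified linear unit: max(0, x) elementwise."""
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """Forward pass through stacked AE hidden layers: each hidden node
    applies ReLU to a weighted linear sum of the previous layer."""
    h = x
    for W, b in zip(weights, biases):
        h = relu(W @ h + b)
    return h

rng = np.random.default_rng(0)
sizes = [8, 5, 3]  # input dimension 8, two stacked hidden layers
Ws = [0.1 * rng.normal(size=(m, n)) for n, m in zip(sizes, sizes[1:])]
bs = [np.zeros(m) for m in sizes[1:]]
features = forward(rng.normal(size=8), Ws, bs)
print(features.shape)  # (3,)
```

The final hidden layer output plays the role of the reconstructed feature vector that is later passed to the Softmax classifier.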
In step 203, the structural parameters of the DdAE and the hyperparameters in the error function are taken as the target parameters of the AFSA algorithm, and the error function of the model output is taken as the objective function of the AFSA. The hyperparameters of the whole model undergo random-search optimization by the AFSA; each iteration step of each artificial fish in the AFSA optimization is one parameter optimization run of a DdAE network model, so as to obtain the optimal DdAE network model.
In step 204, a set of reconstructed feature vectors is obtained from the DdAE network model composed of the connection weights.
In step 205, the reconstructed feature vectors pass through one fully connected combination layer, producing a linear combination of features that serves as the input of the Softmax model, which computes the probability of occurrence of each fault.
The embodiment of the present invention provides a hydropower unit fault diagnosis method based on a DdAE deep learning model, so as to solve the problem of the cumbersome manual work brought by massive monitoring data, and to improve the accuracy and stability of the fault diagnosis system.
In the embodiment of the present invention, there is also a preferred implementation for further optimizing the DdAE network model, specifically: the structural parameters of the DdAE and the hyperparameters in the error function are taken as the target parameters of the AFSA algorithm, the error function of the model output is taken as the objective function of the AFSA, and the hyperparameters of the whole model undergo random-search optimization by the AFSA; each iteration step of each artificial fish in the AFSA optimization is one parameter optimization run of a DdAE network model, so as to obtain the optimal DdAE network model. This DdAE optimization process can be carried out in due course during steps 202-205 above, and the condition triggering it involves the structural parameters of the DdAE and the hyperparameters in the error function.
The training of the DdAE network model with the training data in step 202 of the embodiment of the present invention specifically includes training the DdAE network model by the unsupervised greedy layer-wise training method with dropout added, which works as follows:
in the unsupervised training process, the input layer of each layer of the DdAE network model is treated as the input layer of an independent AE, the next layer as the hidden layer of this independent AE, and an output layer of the same dimension as the input layer is constructed, thereby reconstructing multiple AEs that are trained individually; during this unsupervised training, the dropout operation is applied with probability P to the hidden layer neurons of each independent AE. The unsupervised greedy layer-wise training with dropout means that, during the training of the DdAE network, neural network units are temporarily discarded from the network with a certain probability (see the later discussion of dropout for the relevant details).
A standard autoencoder loss function designed with MSE is not robust for feature learning on sophisticated signals and is highly sensitive to noise. Correntropy is a nonlinear, local similarity measure, and maximum correntropy is insensitive to complicated, non-stationary background noise. The maximum correntropy criterion therefore has the potential to match the features of complex signals and can remedy the shortcomings of MSE. In the embodiment of the present invention, a new autoencoder (AE) loss function is designed with maximum correntropy as follows.
The loss function of the autoencoder AE is

J(ω) = -(1/m) Σ_{i=1}^{m} G_σ(z_i - ẑ_i)

where ω is the parameter vector composed of the AE weight parameters, m is the dimension of the input layer, z_i is the true label of the fault classification, ẑ_i is the diagnostic result of this method, and G_σ(·) is a Mercer kernel function used to estimate the correntropy between the actual and predicted values. The Mercer kernel is usually taken to be the Gaussian kernel, whose expression is

G_σ(x) = exp(-x² / (2σ²))

where G_σ is the Gaussian kernel function and σ is the variance parameter of the Gaussian distribution, generally taking a value between 0 and 10; the exponential is taken with base e, the natural constant.
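A minimal NumPy sketch of this loss (illustrative, not the patent's implementation): the Gaussian kernel of each residual is averaged and negated, so a perfect reconstruction attains the minimum value -1:

```python
import numpy as np

def gaussian_kernel(x, sigma):
    """G_sigma(x) = exp(-x^2 / (2 * sigma^2))."""
    return np.exp(-x ** 2 / (2.0 * sigma ** 2))

def correntropy_loss(z, z_hat, sigma=1.0):
    """J(w) = -(1/m) * sum_i G_sigma(z_i - z_hat_i): maximizing the
    correntropy between targets and reconstructions minimizes J."""
    return -np.mean(gaussian_kernel(z - z_hat, sigma))

z = np.array([1.0, 0.0, -1.0])
print(correntropy_loss(z, z))  # -1.0, the minimum, at perfect reconstruction
print(correntropy_loss(z, z + 0.5) > correntropy_loss(z, z))  # True
```

Unlike MSE, large residuals saturate the Gaussian kernel instead of growing quadratically, which is the noise-insensitivity property argued above.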
Training starts from the first-layer AE with the goal of minimizing the error function J(ω), optimizing the parameter set θ; thereafter the input of each AE is the hidden layer output of the previous layer, and the whole network is trained layer by layer. The unsupervised training process yields a set of pre-trained connection weights for the whole neural network, on the basis of which the next step of supervised training, i.e. step 205 of Embodiment 1 of the present invention, is carried out.
In step 202 of the embodiment of the present invention, the DdAE network model can thus be trained by the unsupervised greedy layer-wise training method with dropout, obtaining the connection weights of the DdAE network model. This process is an unsupervised feature extraction process: greedy layer-wise training provides a lossless guarantee for feature extraction and converges at the fastest rate, and the initialized connection weights of each training run guide the features in different directions, guaranteeing diversity in feature selection.
In the embodiments of the present invention, dropout means that during the training of a deep learning network, a neural network unit is temporarily discarded from the network with a certain probability; that is, whether the unit is ignored in the current round of input computation is decided probabilistically, so that it does not participate in the current round of computation, without affecting its participation in the next round of optimization. In a standard neural network, the correlation between nodes enlarges the range over which the noise of one node exerts influence, weakening the generalization ability of the network and causing overfitting; dropout breaks this correlation and thereby avoids these problems.
Throughout feature learning, dropout makes the learned feature vectors sparser and contributes to the sparse and distributed representations of the DdAE network model. The drop probability P of dropout is preferably chosen in the range 0.5 to 0.8.
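The dropout operation described above can be sketched as a random mask over hidden-layer activations. The rescaling by 1/(1 − P) ("inverted dropout") is a common implementation convention assumed here, not stated in the patent; it keeps the expected activation unchanged at test time.

```python
import numpy as np

def dropout(activations, p_drop, rng=None):
    # Each hidden unit is dropped (zeroed) independently with probability
    # p_drop for the current training round only; the surviving units are
    # rescaled by 1/(1 - p_drop) so the expected activation is unchanged.
    rng = np.random.default_rng(rng)
    keep = rng.random(np.shape(activations)) >= p_drop
    return np.asarray(activations, float) * keep / (1.0 - p_drop)
```

A fresh mask is drawn every round, which is exactly the "decide again by probability in the next round" behavior the description requires.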
With reference to the embodiment of the present invention, there is a preferred implementation that can be used to realize step 203. Specifically, the connection weights of the feature-combination network are corrected by minimizing an error function, and the connection weights of the whole network are fine-tuned with gradient descent and the back-propagation (BP) algorithm.
The back-propagation process in the embodiment of the present invention is a gradient-based optimization process: the Softmax classification result is compared with the training-data label to find the feature-combination serial number j corresponding to the correct fault type, and the gradient is calculated by the following formula:

Δ_i = e^{z_i} / Σ_{k=1}^{K} e^{z_k} − 1(i = j)

where Δ_i is the i-th element of the gradient vector Δ, j is the serial number of the true fault type corresponding to the label, z_j is the feature-combination value corresponding to the true fault type, z_i is the feature-combination value of the fault type with serial number i, z_k is the feature-combination value of the k-th fault type, and K is the total number of classifiable faults. The deep denoising autoencoder (DdAE) model proposed in the embodiments of the present invention is designed on the basis of the traditional single-hidden-layer autoencoder (as shown in Fig. 2). A single-hidden-layer autoencoder (AE) consists of an input layer, a hidden layer and an output layer; in theory the input-layer and output-layer results are required to be equal, meaning that the features represented by the hidden-layer nodes can reconstruct the input-layer data, achieving the purpose of lossless feature extraction. The DdAE network model discards the output layers of multiple AEs and stacks them by taking the hidden layer of the preceding AE as the input layer of the following AE (as shown in Fig. 3); the value of each hidden-layer node is the weighted sum of the node values of its input layer, fed into an activation function for calculation. The input-to-output mapping of each hidden layer is thus connected through an activation function; in the embodiment of the present invention, the ReLU function is selected as the activation function. In the entire DdAE network model, except for the first layer and the last layer, all inter-layer neuron inputs and outputs are connected through activation functions. The good properties of the ReLU function effectively avoid gradient attenuation during BP and guarantee the optimization rate of training.
The embodiment of the present invention also provides a method for initializing the unsupervised training, specifically:
The initialization of the unsupervised training includes the structure initialization of the DdAE network model and the initialization of its connection-weight parameters. For the structure initialization, the embodiment of the present invention uses an empirical formula method: the input layer of the DdAE network model has 128 nodes, the number of neuron nodes is halved layer by layer, and the final output layer has 8 nodes, fully connected to a combination layer of 6 nodes whose linear combination is input into the Softmax model for classification; the DdAE network model part is designed with 5 hidden layers in total. The inter-layer connection-weight parameters of the DdAE network model are initialized with the empirical formula

W ~ U(−√6/√(n_j + n_{j+1}), +√6/√(n_j + n_{j+1}))

where n_j is the number of neuron nodes in the layer preceding the weight matrix W and n_{j+1} is the number of neuron nodes in the layer following it; the weight matrix W is initialized according to this uniform distribution.
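The empirical initialization formula above matches the well-known Xavier/Glorot uniform scheme; a sketch under that assumption:

```python
import numpy as np

def init_weights(n_j, n_j1, rng=None):
    # W ~ U(-sqrt(6)/sqrt(n_j + n_{j+1}), +sqrt(6)/sqrt(n_j + n_{j+1})),
    # where n_j / n_{j+1} are the node counts of the layers W connects
    rng = np.random.default_rng(rng)
    bound = np.sqrt(6.0) / np.sqrt(n_j + n_j1)
    return rng.uniform(-bound, bound, size=(n_j, n_j1))
```

The bound shrinks as the connected layers grow, keeping the variance of activations roughly constant from layer to layer.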
On the other hand, the embodiment of the present invention also provides a preferred Softmax model, whose specific function expression is:

σ(z)_j = e^{z_j} / Σ_{k=1}^{K} e^{z_k}

where z is the feature vector corresponding to the various faults, σ(z)_j is the fuzzy evaluation value of the j-th fault, z_j is the feature-combination value of the j-th fault type, z_k is the feature-combination value of the k-th fault type, and K is the total number of classifiable faults.
The result of the unsupervised learning part, i.e. the reconstructed feature vector, is linearly combined through a fully-connected network to obtain the combined feature vector z representing the various faults; z is input into the Softmax model to obtain the degree of membership to each fault, and the fault type with the maximum membership degree is taken as the final result.
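The membership computation above can be sketched in a few lines; the max-shift before exponentiation is a standard numerical-stability detail assumed here:

```python
import numpy as np

def softmax(z):
    # sigma(z)_j = exp(z_j) / sum_k exp(z_k); subtracting max(z) first
    # keeps the exponentials from overflowing without changing the result
    e = np.exp(np.asarray(z, dtype=float) - np.max(z))
    return e / e.sum()

def diagnose(z):
    # The fault type with maximum membership degree is the final result
    return int(np.argmax(softmax(z)))
```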
In the embodiments of the present invention, the artificial fish swarm algorithm (AFSA) is a popular optimization method; compared with other optimization algorithms, it has the advantages of fast convergence, high tolerance to initial values and strong robustness, and it can find the global optimum. Therefore, the embodiment of the present invention uses AFSA to perform secondary optimization of the key parameters of the deep autoencoder.
In the embodiments of the present invention, the specific implementation of the proposed artificial fish swarm algorithm (AFSA) is as shown in Fig. 5 and comprises the following steps:
Step 1: Prepare the deep autoencoder model, whose initial parameters are obtained by the unsupervised learning of step 202 of the embodiment of the present invention.
Step 2: Set the basic parameters of the artificial fish swarm algorithm, including the fish swarm size L, the adjustment ranges LB and UB of the parameters, the visual range V of the swarm, the maximum step size S of fish movement, the number of foraging attempts try_number, the maximum number of generations Maxgen and the crowding factor δ. The objective function of the AFSA is designed from the fault classification accuracy of the deep autoencoder.
Step 3: Within the parameter variation range, generate the initial state of the fish swarm from the initial model parameter set of Step 1. Establish a bulletin board to record the optimal position of the swarm and the minimum objective function value of each generation.
Step 4: Each artificial fish (AF) attempts an appropriate behavior according to the optimization rules. Update the bulletin board to record the optimal parameter values.
Step 5: Check whether the objective function value corresponding to the optimal parameter values reaches the optimization goal; if so, complete the optimization and output the optimized parameters. Otherwise, check whether the number of generations has reached the maximum Maxgen; if so, complete the optimization and output the optimized parameters; otherwise return to Step 4.
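The Step 1-5 loop above can be sketched as follows. This is a simplified skeleton: a bounded random perturbation stands in for the swarm/follow/prey behaviors, and a toy quadratic stands in for the real objective (the DdAE fault classification error); all names are illustrative.

```python
import numpy as np

def afsa(objective, lb, ub, n_fish=10, max_gen=30, step_frac=0.05, seed=0):
    # Initialize L fish inside [LB, UB]; each generation every fish attempts
    # a move, and a bulletin board keeps the best position/value found.
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    fish = rng.uniform(lb, ub, size=(n_fish, lb.size))
    scores = np.array([objective(f) for f in fish])
    best = int(np.argmin(scores))
    bulletin_x, bulletin_y = fish[best].copy(), scores[best]
    for _ in range(max_gen):
        for i in range(n_fish):
            step = rng.uniform(-1, 1, lb.size) * step_frac * (ub - lb)
            trial = np.clip(fish[i] + step, lb, ub)
            y = objective(trial)
            if y < scores[i]:           # keep improving moves only
                fish[i], scores[i] = trial, y
            if scores[i] < bulletin_y:  # update the bulletin board
                bulletin_x, bulletin_y = fish[i].copy(), scores[i]
    return bulletin_x, bulletin_y
```

The termination test here is simply the generation cap; a target-value check as in Step 5 could be added inside the loop.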
With reference to the embodiment of the present invention, a strategy-optimization learning objective function is also provided, namely an error function based on the kernel extreme learning machine (KELM) theory. The smaller the training error of a feed-forward neural network and the smaller the norm of its weights, the better the generalization performance the network tends to obtain. ELM tends to both minimize the training error and maximize the sparsity of the features:

min ‖Hβ − T‖²  and  ‖β‖

where β is the feature vector, H is the feature-combination matrix and T is the true label vector. Hβ can be represented by the Softmax classification result vector of the third step above. The specific objective function is designed to minimize the following function:

J(θ) = (1/m)·Σ_{i=1}^{m} ‖x_i − z_i‖²

where x_i is the diagnostic result of the i-th input group, z_i is the label vector corresponding to the i-th input group, and m is the training set size; θ is the parameter vector to be optimized, including the number of hidden layers of the DdAE, the number of neurons n_j of each layer, the Gaussian kernel variance σ and the drop probability P of dropout.
Embodiment 2:
The embodiment of the present invention is a composite scheme integrating many of the extensions and preferred schemes of Embodiment 1; refer to the flow chart of the hydro-generating unit fault diagnosis method based on the DdAE deep learning model shown in Fig. 6. This embodiment mainly includes two processes: training and testing of the DdAE network model. The training process of the DdAE network model may be summarized as follows:
Step (1): Raw data preprocessing
The embodiment of the present invention takes the raw vibration data of the hydropower unit as the input sample set x. Normalization is first applied so that the processed data are distributed between −1 and 1 while preserving the distribution proportions of the raw data, giving a new input sample set x'. The normalized sample set x' is then divided into k data blocks; considering the periodicity of hydropower-unit vibration faults, the grouping is aligned to whole rotation cycles so as not to destroy the fault information. n of the k data blocks are randomly selected and combined as the training data input to the neural network model, and the distributed expression of the data features is built by the deep learning of the multi-layer neural network. In this way C(k, n) = k!/(n!·(k − n)!) groups of training data can be obtained, increasing the reusability of the finite data, and each group of training data is equivalent to having undergone denoising; the specific value of k is determined according to the actual conditions of the collected data. The entire sample set is used as the test set of the model.
The above normalization is specifically:
Suppose the obtained hydropower-unit vibration data sample set is X, and let the maximum and minimum values in the sample set be X_max and X_min. Every datum in the sample set is then transformed by x' = 2(x − X_min)/(X_max − X_min) − 1; the new sample set x' thus obtained is the normalized version of the original sample set.
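A sketch of this min-max normalization to [−1, 1], assuming a NumPy array as the sample set:

```python
import numpy as np

def normalize(x):
    # x' = 2 * (x - X_min) / (X_max - X_min) - 1: maps the sample set
    # into [-1, 1] while preserving its distribution proportions
    x = np.asarray(x, dtype=float)
    return 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
```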
Step (2): Initialize the AFSA model
Set the basic parameters of the artificial fish swarm algorithm, including the fish swarm size L (representing L different sets of DdAE network model hyper-parameters), the adjustment ranges LB and UB of the parameters (the lower and upper bounds of the adjustable range of the DdAE hyper-parameters), the visual range V of the swarm (the maximum range within which two artificial fish can influence each other), the maximum step size S of fish movement (the maximum amount by which a DdAE hyper-parameter may be adjusted in each iteration), the maximum number of foraging attempts try_number (the maximum number of explorations within the visual range under the guidance of the DdAE hyper-parameters), the maximum number of generations Maxgen (the maximum number of iterative optimizations of all initialized DdAE hyper-parameter sets) and the crowding factor δ. Each AF in the AFSA model represents a DdAE network model with determined hyper-parameters (abbreviated as AF in the embodiment of the present invention), and the position of an AF determines the hyper-parameter set of the DdAE network model it represents (abbreviated as the AF position).
The initialization method of the above AFSA model is specifically:
The AF position parameters of the artificial fish swarm model include the number of network layers Layer of the DdAE network model, the number of neuron nodes Net_i of each layer, the variance σ of the Gaussian distribution, and the dropout probability P. Layer is generally chosen between 3 and 8 and initialized to a random value in this range. Net_i is chosen between the desired number of fault classes and 1024; Net_1 is generally initialized to a power of 2 according to the dimension of one input datum, the initial values of the subsequent Net_i are obtained by successively halving Net_1, and the output of the last layer is not less than the required number of fault classes. σ is generally chosen between 0 and 10 and initialized to 1; the probability P is chosen between 0.5 and 0.8 and generally initialized to a random value in that range. The fish swarm size L is initialized between 10 and 100, the visual range V between 0.01 and 0.02 of the entire parameter adjustment range, the maximum step size S to 2·V, the maximum number of attempts try_number between 5 and 10, the maximum number of generations Maxgen between 10 and 20, and the crowding factor δ between 0.2 and 0.5.
Step (3): Unsupervised training process based on deep learning
For the hyper-parameter set represented by the position of each artificial fish in the AFSA model, a DdAE network model is established and trained with the training data obtained in step (1). The DdAE network model is trained by the unsupervised greedy layer-wise training method incorporating dropout, yielding the connection weights of the DdAE network model. This process is an unsupervised feature extraction process: greedy layer-wise training guarantees losslessness of the feature extraction and converges at the fastest rate, and the connection-weight initialization of each training run guides the features in different directions, guaranteeing diversity of feature selection.
The above deep denoising autoencoder (DdAE) model is specifically:
The deep denoising autoencoder (DdAE) model proposed in the embodiment of the present invention is designed on the basis of the traditional single-hidden-layer autoencoder (as shown in Fig. 2). A single-hidden-layer autoencoder (AE) consists of an input layer, a hidden layer and an output layer; in theory the input-layer and output-layer results are required to be equal, meaning that the features represented by the hidden-layer nodes can reconstruct the input-layer data, achieving the purpose of lossless feature extraction. The DdAE network model discards the output layers of multiple AEs and stacks them by taking the hidden layer of the preceding AE as the input layer of the following AE (as shown in Fig. 3); the value of each hidden-layer node is the weighted sum of the node values of its input layer, fed into an activation function for calculation. The input-to-output mapping of each hidden layer is connected through an activation function; the embodiment of the present invention selects the ReLU function as the activation function, whose good properties effectively avoid gradient attenuation during BP and guarantee the optimization rate of training.
The above unsupervised greedy layer-wise training process is specifically:
In the unsupervised training process, each layer's connection weight W_i of the DdAE network model is trained individually, starting from the first-layer connection weight W_1. The layer of neurons preceding W_1 is regarded as the input layer of an independent single-hidden-layer autoencoder, and the layer following W_1 as its hidden layer; an output layer of the same dimension as the input layer is then constructed for it, with hidden-to-output connection weight W_1', whose initial value is W_1^T. This constructs the structure of a single-hidden-layer AE, as in Fig. 4. With the training data as input and the designed objective function as guidance, this single-hidden-layer AE is trained; the W_1 obtained after training is the first-layer connection weight of the DdAE network model. The hidden-layer output data of this single-hidden-layer AE, computed with the trained connection weight, then serve as the input data for training the next layer, and so on layer by layer.
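The layer-by-layer procedure above can be sketched as follows. The inner AE training routine is abstracted away as a callable (`train_single_ae` is a placeholder, not from the patent; in a real run it would optimize the correntropy loss with the decoder initialized to W0.T), so the sketch only shows the stacking logic.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def greedy_pretrain(data, layer_sizes, train_single_ae, seed=0):
    # For each layer: view the previous hidden output as the input layer
    # of an independent single-hidden-layer AE, train that AE, keep its
    # encoder weights, and feed its hidden output to the next layer.
    rng = np.random.default_rng(seed)
    weights, x = [], np.asarray(data, float)
    for n_out in layer_sizes:
        W0 = rng.normal(0.0, 0.01, size=(x.shape[1], n_out))
        W = train_single_ae(x, W0)   # returns trained encoder weights
        weights.append(W)
        x = relu(x @ W)              # hidden output becomes next input
    return weights
```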
In the unsupervised training process, the dropout operation is applied with probability P to the hidden-layer neurons of each independent AE.
The standard autoencoder loss function is designed on the basis of mean-square error (MSE), which is not robust for feature learning of complex signals and is highly sensitive to noise, being easily affected by even a small amount of noise. Correntropy is a nonlinear, local similarity measure, and maximum correntropy is insensitive to complex and non-stationary background noise. Maximum correntropy therefore has the potential to match the features of complex signals and can overcome the shortcomings of MSE. In the embodiment of the present invention, a new autoencoder loss function is designed using maximum correntropy as follows:

J(ω) = −(1/m)·Σ_{i=1}^{m} G_σ(z_i − ẑ_i)

where ω is the parameter vector composed of the AE weight parameters, m is the input-layer dimension, z_i is the desired output, ẑ_i is the output reconstructed from the hidden-layer features, and G_σ is the Mercer kernel function, generally taken as the Gaussian kernel, which is used to estimate the cross entropy between the actual and predicted values; its expression is as follows:

G_σ(x) = exp(−x²/(2σ²))

where σ is the variance of the Gaussian distribution, generally valued between 0 and 10, and e is the base of the natural logarithm.
The unsupervised training process yields a set of pre-trained connection weights for the entire neural network; supervised training is carried out in the next step on the basis of this result.
The dropout operation in the above unsupervised training process is specifically:
Dropout means that during the training of a deep learning network, a neural network unit is temporarily discarded from the network with a certain probability, i.e. it temporarily does not participate in the next computation. In a standard neural network, the correlation between nodes enlarges the range over which the noise of one node exerts influence, weakening the generalization ability of the network and causing overfitting; dropout breaks this correlation and avoids these problems.
Throughout feature learning, dropout makes the learned feature vectors sparser and contributes to the sparse and distributed representations of the DdAE network model.
The initialization method of the above unsupervised training is specifically:
The embodiment of the present invention uses an empirical formula method: the inter-layer connection-weight parameters of the DdAE network model are initialized with the empirical formula W ~ U(−√6/√(n_j + n_{j+1}), +√6/√(n_j + n_{j+1})), where n_j is the number of neuron nodes in the layer preceding the weight matrix W and n_{j+1} is the number of neuron nodes in the layer following it; the weight matrix W is initialized according to this uniform distribution.
Step (4): Training process based on the Softmax regression model and BP
Through the unsupervised feature extraction of step (3), a group of reconstructed feature vectors can be obtained. The Softmax regression model is selected as the classification method for hydropower-unit faults, handling the multi-class problem of the hydropower unit under various faults. The reconstructed feature vectors pass through one fully-connected combination layer to obtain a linear combination of the features as the input of the Softmax model, which computes the probability of occurrence of each fault. The connection weights of the feature-combination network are corrected by minimizing the error function, and the connection weights of the whole network are fine-tuned with gradient descent and the back-propagation algorithm.
The above Softmax classification process is specifically:
The function expression of the Softmax model is

σ(z)_j = e^{z_j} / Σ_{k=1}^{K} e^{z_k}

where z is the combined feature vector corresponding to the various faults, σ(z)_j is the fuzzy evaluation value of the j-th fault, z_j is the feature-combination value of the j-th fault type, z_k is the feature-combination value of the k-th fault type, and K is the total number of classifiable faults.
The result of the unsupervised learning part, i.e. the feature extraction result, is linearly combined through a fully-connected network to obtain the combined feature vector z representing the various faults; z is input into the Softmax model to obtain the degree of membership to each fault, and the fault type with the maximum membership degree is taken as the final result.
The above BP optimization process is specifically:
The back-propagation process of the embodiment of the present invention is a gradient-based optimization process. The Softmax classification result is compared with the training-data label to find the feature-combination serial number j corresponding to the correct fault type, and the gradient is calculated by the following formula:

Δ_i = e^{z_i} / Σ_{k=1}^{K} e^{z_k} − 1(i = j)

where Δ_i is the i-th element of the gradient vector Δ, j is the serial number of the true fault type corresponding to the label, z_j is the feature-combination value corresponding to the true fault type, z_i is the feature-combination value of the fault type with serial number i, z_k is the feature-combination value of the k-th fault type, and K is the total number of classifiable faults.
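The gradient above is the standard softmax-plus-cross-entropy gradient with respect to the combined-feature vector z; a minimal sketch:

```python
import numpy as np

def softmax_gradient(z, j):
    # Delta_i = exp(z_i) / sum_k exp(z_k) - [i == j], where j is the
    # serial number of the true fault type; the components sum to zero
    e = np.exp(np.asarray(z, float) - np.max(z))
    delta = e / e.sum()
    delta[j] -= 1.0
    return delta
```

The entry for the true class is negative (its logit should grow) while all other entries are positive, which is what pushes the classifier toward the correct fault type during BP.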
Step (5): Calculate the objective function of each AF in the AFSA.
Through the unsupervised training of step (3) and the supervised training of step (4), a parameter-optimized DdAE network model is obtained. The test data obtained in step (1) are input into this DdAE network model, and the objective function of the AF is designed according to the overall fault classification accuracy.
The objective function design of the above strategy-optimization learning is specifically:
The strategy-optimization learning objective function designed in the embodiment of the present invention is an error function based on the kernel extreme learning machine (KELM) theory. The smaller the training error of a feed-forward neural network and the smaller the norm of its weights, the better the generalization performance the network tends to obtain. ELM tends to both minimize the training error and maximize the sparsity of the features as its criterion:

min ‖Hβ − T‖²  and  ‖β‖

where β is the feature vector, H is the feature-combination matrix and T is the true label vector. Hβ can be represented by the Softmax classification result vector of the third step above. The specific objective function is designed to minimize the following function:

J(θ) = (1/m)·Σ_{i=1}^{m} ‖x_i − z_i‖²

where x_i is the diagnostic result of the i-th input group, z_i is the label vector corresponding to the i-th input group, and m is the training set size; θ is the parameter vector to be optimized, including the number of hidden layers of the DdAE, the number of neurons n_j of each layer, the Gaussian kernel variance σ and the drop probability P of dropout.
Step (6): Select the appropriate behavior of each AF according to the optimization rules.
The behaviors of an AF include foraging (Prey), swarming (Swarm), following (Follow) and random movement (Move). The optimization behavior each AF performs in the current iteration is selected according to the optimization rules, giving the optimization direction of each DdAE network model's hyper-parameters in the current iteration.
The above optimization rules are specifically:
1. First attempt the swarming behavior, calculated as:

X_i^{next} = X_i + rand()·S·(X_c − X_i)/‖X_c − X_i‖

where X_i^{next} is the estimated position of the AF in the next iteration, X_i is the position of the AF in the current iteration, X_c is the center of the positions of all AFs within the visual range of the current AF, S is the maximum step size, and rand() is a random-number generator producing a random number between 0 and 1.
Calculate the objective function result Y_c of the AF position corresponding to X_c and the result Y_i of the AF position X_i. If Y_c/n_f < δ·Y_i, where n_f is the number of AFs within the visual range, the AF selects the swarming behavior in the current iteration; otherwise it attempts the following behavior.
2. Attempt the following behavior, calculated as:

X_i^{next} = X_i + rand()·S·(X_j − X_i)/‖X_j − X_i‖

where X_j is the optimal AF position within the visual range.
Calculate the objective function result Y_j of the AF position X_j and the result Y_i of the AF position X_i. If Y_j/n_f < δ·Y_i, the AF selects the following behavior in the current iteration; otherwise it attempts the foraging behavior.
3. Attempt the foraging behavior, calculated as:

X_i^{next} = X_i + rand()·S·(X_j − X_i)/‖X_j − X_i‖

where X_j is the position of a randomly selected AF within the visual range.
Calculate the objective function at position X_i^{next}; if its result is better than that at position X_i, the AF selects the foraging behavior in the current iteration. Otherwise, if the number of selections of X_j is less than the maximum number of attempts try_number, reselect X_j; else abandon the foraging behavior and perform the random behavior.
4. Attempt the random behavior, calculated as:

X_i^{next} = X_i + rand(−1, 1)·V

where V is the visual range.
The random behavior is the default of the above three optimization behaviors: it is performed only when none of the three behaviors is selected.
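The move rules of the four behaviors above share one step formula toward a target (the swarm center, the best neighbor, or a random neighbor), plus the default random step; a sketch with illustrative names:

```python
import numpy as np

def step_toward(x_i, target, S, rng):
    # Shared rule of swarm/follow/prey:
    # X_next = X_i + rand() * S * (target - X_i) / ||target - X_i||
    d = np.asarray(target, float) - np.asarray(x_i, float)
    norm = np.linalg.norm(d)
    if norm == 0.0:
        return np.asarray(x_i, float).copy()
    return np.asarray(x_i, float) + rng.random() * S * d / norm

def random_move(x_i, V, rng):
    # Default behavior: a random step within the visual range V
    return np.asarray(x_i, float) + rng.uniform(-1.0, 1.0, np.shape(x_i)) * V
```

The step length is at most S regardless of how far the target is, which is what bounds how much a DdAE hyper-parameter can change per iteration.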
Step (7): Iterate the structure optimization process
The process by which the AFSA model optimizes the hyper-parameters of the DdAE network model is the structure optimization process. After the fish-swarm behaviors of step (6) are performed, the AF position information corresponding to the optimal objective function is recorded on the bulletin board, and the iteration stopping condition is checked. If the stopping condition is met, the AF position last recorded on the bulletin board is output as the final DdAE network model hyper-parameters, determining a DdAE network structure as the model structure for fault diagnosis; otherwise return to step (3).
Embodiment 3:
The embodiment of the present invention also provides a hydro-generating unit fault diagnosis system based on the DdAE deep learning model. As shown in Fig. 7, the system comprises a training data processing module, a neural network model training module, a DdAE network model optimization module, a reconstructed feature vector generation module and a fault probability calculation module, connected in sequence; specifically:
the training data processing module is used for obtaining a data set and extracting n data blocks from it as the training data of the DdAE network model;
the neural network model training module is used for establishing the DdAE network model and training it with the training data to obtain the connection weights of the DdAE network model;
the DdAE network model optimization module is used for taking the structural parameters of the DdAE and the hyper-parameters in the error function as the target parameters of the AFSA algorithm, with the error function of the model output as the objective function of the AFSA; the AFSA performs a random search optimization over the hyper-parameters of the entire model, each iteration of the AFSA optimization being one parameter optimization pass of a group of DdAE network models, so as to obtain the optimal DdAE network model;
the reconstructed feature vector generation module obtains a group of reconstructed feature vectors according to the DdAE network model composed of the connection weights;
the fault probability calculation module passes the reconstructed feature vectors through one fully-connected combination layer to obtain the linear combination of features, which serves as the input of the Softmax model to calculate the probability of occurrence of each fault.
It should be noted that the information exchange and execution processes between the modules and units of the above system are based on the same concept as the method embodiments of the present invention; for details, refer to the narration in method Embodiment 1, which is not repeated here.
Embodiment 4:
The embodiment of the present invention also provides an electronic device for implementing the hydro-generating unit fault diagnosis method based on the DdAE deep learning model described in Embodiment 1 or Embodiment 2, the device comprising:
at least one processor; and a memory communicatively connected with the at least one processor, wherein the memory stores an instruction program executable by the at least one processor, the instructions being arranged to carry out by program the hydro-generating unit fault diagnosis method based on the DdAE deep learning model described in Embodiment 1 or Embodiment 2.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the embodiments can be completed by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, which may include: read-only memory (ROM), random access memory (RAM), magnetic disk, optical disk, etc.
It will be readily understood by those skilled in the art that the foregoing is merely illustrative of the preferred embodiments of the present invention and is not intended to limit the invention; any modification, equivalent substitution and improvement made within the spirit and principles of the invention shall be included within the protection scope of the present invention.

Claims (10)

1. A hydro-generating unit fault diagnosis method based on a DdAE deep learning model, characterized in that the method comprises:
obtaining a data set, and extracting n data blocks from the data set as training data of a DdAE network model;
establishing the DdAE network model, and training the DdAE network model with the training data to obtain connection weights of the DdAE network model; the input value of each neuron node in each layer of the DdAE network model is obtained as the weighted average of the output values of all neuron nodes of the preceding layer, and all weights between every two layers combine to form a connection weight matrix, referred to as the connection weights;
taking the structural parameters of the DdAE and the hyper-parameters in the error function as target parameters of the AFSA algorithm, with the error function of the model output as the objective function of the AFSA; performing a random search optimization over the hyper-parameters of the entire model by the AFSA, each iteration of the AFSA optimization being one parameter optimization pass of a group of DdAE network models, so as to obtain an optimal DdAE network model;
obtaining a group of reconstructed feature vectors according to the DdAE network model composed of the connection weights;
passing the reconstructed feature vectors through one fully-connected combination layer to obtain a linear combination of features, which serves as the input of a Softmax model to calculate the probability representing the possibility of occurrence of each fault.
2. The hydro-generating unit fault diagnosis method based on the DdAE deep learning model according to claim 1, characterized in that the established DdAE network model is specifically:
the DdAE network model is composed of at least two AEs, wherein the hidden layer of the preceding AE serves as the input layer of the following AE, stacked in sequence;
the value of each hidden-layer node of each AE is the weighted sum of the node values of its input layer, fed into an activation function for calculation.
3. The hydroelectric generating set fault diagnosis method based on a DdAE deep learning model according to claim 1, characterized in that training the DdAE network model with the training data specifically comprises training the DdAE network model by an unsupervised greedy layer-wise training method incorporating dropout, wherein the unsupervised greedy layer-wise training method is specifically:
in the unsupervised training process, each layer of the DdAE network model is treated as an independent single-hidden-layer autoencoder, and the multiple independent AEs are reconstructed and trained individually;
the unsupervised greedy layer-wise training incorporating dropout comprises: during the training of the DdAE network, neural network units are temporarily dropped from the network with a certain probability; wherein being temporarily dropped from the network specifically means: in each round of computation it is decided, according to the probability, whether a network unit is ignored so that it does not participate in the current round, and in the next round of computation whether it participates is again decided by the probability.
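A minimal sketch (an assumed implementation, using the common inverted-dropout rescaling) of the per-round unit-dropping decision described above; a fresh random mask is drawn in every round, so a unit ignored in one round may participate again in the next:

```python
import numpy as np

def dropout(h, p, rng):
    """Temporarily drop each unit with probability p for this round only.
    Surviving activations are rescaled by 1/(1-p) (inverted dropout) so
    the expected activation is unchanged."""
    mask = rng.random(h.shape) >= p  # True = unit participates this round
    return h * mask / (1.0 - p)

# Each call redraws the mask, modelling the round-by-round decision.
rng = np.random.default_rng(1)
h = np.ones(1000)
hd = dropout(h, 0.5, rng)  # roughly half the units are zeroed
```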
4. The hydroelectric generating set fault diagnosis method based on a DdAE deep learning model according to claim 3, characterized in that the initialization of the DdAE network model comprises structure initialization and parameter initialization:
the structure initialization of the DdAE network model is the AF position initialization process in the AFSA model, covering the number of layers of the DdAE network model, the number of input-layer nodes of the DdAE network model, the variance σ of the Gaussian distribution, and the dropout probability P; the AF positions in the AFSA model are initialized uniformly within the parameter adjustment space according to the initialized parameter adjustment ranges LB and UB;
the parameter initialization of the DdAE network model is the initialization for unsupervised training; the connection weight parameters of the DdAE network model are initialized by an empirical formula, $W \sim U\left[-\frac{\sqrt{6}}{\sqrt{n_j + n_{j+1}}},\ \frac{\sqrt{6}}{\sqrt{n_j + n_{j+1}}}\right]$, where n_j is the number of neuron nodes in the layer preceding the weight matrix W and n_{j+1} is the number of neuron nodes in the layer following the weight matrix W; each element of the weight matrix W is initialized according to this uniform distribution.
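Under the assumption that the empirical formula is the standard Xavier/Glorot uniform rule implied by the n_j and n_{j+1} terms, the weight initialization can be sketched as:

```python
import numpy as np

def init_weights(n_j, n_j1, rng):
    """Empirical uniform initialization: each element of W is drawn from
    U[-limit, +limit] with limit = sqrt(6 / (n_j + n_j1)), where n_j and
    n_j1 are the sizes of the two layers that W connects."""
    limit = np.sqrt(6.0 / (n_j + n_j1))
    return rng.uniform(-limit, limit, size=(n_j1, n_j))

rng = np.random.default_rng(2)
W = init_weights(64, 32, rng)  # 64 -> 32 layer pair (assumed sizes)
```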
5. The hydroelectric generating set fault diagnosis method based on a DdAE deep learning model according to any one of claims 1-4, characterized in that the function expression of the Softmax model is:
$$\sigma(z)_j = \frac{e^{z_j}}{\sum_{k=1}^{K} e^{z_k}}$$
where z is the feature vector corresponding to the various faults, σ(z)_j is the fuzzy evaluation value of the j-th fault, z_j is the feature combination value of the j-th fault type, z_k is the feature combination value of the k-th fault type, and K is the total number of classifiable faults.
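The Softmax expression above maps the feature combination values to fault probabilities; a direct sketch follows (the max-shift is a numerical-stability detail, not part of the formula):

```python
import numpy as np

def softmax(z):
    """sigma(z)_j = exp(z_j) / sum_{k=1..K} exp(z_k); subtracting max(z)
    leaves the result unchanged but avoids overflow."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

z = np.array([2.0, 1.0, 0.1])  # feature combination values for K = 3 faults
p = softmax(z)                 # probability of each fault occurring
```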
6. The hydroelectric generating set fault diagnosis method based on a DdAE deep learning model according to claim 1, characterized in that, after the connection weights are calculated, the method further comprises correcting the connection weights of the combination network by minimizing the error function, and fine-tuning the connection weights of the whole network with gradient descent and the back-propagation algorithm.
7. The hydroelectric generating set fault diagnosis method based on a DdAE deep learning model according to claim 6, characterized in that the gradient descent and back-propagation algorithm is specifically:
the back-propagation process is a gradient-based optimization process; the Softmax classification result is compared with the training data label to find the feature combination index j corresponding to the correct fault type, and the gradient is calculated by the following formula:
$$\Delta_i = \begin{cases} \dfrac{e^{z_i}\left(\sum_{k=1,\,k\neq i}^{K} e^{z_k}\right)}{\left(\sum_{k=1}^{K} e^{z_k}\right)^{2}}, & i = j \\[2ex] \dfrac{e^{z_i + z_j}}{\left(\sum_{k=1}^{K} e^{z_k}\right)^{2}}, & i \neq j \end{cases}$$
where Δ_i is the i-th element of the gradient vector Δ, j is the index of the true fault type given by the label, z_j is the feature combination value corresponding to the true fault type index, z_i is the feature combination value of the fault type corresponding to index i, and z_k is the feature combination value of the k-th fault type.
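The piecewise gradient above equals the magnitudes of the Softmax partial derivatives ∂σ(z)_j/∂z_i; a direct sketch of the formula (variable names assumed):

```python
import numpy as np

def softmax_grad(z, j):
    """Gradient vector Delta of the claim, with S = sum_k exp(z_k):
    Delta_i = exp(z_i + z_j) / S^2             for i != j,
    Delta_j = exp(z_j) * (S - exp(z_j)) / S^2  for i == j,
    where S - exp(z_j) equals the sum over k != j of exp(z_k)."""
    e = np.exp(z)
    S = e.sum()
    delta = e * e[j] / S**2              # i != j branch
    delta[j] = e[j] * (S - e[j]) / S**2  # i == j branch
    return delta

z = np.array([2.0, 1.0, 0.1])
d = softmax_grad(z, 0)  # the label says fault type 0 is the true class
```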
8. The hydroelectric generating set fault diagnosis method based on a DdAE deep learning model according to any one of claims 1-7, characterized in that the artificial fish swarm algorithm AFSA is specifically:
Step 1: prepare the deep autoencoder model;
Step 2: set the basic parameters of the artificial fish swarm algorithm, wherein the basic parameters include one or more of the fish swarm size L, the parameter adjustment ranges LB and UB, the visual range V of the fish swarm, the maximum step size S of fish movement, the trial number try_number of the foraging behavior, the maximum number of generations Maxgen, and the crowding factor δ; the fault classification accuracy of the deep autoencoder is used to design the objective function of AFSA;
Step 3: within the parameter variation range, generate the initial state of the fish swarm according to the initial model parameter set of Step 1, and record the optimal position of the swarm and the minimum objective function value of each generation;
Step 4: each artificial fish AF attempts the appropriate behavior according to the optimization rule, and the optimal parameter values are recorded;
Step 5: check whether the objective function value corresponding to the optimal parameter values achieves the optimization goal; if so, the optimization is completed and the optimal parameters are output; otherwise, check whether the number of generations has reached the maximum number of generations Maxgen; if the number of generations is greater than or equal to Maxgen, the optimization is completed and the optimal parameters are output; otherwise, return to Step 4.
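A highly simplified sketch of the Step 1–Step 5 loop, with assumed names and only the foraging behavior (the swarming and following behaviors, the crowding factor δ, and the real DdAE-accuracy objective are omitted; a quadratic stand-in objective is used instead):

```python
import numpy as np

def afsa_minimize(objective, LB, UB, L=20, V=0.5, S=0.2,
                  try_number=5, Maxgen=50, seed=0):
    """Minimal artificial-fish-swarm sketch.
    Step 3: initialize the swarm uniformly within [LB, UB].
    Step 4: each fish forages, i.e. tries up to try_number random points
            within its visual range V and steps (length S) toward a better one.
    Step 5: stop after Maxgen generations and output the best position."""
    rng = np.random.default_rng(seed)
    LB = np.asarray(LB, dtype=float)
    UB = np.asarray(UB, dtype=float)
    fish = rng.uniform(LB, UB, size=(L, LB.size))          # Step 3
    best = min(fish, key=objective).copy()
    for _ in range(Maxgen):                                # Step 5 loop
        for i in range(L):                                 # Step 4
            for _ in range(try_number):                    # foraging trials
                cand = np.clip(fish[i] + rng.uniform(-V, V, size=LB.size), LB, UB)
                if objective(cand) < objective(fish[i]):
                    direction = cand - fish[i]
                    norm = np.linalg.norm(direction) + 1e-12
                    fish[i] = np.clip(fish[i] + S * direction / norm, LB, UB)
                    break
            if objective(fish[i]) < objective(best):
                best = fish[i].copy()
    return best

# Stand-in objective: in the patent this would be the DdAE classification error.
best = afsa_minimize(lambda x: float(np.sum(x**2)),
                     LB=np.array([-5.0, -5.0]), UB=np.array([5.0, 5.0]))
```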
9. A hydroelectric generating set fault diagnosis system based on a DdAE deep learning model, characterized in that the system comprises a training data processing module, a neural network model training module, a DdAE network model optimization module, a reconstructed feature vector generation module, and a fault probability calculation module, connected in sequence, specifically:
the training data processing module is configured to acquire a data set and extract n groups of data blocks from the data set as training data for the DdAE network model;
the neural network model training module is configured to establish a DdAE network model and train the DdAE network model with the training data to obtain the connection weights of the DdAE network model;
the DdAE network model optimization module is configured to take the structural parameters of the DdAE and the hyperparameters in the error function as the target parameters of the AFSA algorithm, and the error function of the model output as the objective function of AFSA; random search optimization of the hyperparameters of the entire model is carried out by AFSA, each iteration step of which is a parameter optimization process for one group of DdAE network models, so as to obtain the optimal DdAE network model;
the reconstructed feature vector generation module is configured to obtain a group of reconstructed feature vectors from the DdAE network model constituted by the connection weights;
the fault probability calculation module is configured to pass the reconstructed feature vectors through a fully connected combination layer to obtain a linear combination of the features, which serves as the input of a Softmax model to calculate the probability representing the likelihood of occurrence of each fault.
10. The hydroelectric generating set fault diagnosis system based on a DdAE deep learning model according to claim 9, characterized in that the function expression of the Softmax model is:
$$\sigma(z)_j = \frac{e^{z_j}}{\sum_{k=1}^{K} e^{z_k}}$$
where z is the feature vector corresponding to the various faults, σ(z)_j is the fuzzy evaluation value of the j-th fault, z_j is the feature combination value of the j-th fault type, z_k is the feature combination value of the k-th fault type, and K is the total number of classifiable faults.
CN201711461876.6A 2017-12-28 2017-12-28 Hydroelectric generating set fault diagnosis method and system based on DdAE deep learning model Active CN108062572B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711461876.6A CN108062572B (en) 2017-12-28 2017-12-28 Hydroelectric generating set fault diagnosis method and system based on DdAE deep learning model

Publications (2)

Publication Number Publication Date
CN108062572A true CN108062572A (en) 2018-05-22
CN108062572B CN108062572B (en) 2021-04-06

Family

ID=62140653

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711461876.6A Active CN108062572B (en) 2017-12-28 2017-12-28 Hydroelectric generating set fault diagnosis method and system based on DdAE deep learning model

Country Status (1)

Country Link
CN (1) CN108062572B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886396A (en) * 2014-04-10 2014-06-25 河海大学 Method for determining mixing optimizing of artificial fish stock and particle swarm
CN104819846A (en) * 2015-04-10 2015-08-05 北京航空航天大学 Rolling bearing sound signal fault diagnosis method based on short-time Fourier transform and sparse laminated automatic encoder
CN104914851A (en) * 2015-05-21 2015-09-16 北京航空航天大学 Adaptive fault detection method for airplane rotation actuator driving device based on deep learning
CN105023580A (en) * 2015-06-25 2015-11-04 中国人民解放军理工大学 Unsupervised noise estimation and speech enhancement method based on separable deep automatic encoding technology
CN106127804A (en) * 2016-06-17 2016-11-16 淮阴工学院 The method for tracking target of RGB D data cross-module formula feature learning based on sparse depth denoising own coding device
CN107256393A (en) * 2017-06-05 2017-10-17 四川大学 The feature extraction and state recognition of one-dimensional physiological signal based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIANGLIN LIANG: "Stacked denoising autoencoder and dropout together to prevent overfitting in deep neural network", 《2015 8TH INTERNATIONAL CONGRESS ON IMAGE AND SIGNAL PROCESSING》 *
SHAO HAIDONG: "A novel deep autoencoder feature learning method for rotating machinery fault diagnosis", 《MECHANICAL SYSTEMS AND SIGNAL PROCESSING 95 (2017)》 *

Cited By (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109214460A (en) * 2018-09-21 2019-01-15 西华大学 Method for diagnosing fault of power transformer based on Relative Transformation Yu nuclear entropy constituent analysis
CN109214460B (en) * 2018-09-21 2022-01-11 西华大学 Power transformer fault diagnosis method based on relative transformation and nuclear entropy component analysis
CN109480864A (en) * 2018-10-26 2019-03-19 首都医科大学附属北京安定医院 A kind of schizophrenia automatic evaluation system based on nervous functional defects and machine learning
CN109613395A (en) * 2018-12-03 2019-04-12 华中科技大学 It is a kind of that soft straight electric network fault detection method is maked somebody a mere figurehead based on ANN
CN109711062A (en) * 2018-12-28 2019-05-03 广东电网有限责任公司 A kind of equipment fault diagnosis method and device based on cloud service
CN109726505B (en) * 2019-01-14 2023-03-17 哈尔滨工程大学 Forging and pressing lathe main drive mechanism fault diagnosis system based on intelligence trouble tree
CN109726505A (en) * 2019-01-14 2019-05-07 哈尔滨工程大学 A kind of forging machine tool main drive gear fault diagnosis system based on intelligent trouble tree
CN109781411A (en) * 2019-01-28 2019-05-21 西安交通大学 A kind of combination improves the Method for Bearing Fault Diagnosis of sparse filter and KELM
CN109902617A (en) * 2019-02-25 2019-06-18 百度在线网络技术(北京)有限公司 A kind of image identification method, device, computer equipment and medium
CN109829538A (en) * 2019-02-28 2019-05-31 苏州热工研究院有限公司 A kind of equipment health Evaluation method and apparatus based on deep neural network
CN109902741A (en) * 2019-02-28 2019-06-18 上海理工大学 A kind of breakdown of refrigeration system diagnostic method
CN110033181A (en) * 2019-03-29 2019-07-19 华中科技大学 A kind of generating equipment state evaluating method based on self-encoding encoder
CN110068760A (en) * 2019-04-23 2019-07-30 哈尔滨理工大学 A kind of Induction Motor Fault Diagnosis based on deep learning
CN110096785B (en) * 2019-04-25 2020-09-01 华北电力大学 Stack self-encoder modeling method applied to ultra-supercritical unit
CN110096785A (en) * 2019-04-25 2019-08-06 华北电力大学 A kind of stacking self-encoding encoder modeling method applied to extra-supercritical unit
CN110334764B (en) * 2019-07-04 2022-03-04 西安电子科技大学 Rotary machine intelligent fault diagnosis method based on integrated depth self-encoder
CN110334764A (en) * 2019-07-04 2019-10-15 西安电子科技大学 Rotating machinery intelligent failure diagnosis method based on integrated depth self-encoding encoder
CN110412872A (en) * 2019-07-11 2019-11-05 中国石油大学(北京) Reciprocating compressor fault diagnosis optimization method and device
CN110263767A (en) * 2019-07-12 2019-09-20 南京工业大学 In conjunction with the intelligent Rotating Shaft Fault method of compressed data acquisition and deep learning
US11443137B2 (en) 2019-07-31 2022-09-13 Rohde & Schwarz Gmbh & Co. Kg Method and apparatus for detecting signal features
CN110659741A (en) * 2019-09-03 2020-01-07 浩鲸云计算科技股份有限公司 AI model training system and method based on piece-splitting automatic learning
CN111222133A (en) * 2019-11-14 2020-06-02 辽宁工程技术大学 Multistage self-adaptive coupling method for industrial control network intrusion detection
CN110969194A (en) * 2019-11-21 2020-04-07 国网辽宁省电力有限公司电力科学研究院 Cable early fault positioning method based on improved convolutional neural network
CN110969194B (en) * 2019-11-21 2023-12-19 国网辽宁省电力有限公司电力科学研究院 Cable early fault positioning method based on improved convolutional neural network
CN111144303A (en) * 2019-12-26 2020-05-12 华北电力大学(保定) Power line channel transmission characteristic identification method based on improved denoising autoencoder
WO2021218120A1 (en) * 2020-04-27 2021-11-04 江苏科技大学 Method for fault identification of ship power device
CN111504680B (en) * 2020-04-30 2021-03-26 东华大学 Fault diagnosis method and system for polyester filament yarn production based on WSVM and DCAE
CN111504680A (en) * 2020-04-30 2020-08-07 东华大学 Fault diagnosis method and system for polyester filament yarn production based on WSVM and DCAE
CN111581746A (en) * 2020-05-11 2020-08-25 中国矿业大学 Novel multi-objective optimization method for three-phase cylindrical switched reluctance linear generator
CN111562496A (en) * 2020-05-15 2020-08-21 北京天工智造科技有限公司 Motor running state judgment method based on data mining
CN112131787A (en) * 2020-09-18 2020-12-25 江西兰叶科技有限公司 Unsupervised self-evolving motor design method and system
CN112131787B (en) * 2020-09-18 2022-05-27 江西兰叶科技有限公司 Unsupervised self-evolving motor design method and system
CN112686366A (en) * 2020-12-01 2021-04-20 江苏科技大学 Bearing fault diagnosis method based on random search and convolutional neural network
CN112634391B (en) * 2020-12-29 2023-12-29 华中科技大学 Gray image depth reconstruction and fault diagnosis system based on compressed sensing
CN112634391A (en) * 2020-12-29 2021-04-09 华中科技大学 Gray level image depth reconstruction and fault diagnosis system based on compressed sensing
CN112836577A (en) * 2020-12-30 2021-05-25 中南大学 Intelligent traffic unmanned vehicle fault gene diagnosis method and system
CN112836577B (en) * 2020-12-30 2024-02-20 中南大学 Intelligent traffic unmanned vehicle fault gene diagnosis method and system
CN112861625A (en) * 2021-01-05 2021-05-28 深圳技术大学 Method for determining stacking denoising autoencoder model
CN112861625B (en) * 2021-01-05 2023-07-04 深圳技术大学 Determination method for stacked denoising self-encoder model
CN113075546A (en) * 2021-03-24 2021-07-06 河南中烟工业有限责任公司 Motor vibration signal feature extraction method and system
CN113378887B (en) * 2021-05-14 2022-07-05 太原理工大学 Emulsion pump fault grading diagnosis method
CN113378887A (en) * 2021-05-14 2021-09-10 太原理工大学 Emulsion pump fault grading diagnosis method
TWI794907B (en) * 2021-07-26 2023-03-01 友達光電股份有限公司 Prognostic and health management system for system management and method thereof
CN117197048A (en) * 2023-08-15 2023-12-08 力鸿检验集团有限公司 Ship water gauge reading detection method, device and equipment
CN117197048B (en) * 2023-08-15 2024-03-08 力鸿检验集团有限公司 Ship water gauge reading detection method, device and equipment

Also Published As

Publication number Publication date
CN108062572B (en) 2021-04-06

Similar Documents

Publication Publication Date Title
CN108062572A (en) A kind of Fault Diagnosis Method of Hydro-generating Unit and system based on DdAE deep learning models
CN109102005B (en) Small sample deep learning method based on shallow model knowledge migration
CN109800875A (en) Chemical industry fault detection method based on particle group optimizing and noise reduction sparse coding machine
Madhiarasan et al. Analysis of artificial neural network: architecture, types, and forecasting applications
CN103926526A (en) Analog circuit fault diagnosis method based on improved RBF neural network
CN110213244A (en) A kind of network inbreak detection method based on space-time characteristic fusion
CN112087442B (en) Time sequence related network intrusion detection method based on attention mechanism
CN107609634A (en) A kind of convolutional neural networks training method based on the very fast study of enhancing
CN106874963A (en) A kind of Fault Diagnosis Method for Distribution Networks and system based on big data technology
CN114118138A (en) Bearing composite fault diagnosis method based on multi-label field self-adaptive model
CN117201122A (en) Unsupervised attribute network anomaly detection method and system based on view level graph comparison learning
CN115600137A (en) Multi-source domain variable working condition mechanical fault diagnosis method for incomplete category data
CN115293249A (en) Power system typical scene probability prediction method based on dynamic time sequence prediction
CN115906959A (en) Parameter training method of neural network model based on DE-BP algorithm
Pérez-Pérez et al. Fault detection and isolation in wind turbines based on neuro-fuzzy qLPV zonotopic observers
CN115600134A (en) Bearing transfer learning fault diagnosis method based on domain dynamic impedance self-adaption
JP7230324B2 (en) Neural network learning method, computer program and computer device
Pupezescu Pulsating Multilayer Perceptron
Wasukar Artificial neural network–an important asset for future computing
Abraham et al. An intelligent forex monitoring system
CN111476367A (en) Task splitting type pulse neural network structure prediction and network anti-interference method
Phatai et al. Cultural algorithm initializes weights of neural network model for annual electricity consumption prediction
Gra~ na et al. Experiments of fast learning with high order Boltzmann machines
Amouzadi et al. Hierarchical fuzzy rule-based classification system by evolutionary boosting algorithm
Zou et al. Hybrid deep neural network based on SDAE and GRUNN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant