CN114757300A - GA (genetic algorithm)-improved WNN-based IGBT (insulated gate bipolar transistor) module fault prediction method - Google Patents

GA (genetic algorithm)-improved WNN-based IGBT (insulated gate bipolar transistor) module fault prediction method

Info

Publication number
CN114757300A
CN114757300A (application CN202210484151.3A)
Authority
CN
China
Prior art keywords
data
model
prediction
training
neural network
Prior art date
Legal status
Granted
Application number
CN202210484151.3A
Other languages
Chinese (zh)
Other versions
CN114757300B (en)
Inventor
吴松荣
黄柯勋
杨平
周懿
张浩然
Current Assignee
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date
Filing date
Publication date
Application filed by Southwest Jiaotong University filed Critical Southwest Jiaotong University
Priority to CN202210484151.3A priority Critical patent/CN114757300B/en
Publication of CN114757300A publication Critical patent/CN114757300A/en
Application granted granted Critical
Publication of CN114757300B publication Critical patent/CN114757300B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/10 Pre-processing; Data cleansing
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/08 Learning methods
    • G06N 3/086 Learning methods using evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G06N 3/12 Computing arrangements based on biological models using genetic models
    • G06N 3/126 Evolutionary algorithms, e.g. genetic algorithms or genetic programming

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Physiology (AREA)
  • Genetics & Genomics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a method for predicting faults of an IGBT module using a wavelet neural network improved by a genetic algorithm, which comprises the following steps: acquiring collector-emitter saturation voltage drop data of the IGBT module, removing outliers, and compressing and dimension-reducing the data; importing the data into a prediction model and dividing them into a training set, a verification set and a prediction set; training the model on the training set with a wavelet neural network (WNN); optimizing the WNN with a genetic algorithm (GA) and retraining the model; and substituting the verification set into the model for verification and the prediction set into the model for fault prediction. The invention effectively improves the accuracy and precision of fault prediction; at the same time, on the basis of guaranteeing the fault prediction precision, the model structure is simplified and the detection precision is improved, which greatly improves the feasibility of applying and deploying the model.

Description

GA (genetic algorithm)-improved WNN-based IGBT (insulated gate bipolar transistor) module fault prediction method
Technical Field
The invention belongs to the field of rapid fault prediction for IGBT modules, and particularly relates to a wavelet neural network (WNN) IGBT module fault prediction method improved by a genetic algorithm (GA).
Background
The IGBT module is one of the core components of the traction converter in a high-speed train; as the main switching element of the inverter, its reliability is key to the stable operation of the whole converter system. During operation, however, the IGBT module is subjected to severe load impacts and is prone to damage such as bond-wire lift-off and solder-layer fatigue (as shown in Fig. 2). This damage accumulates over time until the IGBT module fails, which in turn causes the converter to fail and poses a serious challenge to safe train operation. Accurately predicting IGBT faults therefore allows the overhaul and maintenance of high-speed trains to be planned, and effectively improves the intelligence of operation and maintenance in rail transit systems.
Existing technologies for IGBT module fault prediction fall into two categories: prediction based on reliability models and prediction using data-driven methods. Reliability-model-based prediction offers high accuracy (for example the Norris-Landzberg analytical model), but it requires knowledge of the physical properties of the module and is easily affected by the operating conditions, so modeling is difficult and application is limited. In addition, the algorithms used by some data-driven techniques (such as time-delay neural networks) are overly complex; they place high demands on the computing performance of the equipment, increase the cost of use, and are difficult to deploy at scale.
Disclosure of Invention
To address the shortcomings of the prior art, namely complex modeling, high cost of use and difficult model deployment, the invention provides a wavelet neural network (WNN) IGBT module fault prediction method improved by a genetic algorithm (GA).
The disclosed method for predicting faults of an IGBT module using a wavelet neural network improved by a genetic algorithm comprises the following steps:
Step 1: acquire the collector-emitter saturation voltage drop of the IGBT module.
The IGBT collector-emitter saturation voltage drop is used as the input parameter of the fault prediction model. The saturation voltage drop of the IGBT module during operation is collected online by a data acquisition card installed near the traction converter of the high-speed train, and the data are sent to an upper (host) computer and stored.
Step 2: preprocess the data.
A 5% increase of the saturation voltage drop is taken as the module-failure threshold to judge whether the acquired data contain fault data; if not, the acquisition result is discarded. A complete data set containing fault data still requires outlier removal, data compression and dimension reduction.
Outliers are first removed with the Pauta (3σ) criterion. For the collected data x_1, x_2, …, x_i, …, x_n, the arithmetic mean is calculated as
\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i

together with the residuals

v_i = x_i - \bar{x}

and the root-mean-square deviation is then obtained with the Bessel formula

\sigma = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n} v_i^2}

If |v_i| > 3\sigma, the point x_i is discarded; if |v_i| \le 3\sigma, the point x_i is kept.
Next, the data with outliers removed are compressed: the mean value of the data from a single operating run is taken as the characteristic value of that run.
Finally, the data are normalized so that the data samples are mapped into the range [c, d] according to a unified standard:

y = c + \frac{(d - c)(x - x_{\min})}{x_{\max} - x_{\min}}  (1)

where x_{\min} is the smallest sample in the sample set and x_{\max} is the largest.
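A minimal sketch of this preprocessing stage is given below; the NumPy implementation, the function names, the per-run grouping and the synthetic example data are illustrative assumptions rather than part of the patent itself:

```python
import numpy as np

def remove_outliers_pauta(x):
    """Drop points whose residual exceeds 3*sigma (Pauta / 3-sigma criterion)."""
    x = np.asarray(x, dtype=float)
    v = x - x.mean()                                  # residuals v_i = x_i - mean
    sigma = np.sqrt(np.sum(v ** 2) / (len(x) - 1))    # Bessel root-mean-square deviation
    return x[np.abs(v) <= 3 * sigma]

def compress_by_run(runs):
    """Compress each single operating run to its mean value (one characteristic value per run)."""
    return np.array([run.mean() for run in runs])

def normalize(x, c=0.0, d=1.0):
    """Map samples into [c, d] with min-max scaling, eq. (1)."""
    return c + (d - c) * (x - x.min()) / (x.max() - x.min())

# Hypothetical example: 100 operating runs of collector-emitter saturation voltage drop samples
rng = np.random.default_rng(0)
raw_runs = [2.0 + 0.01 * i + 0.02 * rng.standard_normal(500) for i in range(100)]
features = normalize(compress_by_run([remove_outliers_pauta(r) for r in raw_runs]))
```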
Step 3: import the data into the prediction model and divide them.
The preprocessed IGBT saturation voltage drop data set is uploaded to the fault prediction model and divided into a training set (70%), a verification set (15%) and a prediction set (15%).
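For illustration, a chronological 70/15/15 split could look like the sketch below; the helper name and the reuse of the `features` array from the preprocessing sketch are assumptions, since the patent does not prescribe an implementation:

```python
def split_dataset(series):
    """Split a chronologically ordered series into 70% training, 15% verification and 15% prediction sets."""
    n = len(series)
    return series[:int(0.70 * n)], series[int(0.70 * n):int(0.85 * n)], series[int(0.85 * n):]

train_set, val_set, pred_set = split_dataset(features)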
Step 4: basic training of the model.
To exploit the time-series character of the module voltage drop data, every 10 consecutive data points are used as one group to predict the 11th point. The fault prediction model is set up with 10 input-layer neurons, 1 output-layer neuron and 10 hidden-layer neurons; the number of training iterations is 1000 and the training precision (error goal) is 0.1. The training set is then trained with the wavelet neural network WNN. The wavelet basis function used is the Morlet wavelet:

\psi(x) = \cos(1.75x)\, e^{-x^{2}/2}  (2)

The output expression is:

y_i = \eta\left(\sum_{j=1}^{n} \omega_{ij}\, \psi\!\left(\frac{\sum_{k=1}^{N} \omega_{jk}\, x_k(t) - b_j}{a_j}\right)\right), \quad i = 1, 2, \ldots, m  (3)

where k = 1, 2, …, N numbers the nodes of the input layer, i = 1, 2, …, m numbers the nodes of the output layer, j = 1, 2, …, n numbers the nodes of the hidden layer, ω_{ij} is the weight connecting output-layer node i and hidden-layer node j, a_j and b_j are the wavelet dilation and translation coefficients, ω_{jk} is the weight connecting hidden-layer node j with input-layer node k, x_k(t) is the t-th sample point of the k-th input of the input layer, y_i is the i-th output value of the output layer, η is the Sigmoid function and μ is the learning rate.
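A sketch of the forward pass defined by equations (2) and (3) is shown below, assuming the 10-10-1 topology described above; the class name, the weight initialization and the use of NumPy are illustrative assumptions:

```python
import numpy as np

def morlet(x):
    """Morlet wavelet basis, eq. (2): cos(1.75 x) * exp(-x^2 / 2)."""
    return np.cos(1.75 * x) * np.exp(-0.5 * x ** 2)

class WNN:
    """Minimal wavelet neural network with one hidden wavelet layer."""
    def __init__(self, n_in=10, n_hidden=10, n_out=1, seed=0):
        rng = np.random.default_rng(seed)
        self.w_jk = 0.1 * rng.standard_normal((n_hidden, n_in))   # input -> hidden weights
        self.w_ij = 0.1 * rng.standard_normal((n_out, n_hidden))  # hidden -> output weights
        self.a = np.ones(n_hidden)    # wavelet dilation coefficients a_j
        self.b = np.zeros(n_hidden)   # wavelet translation coefficients b_j

    def forward(self, x):
        """Return hidden activations h_j and outputs y_i for one input window x of length n_in."""
        z = (self.w_jk @ x - self.b) / self.a        # (sum_k w_jk x_k(t) - b_j) / a_j
        h = morlet(z)                                 # hidden-layer wavelet activations
        y = 1.0 / (1.0 + np.exp(-(self.w_ij @ h)))    # sigmoid output, eq. (3)
        return h, y
```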
The error function is:

E = \frac{1}{2}\sum_{p=1}^{P}\sum_{i=1}^{m}\left(d_i^{\,p} - y_i^{\,p}\right)^2  (4)

where p = 1, 2, …, P indexes the input sample patterns, d_i^p is the i-th desired output for the p-th pattern and y_i^p is the i-th actual network output for the p-th pattern.
The network weights and the wavelet dilation and translation coefficients are corrected with a gradient method; the correction rules are:

\Delta\omega_{ij}^{(t+1)} = -\mu\,\frac{\partial E}{\partial \omega_{ij}^{(t)}} + \delta\,\Delta\omega_{ij}^{(t)}  (5)

\Delta\omega_{jk}^{(t+1)} = -\mu\,\frac{\partial E}{\partial \omega_{jk}^{(t)}} + \delta\,\Delta\omega_{jk}^{(t)}  (6)

\Delta a_{j}^{(t+1)} = -\mu\,\frac{\partial E}{\partial a_{j}^{(t)}} + \delta\,\Delta a_{j}^{(t)}  (7)

\Delta b_{j}^{(t+1)} = -\mu\,\frac{\partial E}{\partial b_{j}^{(t)}} + \delta\,\Delta b_{j}^{(t)}  (8)

where a_j and b_j are the dilation and translation coefficients of the j-th hidden-layer node, δ is an introduced momentum coefficient, and Δ denotes the increment of the corresponding parameter.
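Building on the WNN sketch above, one training update consistent with correction rules (5)-(8) could be sketched as follows; the analytic gradients, the single-sample squared-error loss and the hyperparameter values μ = 0.01 and δ = 0.9 are assumptions for illustration:

```python
def morlet_prime(x):
    """Derivative of the Morlet wavelet."""
    return (-1.75 * np.sin(1.75 * x) - x * np.cos(1.75 * x)) * np.exp(-0.5 * x ** 2)

def train_step(net, x, d, mu=0.01, delta=0.9, state=None):
    """One gradient-with-momentum update of w_ij, w_jk, a_j, b_j for a single sample (x, d)."""
    if state is None:                                   # previous increments, for the momentum term
        state = {name: 0.0 for name in ("w_ij", "w_jk", "a", "b")}
    z = (net.w_jk @ x - net.b) / net.a                  # wavelet argument
    h = morlet(z)                                       # hidden activations
    y = 1.0 / (1.0 + np.exp(-(net.w_ij @ h)))           # sigmoid output
    e_y = -(d - y) * y * (1.0 - y)                      # dE/d(output pre-activation) for E = 0.5 (d - y)^2
    e_z = (net.w_ij.T @ e_y) * morlet_prime(z)          # error back-propagated to the hidden layer
    grads = {
        "w_ij": np.outer(e_y, h),                       # dE/d w_ij
        "w_jk": np.outer(e_z / net.a, x),               # dE/d w_jk
        "a": -e_z * z / net.a,                          # dE/d a_j
        "b": -e_z / net.a,                              # dE/d b_j
    }
    for name, param in (("w_ij", net.w_ij), ("w_jk", net.w_jk), ("a", net.a), ("b", net.b)):
        step = -mu * grads[name] + delta * state[name]  # increment per eqs. (5)-(8)
        param += step
        state[name] = step
    return state
```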
Step 5: optimize and improve the model.
The wavelet neural network WNN is optimized with the genetic algorithm GA. The error obtained from WNN training is optimized with the three operators of selection, crossover and mutation, carried out respectively by the roulette-wheel selection of equation (9), the single-point crossover of equation (10) and the basic-bit mutation of equations (11)-(12).

p_i = \frac{f_i}{\sum_{j=1}^{N} f_j}  (9)

\begin{cases} a_{mj} = a_{mj}(1 - b) + a_{nj}\, b \\ a_{nj} = a_{nj}(1 - b) + a_{mj}\, b \end{cases}  (10)

a_{ij} = \begin{cases} a_{ij} + (a_{ij} - a_{\max})\, f(g), & r \ge 0.5 \\ a_{ij} + (a_{\min} - a_{ij})\, f(g), & r < 0.5 \end{cases}  (11)

f(g) = r_2\,(1 - g/G_{\max})  (12)

In equation (9), p_i is the probability that the i-th individual is selected and f_i is its fitness value; in equation (10), the individuals a_m and a_n cross at position j and b is a random number in [0, 1]; in equations (11) and (12), a_{ij} is the j-th gene of the i-th individual, a_{\max} and a_{\min} are the upper and lower bounds of a_{ij}, r is a random number in [0, 1], f(g) is the mutation function, r_2 is a random number in [0, 1], g is the current iteration number and G_{\max} is the maximum number of iterations.
After optimization improvement using genetic algorithm GA, the model is retrained.
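A compact sketch of the three operators of equations (9)-(12), applied to a population of real-coded parameter vectors (for example the flattened WNN weights and wavelet coefficients), is given below; the population layout, the probabilities pc and pm, the gene bounds and the use of the reciprocal of the validation error as fitness are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def select_roulette(pop, fitness):
    """Roulette-wheel selection, eq. (9): individual i is drawn with probability f_i / sum(f)."""
    p = fitness / fitness.sum()
    return pop[rng.choice(len(pop), size=len(pop), p=p)]

def crossover_single_point(pop, pc=0.7):
    """Arithmetic single-point crossover of adjacent pairs at a random gene position j, eq. (10)."""
    pop = pop.copy()
    for m in range(0, len(pop) - 1, 2):
        if rng.random() < pc:
            j, b = rng.integers(pop.shape[1]), rng.random()
            am, an = pop[m, j], pop[m + 1, j]
            pop[m, j] = am * (1 - b) + an * b
            pop[m + 1, j] = an * (1 - b) + am * b
    return pop

def mutate(pop, g, g_max, a_min=-1.0, a_max=1.0, pm=0.05):
    """Basic-bit (non-uniform) mutation, eqs. (11)-(12)."""
    pop = pop.copy()
    for i in range(pop.shape[0]):
        for j in range(pop.shape[1]):
            if rng.random() < pm:
                f_g = rng.random() * (1 - g / g_max)           # eq. (12)
                if rng.random() >= 0.5:
                    pop[i, j] += (pop[i, j] - a_max) * f_g     # eq. (11), r >= 0.5
                else:
                    pop[i, j] += (a_min - pop[i, j]) * f_g     # eq. (11), r < 0.5
    return pop

# One GA generation over a population of candidate parameter vectors:
# pop = mutate(crossover_single_point(select_roulette(pop, fitness)), g, g_max)
```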
Step 6: model verification and prediction.
With the optimized training result of step 5, the 15% verification set is substituted into the model for verification; after the accuracy on the verification set meets the set threshold, the 15% prediction set is substituted into the model for fault prediction.
The method further comprises step 7: evaluating the accuracy of the model. The model prediction result is evaluated with three evaluation indexes: the mean absolute error MAE, the root mean square error RMSE and the mean absolute percentage error MAPE.
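A sketch of the three evaluation indexes (the function name and the percentage form of MAPE are assumptions):

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Return MAE, RMSE and MAPE for a prediction result; smaller values mean higher accuracy."""
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    mape = 100.0 * np.mean(np.abs(err / y_true))   # in percent; assumes y_true contains no zeros
    return mae, rmse, mape
```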
The beneficial technical effects of the invention are as follows:
1. according to the GA-based improved WNN IGBT module fault prediction method, multi-scale analysis is performed on IGBT fault data through wavelet transformation, the IGBT fault data are decomposed into a plurality of detail sequences representing a high-frequency part and background sequences representing a low-frequency part, the advantages of BP neural network signal forward transmission and error reverse transmission are kept, the advantages of the wavelet analysis in processing nonlinear time sequence problems are achieved, the model structure is simplified, the detection precision is improved on the basis of ensuring the fault prediction precision, and the feasibility of model application and deployment is greatly improved.
2. According to the invention, the network training process is optimized by utilizing three operators of GA selection, crossing and mutation, so that the problem that the original training method is easy to fall into local optimization and the error of a prediction result is larger is effectively avoided, the principle is clear, the operation is simple and rapid, the model can be converged to an individual with global optimization, and the accuracy and precision of fault prediction are effectively improved.
Drawings
Fig. 1 is a flow chart of the IGBT module fault prediction method based on the wavelet neural network improved by the genetic algorithm.
Fig. 2 is an example diagram of IGBT module failure.
Fig. 3 is a diagram of a wavelet neural network structure.
Detailed Description
The invention is described in further detail below with reference to the drawings and specific embodiments.
The disclosed IGBT module fault prediction method based on a wavelet neural network improved by a genetic algorithm is shown in Fig. 1 and comprises the following steps:
Step 1: acquire the collector-emitter saturation voltage drop of the IGBT module.
The IGBT collector-emitter saturation voltage drop is used as the input parameter of the fault prediction model. The saturation voltage drop of the IGBT module during operation is collected online by a data acquisition card installed near the traction converter of the high-speed train, and the data are sent to an upper (host) computer and stored.
Step 2: preprocess the data.
A 5% increase of the saturation voltage drop is taken as the module-failure threshold to judge whether the acquired data contain fault data; if not, the acquisition result is discarded. For a complete data set containing fault data, the collector-emitter saturation voltage drop data obtained from the acquisition card are very large in volume and may contain outliers, so outlier removal, data compression and dimension reduction are required.
Outliers are first removed with the Pauta (3σ) criterion. For the acquired data x_1, x_2, …, x_i, …, x_n, the arithmetic mean is calculated as
\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i

together with the residuals

v_i = x_i - \bar{x}

and the root-mean-square deviation is then obtained with the Bessel formula

\sigma = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n} v_i^2}

If |v_i| > 3\sigma, the point x_i is discarded; if |v_i| \le 3\sigma, the point x_i is kept.
Next, the data with outliers removed are compressed: the mean value of the data from a single operating run is taken as the characteristic value of that run.
Finally, the data are normalized so that the data samples are mapped into the range [c, d] according to a unified standard:

y = c + \frac{(d - c)(x - x_{\min})}{x_{\max} - x_{\min}}  (1)

where x_{\min} is the smallest sample in the sample set and x_{\max} is the largest.
Step 3: import the data into the prediction model and divide them.
The preprocessed IGBT saturation voltage drop data set is uploaded to the fault prediction model and divided into a training set (70%), a verification set (15%) and a prediction set (15%).
Step 4: basic training of the model.
To exploit the time-series character of the module voltage drop data, every 10 consecutive data points are used as one group to predict the 11th point. The fault prediction model is set up with 10 input-layer neurons, 1 output-layer neuron and 10 hidden-layer neurons; the number of training iterations is 1000 and the training precision (error goal) is 0.1. The training set is then trained with the wavelet neural network WNN. The wavelet neural network used is shown in Fig. 3, where X_1, X_2, …, X_n are the input parameters of the WNN and Y_1, Y_2, …, Y_m are the prediction outputs. The wavelet basis function used is the Morlet wavelet:

\psi(x) = \cos(1.75x)\, e^{-x^{2}/2}  (2)

The output expression is:

y_i = \eta\left(\sum_{j=1}^{n} \omega_{ij}\, \psi\!\left(\frac{\sum_{k=1}^{N} \omega_{jk}\, x_k(t) - b_j}{a_j}\right)\right), \quad i = 1, 2, \ldots, m  (3)

where k = 1, 2, …, N numbers the nodes of the input layer, i = 1, 2, …, m numbers the nodes of the output layer, j = 1, 2, …, n numbers the nodes of the hidden layer, ω_{ij} is the weight connecting output-layer node i and hidden-layer node j, a_j and b_j are the wavelet dilation and translation coefficients, ω_{jk} is the weight connecting hidden-layer node j with input-layer node k, x_k(t) is the t-th sample point of the k-th input of the input layer, y_i is the i-th output value of the output layer, η is the Sigmoid function and μ is the learning rate.
The error function is:

E = \frac{1}{2}\sum_{p=1}^{P}\sum_{i=1}^{m}\left(d_i^{\,p} - y_i^{\,p}\right)^2  (4)

where p = 1, 2, …, P indexes the input sample patterns, d_i^p is the i-th desired output for the p-th pattern and y_i^p is the i-th actual network output for the p-th pattern.
The network weights and the wavelet dilation and translation coefficients are corrected with a gradient method; the correction rules are:

\Delta\omega_{ij}^{(t+1)} = -\mu\,\frac{\partial E}{\partial \omega_{ij}^{(t)}} + \delta\,\Delta\omega_{ij}^{(t)}  (5)

\Delta\omega_{jk}^{(t+1)} = -\mu\,\frac{\partial E}{\partial \omega_{jk}^{(t)}} + \delta\,\Delta\omega_{jk}^{(t)}  (6)

\Delta a_{j}^{(t+1)} = -\mu\,\frac{\partial E}{\partial a_{j}^{(t)}} + \delta\,\Delta a_{j}^{(t)}  (7)

\Delta b_{j}^{(t+1)} = -\mu\,\frac{\partial E}{\partial b_{j}^{(t)}} + \delta\,\Delta b_{j}^{(t)}  (8)

where a_j and b_j are the dilation and translation coefficients of the j-th hidden-layer node, δ is an introduced momentum coefficient, and Δ denotes the increment of the corresponding parameter.
Step 5: optimize and improve the model.
Because the gradient correction method used by the WNN in step 4 evolves slowly and tends to fall into local optima, the prediction accuracy of the model is insufficient. For this reason, the wavelet neural network WNN is optimized with the genetic algorithm GA. The error obtained from WNN training is optimized with the three operators of selection, crossover and mutation, carried out respectively by the roulette-wheel selection of equation (9), the single-point crossover of equation (10) and the basic-bit mutation of equations (11)-(12).

p_i = \frac{f_i}{\sum_{j=1}^{N} f_j}  (9)

\begin{cases} a_{mj} = a_{mj}(1 - b) + a_{nj}\, b \\ a_{nj} = a_{nj}(1 - b) + a_{mj}\, b \end{cases}  (10)

a_{ij} = \begin{cases} a_{ij} + (a_{ij} - a_{\max})\, f(g), & r \ge 0.5 \\ a_{ij} + (a_{\min} - a_{ij})\, f(g), & r < 0.5 \end{cases}  (11)

f(g) = r_2\,(1 - g/G_{\max})  (12)

In equation (9), p_i is the probability that the i-th individual is selected and f_i is its fitness value; in equation (10), the individuals a_m and a_n cross at position j and b is a random number in [0, 1]; in equations (11) and (12), a_{ij} is the j-th gene of the i-th individual, a_{\max} and a_{\min} are the upper and lower bounds of a_{ij}, r is a random number in [0, 1], f(g) is the mutation function, r_2 is a random number in [0, 1], g is the current iteration number and G_{\max} is the maximum number of iterations.
After optimization and improvement using genetic algorithm GA, the model is retrained.
Step 6: model verification and prediction.
With the optimized training result of step 5, the 15% verification set is substituted into the model for verification; after the accuracy on the verification set meets the set threshold, the 15% prediction set is substituted into the model for fault prediction.
Step 7: evaluate the accuracy of the model. The model prediction results are evaluated with three evaluation indexes: the mean absolute error (MAE), the root mean square error (RMSE) and the mean absolute percentage error (MAPE). The smaller the values of MAE, RMSE and MAPE, the higher the prediction accuracy of the model.
In summary, at the system level the invention provides an IGBT module fault prediction method that can be applied and deployed at scale; it effectively improves fault prediction precision and enables deployment on embedded equipment with online real-time prediction. At the model level, the invention provides a GA-improved WNN prediction algorithm that optimizes the network training process through the three operators of selection, crossover and mutation, with a clear principle and simple, convenient operation.
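To show how the steps of this embodiment fit together, a hypothetical end-to-end sketch is given below; it reuses the helper functions sketched earlier in this text (preprocessing, splitting, the WNN class, train_step, the GA operators and evaluate), and the window construction, loop structure and reduced iteration count are assumptions rather than prescriptions of the patent:

```python
def make_windows(series, width=10):
    """Group every `width` consecutive points to predict the following point (10 -> 1 as in step 4)."""
    x = np.array([series[i:i + width] for i in range(len(series) - width)])
    y = np.array([series[i + width] for i in range(len(series) - width)])
    return x, y

# Steps 1-3: preprocessed features (see the preprocessing sketch) split 70/15/15
train_set, val_set, pred_set = split_dataset(features)

# Step 4: basic WNN training with a 10-10-1 topology
net, state = WNN(), None
x_tr, y_tr = make_windows(train_set)
for _ in range(200):                     # 1000 iteration periods in the patent; reduced here
    for x, d in zip(x_tr, y_tr):
        state = train_step(net, x, d, state=state)

# Step 5: GA refinement of the flattened network parameters would follow here (see the GA sketch),
#         e.g. using 1 / (validation RMSE) as the fitness of each candidate parameter vector.

# Steps 6-7: verification on the 15% verification set and evaluation with MAE / RMSE / MAPE
x_va, y_va = make_windows(val_set)
y_hat = np.array([net.forward(x)[1][0] for x in x_va])
print(evaluate(y_va, y_hat))
```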

Claims (2)

1. A method for predicting faults of an IGBT module based on a wavelet neural network improved by a genetic algorithm, characterized by comprising the following steps:
step 1: obtaining the collector-emitter saturation voltage drop of the IGBT module;
the IGBT collector-emitter saturation voltage drop is used as the input parameter of a fault prediction model; the saturation voltage drop of the IGBT module during operation is collected online by a data acquisition card installed near the traction converter of a high-speed train, and the data are sent to an upper (host) computer and stored;
step 2: preprocessing the data;
a 5% increase of the saturation voltage drop is taken as the module-failure threshold to judge whether the acquired data contain fault data; if not, the acquisition result is discarded; for a complete data set containing fault data, outliers are removed and the data are compressed and dimension-reduced;
outliers are first removed with the Pauta (3σ) criterion: for the acquired data x_1, x_2, …, x_i, …, x_n, the arithmetic mean is calculated as
\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i

together with the residuals

v_i = x_i - \bar{x}

and the root-mean-square deviation is then obtained with the Bessel formula

\sigma = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n} v_i^2};

if |v_i| > 3\sigma, x_i is discarded; if |v_i| \le 3\sigma, x_i is kept;
then the data with outliers removed are compressed, the mean value of the data from a single operating run being taken as the characteristic value of that run;
finally the data are normalized so that the data samples are mapped into the range [c, d] according to a unified standard:

y = c + \frac{(d - c)(x - x_{\min})}{x_{\max} - x_{\min}}  (1)

where x_{\min} is the smallest sample in the sample set and x_{\max} is the largest;
step 3: importing the data into the prediction model and dividing them;
the preprocessed IGBT saturation voltage drop data set is uploaded to the fault prediction model and divided into a training set (70%), a verification set (15%) and a prediction set (15%);
step 4: basic training of the model;
to exploit the time-series character of the module voltage drop data, every 10 consecutive data points are used as one group to predict the 11th point; the fault prediction model is set up with 10 input-layer neurons, 1 output-layer neuron and 10 hidden-layer neurons, the number of training iterations being 1000 and the training precision (error goal) 0.1; the training set is trained with the wavelet neural network WNN; the wavelet basis function used is the Morlet wavelet:

\psi(x) = \cos(1.75x)\, e^{-x^{2}/2}  (2)

the output expression is:

y_i = \eta\left(\sum_{j=1}^{n} \omega_{ij}\, \psi\!\left(\frac{\sum_{k=1}^{N} \omega_{jk}\, x_k(t) - b_j}{a_j}\right)\right), \quad i = 1, 2, \ldots, m  (3)

where k = 1, 2, …, N numbers the nodes of the input layer, i = 1, 2, …, m numbers the nodes of the output layer, j = 1, 2, …, n numbers the nodes of the hidden layer, ω_{ij} is the weight connecting output-layer node i and hidden-layer node j, a_j and b_j are the wavelet dilation and translation coefficients, ω_{jk} is the weight connecting hidden-layer node j with input-layer node k, x_k(t) is the t-th sample point of the k-th input of the input layer, y_i is the i-th output value of the output layer, η is the Sigmoid function and μ is the learning rate;
the error function is:

E = \frac{1}{2}\sum_{p=1}^{P}\sum_{i=1}^{m}\left(d_i^{\,p} - y_i^{\,p}\right)^2  (4)

where p = 1, 2, …, P indexes the input sample patterns, d_i^p is the i-th desired output for the p-th pattern and y_i^p is the i-th actual network output for the p-th pattern;
the network weights and the wavelet dilation and translation coefficients are corrected with a gradient method, the correction rules being:

\Delta\omega_{ij}^{(t+1)} = -\mu\,\frac{\partial E}{\partial \omega_{ij}^{(t)}} + \delta\,\Delta\omega_{ij}^{(t)}  (5)

\Delta\omega_{jk}^{(t+1)} = -\mu\,\frac{\partial E}{\partial \omega_{jk}^{(t)}} + \delta\,\Delta\omega_{jk}^{(t)}  (6)

\Delta a_{j}^{(t+1)} = -\mu\,\frac{\partial E}{\partial a_{j}^{(t)}} + \delta\,\Delta a_{j}^{(t)}  (7)

\Delta b_{j}^{(t+1)} = -\mu\,\frac{\partial E}{\partial b_{j}^{(t)}} + \delta\,\Delta b_{j}^{(t)}  (8)

where a_j and b_j are the dilation and translation coefficients of the j-th hidden-layer node, δ is an introduced momentum coefficient, and Δ denotes the increment of the corresponding parameter;
step 5: optimizing and improving the model;
the wavelet neural network WNN is optimized with the genetic algorithm GA; the error obtained from WNN training is optimized with the three operators of selection, crossover and mutation, carried out respectively by the roulette-wheel selection of equation (9), the single-point crossover of equation (10) and the basic-bit mutation of equations (11)-(12);

p_i = \frac{f_i}{\sum_{j=1}^{N} f_j}  (9)

\begin{cases} a_{mj} = a_{mj}(1 - b) + a_{nj}\, b \\ a_{nj} = a_{nj}(1 - b) + a_{mj}\, b \end{cases}  (10)

a_{ij} = \begin{cases} a_{ij} + (a_{ij} - a_{\max})\, f(g), & r \ge 0.5 \\ a_{ij} + (a_{\min} - a_{ij})\, f(g), & r < 0.5 \end{cases}  (11)

f(g) = r_2\,(1 - g/G_{\max})  (12)

in equation (9), p_i is the probability that the i-th individual is selected and f_i is its fitness value; in equation (10), the individuals a_m and a_n cross at position j and b is a random number in [0, 1]; in equations (11) and (12), a_{ij} is the j-th gene of the i-th individual, a_{\max} and a_{\min} are the upper and lower bounds of a_{ij}, r is a random number in [0, 1], f(g) is the mutation function, r_2 is a random number in [0, 1], g is the current iteration number and G_{\max} is the maximum number of iterations;
after the optimization and improvement with the genetic algorithm GA, the model is retrained;
step 6: model verification and prediction;
with the optimized training result of step 5, the 15% verification set is substituted into the model for verification, and after the accuracy on the verification set meets the set threshold, the 15% prediction set is substituted into the model for fault prediction.
2. The IGBT module fault prediction method based on a wavelet neural network improved by a genetic algorithm according to claim 1, characterized by further comprising step 7: evaluating the accuracy of the model, wherein the model prediction result is evaluated with three evaluation indexes: the mean absolute error MAE, the root mean square error RMSE and the mean absolute percentage error MAPE.
CN202210484151.3A 2022-05-06 2022-05-06 IGBT module fault prediction method based on GA improved WNN Active CN114757300B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210484151.3A CN114757300B (en) 2022-05-06 2022-05-06 IGBT module fault prediction method based on GA improved WNN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210484151.3A CN114757300B (en) 2022-05-06 2022-05-06 IGBT module fault prediction method based on GA improved WNN

Publications (2)

Publication Number Publication Date
CN114757300A (en) 2022-07-15
CN114757300B (en) 2023-04-28

Family

ID=82333212

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210484151.3A Active CN114757300B (en) 2022-05-06 2022-05-06 IGBT module fault prediction method based on GA improved WNN

Country Status (1)

Country Link
CN (1) CN114757300B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12073668B1 (en) 2023-06-08 2024-08-27 Mercedes-Benz Group AG Machine-learned models for electric vehicle component health monitoring

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190242936A1 (en) * 2018-02-05 2019-08-08 Wuhan University Fault diagnosis method for series hybrid electric vehicle ac/dc converter
CN110133538A (en) * 2019-05-16 2019-08-16 合肥工业大学 A kind of ANPC three-level inverter open-circuit fault diagnostic method and experiment porch
CN112746934A (en) * 2020-12-31 2021-05-04 江苏国科智能电气有限公司 Method for diagnosing fan fault through self-association neural network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190242936A1 (en) * 2018-02-05 2019-08-08 Wuhan University Fault diagnosis method for series hybrid electric vehicle ac/dc converter
CN110133538A (en) * 2019-05-16 2019-08-16 合肥工业大学 A kind of ANPC three-level inverter open-circuit fault diagnostic method and experiment porch
CN112746934A (en) * 2020-12-31 2021-05-04 江苏国科智能电气有限公司 Method for diagnosing fan fault through self-association neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FEIMING LIU et al.: "A Constant Frequency ZVS Modulation Scheme for Four-Switch Buck–Boost Converter With Wide Input and Output Voltage Ranges and Reduced Inductor Current"
HAILIN HU et al.: "Open-circuit fault diagnosis of NPC inverter IGBT based on independent component analysis and neural network"
黄柯勋 et al.: "基于改进小波神经网络的IGBT时间序列预测算法研究" (Research on an IGBT time-series prediction algorithm based on an improved wavelet neural network)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12073668B1 (en) 2023-06-08 2024-08-27 Mercedes-Benz Group AG Machine-learned models for electric vehicle component health monitoring

Also Published As

Publication number Publication date
CN114757300B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
US20200285900A1 (en) Power electronic circuit fault diagnosis method based on optimizing deep belief network
CN111860982A (en) Wind power plant short-term wind power prediction method based on VMD-FCM-GRU
CN111160520A (en) BP neural network wind speed prediction method based on genetic algorithm optimization
CN110969194B (en) Cable early fault positioning method based on improved convolutional neural network
CN110070228B (en) BP neural network wind speed prediction method for neuron branch evolution
CN109614669B (en) Network-level bridge structure performance evaluation and prediction method
CN111815056A (en) Aircraft external field aircraft fuel system fault prediction method based on flight parameter data
CN111784061B (en) Training method, device and equipment for power grid engineering cost prediction model
CN114662793B (en) Business process remaining time prediction method and system based on interpretable hierarchical model
CN108053052A (en) A kind of oil truck oil and gas leakage speed intelligent monitor system
CN114757300B (en) IGBT module fault prediction method based on GA improved WNN
CN113743592A (en) Telemetry data anomaly detection method based on GAN
CN104732067A (en) Industrial process modeling forecasting method oriented at flow object
CN115965177A (en) Improved autoregressive error compensation wind power prediction method based on attention mechanism
CN111553400A (en) Accurate diagnosis method for vibration fault of wind generating set
CN111506868A (en) Ultrashort-term wind speed prediction method based on HHT weight optimization
CN113222250B (en) High-power laser device output waveform prediction method based on convolutional neural network
CN115964937A (en) IGBT life prediction method based on GA-Elman-LSTM combined model
CN116304663B (en) Train control vehicle-mounted equipment health state management device based on unbalanced sample enhancement
CN116778709A (en) Prediction method for traffic flow speed of convolutional network based on attention space-time diagram
CN116703819A (en) Rail wagon steel floor damage detection method based on knowledge distillation
CN110610203A (en) Electric energy quality disturbance classification method based on DWT and extreme learning machine
CN113779864B (en) Method and device for constructing running design area for automatic driving automobile
CN114372640A (en) Wind power prediction method based on fluctuation sequence classification correction
CN115056829A (en) Train motion state estimation method for multi-vehicle type continuous learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant