CN114757300B - IGBT module fault prediction method based on GA improved WNN

IGBT module fault prediction method based on GA improved WNN

Info

Publication number
CN114757300B
Authority
CN
China
Prior art keywords
data
model
prediction
training
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210484151.3A
Other languages
Chinese (zh)
Other versions
CN114757300A (en)
Inventor
吴松荣
黄柯勋
杨平
周懿
张浩然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University filed Critical Southwest Jiaotong University
Priority to CN202210484151.3A priority Critical patent/CN114757300B/en
Publication of CN114757300A publication Critical patent/CN114757300A/en
Application granted granted Critical
Publication of CN114757300B publication Critical patent/CN114757300B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F18/10: Pattern recognition; pre-processing; data cleansing
    • G06F18/21: Pattern recognition; design or setup of recognition systems or techniques; extraction of features in feature space; blind source separation
    • G06F18/241: Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N3/04: Neural networks; architecture, e.g. interconnection topology
    • G06N3/086: Neural networks; learning methods using evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G06N3/126: Computing arrangements based on genetic models; evolutionary algorithms, e.g. genetic algorithms or genetic programming

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Physiology (AREA)
  • Genetics & Genomics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses an IGBT module fault prediction method based on a wavelet neural network (WNN) improved by a genetic algorithm (GA), which comprises the following steps: acquiring emitter saturation voltage drop data of an IGBT module and performing outlier removal, data compression and dimension reduction; importing the data into a prediction model and dividing them into a training set, a verification set and a prediction set; training the model on the training set with the wavelet neural network WNN; optimizing the wavelet neural network WNN with the genetic algorithm GA and retraining the model; and substituting the verification set into the model for verification and the prediction set into the model for fault prediction. The invention effectively improves the accuracy and precision of fault prediction; meanwhile, on the basis of guaranteeing the fault prediction accuracy, the model structure is simplified, the detection accuracy is improved, and the feasibility of application deployment of the model is greatly improved.

Description

IGBT module fault prediction method based on GA improved WNN
Technical Field
The invention belongs to the field of rapid fault prediction of IGBT modules, and particularly relates to an IGBT module fault prediction method based on a wavelet neural network (Wavelet Neural Network, WNN) improved by a genetic algorithm (Genetic Algorithm, GA).
Background
The IGBT module is one of the core components of the high-speed train traction converter and, as the main switching element of the inverter, its reliability determines whether the whole converter system can operate stably. In application, however, the IGBT module bears severe load impacts, and defect damage such as bond-wire lift-off and solder-layer fatigue (shown in fig. 2) easily occurs. This damage accumulates over time until the IGBT module fails, the converter breaks down, and the safe operation of the train is seriously challenged. Accurately predicting IGBT faults therefore allows high-speed trains to be overhauled and maintained in a planned manner and effectively improves the intelligent operation and maintenance of the rail transit system.
Current technologies for IGBT module fault prediction fall into two categories: prediction based on reliability models and prediction based on data-driven methods. Fault prediction based on reliability models (such as the Norris-Landzberg analytical model) achieves high prediction accuracy, but it requires knowledge of the physical properties of the module, is easily affected by the module's operating conditions, is difficult to model, and has limited applicability. Some data-driven technologies (such as time-delay neural networks) use overly complex algorithms, place high demands on the computing performance of the equipment, increase the cost of use, and are difficult to deploy on a large scale.
Disclosure of Invention
Aiming at the defects of the prior art, namely the complex modeling, high cost of use and difficult deployment of traditional models, the invention provides an IGBT module fault prediction method based on a wavelet neural network (Wavelet Neural Network, WNN) improved by a genetic algorithm (Genetic Algorithm, GA).
The invention discloses an IGBT module fault prediction method based on a wavelet neural network improved by a genetic algorithm, which comprises the following steps:
step 1: and acquiring the emitter saturation voltage drop of the IGBT module.
The IGBT module emitter saturation voltage drop is used as the input parameter of the fault prediction model. The emitter saturation voltage drop of the IGBT module working on line is collected by a data acquisition card arranged near the traction converter of the high-speed train, and the data are sent to an upper computer for storage.
Step 2: and (5) preprocessing data.
An increase of 5% in the saturation voltage drop is taken as the module failure threshold to judge whether fault data exist in the acquired data; if not, the acquisition result is discarded. For a complete data set containing fault data, outliers are removed, the data are compressed, and dimension reduction is carried out.
Firstly, outliers are removed with the Pauta (3σ) criterion. For the acquired data $x_1, x_2, \ldots, x_i, \ldots, x_n$, the arithmetic mean

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$$

and the residual of each point

$$v_i = x_i - \bar{x}$$

are calculated, and the root mean square deviation is then obtained with the Bessel formula

$$\sigma = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n} v_i^{2}}$$

If $|v_i| > 3\sigma$, $x_i$ is discarded; if $|v_i| \le 3\sigma$, $x_i$ is retained.
Then, the data with outliers removed are compressed, and the average value of the data from a single operation cycle is taken as the characteristic value of those data.
Finally, data normalization is carried out to map the data samples into the range $[c, d]$ according to a unified standard:

$$x' = c + \frac{(d - c)\,(x - x_{\min})}{x_{\max} - x_{\min}}$$

wherein $x_{\min}$ is the smallest sample in the sample set and $x_{\max}$ is the largest sample.
Step 3: the data is imported into the predictive model and partitioned.
The preprocessed IGBT saturation voltage drop data set is uploaded to the fault prediction model and divided into a 70% training set, a 15% verification set and a 15% prediction set.
Step 4: and (5) model basic training.
In order to exploit the time-series character of the module voltage drop data, a training and prediction scheme is adopted in which each group of 10 data points is used to predict the 11th. The fault prediction model is set with 10 input layer neurons, 1 output layer neuron and 10 hidden layer neurons; the number of iteration cycles is 1000 and the training accuracy target is 0.1. The training set is then trained with the wavelet neural network WNN. The wavelet basis function used is the Morlet wavelet basis function:
$$\psi(x) = \cos(1.75x)\,e^{-x^{2}/2}$$
the output expression is:
$$y_i = \eta\!\left(\sum_{j=1}^{n} \omega_{ij}\,\psi\!\left(\frac{\sum_{k=1}^{N} \omega_{jk}\,x_k(t) - b_j}{a_j}\right)\right)$$
wherein $k = 1, 2, \ldots, N$ indexes the nodes of the input layer, $i = 1, 2, \ldots, m$ indexes the nodes of the output layer, and $j = 1, 2, \ldots, n$ indexes the nodes of the hidden layer; $\omega_{ij}$ is the weight connecting output layer node $i$ and hidden layer node $j$; $a_j$ and $b_j$ are the wavelet dilation and translation coefficients; $\omega_{jk}$ is the weight connecting hidden layer node $j$ and input layer node $k$; $x_k(t)$ is the $t$-th sample point of the $k$-th input of the input layer; $y_i$ is the $i$-th output value of the output layer; $\eta$ is a Sigmoid function and $\mu$ is the learning rate.
The error function is:
$$E = \frac{1}{2}\sum_{p=1}^{P}\sum_{i=1}^{m}\left(\hat{y}_i^{\,p} - y_i^{\,p}\right)^{2}$$

where $p = 1, 2, \ldots, P$ indexes the input sample patterns, $\hat{y}_i^{\,p}$ is the $i$-th desired output for the $p$-th pattern, and $y_i^{\,p}$ is the $i$-th actual network output for the $p$-th pattern.
The network weights and the wavelet dilation and translation coefficients are corrected with a gradient correction method; the correction rules are as follows:

$$\Delta\omega_{ij}(t+1) = -\mu\,\frac{\partial E}{\partial \omega_{ij}} + \delta\,\Delta\omega_{ij}(t)$$

$$\Delta\omega_{jk}(t+1) = -\mu\,\frac{\partial E}{\partial \omega_{jk}} + \delta\,\Delta\omega_{jk}(t)$$

$$\Delta a_{j}(t+1) = -\mu\,\frac{\partial E}{\partial a_{j}} + \delta\,\Delta a_{j}(t)$$

$$\Delta b_{j}(t+1) = -\mu\,\frac{\partial E}{\partial b_{j}} + \delta\,\Delta b_{j}(t)$$

wherein $a_j$ and $b_j$ are respectively the dilation and translation coefficients of the $j$-th hidden layer node, $\delta$ is the introduced momentum coefficient, and $\Delta$ denotes the increment of the corresponding parameter.
Step 5: model optimization improvement.
The wavelet neural network WNN is optimized with the genetic algorithm GA. The error obtained from WNN training is optimized with the three operators of selection, crossover and mutation; the three operations are completed by the roulette-wheel selection of formula (9), the single-point crossover of formula (10) and the basic bit mutation of formulas (11)-(12).
$$p_i = \frac{f_i}{\sum_{j=1}^{N} f_j} \qquad (9)$$

$$\begin{cases} a_{mj} = a_{mj}\,(1-b) + a_{nj}\,b \\ a_{nj} = a_{nj}\,(1-b) + a_{mj}\,b \end{cases} \qquad (10)$$

$$a_{ij} = \begin{cases} a_{ij} + \left(a_{ij} - a_{\max}\right) f(g), & r \ge 0.5 \\ a_{ij} + \left(a_{\min} - a_{ij}\right) f(g), & r < 0.5 \end{cases} \qquad (11)$$

$$f(g) = r_2\left(1 - g/G_{\max}\right) \qquad (12)$$

In formula (9), $p_i$ is the probability that the $i$-th individual is selected and $f_i$ is its fitness value, the sum running over the $N$ individuals of the population; in formula (10), individuals $a_m$ and $a_n$ cross at position $j$, and $b$ is a random number in $[0,1]$; in formulas (11) and (12), $a_{ij}$ is the $j$-th gene of the $i$-th individual, $a_{\max}$ is the upper bound of $a_{ij}$, $a_{\min}$ is the lower bound of $a_{ij}$, $r$ is a random number in $[0,1]$, $f(g)$ is the mutation function, $r_2$ is a random number in $[0,1]$, $g$ is the current iteration number, and $G_{\max}$ is the maximum number of iterations.
After optimization and improvement with the genetic algorithm GA, the model is retrained.
Step 6: model verification and prediction.
Based on the training result after the optimization in step 5, the 15% verification set is substituted into the model for verification; after the accuracy of the verification result meets the set threshold, the 15% prediction set is substituted into the model for fault prediction.
The method further comprises step 7: model accuracy evaluation. The model prediction result is evaluated with three evaluation indexes: the mean absolute error MAE, the root mean square error RMSE and the mean absolute percentage error MAPE.
The beneficial technical effects of the invention are as follows:
1. In the IGBT module fault prediction method based on the GA-improved WNN, wavelet transformation performs multi-scale analysis of the IGBT fault data, decomposing them into detail sequences representing the high-frequency parts and approximation sequences representing the low-frequency parts. The method keeps the advantages of forward signal propagation and error back-propagation of the BP neural network while gaining the advantage of wavelet analysis in handling nonlinear time-series problems. On the basis of guaranteeing the fault prediction accuracy, the model structure is simplified, the detection accuracy is improved, and the feasibility of application deployment of the model is greatly improved.
2. The invention optimizes the network training process with the three GA operators of selection, crossover and mutation, which effectively avoids the large prediction errors caused by the tendency of the original training method to fall into local optima. The principle is clear and the operation is simple and fast; the model can converge to a globally optimal individual, and the accuracy and precision of fault prediction are effectively improved.
Drawings
Fig. 1 is a flowchart of the IGBT module fault prediction method based on the wavelet neural network improved by a genetic algorithm according to the present invention.
Fig. 2 is a diagram illustrating an IGBT module failure.
Fig. 3 is a structural diagram of a wavelet neural network.
Detailed Description
The invention will be described in further detail with reference to the accompanying drawings and the detailed description.
The invention discloses an IGBT module fault prediction method based on a wavelet neural network improved by a genetic algorithm, which, as shown in fig. 1, comprises the following steps:
step 1: and acquiring the emitter saturation voltage drop of the IGBT module.
The IGBT module emitter saturation voltage drop is used as the input parameter of the fault prediction model. The emitter saturation voltage drop of the IGBT module working on line is collected by a data acquisition card arranged near the traction converter of the high-speed train, and the data are sent to an upper computer for storage.
Step 2: and (5) preprocessing data.
An increase of 5% in the saturation voltage drop is taken as the module failure threshold to judge whether fault data exist in the acquired data; if not, the acquisition result is discarded. For a complete data set containing fault data, the quantity of emitter saturation voltage drop data acquired from the acquisition card is huge and outliers may exist, so the outliers are removed, the data are compressed, and dimension reduction is carried out.
Firstly, outliers are removed with the Pauta (3σ) criterion. For the acquired data $x_1, x_2, \ldots, x_i, \ldots, x_n$, the arithmetic mean

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$$

and the residual of each point

$$v_i = x_i - \bar{x}$$

are calculated, and the root mean square deviation is then obtained with the Bessel formula

$$\sigma = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n} v_i^{2}}$$

If $|v_i| > 3\sigma$, $x_i$ is discarded; if $|v_i| \le 3\sigma$, $x_i$ is retained.
Then, the data with outliers removed are compressed, and the average value of the data from a single operation cycle is taken as the characteristic value of those data.
Finally, data normalization is carried out to map the data samples into the range $[c, d]$ according to a unified standard:

$$x' = c + \frac{(d - c)\,(x - x_{\min})}{x_{\max} - x_{\min}}$$

wherein $x_{\min}$ is the smallest sample in the sample set and $x_{\max}$ is the largest sample.
Step 3: the data is imported into the predictive model and partitioned.
The preprocessed IGBT saturation voltage drop data set is uploaded to the fault prediction model and divided into a 70% training set, a 15% verification set and a 15% prediction set.
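A minimal sketch of the windowing described in step 4 (10 points in, 11th point out) and of the 70%/15%/15% partition might look as follows. The split here is chronological, which is one reasonable choice; the patent does not state how the sets are ordered, and the helper names are ours.

```python
import numpy as np

def make_windows(series, window=10):
    """Build (input, target) pairs: each window of 10 points predicts the 11th."""
    X, y = [], []
    for start in range(len(series) - window):
        X.append(series[start:start + window])
        y.append(series[start + window])
    return np.array(X), np.array(y)

def split_70_15_15(X, y):
    """Split the samples into 70% training, 15% verification and 15% prediction sets."""
    n = len(X)
    i_train, i_val = int(0.70 * n), int(0.85 * n)
    return ((X[:i_train], y[:i_train]),
            (X[i_train:i_val], y[i_train:i_val]),
            (X[i_val:], y[i_val:]))

# Usage with the preprocessed saturation voltage drop sequence `scaled` from the previous sketch:
# X, y = make_windows(scaled, window=10)
# (X_tr, y_tr), (X_val, y_val), (X_pred, y_pred) = split_70_15_15(X, y)
```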
Step 4: and (5) model basic training.
In order to exploit the time-series character of the module voltage drop data, a training and prediction scheme is adopted in which each group of 10 data points is used to predict the 11th. The fault prediction model is set with 10 input layer neurons, 1 output layer neuron and 10 hidden layer neurons; the number of iteration cycles is 1000 and the training accuracy target is 0.1. The training set is then trained with the wavelet neural network WNN. The wavelet neural network used is shown in fig. 3, where $X_1, X_2, \ldots, X_n$ are the input parameters of the WNN and $Y_1, Y_2, \ldots, Y_m$ are the predicted outputs. The wavelet basis function used is the Morlet wavelet basis function:
$$\psi(x) = \cos(1.75x)\,e^{-x^{2}/2}$$
the output expression is:
$$y_i = \eta\!\left(\sum_{j=1}^{n} \omega_{ij}\,\psi\!\left(\frac{\sum_{k=1}^{N} \omega_{jk}\,x_k(t) - b_j}{a_j}\right)\right)$$
wherein $k = 1, 2, \ldots, N$ indexes the nodes of the input layer, $i = 1, 2, \ldots, m$ indexes the nodes of the output layer, and $j = 1, 2, \ldots, n$ indexes the nodes of the hidden layer; $\omega_{ij}$ is the weight connecting output layer node $i$ and hidden layer node $j$; $a_j$ and $b_j$ are the wavelet dilation and translation coefficients; $\omega_{jk}$ is the weight connecting hidden layer node $j$ and input layer node $k$; $x_k(t)$ is the $t$-th sample point of the $k$-th input of the input layer; $y_i$ is the $i$-th output value of the output layer; $\eta$ is a Sigmoid function and $\mu$ is the learning rate.
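To make the 10-10-1 structure concrete, the sketch below implements a Morlet hidden layer and the forward pass of the output expression reconstructed above. It is a minimal sketch under stated assumptions: the placement of the Sigmoid η on the output and the random weight initialisation are our reading, not a verbatim reproduction of the original equation images.

```python
import numpy as np

def morlet(x):
    """Morlet wavelet basis: cos(1.75 x) * exp(-x^2 / 2)."""
    return np.cos(1.75 * x) * np.exp(-x ** 2 / 2.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class WaveletNN:
    """Minimal 10-10-1 wavelet neural network (forward pass only)."""

    def __init__(self, n_in=10, n_hidden=10, n_out=1, seed=0):
        rng = np.random.default_rng(seed)
        self.w_jk = rng.normal(0.0, 0.1, (n_hidden, n_in))   # input-to-hidden weights
        self.w_ij = rng.normal(0.0, 0.1, (n_out, n_hidden))  # hidden-to-output weights
        self.a = np.ones(n_hidden)     # wavelet dilation coefficients a_j
        self.b = np.zeros(n_hidden)    # wavelet translation coefficients b_j

    def forward(self, x):
        """x: array of shape (n_in,); returns the network output array of shape (n_out,)."""
        net_hidden = self.w_jk @ x                      # weighted sums entering the hidden nodes
        h = morlet((net_hidden - self.b) / self.a)      # wavelet activations
        return sigmoid(self.w_ij @ h)                   # output y_i

# Usage: predict from one window of 10 normalized voltage drop samples.
# wnn = WaveletNN()
# y_hat = wnn.forward(X_tr[0])
```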
The error function is:
$$E = \frac{1}{2}\sum_{p=1}^{P}\sum_{i=1}^{m}\left(\hat{y}_i^{\,p} - y_i^{\,p}\right)^{2}$$

where $p = 1, 2, \ldots, P$ indexes the input sample patterns, $\hat{y}_i^{\,p}$ is the $i$-th desired output for the $p$-th pattern, and $y_i^{\,p}$ is the $i$-th actual network output for the $p$-th pattern.
The network weights and the wavelet dilation and translation coefficients are corrected with a gradient correction method; the correction rules are as follows:

$$\Delta\omega_{ij}(t+1) = -\mu\,\frac{\partial E}{\partial \omega_{ij}} + \delta\,\Delta\omega_{ij}(t)$$

$$\Delta\omega_{jk}(t+1) = -\mu\,\frac{\partial E}{\partial \omega_{jk}} + \delta\,\Delta\omega_{jk}(t)$$

$$\Delta a_{j}(t+1) = -\mu\,\frac{\partial E}{\partial a_{j}} + \delta\,\Delta a_{j}(t)$$

$$\Delta b_{j}(t+1) = -\mu\,\frac{\partial E}{\partial b_{j}} + \delta\,\Delta b_{j}(t)$$

wherein $a_j$ and $b_j$ are respectively the dilation and translation coefficients of the $j$-th hidden layer node, $\delta$ is the introduced momentum coefficient, and $\Delta$ denotes the increment of the corresponding parameter.
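The sketch below applies the momentum-style update rule to the four parameter groups of the WaveletNN sketch above. For brevity the gradients are estimated by central finite differences rather than the closed-form derivatives of the error function, which is purely an illustration of the update rule and not the procedure claimed in the patent.

```python
import numpy as np

def mse_error(net, X, y):
    """Error function: half the sum of squared output errors over all training patterns."""
    preds = np.array([net.forward(x)[0] for x in X])
    return 0.5 * np.sum((y - preds) ** 2)

def momentum_step(net, X, y, state, mu=0.05, delta=0.9, eps=1e-5):
    """One update of w_ij, w_jk, a and b following delta_theta(t+1) = -mu * dE/dtheta + delta * delta_theta(t)."""
    for name in ("w_ij", "w_jk", "a", "b"):
        param = getattr(net, name)
        grad = np.zeros_like(param)
        for idx in np.ndindex(param.shape):            # finite-difference gradient, element by element
            old = param[idx]
            param[idx] = old + eps
            e_plus = mse_error(net, X, y)
            param[idx] = old - eps
            e_minus = mse_error(net, X, y)
            param[idx] = old
            grad[idx] = (e_plus - e_minus) / (2.0 * eps)
        step = -mu * grad + delta * state.get(name, np.zeros_like(param))
        param += step                                  # in-place update of the network parameter
        state[name] = step
    return state, mse_error(net, X, y)

# Usage: a few training iterations on the training windows.
# state = {}
# for _ in range(20):
#     state, err = momentum_step(wnn, X_tr, y_tr, state)
```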
Step 5: model optimization improvement.
The gradient correction method used by the WNN in step 4 converges slowly and easily falls into local optima, so the model prediction accuracy is insufficient. For this reason, the genetic algorithm GA is used to optimize the wavelet neural network WNN. The error obtained from WNN training is optimized with the three operators of selection, crossover and mutation; the three operations are completed by the roulette-wheel selection of formula (9), the single-point crossover of formula (10) and the basic bit mutation of formulas (11)-(12).
$$p_i = \frac{f_i}{\sum_{j=1}^{N} f_j} \qquad (9)$$

$$\begin{cases} a_{mj} = a_{mj}\,(1-b) + a_{nj}\,b \\ a_{nj} = a_{nj}\,(1-b) + a_{mj}\,b \end{cases} \qquad (10)$$

$$a_{ij} = \begin{cases} a_{ij} + \left(a_{ij} - a_{\max}\right) f(g), & r \ge 0.5 \\ a_{ij} + \left(a_{\min} - a_{ij}\right) f(g), & r < 0.5 \end{cases} \qquad (11)$$

$$f(g) = r_2\left(1 - g/G_{\max}\right) \qquad (12)$$

In formula (9), $p_i$ is the probability that the $i$-th individual is selected and $f_i$ is its fitness value, the sum running over the $N$ individuals of the population; in formula (10), individuals $a_m$ and $a_n$ cross at position $j$, and $b$ is a random number in $[0,1]$; in formulas (11) and (12), $a_{ij}$ is the $j$-th gene of the $i$-th individual, $a_{\max}$ is the upper bound of $a_{ij}$, $a_{\min}$ is the lower bound of $a_{ij}$, $r$ is a random number in $[0,1]$, $f(g)$ is the mutation function, $r_2$ is a random number in $[0,1]$, $g$ is the current iteration number, and $G_{\max}$ is the maximum number of iterations.
After optimization and improvement with the genetic algorithm GA, the model is retrained.
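For illustration, the three operators of formulas (9) to (12) can be sketched as below. The chromosome encoding (a flat real-valued vector of WNN parameters) and the fitness definition are assumptions on our part, since the patent does not spell them out.

```python
import numpy as np

rng = np.random.default_rng(0)

def roulette_select(population, fitness):
    """Formula (9): sample individuals with probability proportional to their fitness.
    population: array of shape (pop_size, n_genes); fitness: positive array of shape (pop_size,)."""
    p = np.asarray(fitness, float) / np.sum(fitness)
    idx = rng.choice(len(population), size=len(population), p=p)
    return population[idx]

def single_point_cross(a_m, a_n):
    """Formula (10): arithmetic crossover of two individuals at a random gene position j."""
    j = rng.integers(len(a_m))
    b = rng.random()
    child_m, child_n = a_m.copy(), a_n.copy()
    child_m[j] = a_m[j] * (1 - b) + a_n[j] * b
    child_n[j] = a_n[j] * (1 - b) + a_m[j] * b
    return child_m, child_n

def basic_bit_mutate(ind, a_min, a_max, g, g_max):
    """Formulas (11)-(12): perturb one gene, with a step that shrinks as the iterations proceed."""
    j = rng.integers(len(ind))
    f_g = rng.random() * (1.0 - g / g_max)     # f(g) = r2 * (1 - g / G_max)
    out = ind.copy()
    if rng.random() >= 0.5:
        out[j] = ind[j] + (ind[j] - a_max) * f_g
    else:
        out[j] = ind[j] + (a_min - ind[j]) * f_g
    return out
```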
Step 6: model verification and prediction.
Based on the training result after the optimization in step 5, the 15% verification set is substituted into the model for verification; after the accuracy of the verification result meets the set threshold, the 15% prediction set is substituted into the model for fault prediction.
Step 7: model accuracy evaluation: the model predictions were evaluated using 3 evaluation indices, average absolute error (Mean absolute error, MAE), root mean square error (Root mean square error, RMSE) and absolute average percent error (Mean absolute percentage error, MAPE). MAE, RMSE, MAPE is smaller value to prove that the model prediction accuracy is higher.
At the system level, the invention provides an IGBT module fault prediction method that can be applied and deployed at scale, effectively improves the fault prediction accuracy, and can be deployed on embedded equipment for real-time on-line prediction. At the model level, the invention provides a GA-improved WNN prediction algorithm in which the network training process is optimized with the three operators of selection, crossover and mutation, so the method has the advantages of a simple principle and simple, convenient operation.
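Putting the sketches above together, an end-to-end run under the same assumptions (illustrative helper names, finite-difference training, GA step left as a placeholder) would look roughly like this; it is not the patented implementation itself.

```python
import numpy as np

def run_pipeline(raw_voltage_series):
    """Hypothetical driver stitching together the sketches from steps 2 to 7."""
    data = remove_outliers_3sigma(raw_voltage_series)           # step 2: outlier removal
    data = normalize_to_range(data, c=0.1, d=0.9)               # step 2: keep targets positive for MAPE
    X, y = make_windows(data, window=10)                        # step 4 windowing
    (X_tr, y_tr), (X_val, y_val), (X_pred, y_pred) = split_70_15_15(X, y)   # step 3

    net = WaveletNN()                                           # step 4: base WNN
    state = {}
    for _ in range(20):                                         # short gradient warm-up
        state, _ = momentum_step(net, X_tr, y_tr, state)

    # Step 5 (placeholder): a GA search over the flattened WNN parameters would go here,
    # using roulette_select / single_point_cross / basic_bit_mutate, followed by retraining.

    preds_val = [net.forward(x)[0] for x in X_val]              # step 6: verification
    mae, rmse, mape = evaluate(y_val, preds_val)                # step 7: accuracy evaluation
    return net, (mae, rmse, mape)
```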

Claims (2)

1. The IGBT module fault prediction method based on the wavelet neural network improved by the genetic algorithm is characterized by comprising the following steps of:
step 1: acquiring the saturation voltage drop of the emitter of the IGBT module;
the IGBT module collector-emitter saturation voltage drop is used as an input parameter of a fault prediction model, and the IGBT module collector-emitter saturation voltage drop is acquired on line and works through a data acquisition card arranged near a traction converter in a high-speed train, and data is sent to an upper computer and stored;
step 2: preprocessing data;
judging whether fault data exist in the acquired data by taking an increase of 5% in the saturation voltage drop as the module failure threshold, and if not, discarding the acquisition result; for a complete data set containing fault data, outliers need to be removed, the data compressed, and dimension reduction carried out;
firstly, outliers are removed with the Pauta (3σ) criterion: for the acquired data $x_1, x_2, \ldots, x_i, \ldots, x_n$, the arithmetic mean $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ and the residuals $v_i = x_i - \bar{x}$ are calculated, and the root mean square deviation is obtained with the Bessel formula $\sigma = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n} v_i^{2}}$; if $|v_i| > 3\sigma$, $x_i$ is discarded, and if $|v_i| \le 3\sigma$, $x_i$ is retained;
then, the data with outliers removed are compressed, and the average value of the data from a single operation cycle is taken as the characteristic value of those data;
finally, data normalization is carried out so that the data samples are mapped into the range $[c, d]$ according to a unified standard:

$$x' = c + \frac{(d - c)\,(x - x_{\min})}{x_{\max} - x_{\min}}$$

wherein $x_{\min}$ is the smallest sample in the sample set and $x_{\max}$ is the largest sample;
step 3: importing data into a prediction model and dividing;
uploading the preprocessed IGBT saturation voltage drop data set to the fault prediction model, and dividing it into a 70 percent training set, a 15 percent verification set and a 15 percent prediction set;
step 4: training a model foundation;
in order to exploit the time-series character of the module voltage drop data, a training and prediction scheme in which each group of 10 data points is used to predict the 11th is adopted; the fault prediction model is set with 10 input layer neurons, 1 output layer neuron and 10 hidden layer neurons, the number of iteration cycles is 1000, and the training accuracy target is 0.1; the training set is then trained with the wavelet neural network WNN; the wavelet basis function used is the Morlet wavelet basis function:
$$\psi(x) = \cos(1.75x)\,e^{-x^{2}/2}$$
the output expression is:
$$y_i = \eta\!\left(\sum_{j=1}^{n} \omega_{ij}\,\psi\!\left(\frac{\sum_{k=1}^{N} \omega_{jk}\,x_k(t) - b_j}{a_j}\right)\right)$$
wherein $k = 1, 2, \ldots, N$ indexes the nodes of the input layer, $i = 1, 2, \ldots, m$ indexes the nodes of the output layer, and $j = 1, 2, \ldots, n$ indexes the nodes of the hidden layer; $\omega_{ij}$ is the weight connecting output layer node $i$ and hidden layer node $j$; $a_j$ and $b_j$ are the wavelet dilation and translation coefficients; $\omega_{jk}$ is the weight connecting hidden layer node $j$ and input layer node $k$; $x_k(t)$ is the $t$-th sample point of the $k$-th input of the input layer; $y_i$ is the $i$-th output value of the output layer; $\eta$ is a Sigmoid function, and $\mu$ is the learning rate;
the error function is:
$$E = \frac{1}{2}\sum_{p=1}^{P}\sum_{i=1}^{m}\left(\hat{y}_i^{\,p} - y_i^{\,p}\right)^{2}$$

where $p = 1, 2, \ldots, P$ indexes the input sample patterns, $\hat{y}_i^{\,p}$ is the $i$-th desired output for the $p$-th pattern, and $y_i^{\,p}$ is the $i$-th actual network output for the $p$-th pattern;
correcting the network weight and the wavelet expansion translation coefficient by using a gradient correction method, wherein the correction rule is as follows:
$$\Delta\omega_{ij}(t+1) = -\mu\,\frac{\partial E}{\partial \omega_{ij}} + \delta\,\Delta\omega_{ij}(t)$$

$$\Delta\omega_{jk}(t+1) = -\mu\,\frac{\partial E}{\partial \omega_{jk}} + \delta\,\Delta\omega_{jk}(t)$$

$$\Delta a_{j}(t+1) = -\mu\,\frac{\partial E}{\partial a_{j}} + \delta\,\Delta a_{j}(t)$$

$$\Delta b_{j}(t+1) = -\mu\,\frac{\partial E}{\partial b_{j}} + \delta\,\Delta b_{j}(t)$$

wherein $a_j$ and $b_j$ are respectively the dilation and translation coefficients of the $j$-th hidden layer node, $\delta$ is the introduced momentum coefficient, and $\Delta$ denotes the increment of the corresponding parameter;
step 5: model optimization and improvement;
optimizing the wavelet neural network WNN with the genetic algorithm GA; the error obtained from WNN training is optimized with the three operators of selection, crossover and mutation, and the three operations are completed by the roulette-wheel selection of formula (9), the single-point crossover of formula (10) and the basic bit mutation of formulas (11)-(12);
$$p_i = \frac{f_i}{\sum_{j=1}^{N} f_j} \qquad (9)$$

$$\begin{cases} a_{mj} = a_{mj}\,(1-b) + a_{nj}\,b \\ a_{nj} = a_{nj}\,(1-b) + a_{mj}\,b \end{cases} \qquad (10)$$

$$a_{ij} = \begin{cases} a_{ij} + \left(a_{ij} - a_{\max}\right) f(g), & r \ge 0.5 \\ a_{ij} + \left(a_{\min} - a_{ij}\right) f(g), & r < 0.5 \end{cases} \qquad (11)$$

$$f(g) = r_2\left(1 - g/G_{\max}\right) \qquad (12)$$

in formula (9), $p_i$ is the probability that the $i$-th individual is selected and $f_i$ is its fitness value, the sum running over the $N$ individuals of the population; in formula (10), individuals $a_m$ and $a_n$ cross at position $j$, and $b$ is a random number in $[0,1]$; in formulas (11) and (12), $a_{ij}$ is the $j$-th gene of the $i$-th individual, $a_{\max}$ is the upper bound of $a_{ij}$, $a_{\min}$ is the lower bound of $a_{ij}$, $r$ is a random number in $[0,1]$, $f(g)$ is the mutation function, $r_2$ is a random number in $[0,1]$, $g$ is the current iteration number, and $G_{\max}$ is the maximum number of iterations;
after the genetic algorithm GA is used for optimization and improvement, the model is retrained;
step 6: model verification and prediction;
based on the training result after the optimization in step 5, the 15% verification set is substituted into the model for verification; after the accuracy of the verification result meets the set threshold, the 15% prediction set is substituted into the model for fault prediction.
2. The IGBT module fault prediction method based on the wavelet neural network improved by the genetic algorithm according to claim 1, further comprising step 7: model accuracy evaluation, wherein the model prediction result is evaluated with three evaluation indexes: the mean absolute error MAE, the root mean square error RMSE, and the mean absolute percentage error MAPE.
CN202210484151.3A 2022-05-06 2022-05-06 IGBT module fault prediction method based on GA improved WNN Active CN114757300B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210484151.3A CN114757300B (en) 2022-05-06 2022-05-06 IGBT module fault prediction method based on GA improved WNN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210484151.3A CN114757300B (en) 2022-05-06 2022-05-06 IGBT module fault prediction method based on GA improved WNN

Publications (2)

Publication Number Publication Date
CN114757300A CN114757300A (en) 2022-07-15
CN114757300B true CN114757300B (en) 2023-04-28

Family

ID=82333212

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210484151.3A Active CN114757300B (en) 2022-05-06 2022-05-06 IGBT module fault prediction method based on GA improved WNN

Country Status (1)

Country Link
CN (1) CN114757300B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110133538A (en) * 2019-05-16 2019-08-16 合肥工业大学 A kind of ANPC three-level inverter open-circuit fault diagnostic method and experiment porch
CN112746934A (en) * 2020-12-31 2021-05-04 江苏国科智能电气有限公司 Method for diagnosing fan fault through self-association neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108416103A (en) * 2018-02-05 2018-08-17 武汉大学 A kind of method for diagnosing faults of electric automobile of series hybrid powder AC/DC convertor

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110133538A (en) * 2019-05-16 2019-08-16 合肥工业大学 A kind of ANPC three-level inverter open-circuit fault diagnostic method and experiment porch
CN112746934A (en) * 2020-12-31 2021-05-04 江苏国科智能电气有限公司 Method for diagnosing fan fault through self-association neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Feiming Liu et al. A Constant Frequency ZVS Modulation Scheme for Four-Switch Buck–Boost Converter With Wide Input and Output Voltage Ranges and Reduced Inductor Current. IEEE Transactions on Industrial Electronics. 2022, full text. *
Hailin Hu et al. Open-circuit fault diagnosis of NPC inverter IGBT based on independent component analysis and neural network. 2020 7th International Conference on Power and Energy Systems Engineering. 2020, pp. 134-143. *
黄柯勋 et al. 基于改进小波神经网络的IGBT时间序列预测算法研究 [Research on an IGBT time-series prediction algorithm based on an improved wavelet neural network]. 机车电传动 (Electric Drive for Locomotives). 2021, pp. 161-166. *

Also Published As

Publication number Publication date
CN114757300A (en) 2022-07-15

Similar Documents

Publication Publication Date Title
CN109214575B (en) Ultrashort-term wind power prediction method based on small-wavelength short-term memory network
CN111160520A (en) BP neural network wind speed prediction method based on genetic algorithm optimization
CN107292446B (en) Hybrid wind speed prediction method based on component relevance wavelet decomposition
CN109784692B (en) Rapid safety constraint economic dispatching method based on deep learning
CN110070228B (en) BP neural network wind speed prediction method for neuron branch evolution
CN110334948B (en) Power equipment partial discharge severity evaluation method and system based on characteristic quantity prediction
CN113485261B (en) CAEs-ACNN-based soft measurement modeling method
CN114186379A (en) Transformer state evaluation method based on echo network and deep residual error neural network
CN107121926A (en) A kind of industrial robot Reliability Modeling based on deep learning
CN112347531B (en) Brittle marble Dan Sanwei crack propagation path prediction method and system
CN112434891A (en) Method for predicting solar irradiance time sequence based on WCNN-ALSTM
CN115827335A (en) Time sequence data missing interpolation system and method based on modal crossing method
CN115877068A (en) Voltage sag propagation track identification method of regional power grid based on deep learning
CN114757300B (en) IGBT module fault prediction method based on GA improved WNN
CN117056865B (en) Method and device for diagnosing operation faults of machine pump equipment based on feature fusion
CN113469013A (en) Motor fault prediction method and system based on transfer learning and time sequence
CN116796171A (en) Method and device for predicting residual service life of mechanical equipment
CN116070768A (en) Short-term wind power prediction method based on data reconstruction and TCN-BiLSTM
CN116703819A (en) Rail wagon steel floor damage detection method based on knowledge distillation
CN114372640A (en) Wind power prediction method based on fluctuation sequence classification correction
CN115310746A (en) State evaluation method and system for main transmission system of wind generating set
CN111177975A (en) Aviation equipment availability prediction method based on artificial intelligence
CN111008584A (en) Electric energy quality measurement deficiency repairing method of fuzzy self-organizing neural network
CN117493793A (en) Residual life prediction method based on improved pulse separable convolution enhancement Transformer encoder
CN117933492B (en) Ship track long-term prediction method based on space-time feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant