CN107271925A - Fault locating method for a modular five-level converter based on a deep convolutional network - Google Patents
Fault locating method for a modular five-level converter based on a deep convolutional network (Download PDF / Info)
- Publication number
- CN107271925A CN107271925A CN201710493277.6A CN201710493277A CN107271925A CN 107271925 A CN107271925 A CN 107271925A CN 201710493277 A CN201710493277 A CN 201710493277A CN 107271925 A CN107271925 A CN 107271925A
- Authority
- CN
- China
- Prior art keywords
- depth convolutional
- level converter
- data
- modularization
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01R—MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
- G01R31/00—Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
- G01R31/40—Testing power supplies
- G01R31/42—AC power supplies
Abstract
The invention discloses a fault locating method for a modular five-level converter based on a deep convolutional network. The capacitor voltage signals collected from the sub-modules of the modular five-level converter are first combined into a multi-channel sequence; data bands are then sampled from the sequence and normalized, and each processed data band is treated as a gray-scale image and used as the input of a deep convolutional neural network model. In the model, the features of the data are first extracted by several alternating convolutional and pooling layers; the feature maps of the last pooling layer are then passed to fully connected layers for fusion; finally, a softmax classifier performs fault classification, with different classes corresponding to faulty sub-modules at different positions, thereby achieving fault detection and location. The data required by the method are readily available and no extra sensors are needed, which greatly reduces cost, and the method has strong MMC fault identification and locating capability.
Description
Technical field
The present invention relates to the field of converter fault location, and more particularly to a fault locating method for a modular five-level converter based on a deep convolutional network.
Background technology
A flexible (VSC-based) HVDC transmission system employs IGBT turn-off devices and high-frequency modulation technology, overcoming many inherent shortcomings of conventional DC transmission. The core of such a system is the converter at each end, which performs rectification and inversion. In flexible HVDC systems that have entered engineering application, converters are mainly of three types: the two-level converter, the three-level converter and the modular multilevel converter (Modular Multilevel Converter, MMC). Compared with the first two, the modular multilevel converter can meet different voltage and power requirements by changing the number of sub-modules in the circuit, and it offers high output voltage quality with low harmonic content, a modular structure amenable to redundancy design, and high system reliability.
Sub-module faults in an MMC generally fall into two types: short-circuit faults and open-circuit faults. Short-circuit faults are the more destructive, so a short-circuit protection module is usually built into the sub-module drive circuit; when a short-circuit fault occurs, the sub-module is locked out locally so that the system can continue to operate normally. Open-circuit faults, although relatively less harmful, are difficult to detect immediately and therefore cause consequences such as voltage and current waveform distortion, threatening the normal operation of the system. An accurate method for locating MMC open-circuit faults is therefore very necessary for the normal operation of flexible HVDC transmission systems.
Summary of the invention
To solve the above technical problem, the present invention provides a low-cost, accurate, high-precision fault locating method for a modular five-level converter based on a deep convolutional network.
The technical solution by which the invention solves the above problem is a fault locating method for a modular five-level converter based on a deep convolutional network, comprising the following steps:
1) Select the capacitor voltages in the three-phase five-level converter as the reference quantities, and obtain the capacitor voltage data of the 24 sub-modules in the converter;
2) Convert the original capacitor voltage data into an image-like two-dimensional matrix form by preprocessing, and shuffle the data set samples;
3) Generate class labels for the data samples for fault classification;
4) Build the deep convolutional neural network architecture and train the deep convolutional neural network model;
5) Locate faults in the three-phase five-level converter with the trained model and obtain the fault locating accuracy.
In the above method, step 2) specifically comprises:
2-1) For each group of simulation experiments, combine the 24 capacitor voltage signal sequences into one multi-channel sequence, and preprocess the multi-channel sequence by Min-Max normalization so that the normalized values lie in [0, 1];
2-2) Using a sliding window of size 24 × l_window, sample num_samples_perSeq data-band samples from each multi-channel sequence at equal intervals in chronological order, where l_window is the length of the sliding window; the data are thus converted into an image-like two-dimensional matrix form, and each preprocessed data band is treated as a gray-scale image;
2-3) Merge all data-band samples obtained from each multi-channel sequence into one data set;
2-4) Shuffle the resulting data set and split it into a training set and a test set.
In the above method, step 3) specifically comprises:
3-1) Label the sample data sampled before a sub-module fault occurs as normal, with label 0;
3-2) Label the sample data sampled after a fault occurs as faulty. Number the 24 sub-modules of the three-phase five-level converter 1 to 24, ordered from phase A to phase C and, within each phase, from upper arm to lower arm; the label is the number of the faulty sub-module. With the normal state and the 24 distinct faulty sub-modules defined as different classes, there are 25 fault classes in total.
In the above method, in step 4), the deep convolutional neural network architecture comprises 1 input layer, 3 convolutional layers, 3 pooling layers, 2 fully connected layers and 1 output layer.
In the above method, in step 4), the deep convolutional neural network model is trained as follows:
4-1) Weight initialization: initialize the weights from a Gaussian distribution with mean 0 and a standard deviation given as a function of n, where n is the number of inputs to each node;
4-2) Take data from the preprocessed training set as input to the deep convolutional neural network, and provide the target vector D;
4-3) Forward propagation: compute the output of each layer of the network from front to back;
4-4) Back propagation: compute the error of each layer in turn from back to front;
4-5) Judge whether the error is below the threshold; if not, go to step 4-6); if so, go to step 4-8);
4-6) From the error, compute in reverse order the weight adjustments and bias gradients of each layer;
4-7) Update the weights and biases by gradient descent;
4-8) Repeat steps 4-2) to 4-5) to refine the parameters until the error function falls below the set threshold, at which point the training process ends.
In the above method, in step 4-7), during the training stage, each time the network model is trained with one group of samples, the outputs of the hidden neurons are randomly set to 0 with a certain probability p; the weight coefficients of the zeroed neurons remain unchanged during forward and backward propagation, i.e. they take no part in the weight update. Because the choice of which neurons to zero is random, when the model is trained by batch gradient descent the neurons excluded from the weight update differ from one group of samples to the next, so the trained models also differ. In the test stage, all neurons in the model must be activated, restoring the complete structure, and the activation value of every neuron is multiplied by the coefficient 1 - p, which amounts to averaging the different models.
Before each iteration cycle begins, the samples in the training set are shuffled at random and then read in batches of 100 for model training. After the data are used up, they are shuffled again for the next generation of training, for 100 iterations in total.
To prevent overfitting, an early-stopping strategy is used during training: if the classification accuracy of the model on the validation set does not rise for 5 consecutive iteration cycles, training terminates.
In the above method, step 5) specifically comprises:
5-1) Feed the preprocessed data into the input layer of the deep convolutional neural network;
5-2) Extract the features of the input data with 3 alternating convolutional and pooling layers;
5-3) Pass the output feature maps of the third pooling layer to the 2 fully connected layers for fusion;
5-4) In the output layer, use the softmax classifier to judge whether a fault has occurred and to determine the fault position.
In the above method, in step 5-2), in a convolutional layer each neuron of a feature map is connected to a local receptive field of the previous layer, and local features are extracted by the convolution operation. A convolutional layer has multiple feature maps, each extracting one kind of feature; when extracting features, the neurons within the same feature map share one set of weights, while different feature maps have different weights and so extract different features.
In the above method, in step 5-3), the output feature maps of the third pooling layer are concatenated into a feature vector that serves as the input of the first fully connected layer.
In the above method, in step 5-4), the output feature vector of the second fully connected layer is used as the input of the softmax classifier, and the fault detection and location problem is converted into a multi-class classification problem covering the fault-free state and each fault position. That is, for the 24 sub-modules, a 25-dimensional vector of 25 probabilities is produced as the output of the softmax classifier, with indices 0, 1, 2, ..., 24; index 0 denotes the fault-free state and indices 1 to 24 denote the 24 sub-modules in turn.
The beneficial effects of the present invention are as follows. The capacitor voltage signals collected from the sub-modules of the modular five-level converter are combined into a multi-channel sequence; data bands are sampled from the sequence and normalized, and each processed data band is treated as a gray-scale image and used as the input of the deep convolutional neural network model. In the model, the features of the data are first extracted by several alternating convolutional and pooling layers; the feature maps of the last pooling layer are then passed to the fully connected layers for fusion; finally, a softmax classifier performs fault classification, with different classes corresponding to faulty sub-modules at different positions, thereby achieving fault detection and location. The data required by the method are readily available and no extra sensors are needed, greatly reducing cost, and the method has strong MMC fault identification and locating capability. Testing shows that all "normal" samples are classified correctly; among the "fault" samples, except for the "A-SM3" fault samples whose classification accuracy is 94%, all remaining samples achieve a classification accuracy of at least 95%, and the overall classification accuracy of the model is 98.16%. This demonstrates that the proposed model can accurately detect whether a fault exists and determine the position of the faulty sub-module in the MMC circuit with very high precision, facilitating the stable operation of the modular multilevel converter and decision-making.
Brief description of the drawings
Fig. 1 is the flow chart of the invention.
Fig. 2 is the structure diagram of the three-phase five-level converter of the invention.
Fig. 3 is the deep convolutional neural network architecture diagram of the invention.
Fig. 4 is the schematic diagram of the convolution operation of the invention.
Fig. 5 is the training process of the deep convolutional neural network model of the invention.
Fig. 6 shows the recognition and classification accuracy results for the sample data of the invention.
Detailed description of the embodiments
The present invention is further described below with reference to the accompanying drawings and embodiments.
As shown in Fig. 1, a fault locating method for a modular five-level converter based on a deep convolutional network comprises the following steps:
1) Select the capacitor voltages in the three-phase five-level converter of Fig. 2 as the reference quantities, and obtain the capacitor voltage data of the 24 sub-modules in the converter.
2) Convert the original capacitor voltage data into an image-like two-dimensional matrix form by preprocessing, and shuffle the data set samples for training and testing the model. The specific steps are:
2-1) For each group of simulation experiments, combine the 24 capacitor voltage signal sequences of the modular five-level converter into one multi-channel sequence, and preprocess the multi-channel sequence by Min-Max normalization so that the normalized values lie in [0, 1];
2-2) Using a sliding window of size 24 × l_window, sample num_samples_perSeq data-band samples from each multi-channel sequence at equal intervals in chronological order. The sliding-window length is l_window = 200 and the number of samples per sequence is num_samples_perSeq = 800, of which 400 are sampled before the fault occurs and 400 after. For the 72 groups of sampling results there are therefore 72 × 800 = 57600 data samples in total; each preprocessed data band is treated as a gray-scale image;
2-3) Merge all data-band samples obtained from each multi-channel sequence into one data set;
2-4) Shuffle the resulting 57600-sample data set and split it into a training set of 50000 samples and a test set of 7600 samples.
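The sliding-window sampling and Min-Max normalization of steps 2-1) and 2-2) can be sketched as follows. This is a minimal numpy sketch: the equal-interval start positions and the random toy sequence standing in for the 24 capacitor-voltage channels are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def minmax_normalize(seq):
    """Min-Max normalize each channel of a multi-channel sequence into [0, 1]."""
    lo = seq.min(axis=1, keepdims=True)
    hi = seq.max(axis=1, keepdims=True)
    return (seq - lo) / (hi - lo + 1e-12)

def sample_data_bands(seq, l_window=200, num_samples_per_seq=800):
    """Slide a 24 x l_window window over a normalized 24 x T sequence at
    equal intervals in time, producing gray-scale 'data band' samples."""
    n_channels, T = seq.shape
    starts = np.linspace(0, T - l_window, num_samples_per_seq).astype(int)
    return np.stack([seq[:, s:s + l_window] for s in starts])

# A random toy sequence stands in for the 24 capacitor-voltage channels.
rng = np.random.default_rng(0)
seq = minmax_normalize(rng.normal(size=(24, 5000)))
bands = sample_data_bands(seq)
print(bands.shape)  # (800, 24, 200)
```

Each of the 800 resulting 24 × 200 bands is one sample of the kind fed to the network's input layer.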
3) Generate class labels for the data samples for fault classification. The specific steps are:
3-1) Label the sample data sampled before a sub-module fault occurs as normal, with label 0;
3-2) Label the sample data sampled after a fault occurs as faulty. Number the 24 sub-modules of the three-phase five-level converter 1 to 24, ordered from phase A to phase C and, within each phase, from upper arm to lower arm; the label is the number of the faulty sub-module. With the normal state and the 24 distinct faulty sub-modules defined as different classes, there are 25 fault classes in total. Once the class of a data sample is predicted, it can be judged whether any sub-module of the MMC has failed and which sub-module it is, i.e. the fault position.
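The numbering scheme of step 3-2) can be sketched as a small labelling helper. Note one assumption: a five-level MMC has 4 sub-modules per arm, so 3 phases × 2 arms × 4 sub-modules gives the 24 sub-modules; the per-arm count is inferred from the converter topology rather than stated explicitly above.

```python
# 24 sub-modules: 3 phases (A, B, C) x 2 arms (upper, lower) x 4 SMs per arm.
# The 4-per-arm count is inferred from the five-level topology (assumption).
PHASES = {'A': 0, 'B': 1, 'C': 2}
ARMS = {'upper': 0, 'lower': 1}

def class_label(phase=None, arm=None, sm=None):
    """0 = normal (sampled before the fault); 1..24 = number of the faulty
    sub-module, ordered phase A -> C and upper arm before lower arm."""
    if phase is None:
        return 0
    return PHASES[phase] * 8 + ARMS[arm] * 4 + sm  # sm in 1..4

print(class_label())                  # 0  (normal)
print(class_label('A', 'upper', 1))   # 1  (first sub-module)
print(class_label('C', 'lower', 4))   # 24 (last sub-module)
```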
4) Build the deep convolutional neural network architecture and train the model. The architecture, shown in Fig. 3, is a 10-layer supervised learning network comprising 1 input layer, 3 convolutional layers, 3 pooling layers, 2 fully connected layers and 1 output layer.
The deep convolutional neural network model is trained as follows:
4-1) Weight initialization: initialize the weights from a Gaussian distribution with mean 0 and a standard deviation given as a function of n, where n is the number of inputs to each node;
4-2) Take data from the preprocessed training set as input to the deep convolutional neural network, and provide the target vector D;
4-3) Forward propagation: compute the output of each layer of the network from front to back;
4-4) Back propagation: compute the error of each layer in turn from back to front;
4-5) Judge whether the error is below the threshold; if not, go to step 4-6); if so, go to step 4-8);
4-6) From the error, compute in reverse order the weight adjustments and bias gradients of each layer;
4-7) Update the weights and biases by gradient descent;
The optimization algorithm and strategies used in training the model of the present invention are as follows.
Optimization algorithm: the Adam algorithm is chosen for model optimization, with the initial learning rate set to 0.001. The Adam optimizer optimizes the objective function using first-order gradients, adjusting the learning rate of each parameter according to first- and second-moment estimates of the cost-function gradient with respect to that parameter. In each iteration, the learning step of every parameter is confined to a fixed range, guaranteeing that an excessively large gradient cannot cause a very large step, so the parameter values remain stable.
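The bounded-step property described above can be illustrated with a minimal single-parameter Adam update. The standard Adam constants beta1 = 0.9, beta2 = 0.999 and epsilon = 1e-8 are assumed; the patent states only the initial learning rate of 0.001.

```python
import math

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential first- and second-moment estimates of the
    gradient (with bias correction at step t) set each parameter's step."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    m_hat = m / (1 - b1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)          # bias-corrected second moment
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

# Even a huge gradient moves the parameter by roughly lr at most:
w, m, v = 1.0, 0.0, 0.0
w, m, v = adam_step(w, grad=1e6, m=m, v=v, t=1)
print(abs(1.0 - w) <= 0.001 + 1e-9)  # True
```

This is why the parameter values stay stable regardless of the gradient's scale.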
Dropout strategy: during the training stage, each time the network model is trained with one group of samples, the outputs of the hidden neurons are randomly set to 0 with a certain probability p; the weight coefficients of the zeroed neurons remain unchanged during forward and backward propagation, i.e. they take no part in the weight update. Because the choice of which neurons to zero is random, when the model is trained by batch gradient descent the neurons excluded from the weight update differ from one group of samples to the next, so the trained models also differ. In the test stage, all neurons in the model must be activated, restoring the complete structure, and the activation value of every neuron is multiplied by the coefficient 1 - p, which amounts to averaging the different models. The architecture proposed here attaches a dropout layer after each pooling layer and fully connected layer, with the coefficient p set to 0.5, to prevent model overfitting.
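The train-time zeroing and test-time 1 - p rescaling described above can be sketched as follows. This is a minimal list-based illustration, not the patent's implementation.

```python
import random

def dropout_train(activations, p=0.5, rng=random.Random(0)):
    """Training: zero each hidden activation with probability p; the zeroed
    neurons take no part in that batch's weight update."""
    return [0.0 if rng.random() < p else a for a in activations]

def dropout_test(activations, p=0.5):
    """Testing: keep every neuron active and scale activations by (1 - p),
    averaging over the ensemble of thinned networks seen during training."""
    return [a * (1 - p) for a in activations]

acts = [2.0, 4.0, 6.0]
print(dropout_test(acts))  # [1.0, 2.0, 3.0]
```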
Iteration strategy: because the data set used by this model is not especially large, the data are reused across generations. Before each iteration cycle begins, the samples in the training set are shuffled at random and then read in batches of 100 for model training. After the data are used up, they are shuffled again for the next generation of training, for 100 iterations in total.
Early-stopping strategy: to prevent model overfitting, an "early stopping" strategy is used during training. Specifically, if the classification accuracy of the model on the validation set does not rise for 5 consecutive iteration cycles, training terminates.
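The shuffling, batching and early-stopping schedule above can be sketched as a skeleton epoch loop. The model update itself is elided, and the per-epoch validation accuracies are simulated inputs for illustration.

```python
def train(num_epochs=100, batch_size=100, patience=5, accuracies=None):
    """Epoch loop as described: shuffle, read batches of `batch_size`,
    and stop early if validation accuracy fails to improve for `patience`
    consecutive epochs. Returns the epoch at which training stopped."""
    best, stale = -1.0, 0
    for epoch, acc in enumerate(accuracies[:num_epochs], 1):
        # (shuffle the training set, iterate over it in batches of
        #  batch_size, update the model ... elided in this sketch)
        if acc > best:
            best, stale = acc, 0
        else:
            stale += 1
            if stale >= patience:
                return epoch  # early stop
    return min(num_epochs, len(accuracies))

# Accuracy plateaus after epoch 3 -> training stops at epoch 8 (3 + 5 stale).
print(train(accuracies=[0.5, 0.7, 0.9] + [0.9] * 20))  # 8
```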
4-8) Repeat steps 4-2) to 4-5) to refine the parameters until the error function falls below the set threshold, at which point the training process ends.
5) Locate faults in the three-phase five-level converter with the trained model and obtain the fault locating accuracy. The specific steps of step 5) are as follows:
5-1) Feed the preprocessed data into the input layer of the deep convolutional neural network; through preprocessing, the input layer receives the data in an image-like two-dimensional matrix form.
5-2) Extract the features of the input data with 3 alternating convolutional and pooling layers.
In a convolutional layer, each neuron of a feature map is connected to a local receptive field of the previous layer, and local features are extracted by the convolution operation. A convolutional layer has multiple feature maps, each extracting one kind of feature; the neurons within the same feature map share one set of weights (the convolution kernel), while different feature maps have different weights and so extract different features. The weight parameters are adjusted continuously during training so that feature extraction proceeds in the direction favourable to classification. A schematic diagram of the convolution operation is shown in Fig. 4. The present invention employs the non-saturating ReLU activation function, whose output is not confined to [-1, 1]; it alleviates the vanishing-gradient problem brought by deep structures and accelerates network training.
The pooling layers of the present invention use the max-pooling strategy and are placed after the convolutional layers. Their main functions are to reduce the resolution of the feature maps and the feature dimensionality, avoiding network overfitting, while to a certain extent increasing the network's robustness to displacement, scaling and distortion.
In the present invention, the convolution stride of all convolutional layers is set to 1, and all pooling layers use a 2 × 2 pooling size with a pooling stride of 2. In the first convolutional layer, the 24 × 200 input image is convolved with 6 filters of size 5 × 11, producing 6 feature maps of size 20 × 190; after the first pooling layer, their size becomes 10 × 95. The output feature maps of the first pooling layer then serve as the input feature maps of the second convolutional layer, where convolution with 16 filters of size 3 × 8 yields 16 feature maps of size 8 × 88; after the second pooling layer, their size becomes 4 × 44. The third convolutional layer uses 32 filters of size 3 × 7; the feature maps obtained after convolution are of size 2 × 38, and the pooling operation then reduces them to 1 × 19.
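The layer sizes quoted above follow from "valid" convolution with stride 1 and 2 × 2 pooling with stride 2; a few lines of arithmetic reproduce them:

```python
def conv_out(h, w, kh, kw):
    """'Valid' convolution with stride 1: each dimension shrinks by kernel - 1."""
    return h - kh + 1, w - kw + 1

def pool_out(h, w):
    """2 x 2 max pooling with stride 2 halves both dimensions."""
    return h // 2, w // 2

shape = (24, 200)                            # input data band
shape = pool_out(*conv_out(*shape, 5, 11))   # conv1 (6 @ 5x11) + pool1
print(shape)                                 # (10, 95)
shape = pool_out(*conv_out(*shape, 3, 8))    # conv2 (16 @ 3x8) + pool2
print(shape)                                 # (4, 44)
shape = pool_out(*conv_out(*shape, 3, 7))    # conv3 (32 @ 3x7) + pool3
print(shape)                                 # (1, 19)
```

The 32 final 1 × 19 maps are then flattened into the feature vector fed to the fully connected stage.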
5-3) Pass the output feature maps of the third pooling layer to the 2 fully connected layers for fusion. The fully connected stage concatenates the output feature maps of the third pooling layer into a feature vector, which serves as the input of the first fully connected layer; the numbers of neurons in the two fully connected layers are set to 256 and 128.
5-4) In the output layer, use the softmax classifier to judge whether a fault has occurred and to determine the fault position.
The output layer uses a softmax classifier, and the fault locating method proposed by the present invention relies on the classifier's output to judge whether a fault has occurred and to locate it. Specifically, the fault detection and location problem is converted into a multi-class classification problem covering the "fault-free" state and each fault position: for n sub-modules, an (n + 1)-dimensional vector of n + 1 probabilities is produced as the output of the softmax classifier, with indices 0, 1, 2, ..., n; index 0 denotes "fault-free" and indices 1 to n denote the n sub-modules in turn. The output feature vector of the second fully connected layer serves as the classifier's input. Since the "normal" state and the 24 distinct sub-module "faults" are treated as different classes, there are 25 classes in the present invention, so the output of the softmax classifier is a 25-dimensional vector whose entries are the probabilities of belonging to each class.
As shown in Fig. 6, testing proves that all "normal" samples are classified correctly. Among the "fault" samples, except for the "A-SM3" fault samples whose classification accuracy is 94%, all remaining samples achieve a classification accuracy of at least 95%, and the overall classification accuracy of the model is 98.16%. This demonstrates that the proposed model can accurately detect whether a fault exists and determine the position of the faulty sub-module in the MMC circuit with very high precision, facilitating the stable operation of the modular multilevel converter and decision-making.
Claims (10)
1. A fault locating method for a modular five-level converter based on a deep convolutional network, comprising the following steps:
1) selecting the capacitor voltages in the three-phase five-level converter as the reference quantities, and obtaining the capacitor voltage data of the 24 sub-modules in the converter;
2) converting the original capacitor voltage data into an image-like two-dimensional matrix form by preprocessing, and shuffling the data set samples;
3) generating class labels for the data samples for fault classification;
4) building the deep convolutional neural network architecture and training the deep convolutional neural network model;
5) locating faults in the three-phase five-level converter with the trained model and obtaining the fault locating accuracy.
2. The fault locating method for a modular five-level converter based on a deep convolutional network according to claim 1, characterized in that step 2) specifically comprises:
2-1) for each group of simulation experiments, combining the 24 capacitor voltage signal sequences into one multi-channel sequence, and preprocessing the multi-channel sequence by Min-Max normalization so that the normalized values lie in [0, 1];
2-2) using a sliding window of size 24 × l_window, sampling num_samples_perSeq data-band samples from each multi-channel sequence at equal intervals in chronological order, where l_window is the length of the sliding window, whereby the data are converted into an image-like two-dimensional matrix form and each preprocessed data band is treated as a gray-scale image;
2-3) merging all data-band samples obtained from each multi-channel sequence into one data set;
2-4) shuffling the resulting data set and splitting it into a training set and a test set.
3. The fault locating method for a modular five-level converter based on a deep convolutional network according to claim 1, characterized in that step 3) specifically comprises:
3-1) labelling the sample data sampled before a sub-module fault occurs as normal, with label 0;
3-2) labelling the sample data sampled after a fault occurs as faulty, numbering the 24 sub-modules of the three-phase five-level converter 1 to 24, ordered from phase A to phase C and, within each phase, from upper arm to lower arm, the label being the number of the faulty sub-module; with the normal state and the 24 distinct faulty sub-modules defined as different classes, there are 25 fault classes in total.
4. The modular five-level converter fault locating method based on a deep convolutional network according to claim 1, characterized in that, in step 4), the architecture of the deep convolutional neural network comprises 1 input layer, 3 convolutional layers, 3 pooling layers, 2 fully connected layers and 1 output layer.
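The feature-map sizes through this stack can be traced with a small sketch. The claim fixes only the layer counts, so 'same'-padded convolutions and 2 × 2 stride-2 pooling are assumed here; both are conventional but hypothetical choices.

```python
def trace_shapes(h, w, blocks=3):
    """Trace feature-map sizes through 3 alternating conv + pool blocks,
    assuming 'same'-padded convolutions and 2x2 stride-2 pooling."""
    shapes = [("input", h, w)]
    for i in range(1, blocks + 1):
        shapes.append(("conv%d" % i, h, w))   # 'same' padding keeps h x w
        h, w = h // 2, w // 2                 # 2x2 pooling halves each side
        shapes.append(("pool%d" % i, h, w))
    return shapes
```

For a 24 × 24 input, for example, the third pooling layer would output 3 × 3 maps, which are then flattened for the fully connected layers.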
5. The modular five-level converter fault locating method based on a deep convolutional network according to claim 4, characterized in that, in step 4), the training process of the deep convolutional neural network model comprises the steps of:
4-1) weight initialization: initializing the weights with a Gaussian distribution of mean 0 and standard deviation 1/√n, where n is the number of inputs of each node;
4-2) taking data from the pre-processed training set, inputting them into the deep convolutional neural network, and providing a target vector D;
4-3) forward propagation: computing the output of each layer of the deep convolutional neural network from front to back;
4-4) back-propagation: computing the error of each layer in turn from back to front;
4-5) judging whether the error is below the threshold; if not, proceeding to step 4-6); if so, proceeding to step 4-8);
4-6) computing, layer by layer in reverse order, the weight adjustments and the bias gradients from the error;
4-7) updating the weights and biases with the gradient descent method;
4-8) repeating steps 4-2) to 4-5) to refine the parameters until the error function falls below the set threshold, whereupon the training process ends.
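Steps 4-1) to 4-8) can be illustrated on a single sigmoid node. This is a didactic sketch of the same loop, not the claimed network; the learning rate, threshold and epoch cap are arbitrary values chosen for the example.

```python
import math
import random

def train_until_threshold(samples, targets, lr=0.5, threshold=0.05, max_epochs=10000):
    """Gaussian init with std 1/sqrt(n), forward pass, backpropagated error,
    gradient-descent update, stopping once the error falls below the threshold."""
    n = len(samples[0])
    rng = random.Random(0)
    w = [rng.gauss(0.0, 1.0 / math.sqrt(n)) for _ in range(n)]   # step 4-1)
    b = 0.0
    for epoch in range(max_epochs):
        loss, gw, gb = 0.0, [0.0] * n, 0.0
        for x, d in zip(samples, targets):                       # step 4-2)
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            y = 1.0 / (1.0 + math.exp(-z))                       # 4-3) forward
            loss += 0.5 * (y - d) ** 2
            delta = (y - d) * y * (1.0 - y)                      # 4-4) backward error
            gw = [g + delta * xi for g, xi in zip(gw, x)]        # 4-6) gradients
            gb += delta
        if loss < threshold:                                     # 4-5) threshold test
            return w, b, epoch
        w = [wi - lr * g for wi, g in zip(w, gw)]                # 4-7) update
        b -= lr * gb
    return w, b, max_epochs
```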
6. The modular five-level converter fault locating method based on a deep convolutional network according to claim 5, characterized in that, in step 4-7), in the training stage, each time the network model is trained with one batch of samples, the outputs of the hidden neurons are randomly set to 0 at a given ratio p; the weight coefficients of the zeroed neurons remain unchanged during forward and backward propagation, i.e. they do not take part in the weight update; because the group of neurons to be zeroed is chosen at random, when the model is trained with the batch gradient descent method the neurons excluded from the weight update differ from one batch to the next, so the trained model differs as well; in the test stage, all neurons in the model must be activated, restoring the complete structure, and the activation values of all neurons are multiplied by the coefficient 1 − p, i.e. the different models are averaged;
before each iteration cycle starts, the samples in the training set are shuffled at random and then read in batches of size 100 for model training; once all the data have been used, they are shuffled again and the next generation of training begins, for 100 iterations in total;
to prevent over-fitting of the model, an early-stopping strategy is used during training, i.e. if the classification accuracy of the model on the validation set does not rise for 5 consecutive iteration cycles, training is terminated.
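The dropout and early-stopping behaviour described above can be sketched as follows; the function names are illustrative.

```python
import random

def dropout_train(activations, p, rng):
    """Training stage: zero each hidden activation with probability p; the
    zeroed neurons keep their weights and skip this weight update."""
    return [0.0 if rng.random() < p else a for a in activations]

def dropout_test(activations, p):
    """Test stage: every neuron is active and its output is scaled by 1 - p,
    averaging the ensemble of randomly thinned networks."""
    return [a * (1.0 - p) for a in activations]

def should_stop_early(val_accuracies, patience=5):
    """Early stopping: True once validation accuracy has not risen for
    `patience` consecutive iteration cycles."""
    best, stale = -1.0, 0
    for acc in val_accuracies:
        if acc > best:
            best, stale = acc, 0
        else:
            stale += 1
            if stale >= patience:
                return True
    return False
```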
7. The modular five-level converter fault locating method based on a deep convolutional network according to claim 1, characterized in that step 5) specifically comprises the following steps:
5-1) feeding the pre-processed data into the input layer of the deep convolutional neural network;
5-2) extracting the features of the input data with the 3 alternating convolutional and pooling layers;
5-3) passing the output feature maps of the third pooling layer to the 2 fully connected layers for fusion;
5-4) judging at the output layer, with a softmax classifier, whether a fault has occurred and determining the fault location.
8. The modular five-level converter fault locating method based on a deep convolutional network according to claim 7, characterized in that, in step 5-2), in a convolutional layer each neuron of a feature map is connected to a local receptive field of the previous layer, and local features are extracted by the convolution operation; a convolutional layer contains multiple feature maps, each of which extracts one kind of feature; when extracting features, the neurons within the same feature map share one set of weights, and different feature maps have different weights and therefore extract different features.
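The shared-weight, local-receptive-field computation of one feature map can be sketched as a plain 'valid' 2-D convolution: every output neuron applies the same kernel weights to its own local patch of the input.

```python
def conv2d_valid(image, kernel):
    """One feature map: every output neuron shares the same kernel weights
    and is connected only to a local receptive field of the input."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(w - kw + 1)]
            for i in range(h - kh + 1)]
```

A layer with several feature maps simply applies one such kernel per map.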
9. The modular five-level converter fault locating method based on a deep convolutional network according to claim 7, characterized in that, in step 5-3), the output feature maps of the third pooling layer are concatenated into one feature vector, which serves as the input of the first fully connected layer.
10. The modular five-level converter fault locating method based on a deep convolutional network according to claim 7, characterized in that, in step 5-4), the output feature vector of the second fully connected layer serves as the input of the Softmax classifier, and the fault detection and location problem is converted into a multi-class classification problem covering the fault-free case and the fault locations; that is, for the 24 sub-modules, a 25-dimensional vector containing 25 probabilities is set as the output of the Softmax classifier, with index numbers 0, 1, 2, ..., 24 in turn, where index 0 represents no fault and indexes 1, 2, ..., 24 represent the 24 sub-modules in turn.
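The Softmax output and index convention of step 5-4) can be sketched as follows; the example logits are arbitrary.

```python
import math

def softmax(logits):
    """Convert 25 raw scores into 25 probabilities that sum to 1."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def locate_fault(logits):
    """Index 0 means no fault; indexes 1..24 name the faulty sub-module."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return best, probs[best]
```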
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710493277.6A CN107271925B (en) | 2017-06-26 | 2017-06-26 | Five level converter Fault Locating Method of modularization based on depth convolutional network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107271925A true CN107271925A (en) | 2017-10-20 |
CN107271925B CN107271925B (en) | 2019-11-05 |
Family
ID=60068507
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710493277.6A Expired - Fee Related CN107271925B (en) | 2017-06-26 | 2017-06-26 | Five level converter Fault Locating Method of modularization based on depth convolutional network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107271925B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103248255A (en) * | 2013-05-24 | 2013-08-14 | 哈尔滨工业大学 | Tri-phase modular multi-level converter and fault-tolerate detecting method for IGBT (insulated gate bipolar translator) open circuit fault in sub-modules thereof |
CN103761372A (en) * | 2014-01-06 | 2014-04-30 | 上海海事大学 | Multilevel inverter fault diagnosis strategy based on principal component analysis and multi-classification related vector machine(PCA-mRVM) |
CN104597370A (en) * | 2015-02-16 | 2015-05-06 | 哈尔滨工业大学 | State observer-based detection method of open-circuit fault of IGBT (insulated gate bipolar transistor) of MMC (modular multilevel converter) |
KR101521105B1 (en) * | 2014-07-31 | 2015-05-19 | 연세대학교 산학협력단 | Method for detecting fault of sub module of modular multilevel converter |
CN104698397A (en) * | 2015-03-16 | 2015-06-10 | 浙江万里学院 | Fault diagnosis method of multi-level inverter |
CN105044624A (en) * | 2015-08-11 | 2015-11-11 | 上海海事大学 | Seven-electric level inverter with fault diagnosis function and fault diagnosis method |
CN105896476A (en) * | 2016-04-13 | 2016-08-24 | 西安科技大学 | Two-level flexible direct current power transmission converter fault protection and fault diagnosis method |
Non-Patent Citations (2)
Title |
---|
SHUAI SHAO, ET AL.: "Fault Detection for Modular Multilevel Converters Based on Sliding Mode Observer", IEEE TRANSACTIONS ON POWER ELECTRONICS * |
WANG Kui, et al.: "基于新型模块化多电平变换器的" [based on a novel modular multilevel converter], Transactions of China Electrotechnical Society (电工技术学报) * |
Cited By (50)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107633242A (en) * | 2017-10-23 | 2018-01-26 | 广州视源电子科技股份有限公司 | Network model training method, device, equipment and storage medium |
CN108010016A (en) * | 2017-11-20 | 2018-05-08 | 华中科技大学 | A kind of data-driven method for diagnosing faults based on convolutional neural networks |
CN108120900B (en) * | 2017-12-22 | 2020-02-11 | 北京映翰通网络技术股份有限公司 | Power distribution network fault positioning method and system |
CN108120900A (en) * | 2017-12-22 | 2018-06-05 | 北京映翰通网络技术股份有限公司 | A kind of electrical power distribution network fault location method and system |
CN108334937A (en) * | 2018-02-06 | 2018-07-27 | 大连海事大学 | A kind of oil film relative thickness extracting method and system waterborne based on convolutional neural networks |
CN108875537A (en) * | 2018-02-28 | 2018-11-23 | 北京旷视科技有限公司 | Method for checking object, device and system and storage medium |
CN108875537B (en) * | 2018-02-28 | 2022-11-08 | 北京旷视科技有限公司 | Object detection method, device and system and storage medium |
CN108564565A (en) * | 2018-03-12 | 2018-09-21 | 华南理工大学 | A kind of power equipment infrared image multi-target orientation method based on deep learning |
CN108777157A (en) * | 2018-05-08 | 2018-11-09 | 南京邮电大学 | The adaptive approach of MLC flash voltage threshold is predicted based on deep neural network |
CN108777157B (en) * | 2018-05-08 | 2021-07-09 | 南京邮电大学 | Self-adaptive method for predicting MLC flash memory voltage threshold based on deep neural network |
CN109141847A (en) * | 2018-07-20 | 2019-01-04 | 上海工程技术大学 | A kind of aircraft system faults diagnostic method based on MSCNN deep learning |
CN109141847B (en) * | 2018-07-20 | 2020-06-05 | 上海工程技术大学 | Aircraft system fault diagnosis method based on MSCNN deep learning |
CN108932567A (en) * | 2018-08-10 | 2018-12-04 | 燕山大学 | A kind of more energy consumption index prediction techniques of cement burning assembly procedure based on convolutional neural networks |
CN108932567B (en) * | 2018-08-10 | 2020-12-01 | 燕山大学 | Convolutional neural network-based multi-energy-consumption index prediction method for cement sintering process |
CN109409567A (en) * | 2018-09-17 | 2019-03-01 | 西安交通大学 | Complex device method for predicting residual useful life based on the double-deck shot and long term memory network |
CN109409567B (en) * | 2018-09-17 | 2022-03-08 | 西安交通大学 | Complex equipment residual life prediction method based on double-layer long-short term memory network |
CN109614981A (en) * | 2018-10-17 | 2019-04-12 | 东北大学 | The Power System Intelligent fault detection method and system of convolutional neural networks based on Spearman rank correlation |
CN109242049A (en) * | 2018-11-21 | 2019-01-18 | 安徽建筑大学 | Water supply pipe network multipoint leakage positioning method and device based on convolutional neural network |
CN109444604A (en) * | 2018-12-13 | 2019-03-08 | 武汉理工大学 | A kind of DC/DC converter method for diagnosing faults based on convolutional neural networks |
CN109838696A (en) * | 2019-01-09 | 2019-06-04 | 常州大学 | Pipeline fault diagnostic method based on convolutional neural networks |
CN111524478A (en) * | 2019-02-05 | 2020-08-11 | 三星显示有限公司 | Apparatus and method for detecting failure |
CN109969895B (en) * | 2019-04-15 | 2021-07-23 | 淄博东升电梯工程有限公司 | Fault prediction method based on elevator operation parameters, terminal and readable storage medium |
CN109969895A (en) * | 2019-04-15 | 2019-07-05 | 淄博东升电梯工程有限公司 | A kind of failure prediction method based on parameters of elevator run, terminal and readable storage medium storing program for executing |
CN116956174A (en) * | 2019-05-13 | 2023-10-27 | 北京绪水互联科技有限公司 | Classification model for cold head state classification detection and life prediction and generation method of prediction model |
CN110223195A (en) * | 2019-05-22 | 2019-09-10 | 上海交通大学 | Distribution network failure detection method based on convolutional neural networks |
CN110415215B (en) * | 2019-06-27 | 2023-03-24 | 同济大学 | Intelligent detection method based on graph neural network |
CN110415215A (en) * | 2019-06-27 | 2019-11-05 | 同济大学 | Intelligent detecting method based on figure neural network |
CN110488121A (en) * | 2019-08-22 | 2019-11-22 | 广东工业大学 | A kind of fault detection method of MMC, system, device and readable storage medium storing program for executing |
CN110672976A (en) * | 2019-10-18 | 2020-01-10 | 东北大学 | Multi-terminal direct-current transmission line fault diagnosis method based on parallel convolutional neural network |
CN111459697A (en) * | 2020-03-27 | 2020-07-28 | 河海大学 | Excitation system fault monitoring method based on deep learning network |
CN111459140B (en) * | 2020-04-10 | 2021-06-25 | 北京工业大学 | Fermentation process fault monitoring method based on HHT-DCNN |
CN111459140A (en) * | 2020-04-10 | 2020-07-28 | 北京工业大学 | Fermentation process fault monitoring method based on HHT-DCNN |
CN112200881A (en) * | 2020-08-24 | 2021-01-08 | 贵州大学 | Method for converting motor current into gray level image |
CN112488011B (en) * | 2020-12-04 | 2024-05-17 | 黄冈师范学院 | Fault classification method for modularized multi-level converter |
CN112488011A (en) * | 2020-12-04 | 2021-03-12 | 黄冈师范学院 | Fault classification method for modular multilevel converter |
CN113286311A (en) * | 2021-04-29 | 2021-08-20 | 沈阳工业大学 | Distributed perimeter security protection environment sensing system based on multi-sensor fusion |
CN113286311B (en) * | 2021-04-29 | 2024-04-12 | 沈阳工业大学 | Distributed perimeter security environment sensing system based on multi-sensor fusion |
CN113010547A (en) * | 2021-05-06 | 2021-06-22 | 电子科技大学 | Database query optimization method and system based on graph neural network |
CN113010547B (en) * | 2021-05-06 | 2023-04-07 | 电子科技大学 | Database query optimization method and system based on graph neural network |
CN113358993B (en) * | 2021-05-13 | 2022-10-04 | 武汉大学 | Online fault diagnosis method and system for multi-level converter IGBT |
CN113358993A (en) * | 2021-05-13 | 2021-09-07 | 武汉大学 | Online fault diagnosis method and system for multi-level converter IGBT |
CN113640633B (en) * | 2021-08-12 | 2024-04-09 | 贵州大学 | Fault positioning method for gas-insulated switchgear |
CN113640633A (en) * | 2021-08-12 | 2021-11-12 | 贵州大学 | Fault positioning method for gas insulated switchgear |
CN113721162B (en) * | 2021-08-27 | 2023-10-24 | 中国科学院合肥物质科学研究院 | Fusion magnet power failure intelligent diagnosis method based on deep learning |
CN113721162A (en) * | 2021-08-27 | 2021-11-30 | 中国科学院合肥物质科学研究院 | Fusion magnet power failure intelligent diagnosis method based on deep learning |
CN114157552A (en) * | 2021-10-29 | 2022-03-08 | 国网河南省电力公司漯河供电公司 | Distribution network fault detection method based on twin timing diagram network |
CN114157552B (en) * | 2021-10-29 | 2024-04-05 | 国网河南省电力公司漯河供电公司 | Distribution network fault detection method based on twin time sequence diagram network |
CN114184861A (en) * | 2021-11-28 | 2022-03-15 | 辽宁石油化工大学 | Fault diagnosis method for oil-immersed transformer |
CN114609546A (en) * | 2022-03-10 | 2022-06-10 | 东南大学 | Modularized multi-level converter open-circuit fault diagnosis method based on isolated forest |
CN115951002A (en) * | 2023-03-10 | 2023-04-11 | 山东省计量科学研究院 | Gas chromatography-mass spectrometer fault detection device |
Also Published As
Publication number | Publication date |
---|---|
CN107271925B (en) | 2019-11-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107271925A (en) | Modular five-level converter fault locating method based on a deep convolutional network | |
Lin et al. | A fault classification method by RBF neural network with OLS learning procedure | |
CN109308522B (en) | GIS fault prediction method based on recurrent neural network | |
CN108062572A (en) | A kind of Fault Diagnosis Method of Hydro-generating Unit and system based on DdAE deep learning models | |
Tayeb | Faults detection in power systems using artificial neural network | |
CN110084148A (en) | A kind of Mechanical Failure of HV Circuit Breaker diagnostic method | |
CN110726898B (en) | Power distribution network fault type identification method | |
CN111275004A (en) | Bearing fault diagnosis method based on LMD and impulse neural network | |
CN106198551A (en) | Method and device for detecting defects of power transmission line | |
CN109522930A (en) | A kind of object detecting method based on type of barrier prediction | |
CN106372724A (en) | Artificial neural network algorithm | |
CN116400168A (en) | Power grid fault diagnosis method and system based on depth feature clustering | |
CN113902946A (en) | Power system fault direction judging method and device, terminal equipment and storage medium | |
CN112651519A (en) | Secondary equipment fault positioning method and system based on deep learning theory | |
CN112068033B (en) | On-line identification method for open-circuit faults of inverter power tube based on 1/6 period current | |
CN113222067A (en) | Intelligent island detection method based on SVM-Adaboost algorithm | |
CN116956124A (en) | Method for achieving fault diagnosis of three-phase rectifying device based on improved CNN | |
CN115781136B (en) | Intelligent recognition and optimization feedback method for welding heat input abnormality | |
CN116415485A (en) | Multi-source domain migration learning residual service life prediction method based on dynamic distribution self-adaption | |
CN104880216B (en) | A kind of sensor fault discrimination method based on different Error Correction of Coding cross-references | |
Raval | ANN based classification and location of faults in EHV transmission line | |
CN117150379B (en) | Printed circuit board fault diagnosis method | |
CN108956153A (en) | A kind of automobile anti-lock braking detection method based on RBF radial base neural net | |
CN112487707B (en) | LSTM-based intelligent dispensing pattern generation method | |
CN116774005B (en) | Four-switch Buck-Boost converter fault diagnosis method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20191105 |