CN116842463A - Electric automobile charging pile equipment fault diagnosis method - Google Patents
- Publication number: CN116842463A (application CN202310819698.9A)
- Authority
- CN
- China
- Prior art keywords
- data
- charging pile
- tcn
- network
- fedformer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01R—MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
- G01R31/00—Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01R—MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
- G01R31/00—Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
- G01R31/36—Arrangements for testing, measuring or monitoring the electrical condition of accumulators or electric batteries, e.g. capacity or state of charge [SoC]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0475—Generative networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/094—Adversarial learning
Abstract
The invention discloses a fault diagnosis method for electric vehicle charging pile equipment, which comprises: obtaining historical fault data of charging piles from a large database and preprocessing the data; performing data enhancement on the original data set with TimeGAN to generate time-series data; dividing the expanded data set into a training set, a verification set and a test set; constructing a TCN-FEDformer fusion model from a temporal convolutional network (TCN) and the FEDformer; improving the Archimedes optimization algorithm (AOA) with Latin hypercube initialization and a Cauchy reverse learning hybrid mutation strategy to obtain the MAOA algorithm; optimizing the hyperparameters of the TCN-FEDformer fusion model with MAOA to capture more effective fault-feature information of the charging pile equipment; and pooling the output of the fusion model by global averaging, then classifying it with the softmax function. The invention enables rapid and accurate diagnosis of charging pile equipment faults and improves the safety and availability of the charging pile equipment.
Description
Technical Field
The invention relates to a fault diagnosis technology of a charging pile, in particular to a fault diagnosis method of electric vehicle charging pile equipment.
Background
With the growing importance of the global energy transition and environmental protection, electric vehicles are becoming an important component of modern transportation. As the electric vehicle market develops rapidly, demand for charging piles, a key component of electric vehicle charging infrastructure, is also growing quickly. However, during operation a charging pile may develop various faults, such as voltage abnormality, current fluctuation or excessive temperature, which can damage the charging equipment and even endanger user safety.
As the key charging equipment for new energy vehicles, the stability and reliability of charging piles are vital to the popularization and development of new energy vehicles. For DC charging piles, current research hotspots mainly include charging strategies, control systems and load forecasting. The main research methods include deep neural networks, random forests, principal component analysis and wavelet packet analysis. This invention constructs a TCN-FEDformer fusion model from a temporal convolutional network (TCN) and the FEDformer: the TCN effectively extracts spatio-temporal features from a time series, while the FEDformer better captures its global characteristics; fusing the two models improves the accuracy and efficiency of charging pile fault diagnosis and provides technical support for the popularization and development of new energy vehicles.
Disclosure of Invention
The invention aims to: the invention aims to provide an efficient and accurate fault diagnosis method for a charging pile, which is used for realizing intelligent processing of operation data of the charging pile and accurate identification of fault types by combining a Time Convolution Network (TCN), a FEDformer and an improved Archimedes optimization algorithm (MAOA).
The technical scheme is as follows: the invention discloses a fault diagnosis method for electric vehicle charging pile equipment, which comprises the following steps:
(1) Acquiring historical fault data of the charging pile in a large database, and preprocessing the data; the historical fault data comprise voltage abnormality, charging current abnormality, over-temperature of a charging module, output overcurrent, direct-current output short-circuit fault and insulation abnormality of the charging pile;
(2) Performing data enhancement on the original data set by using TimeGAN to generate time sequence data so as to expand the original data set;
(3) Dividing the data set expanded in the step (2) into a training set, a verification set and a test set; constructing a TCN-FEDformer fusion model through a time convolution network TCN and a FEDformer model, extracting space-time characteristics in a time sequence through the time convolution network TCN, and capturing global characteristics of the time sequence through the FEDformer model;
(4) Initialize the population of the Archimedes optimization algorithm AOA with Latin hypercube sampling, and introduce a Cauchy reverse learning hybrid mutation strategy to prevent the algorithm from falling into local optima, obtaining the improved Archimedes optimization algorithm MAOA; use MAOA to optimize the hyperparameters of the TCN-FEDformer fusion model, including the learning rate, the forgetting rate and the number of hidden layers;
(5) And performing fault diagnosis on the electric vehicle charging pile equipment by using the optimized TCN-FEDformer combined model, and classifying the output of the fusion model by using a softmax function after global average pooling.
Further, the step of preprocessing the data in the step (1) is as follows:
step 2.1: data cleaning, namely removing redundant data, filling missing data and correcting abnormal data;
step 2.2: and normalizing the data to eliminate the influence of the data dimension.
Further, in the step (2), the TimeGAN performs data enhancement on the original data set, and the step of generating time-series data is as follows:
step 3.1: constructing a TimeGAN network, and adjusting countermeasure training between a generator and a discriminator; the real time sequence is subjected to data reconstruction in the self-encoder, and the embedding and reproduction functions can be defined as follows:
h s =e s (s),h t =e X (h s ,h t-1 ,x t ) (1)
wherein s is a vector space of static features, and x is a vector space of time sequence features; e. r represents an embedding function and a reproduction function respectively; e, e S An embedded network that is a static feature; e, e X Is a cyclic embedded network with time characteristics; r is (r) S and rX A recovery network that is static and time embedded; h is a t-1 Represents the previous temporal feature, h s and ht Potential space corresponding to static features and timing features; and />Input data decoded for the reproduction function;
step 3.2: designing a generating countermeasure network, wherein the generating function and the countermeasure function are defined as:
wherein g and d respectively represent a generator function and a discriminator function, z represents two initial noise types of the generator, g S Generating network for static characteristics g X A network is generated for the cycle of the time feature, and />Representing a sequence of forward and reverse hidden states, respectively,/-> and />For both data formats after processing by the generator, < +.> and />Discrimination results of corresponding data;
step 3.3: establishing an error loss function to perform joint training optimization on the TimeGAN model;
step 3.3.1: using data reconstruction loss L R Optimizing the encoding and decoding of the self-encoder, and generating more efficient low-dimensional potential characterization of the data;
step 3.3.2: introducing real metadata as supervision items of a generator by defining a supervised loss L between the generator and the real data S The potential characterization of the time sequence correlation and the learning ability of the real data characteristic are reflected by the evaluation generator;
step 3.3.3: definition of countermeasures against loss L of an unsupervised GAN U The feedback model of the generator is realized, and the study of the sequence correlation under the embedded space is completed on the basis of minimum three errors realized by the combined training of each network, so that the generated data conforming to the real time sequence distribution is generated;
the error loss formulas of the models are defined as follows:
wherein subscripts S, x 1 T-p represents the original data distribution, s andrepresenting the original static features and the static after passing through the self-encoder, respectivelyState characteristics, x t and />Representing the original temporal feature and the temporal feature generated from the encoder, y s and yt Discrimination result representing true sequence, < >> and />Indicating the discrimination result of the generated sequence, h t Representing potential temporal features of real data g X (h S ,h t-1 ,z t ) Representing potential temporal features of the sequence generated by the generator; step 3.4: after training is completed, the new time series data generated by the generator is expanded into the original data set.
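As a rough numerical illustration of the three losses in step 3.3 (greatly simplified: in TimeGAN proper these are computed on network outputs during joint training, and the function names here are ours):

```python
import numpy as np

def reconstruction_loss(x, x_tilde):
    # L_R: distance between the real sequence and its autoencoder reconstruction
    return float(np.mean(np.linalg.norm(x - x_tilde, axis=-1)))

def supervised_loss(h, h_hat):
    # L_S: one-step-ahead supervision in the latent space (compare from t = 1)
    return float(np.mean(np.linalg.norm(h[:, 1:] - h_hat[:, 1:], axis=-1)))

def unsupervised_loss(y_real, y_fake):
    # L_U: binary cross-entropy of the discriminator on real vs. generated data
    eps = 1e-12
    return float(-np.mean(np.log(y_real + eps) + np.log(1.0 - y_fake + eps)))

# toy shapes: (batch, time, features) for sequences, flat arrays for scores
x = np.random.default_rng(0).random((2, 5, 3))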
Further, the step of constructing the TCN-FEDformer fusion model in the step (3) is as follows:
step 4.1: the data set after expanding in the step (2) is processed according to the following 6:2:2 is divided into a training set, a verification set and a test set;
step 4.2: the method comprises the steps of fusing a time convolution network and an FEDformer model, and extracting space-time characteristics of an input time sequence through causal convolution, wherein the formula is as follows:
wherein f= (F 1 ,f 2 ,…,f K ) As a filter, x= (X 1 ,x 2 ,…,x T ) To input a sequence, x t K is the complete width of the convolution kernel, and K is the effective width in the convolution kernel;
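The causal convolution of step 4.2 can be sketched in plain numpy, with optional dilation as commonly used in TCNs (illustrative only, not the patented implementation):

```python
import numpy as np

def causal_conv1d(x: np.ndarray, f: np.ndarray, dilation: int = 1) -> np.ndarray:
    """Causal convolution: output at t depends only on x[t], x[t-d], x[t-2d], ..."""
    K = len(f)
    pad = (K - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])  # left-pad so no future values leak in
    return np.array([sum(f[k] * xp[t + pad - (K - 1 - k) * dilation]
                         for k in range(K))
                     for t in range(len(x))])

x = np.array([1.0, 2.0, 3.0, 4.0])
f = np.array([1.0, 1.0])       # a simple two-tap filter
y1 = causal_conv1d(x, f)       # y1[t] = x[t-1] + x[t]
y2 = causal_conv1d(x, f, 2)    # y2[t] = x[t-2] + x[t]
```

Left-padding with zeros keeps the output length equal to the input length while preserving causality.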
step 4.3: and (3) carrying out pooling operation, wherein the formula is as follows:
wherein R is the pool size, n is the step size of the distance of the data area to be moved, which is smaller than the input size y, and l is the number of layers of the convolution layer;
step 4.4: introducing an activation function ReLU, weight normalization and Dropout operation, combining the operation into a residual block through the steps 4.2 and 4.3, and forming a residual network by a plurality of residual blocks;
step 4.5: inputting the output of the TCN into an encoder-decoder structure of the fed former;
step 4.6.1: defining a structure of an encoder;
where l e {1, …, N } represents the output of the layer I encoder,is an embedded historical sequence; the Encoder (·) form is:
wherein ,respectively representing the i-th separated seasonal components of the first layer;
step 4.6.2: defining a structure of a decoder;
where l ε {1, …, M } represents the output of the layer I decoder;
the Decoder (-) form is:
wherein ,respectively representing the season component and the trend component after the i-th deblocking of the layer I; w (W) l,i I.e {1,2,3} represents the trend of the ith extraction ∈A ∈1 }>Is a projection of (2); the prediction result is the sum of two refined decomposition components: wherein WS Is to transform the depth-transformed seasonal component +.>Projecting to a target dimension;
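The series decomposition used throughout the FEDformer encoder and decoder separates the seasonal and trend parts with a moving average; a compact numpy sketch (the edge-padding strategy is our assumption):

```python
import numpy as np

def series_decomp(x: np.ndarray, kernel: int = 3):
    """Split a 1-D series into trend (moving average) and seasonal (residual)."""
    pad = kernel // 2
    # pad with the edge values so the moving average keeps the series length
    xp = np.concatenate([np.repeat(x[0], pad), x, np.repeat(x[-1], pad)])
    trend = np.convolve(xp, np.ones(kernel) / kernel, mode="valid")
    seasonal = x - trend
    return seasonal, trend

flat = np.ones(5)
s1, t1 = series_decomp(flat)        # a constant series is pure trend
ramp = np.arange(5.0)
s2, t2 = series_decomp(ramp)        # a linear ramp: interior residual is zero
```

The seasonal residual is what the frequency-enhanced blocks operate on, while the trend is accumulated separately in the decoder.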
step 4.7: and (3) training a TCN-FEDformer fusion model by using the training set and the verification set divided in the step (4.1), and predicting a test set by using the fusion model.
Further, in the step (4), an archimedes optimizing algorithm AOA is improved, and an improved archimedes optimizing algorithm MAOA is obtained, which comprises the following steps:
step 5.1: setting the population size and iteration times of an AOA algorithm, and the upper limit and the lower limit of a search space;
step 5.2: the population position of the algorithm is initialized by using Latin hypercube strategy, and the improved formula is shown as follows:
wherein ,lbj,i Is the lower bound of the j dimension of the i-th population, ub j,i For the upper bound of the j-th dimension of the i-th population, lb j and ubj For the upper and lower bounds of the j-th dimension, A i,j Search space for the j dimension of the i-th population, A j Representing the sub-search space in which the ith population is located, X i,d For the position of the ith dimension of the ith population, RFP is a full permutation operation, n represents population size, d represents problem dimension, X i Represents the initialization value of the ith population, rand is a value of [0,1 ]]Random values of (a);
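The Latin hypercube initialization of step 5.2 can be sketched in numpy: one sample per equal-width stratum in every dimension, with the strata randomly permuted per dimension (the helper name is ours):

```python
import numpy as np

def latin_hypercube_init(n: int, d: int, lb, ub, seed: int = 0) -> np.ndarray:
    """n individuals in d dimensions; each dimension gets one point per stratum."""
    rng = np.random.default_rng(seed)
    pop = np.empty((n, d))
    for j in range(d):
        # stratified points in [0, 1): exactly one in each of the n intervals,
        # assigned to individuals in a random order (the full permutation)
        strata = (rng.permutation(n) + rng.random(n)) / n
        pop[:, j] = lb[j] + strata * (ub[j] - lb[j])
    return pop

pop = latin_hypercube_init(10, 3, lb=np.zeros(3), ub=np.ones(3))
```

Compared with uniform random initialization, every dimension is guaranteed even coverage, which is why it helps the AOA start from a well-spread population.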
step 5.3: updating the density and volume of the individual:
in the formula , and />For the density of the ith individual in the current and next iteration,/v> and />For the volume of the ith volume, d, in the current and next iterations best 、v best Respectively the current optimal density and volume;
step 5.4: acceleration is updated;
step 5.4.1: when the transfer factor TF is less than or equal to 0.5, the algorithm performs a global search stage, and the acceleration update formula is shown as the following formula:
wherein ,for the acceleration of the ith individual in the next iteration, d mr 、v mr and amr Randomly selecting the density, the volume and the acceleration of an individual in the current iteration respectively;
step 5.4.2: when the transfer factor TF is greater than 0.5, the algorithm performs a local development stage, and the acceleration update formula is shown as follows:
wherein ,abest Acceleration as the optimal object;
step 5.4.3: the acceleration is normalized:
wherein ,for the acceleration normalized by the ith individual in the next iteration, max (a) and min (a) are the maximum and minimum accelerations in the global search, and u and l represent normalization ranges;
step 5.5: updating the object position;
step 5.5.1: when the transfer factor TF is less than or equal to 0.5, the algorithm performs a global search stage, and the position updating formula is shown as follows:
wherein rand E (0, 1), C 1 Is constant, x rand Representing the position of the ith random individual at the nth iteration;
step 5.5.2: when the transfer factor TF is more than 0.5, the algorithm performs a local development stage, introduces a cauchy reverse learning hybrid variation strategy, and perturbs the position to enable the position to have the capability of jumping out of local optimum, and the formula is as follows:
X′ best (t)=k 1 (ub+lb)-X best (t) (25)
wherein ub and lb represent upper and lower bounds; x'. best (t) is the optimal individual inverse solution at the t-th iteration, X best (t) is the optimal individual solution at the t-th iteration,for the cauchy reverse learning of the optimal solution, k 1 、k 2 Respectively [0,1 ]]Random numbers of (a); cauchy (0, 1) is a standard cauchy distribution, and p is a random probability following a normal distribution; when P is more than 0.5, the cauchy operator is mutated into an optimal solution, and when P is less than or equal to 0.5, the reverse learning strategy perturbs the current optimal solution;
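The hybrid mutation of step 5.5.2 might be sketched as follows (assuming the p ≤ 0.5 branch simply takes the scaled reverse solution; the patent's exact branch formulas may differ from this reconstruction):

```python
import numpy as np

def cauchy_reverse_mutation(x_best, lb, ub, rng=None):
    """Perturb the current best: Cauchy mutation (p > 0.5) or reverse learning."""
    rng = rng if rng is not None else np.random.default_rng()
    p = rng.random()
    if p > 0.5:
        # heavy-tailed Cauchy jump around the best solution: large occasional
        # steps help escape a local optimum
        return x_best + x_best * rng.standard_cauchy(size=x_best.shape)
    # reverse (opposition-based) learning of eq. (25), scaled by random k2
    k1, k2 = rng.random(), rng.random()
    return k2 * (k1 * (ub + lb) - x_best)

x_best = np.array([0.4, 0.7, 0.2])
cand = cauchy_reverse_mutation(x_best, lb=np.zeros(3), ub=np.ones(3),
                               rng=np.random.default_rng(1))
```

In practice the mutated candidate would replace the best solution only if its fitness improves.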
step 5.6: judging whether the maximum iteration times are reached, if so, outputting an optimal solution, extracting the super parameters of the TCN-FEDformer fusion model, otherwise, returning to the step 5.3.
Further, the step (5) of classifying the output of the fusion model by using a softmax function after global average pooling includes the following steps:
step 6.1: and (3) reducing the dimension of the obtained charging pile fault feature instead of a full-connection layer through global average pooling, and then calculating the probability P of the feature vector being classified into each category by using a softmax function of the pooled feature vector s, wherein the softmax function has the following calculation formula:
wherein ,Ws Weight matrix pooled for global averaging, S i B is the feature vector after pooling s 、b f Are all bias parameters;
step 6.2: the model training uses a cross entropy Loss function Loss to adjust network parameters, and the calculation formula is as follows:
in the formula ,li Representing the actual label, N represents the total number of samples, and x represents traversing all possible categories.
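Steps 6.1-6.2 can be illustrated with a few numpy helpers (the (batch, time, channels) feature shape and the omission of the weight/bias terms are our simplifying assumptions):

```python
import numpy as np

def global_avg_pool(features: np.ndarray) -> np.ndarray:
    """Average each channel over time: (batch, time, channels) -> (batch, channels)."""
    return features.mean(axis=1)

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs: np.ndarray, labels: np.ndarray) -> float:
    """Mean negative log-likelihood of the true class."""
    n = len(labels)
    return float(-np.mean(np.log(probs[np.arange(n), labels] + 1e-12)))

feats = np.ones((2, 4, 3))                 # 2 samples, 4 time steps, 3 channels
pooled = global_avg_pool(feats)            # -> (2, 3)
probs = softmax(np.zeros((2, 3)))          # uniform over 3 fault classes
ce = cross_entropy(probs, np.array([0, 1]))
```

With uniform probabilities over three classes the loss is log 3, the expected value for an uninformed classifier.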
The beneficial effects are that:
the invention can effectively utilize the historical fault data of the charging pile in the big data, and improves the accuracy and reliability of fault diagnosis. And carrying out data enhancement on the original data set by using the TimeGAN to generate time sequence data, and expanding the original data set. Constructing a TCN-FEDformer fusion model through a time convolution network TCN and a FEDformer, wherein the TCN can effectively extract space-time characteristics in a time sequence, and the FEDformer model can better capture global characteristics of the time sequence; by utilizing the MAOA optimized TCN-FEDformer fusion model super-parameters, more effective fault characteristic information of the charging pile equipment can be captured, and the precision and efficiency of fault prediction are improved.
Drawings
FIG. 1 is a schematic flow chart of the present invention;
FIG. 2 is a diagram of a TCN-FEDformer fusion model;
FIG. 3 is a schematic flow chart of the MAOA algorithm.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
The invention discloses a fault diagnosis method for electric vehicle charging pile equipment, which is shown in fig. 1 to 3 and specifically comprises the following steps:
(1) Acquiring historical fault data of the charging pile in a large database, and preprocessing the data; the historical fault data comprise voltage abnormality, charging current abnormality, over-temperature of a charging module, output overcurrent, direct-current output short-circuit fault and insulation abnormality of the charging pile.
Step 1.1: and (3) cleaning data, removing redundant data, filling missing data and correcting abnormal data.
Step 1.2: and normalizing the data to eliminate the influence of the data dimension.
(2) And carrying out data enhancement on the original data set by using the TimeGAN to generate time sequence data so as to expand the original data set.
Step 2.1: constructing a TimeGAN network, and adjusting countermeasure training between a generator and a discriminator; the real time sequence is subjected to data reconstruction in the self-encoder, and the embedding and reproduction functions can be defined as follows:
h s =e s (s),h t =e X (h s ,h t-1 ,x t ) (1)
wherein s is a vector space of static features, and x is a vector space of time sequence features; e. r represents an embedding function and a reproduction function respectively; e, e S An embedded network that is a static feature; e, e X Is a cyclic embedded network with time characteristics; r is (r) S and rX A recovery network that is static and time embedded; h is a t-1 Represents the previous temporal feature, h s and ht Potential space corresponding to static features and timing features; and />Input data decoded for the reproduction function.
Step 2.2: designing a generating countermeasure network, wherein the generating function and the countermeasure function are defined as:
wherein g and d respectively represent a generator function and a discriminator function, z represents two initial noise types of the generator, g S Generating network for static characteristics g X A network is generated for the cycle of the time feature, and />Representing a sequence of forward and reverse hidden states, respectively,/-> and />For both data formats after processing by the generator, < +.> and />And judging the result of the corresponding data.
Step 2.3: establishing an error loss function to perform joint training optimization on a TimeGAN model:
step 2.3.1: using data reconstruction loss L R The optimization of the encoding and decoding of the self-encoder is realized, and the more efficient low-dimensional potential characterization of the data is generated.
Step 2.3.2: introducing real metadata as supervision items of a generator by defining a supervised loss L between the generator and the real data S The evaluation generator learns the potential characterization and true data features that characterize the timing correlation.
Step 2.3.3: definition of countermeasures against loss L of an unsupervised GAN U The feedback model of the generator is realized, and the study of the sequence correlation under the embedded space is completed on the basis of minimum three errors realized by the combined training of each network, so that the generated data conforming to the real time sequence distribution is generated;
the error loss formulas of the models are defined as follows:
wherein subscripts S, x 1 T-p represents the original data distribution, s andrepresenting the original static feature and the static feature after passing through the self-encoder, x, respectively t and />Representing the original temporal feature and the temporal feature generated from the encoder, y s and yt The discrimination result of the true sequence is represented,/> and />Indicating the discrimination result of the generated sequence, h t Representing potential temporal features of real data g X (h S ,h t-1 ,z t ) Representing potential temporal features of the sequence generated by the generator.
Step 2.4: after training is completed, the new time series data generated by the generator is expanded into the original data set.
(3) Dividing the data set expanded in the step (2) into a training set, a verification set and a test set; and constructing a TCN-FEDformer fusion model through a time convolution network TCN and a FEDformer model, wherein the time convolution network TCN extracts space-time characteristics in the time sequence, and the FEDformer model captures global characteristics of the time sequence.
Step 3.1: the data set after expanding in the step (2) is processed according to the following 6:2: the scale of 2 is divided into a training set, a validation set and a test set.
Step 3.2: the method comprises the steps of fusing a time convolution network and an FEDformer model, and extracting space-time characteristics of an input time sequence through causal convolution, wherein the formula is as follows:
wherein f= (F 1 ,f 2 ,…,f K ) As a filter, x= (X 1 ,x 2 ,…,x T ) To input a sequence, x t K is the full width of the convolution kernel, and K is the effective width in the convolution kernel, for the input layer node.
Step 3.3: and (3) carrying out pooling operation, wherein the formula is as follows:
where R is the pool size, n is the step size that determines the distance of the data area to be moved, less than the input size y, and l is the number of convolutional layers.
Step 3.4: introducing the ReLU activation function, weight normalization and Dropout, combining them with the operations of steps 3.2 and 3.3 into a residual block, and stacking several residual blocks to form a residual network.
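Steps 3.2 to 3.4 can be sketched as follows: a dilated causal convolution whose output at time t depends only on inputs up to t, wrapped with a ReLU into a simplified residual block. This is illustrative NumPy code; a real TCN residual block also uses weight normalization, Dropout and a 1x1 convolution on the skip path:

```python
import numpy as np

def causal_conv(x, f, dilation=1):
    """Dilated causal 1-D convolution: y[t] uses only x[t], x[t-d], x[t-2d], ..."""
    T, K = len(x), len(f)
    y = np.zeros(T)
    for t in range(T):
        for k in range(K):
            j = t - k * dilation          # index strictly into the past
            if j >= 0:
                y[t] += f[k] * x[j]
    return y

def residual_block(x, f, dilation=1):
    """Simplified TCN residual block: input plus ReLU of the causal convolution."""
    return x + np.maximum(causal_conv(x, f, dilation), 0.0)

x = np.array([1.0, 2.0, 3.0, 4.0])
y = causal_conv(x, np.array([1.0, 1.0]))  # y[t] = x[t] + x[t-1]
```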
Step 3.5: the output of the TCN is input into the encoder-decoder structure of the FEDformer.
The encoder-decoder structure of the FEDformer is as follows:
step 3.5.1: defining a structure of an encoder;
where X_en^l, l ∈ {1, …, N}, denotes the output of the l-th encoder layer, and X_en^0 is the embedded historical sequence; the Encoder(·) form is:
where S_en^{l,i}, i ∈ {1, 2}, denotes the seasonal component after the i-th decomposition in the l-th layer.
Step 3.5.2: defining a structure of a decoder;
where X_de^l, l ∈ {1, …, M}, denotes the output of the l-th decoder layer.
The Decoder(·) form is:
where S_de^{l,i} and T_de^{l,i}, i ∈ {1, 2, 3}, denote the seasonal component and the trend component after the i-th decomposition in the l-th decoder layer, respectively; W_{l,i}, i ∈ {1, 2, 3}, denotes the projection of the i-th extracted trend T_de^{l,i}. The final prediction is the sum of the two refined decomposition components, W_S · X_de^M + T_de^M, where W_S projects the deep-transformed seasonal component X_de^M to the target dimension.
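The seasonal-trend decomposition that the FEDformer encoder and decoder apply repeatedly can be sketched with a moving average. This is illustrative NumPy code; the boundary padding scheme and kernel size are assumptions:

```python
import numpy as np

def series_decomp(x, kernel=3):
    """Moving-average decomposition: returns (seasonal, trend) with x = seasonal + trend."""
    pad = kernel // 2
    # Pad by repeating the boundary values so the trend keeps the input length.
    xp = np.concatenate([np.repeat(x[0], pad), x, np.repeat(x[-1], pad)])
    trend = np.convolve(xp, np.ones(kernel) / kernel, mode="valid")
    seasonal = x - trend
    return seasonal, trend

x = np.array([1.0, 2.0, 4.0, 2.0, 1.0])
seasonal, trend = series_decomp(x, kernel=3)
```

By construction the two components always sum back to the input, which is what lets the decoder refine them separately and recombine them at the end.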
Step 3.6: training the TCN-FEDformer fusion model by using the training set and the verification set divided in step 3.1, and predicting the test set by using the fusion model.
(4) Initializing the population of the Archimedes optimization algorithm AOA by Latin hypercube sampling, and introducing a Cauchy reverse-learning hybrid mutation strategy to prevent the algorithm from falling into local optima, thereby obtaining the improved Archimedes optimization algorithm MAOA; and optimizing the hyperparameters of the TCN-FEDformer fusion model by the MAOA, wherein the hyperparameters comprise the learning rate, the forgetting rate and the number of hidden layers.
Step 4.1: setting the population size and the number of iterations of the AOA algorithm, and the upper and lower limits of the search space.
Step 4.2: the population position of the algorithm is initialized by using Latin hypercube strategy, and the improved formula is shown as follows:
where lb_{j,i} and ub_{j,i} are the lower and upper bounds of the j-th dimension of the i-th sub-space, lb_j and ub_j are the lower and upper bounds of the j-th dimension, A_{i,j} is the j-th-dimension search interval of the i-th sub-space, A_j denotes the sub-search space in which the i-th individual is located, X_{i,d} is the position of the i-th individual in the d-th dimension, RFP is a full random permutation operation, n denotes the population size, d denotes the problem dimension, X_i denotes the initialization value of the i-th individual, and rand is a random value in [0, 1].
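The Latin hypercube initialization of step 4.2 can be sketched as follows (illustrative NumPy code; each dimension is cut into n equal strata, every stratum receives exactly one individual, and the strata are permuted independently per dimension):

```python
import numpy as np

def latin_hypercube_init(n, d, lb, ub, seed=0):
    """Initialize n individuals in d dimensions by Latin hypercube sampling."""
    rng = np.random.default_rng(seed)
    pos = np.empty((n, d))
    for j in range(d):
        strata = rng.permutation(n)        # full random permutation of the strata
        u = rng.random(n)                  # rand in [0, 1) inside each stratum
        pos[:, j] = lb[j] + (strata + u) / n * (ub[j] - lb[j])
    return pos

pop = latin_hypercube_init(10, 3, lb=np.zeros(3), ub=np.ones(3))
```

Compared with uniform random initialization, this guarantees that every dimension of the search space is covered evenly from the first iteration.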
Step 4.3: updating the density and volume of the individual:
where dens_i^t and dens_i^{t+1} denote the density of the i-th individual in the current and next iterations, vol_i^t and vol_i^{t+1} denote the volume of the i-th individual in the current and next iterations, and dens_best and vol_best are the current optimal density and volume, respectively.
Step 4.4: acceleration is updated.
Step 4.4.1: when the transfer factor TF is less than or equal to 0.5, the algorithm performs a global search stage, and the acceleration update formula is shown as the following formula:
where acc_i^{t+1} is the acceleration of the i-th individual in the next iteration, and dens_mr, vol_mr and acc_mr are the density, volume and acceleration of a randomly selected individual in the current iteration, respectively.
Step 4.4.2: when the transfer factor TF is greater than 0.5, the algorithm performs a local development stage, and the acceleration update formula is shown as follows:
where acc_best is the acceleration of the optimal individual.
Step 4.4.3: the acceleration is normalized:
where acc_{norm,i}^{t+1} is the normalized acceleration of the i-th individual in the next iteration, max(acc) and min(acc) are the maximum and minimum accelerations in the global search, and u and l denote the normalization range.
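The normalization of step 4.4.3 maps the raw accelerations into a fixed range so that step sizes stay bounded; a minimal sketch (illustrative NumPy code, with u = 0.9 and l = 0.1 as assumed range constants):

```python
import numpy as np

def normalize_acc(acc, u=0.9, l=0.1):
    """Min-max normalize accelerations into [l, l + u] to bound the step size."""
    return u * (acc - acc.min()) / (acc.max() - acc.min()) + l

a = np.array([0.0, 5.0, 10.0])
a_norm = normalize_acc(a)
```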
Step 4.5: updating the object position.
Step 4.5.1: when the transfer factor TF is less than or equal to 0.5, the algorithm performs a global search stage, and the position updating formula is shown as follows:
where rand ∈ (0, 1), C_1 is a constant, and x_rand denotes the position of a randomly selected individual at the t-th iteration.
Step 4.5.2: when the transfer factor TF is greater than 0.5, the algorithm performs the local development stage and introduces a Cauchy reverse-learning hybrid mutation strategy, perturbing the position so that it can jump out of local optima, wherein the formula is as follows:
X′_best(t) = k_1(ub + lb) - X_best(t) (25)
where ub and lb denote the upper and lower bounds; X′_best(t) is the reverse (opposition-based) solution of the optimal individual at the t-th iteration, X_best(t) is the optimal individual solution at the t-th iteration, and X″_best(t) is the Cauchy-mutated optimal solution; k_1 and k_2 are random numbers in [0, 1]; Cauchy(0, 1) is the standard Cauchy distribution, and p is a random probability following a normal distribution; when p > 0.5, the Cauchy operator mutates the optimal solution, and when p ≤ 0.5, the reverse-learning strategy perturbs the current optimal solution.
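The hybrid perturbation of step 4.5.2 can be sketched as follows. This is illustrative NumPy code; `opposition` implements formula (25), while the additive Cauchy mutation shown is one common form and is an assumption where the patent's exact expression is not reproduced:

```python
import numpy as np

def opposition(x_best, lb, ub, k1):
    """Reverse-learning candidate, formula (25): X' = k1 * (ub + lb) - X."""
    return k1 * (ub + lb) - x_best

def cauchy_mutation(x_best, k2, rng):
    """Cauchy-perturbed candidate: add k2-scaled standard Cauchy noise."""
    return x_best + k2 * rng.standard_cauchy(x_best.shape)

def perturb_best(x_best, lb, ub, rng):
    """p > 0.5 -> Cauchy mutation; p <= 0.5 -> opposition-based perturbation."""
    if rng.random() > 0.5:
        return cauchy_mutation(x_best, rng.random(), rng)
    return opposition(x_best, lb, ub, rng.random())

cand = opposition(np.array([1.0, 2.0]), np.zeros(2), np.full(2, 10.0), 0.5)
```

The heavy tails of the Cauchy distribution occasionally produce large jumps, which is what gives the perturbed best solution a chance to escape a local optimum.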
Step 4.6: judging whether the maximum number of iterations is reached; if so, outputting the optimal solution and extracting the hyperparameters of the TCN-FEDformer fusion model; otherwise, returning to step 4.3.
(5) Performing fault diagnosis on the electric vehicle charging pile equipment by using the optimized TCN-FEDformer fusion model, and classifying the output of the fusion model by using a softmax function after global average pooling.
Step 5.1: reducing the dimension of the obtained charging pile fault features through global average pooling in place of a fully connected layer, and then calculating the probability P_s that the pooled feature vector belongs to each category with the softmax function, whose calculation formula is as follows:
where W_s is the weight matrix after global average pooling, S_i is the pooled feature vector, and b_s and b_f are both bias parameters.
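Step 5.1 can be sketched as follows (illustrative NumPy code; the channel and fault-class counts are assumptions):

```python
import numpy as np

def gap_softmax(feature_maps, W, b):
    """Global average pooling over time followed by a softmax classifier head.

    feature_maps: (channels, time) output of the fusion model
    W: (n_classes, channels) weight matrix, b: (n_classes,) bias
    """
    s = feature_maps.mean(axis=1)            # global average pooling
    logits = W @ s + b
    z = np.exp(logits - logits.max())        # numerically stabilized softmax
    return z / z.sum()

fm = np.ones((4, 10))                        # 4 channels, 10 time steps
W = np.eye(3, 4)                             # 3 illustrative fault classes
p = gap_softmax(fm, W, np.zeros(3))
```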
Step 5.2: model training uses the cross-entropy loss function Loss to adjust the network parameters, and the calculation formula is as follows:
where l_i denotes the actual label, N denotes the total number of samples, and x denotes traversal over all possible categories.
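The cross-entropy loss of step 5.2 can be sketched as follows (illustrative NumPy code operating on already-softmaxed probabilities):

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean cross-entropy: -(1/N) * sum_i log p_i(label_i).

    probs:  (N, C) predicted class probabilities
    labels: (N,) integer class indices (the actual labels l_i)
    """
    n = len(labels)
    return -np.mean(np.log(probs[np.arange(n), labels]))

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
loss = cross_entropy(probs, np.array([0, 1]))
```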
The foregoing embodiments are merely illustrative of the technical concept and features of the present invention, and are intended to enable those skilled in the art to understand the present invention and to implement the same, not to limit the scope of the present invention. All equivalent changes or modifications made according to the spirit of the present invention should be included in the scope of the present invention.
Claims (6)
1. The fault diagnosis method for the electric vehicle charging pile equipment is characterized by comprising the following steps of:
(1) Acquiring historical fault data of the charging pile in a large database, and preprocessing the data; the historical fault data comprise voltage abnormality, charging current abnormality, over-temperature of a charging module, output overcurrent, direct-current output short-circuit fault and insulation abnormality of the charging pile;
(2) Performing data enhancement on the original data set by using TimeGAN to generate time sequence data so as to expand the original data set;
(3) Dividing the data set expanded in the step (2) into a training set, a verification set and a test set; constructing a TCN-FEDformer fusion model through a time convolution network TCN and a FEDformer model, extracting space-time characteristics in a time sequence through the time convolution network TCN, and capturing global characteristics of the time sequence through the FEDformer model;
(4) Initializing the population of the Archimedes optimization algorithm AOA by Latin hypercube sampling, and introducing a Cauchy reverse-learning hybrid mutation strategy to prevent the algorithm from falling into local optima, thereby obtaining the improved Archimedes optimization algorithm MAOA; and optimizing the hyperparameters of the TCN-FEDformer fusion model by the MAOA, wherein the hyperparameters comprise the learning rate, the forgetting rate and the number of hidden layers;
(5) Performing fault diagnosis on the electric vehicle charging pile equipment by using the optimized TCN-FEDformer fusion model, and classifying the output of the fusion model by using a softmax function after global average pooling.
2. The fault diagnosis method for the electric vehicle charging pile device according to claim 1, wherein the step of preprocessing the data in the step (1) is as follows:
step 2.1: data cleaning, namely removing redundant data, filling missing data and correcting abnormal data;
step 2.2: and normalizing the data to eliminate the influence of the data dimension.
3. The fault diagnosis method for the electric vehicle charging pile device according to claim 1, wherein the step (2) of data enhancing the original data set by the TimeGAN, and generating the time-series data comprises the steps of:
step 3.1: constructing the TimeGAN network and conducting adversarial training between the generator and the discriminator; the real time series undergoes data reconstruction in the self-encoder, and the embedding and recovery functions can be defined as follows:
h_s = e_S(s), h_t = e_X(h_s, h_{t-1}, x_t) (1)
where s is the vector space of static features and x is the vector space of temporal features; e and r denote the embedding function and the recovery function, respectively; e_S is the embedding network for static features; e_X is the recurrent embedding network for temporal features; r_S and r_X are the recovery networks for the static and temporal embeddings; h_{t-1} denotes the previous temporal feature; h_s and h_t are the latent spaces corresponding to the static and temporal features; and s̃ and x̃_t are the input data decoded by the recovery function;
step 3.2: designing a generating countermeasure network, wherein the generating function and the countermeasure function are defined as:
where g and d denote the generator function and the discriminator function, respectively; z denotes the two types of initial noise of the generator; g_S is the generation network for static features and g_X is the recurrent generation network for temporal features; the discriminator operates on the forward and backward hidden-state sequences; ĥ_s and ĥ_t are the two data formats produced by the generator, and ỹ_s and ỹ_t are the corresponding discrimination results;
step 3.3: establishing an error loss function to perform joint training optimization on the TimeGAN model;
step 3.3.1: using the data reconstruction loss L_R to optimize the encoding and decoding of the self-encoder and to generate a more efficient low-dimensional latent representation of the data;
step 3.3.2: introducing the real data as a supervision term for the generator by defining a supervised loss L_S between the generator and the real data, which evaluates the generator's latent representation of temporal correlation and its ability to learn the characteristics of the real data;
step 3.3.3: defining the unsupervised adversarial loss L_U of the GAN to provide feedback to the generator; on the basis of jointly training each network to minimize the three losses, completing the learning of sequence correlation in the embedded space, so as to produce generated data that conforms to the real time-series distribution;
the error loss formulas of the models are defined as follows:
where the subscripts s and x_{1:T} denote expectations taken over the original data distribution; s and ŝ denote the original static feature and the static feature after passing through the self-encoder, respectively; x_t and x̂_t denote the original temporal feature and the temporal feature generated by the self-encoder; y_s and y_t denote the discrimination results of the real sequence, and ŷ_s and ŷ_t the discrimination results of the generated sequence; h_t denotes the latent temporal feature of the real data, and g_X(h_s, h_{t-1}, z_t) denotes the latent temporal feature of the sequence produced by the generator;
step 3.4: after training is completed, the new time-series data generated by the generator are added to the original data set to expand it.
4. The fault diagnosis method for the electric vehicle charging pile equipment according to claim 1, wherein the step of constructing the TCN-FEDformer fusion model in the step (3) is as follows:
step 4.1: the data set expanded in step (2) is divided into a training set, a verification set and a test set at a ratio of 6:2:2;
step 4.2: fusing the time convolution network with the FEDformer model, and extracting the spatio-temporal features of the input time series through causal convolution, wherein the formula is as follows:
where f = (f_1, f_2, …, f_K) is the filter, X = (x_1, x_2, …, x_T) is the input sequence, x_t is an input-layer node, K is the full width of the convolution kernel, and k is the effective width within the kernel;
step 4.3: and (3) carrying out pooling operation, wherein the formula is as follows:
where R is the pooling size, n is the stride that determines how far the pooling region moves and is smaller than the input size y, and l is the number of the convolutional layer;
step 4.4: introducing the ReLU activation function, weight normalization and Dropout, combining them with the operations of steps 4.2 and 4.3 into a residual block, and stacking several residual blocks to form a residual network;
step 4.5: inputting the output of the TCN into the encoder-decoder structure of the FEDformer;
step 4.6.1: defining a structure of an encoder;
where X_en^l, l ∈ {1, …, N}, denotes the output of the l-th encoder layer, and X_en^0 is the embedded historical sequence; the Encoder(·) form is:
where S_en^{l,i}, i ∈ {1, 2}, denotes the seasonal component after the i-th decomposition in the l-th layer;
step 4.6.2: defining a structure of a decoder;
where X_de^l, l ∈ {1, …, M}, denotes the output of the l-th decoder layer;
the Decoder(·) form is:
where S_de^{l,i} and T_de^{l,i}, i ∈ {1, 2, 3}, denote the seasonal component and the trend component after the i-th decomposition in the l-th decoder layer, respectively; W_{l,i}, i ∈ {1, 2, 3}, denotes the projection of the i-th extracted trend T_de^{l,i}; the final prediction is the sum of the two refined decomposition components, W_S · X_de^M + T_de^M, where W_S projects the deep-transformed seasonal component X_de^M to the target dimension;
step 4.7: training the TCN-FEDformer fusion model by using the training set and the verification set divided in step 4.1, and predicting the test set by using the fusion model.
5. The fault diagnosis method for the electric vehicle charging pile device according to claim 1, wherein the step (4) is to improve an archimedes optimization algorithm AOA to obtain an improved archimedes optimization algorithm MAOA, and the method comprises the following steps:
step 5.1: setting the population size and the number of iterations of the AOA algorithm, and the upper and lower limits of the search space;
step 5.2: the population position of the algorithm is initialized by using Latin hypercube strategy, and the improved formula is shown as follows:
where lb_{j,i} and ub_{j,i} are the lower and upper bounds of the j-th dimension of the i-th sub-space, lb_j and ub_j are the lower and upper bounds of the j-th dimension, A_{i,j} is the j-th-dimension search interval of the i-th sub-space, A_j denotes the sub-search space in which the i-th individual is located, X_{i,d} is the position of the i-th individual in the d-th dimension, RFP is a full random permutation operation, n denotes the population size, d denotes the problem dimension, X_i denotes the initialization value of the i-th individual, and rand is a random value in [0, 1];
step 5.3: updating the density and volume of the individual:
where dens_i^t and dens_i^{t+1} denote the density of the i-th individual in the current and next iterations, vol_i^t and vol_i^{t+1} denote the volume of the i-th individual in the current and next iterations, and dens_best and vol_best are the current optimal density and volume, respectively;
step 5.4: acceleration is updated;
step 5.4.1: when the transfer factor TF is less than or equal to 0.5, the algorithm performs a global search stage, and the acceleration update formula is shown as the following formula:
where acc_i^{t+1} is the acceleration of the i-th individual in the next iteration, and dens_mr, vol_mr and acc_mr are the density, volume and acceleration of a randomly selected individual in the current iteration, respectively;
step 5.4.2: when the transfer factor TF is greater than 0.5, the algorithm performs a local development stage, and the acceleration update formula is shown as follows:
where acc_best is the acceleration of the optimal individual;
step 5.4.3: the acceleration is normalized:
where acc_{norm,i}^{t+1} is the normalized acceleration of the i-th individual in the next iteration, max(acc) and min(acc) are the maximum and minimum accelerations in the global search, and u and l denote the normalization range;
step 5.5: updating the object position;
step 5.5.1: when the transfer factor TF is less than or equal to 0.5, the algorithm performs a global search stage, and the position updating formula is shown as follows:
where rand ∈ (0, 1), C_1 is a constant, and x_rand denotes the position of a randomly selected individual at the t-th iteration;
step 5.5.2: when the transfer factor TF is greater than 0.5, the algorithm performs the local development stage and introduces a Cauchy reverse-learning hybrid mutation strategy, perturbing the position so that it can jump out of local optima, wherein the formula is as follows:
X′_best(t) = k_1(ub + lb) - X_best(t) (25)
where ub and lb denote the upper and lower bounds; X′_best(t) is the reverse (opposition-based) solution of the optimal individual at the t-th iteration, X_best(t) is the optimal individual solution at the t-th iteration, and X″_best(t) is the Cauchy-mutated optimal solution; k_1 and k_2 are random numbers in [0, 1]; Cauchy(0, 1) is the standard Cauchy distribution, and p is a random probability following a normal distribution; when p > 0.5, the Cauchy operator mutates the optimal solution, and when p ≤ 0.5, the reverse-learning strategy perturbs the current optimal solution;
step 5.6: judging whether the maximum number of iterations is reached; if so, outputting the optimal solution and extracting the hyperparameters of the TCN-FEDformer fusion model; otherwise, returning to step 5.3.
6. The method for diagnosing a fault in an electric vehicle charging pile device according to claim 1, wherein the step (5) classifies the output of the fusion model by using a softmax function after global average pooling, and comprises the steps of:
step 6.1: reducing the dimension of the obtained charging pile fault features through global average pooling in place of a fully connected layer, and then calculating the probability P_s that the pooled feature vector belongs to each category with the softmax function, whose calculation formula is as follows:
where W_s is the weight matrix after global average pooling, S_i is the pooled feature vector, and b_s and b_f are both bias parameters;
step 6.2: model training uses the cross-entropy loss function Loss to adjust the network parameters, and the calculation formula is as follows:
where l_i denotes the actual label, N denotes the total number of samples, and x denotes traversal over all possible categories.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310819698.9A CN116842463A (en) | 2023-07-05 | 2023-07-05 | Electric automobile charging pile equipment fault diagnosis method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116842463A true CN116842463A (en) | 2023-10-03 |
Family
ID=88159623
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN117633479A * | 2024-01-26 | 2024-03-01 | 国网湖北省电力有限公司 | Method and system for analyzing and processing faults of charging piles
CN117633479B * | 2024-01-26 | 2024-04-09 | 国网湖北省电力有限公司 | Method and system for analyzing and processing faults of charging piles
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |