CN113033678A - Lithium battery pack fault diagnosis method based on adaptive adversarial network - Google Patents
Lithium battery pack fault diagnosis method based on adaptive adversarial network
- Publication number: CN113033678A
- Application number: CN202110348139.5A
- Authority: CN (China)
- Prior art keywords: domain, data, network, training, target
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06N3/045 — Combinations of networks (computing arrangements based on biological models; neural networks; architecture)
- G06F18/24 — Classification techniques (pattern recognition; analysing)
- G06N3/08 — Learning methods (neural networks)
- G06Q50/06 — Energy or water supply (ICT specially adapted for specific business sectors)
Abstract
The invention discloses a lithium battery pack fault diagnosis method based on an adaptive adversarial network, which comprises the following steps: given source-domain data {x_s, y_s} labeled with K health conditions and unlabeled target-domain data x_t, a diagnostic program layers the source-domain and target-domain data, reduces the number of feature output mappings, and optimizes the joint distribution difference and the marginal distribution difference of the objective functions; the diagnostic program comprises an asymmetric convolutional self-coding network and domain adversarial training. The invention has the following advantages and effects: a common one-dimensional convolutional network is designed as the asymmetric convolutional encoding network of a deep convolutional neural network, and high-dimensional data are layered and scaled; the method can not only learn class discrimination for accurate classification, but also optimize the classifier and the discriminator over the joint distribution difference and the marginal distribution difference of their objective functions.
Description
Technical Field
The invention relates to the field of mechanical fault diagnosis, in particular to a lithium battery pack fault diagnosis method based on an adaptive adversarial network.
Background
With the gradual depletion of fossil resources and growing environmental protection requirements, clean energy represented by lithium batteries has been widely adopted. However, with the large-scale application of lithium batteries, safety problems have gradually emerged. During use, a power lithium battery is prone to faults caused by improper user operation or accidental physical collision. Lithium battery faults are varied: slight faults shorten the battery's service life, while serious faults can cause open fire and spontaneous combustion in an electric vehicle, threatening its operational safety. Therefore, during operation of an electric vehicle's lithium battery, the most effective way to avoid faults is to use the battery management system to analyze real-time parameters such as current, voltage and temperature, and to judge whether a fault has occurred. Battery parameters change noticeably in the early stage of a fault, and these changes reflect the fault type.
The safety of the lithium battery of an electric vehicle is of great importance, and numerous researchers have studied lithium battery fault states extensively. During operation, the safety of a lithium battery is affected by many factors, among which overcharge, overdischarge and aging are three important ones. Owing to the complex operating conditions of electric vehicles and the complex grid structure of the battery pack, the faults of multiple single cells in the pack exhibit a spatio-temporal coupling effect, making fault classification of the battery pack inaccurate. Deep networks not only have strong feature learning and big-data processing capability but also reduce reliance on manual effort and prior knowledge, enabling more efficient and accurate diagnosis. Among the various deep models, convolutional neural networks and their variants, as a very popular branch, have achieved state-of-the-art results in many applications. However, impressive performance gains are obtained only when the training data and the test data share the same distribution, an assumption that does not always hold in practice owing to variations in operating conditions, external temperature and noise. That is, when the source domain and the target domain have different data distributions, the performance of most methods can degrade drastically. One might solve this problem by retraining or fine-tuning the network model for the target task, but labeled data would then be needed, and manually collecting well-annotated data is often very expensive and impractical in real-time diagnostic tasks. Therefore, a more efficient model is needed that can be trained on abundant labeled data in a relevant source domain and reused in a new target domain whose data distribution differs.
The purpose of transfer learning is to establish a learning mechanism that can learn across fields with different probability distributions. Unsupervised domain adaptation is an active branch of transfer learning, able to span the distribution differences between domains and to explore domain-invariant features. Domain adaptation is a special case of transfer learning, aimed at transferring knowledge from the source training domain to the target testing domain by exploring domain-invariant features and compensating for distribution differences. Reviewing the literature, domain adaptation can be roughly divided into two modes, supervised and unsupervised. Because annotating samples in the target domain is often expensive or prohibitive, this work is primarily concerned with the unsupervised domain adaptation (UDA) problem. Existing UDA methods for fault diagnosis fall mainly into two categories: methods based on moment matching, and adversarial adaptation methods comprising a feature generator and a domain discriminator. The generator is trained to learn features that prevent the discriminator from distinguishing between the source and target domains, while the discriminator is trained not to be fooled. However, some problems remain in these adversarial methods. The domain discriminator typically only attempts to distinguish source-domain from target-domain features, without regard to the task-specific decision boundaries between classes; the generated features may therefore be ambiguous near class boundaries. In practice, each domain sample usually has its own characteristics, i.e., a certain relationship to the decision boundary of the specific task. If these characteristics are not considered, it is difficult to fully match the feature distributions and construct a powerful, transferable diagnostic algorithm.
Disclosure of Invention
The invention aims to provide a lithium battery pack fault diagnosis method based on an adaptive adversarial network, so as to solve the problems described in the background art.
The technical purpose of the invention is achieved by the following technical scheme: a lithium battery pack fault diagnosis method based on an adaptive adversarial network comprises the following steps:
Given source-domain data {x_s, y_s} labeled with K health conditions and unlabeled target-domain data x_t, layering the source-domain and target-domain data by a diagnostic program, reducing the number of feature output mappings, and optimizing the joint distribution difference and the marginal distribution difference of the objective functions;
the diagnostic program comprises an asymmetric convolutional self-coding network and domain adversarial training.
Further, the asymmetric convolutional self-coding network comprises the following steps:
Let the input vector of the asymmetric convolutional self-coding network be x ∈ R, and let the i-th hidden layer learn a mapping g_i ∈ R of the input-layer data. The coding function is:
g_i = f(w_i·g_{i-1} + b_i),  i = 1, …, n   (1)
In formula (1), n is the number of hidden layers; w_i and b_i are the convolution kernel parameters of each layer, with g_0 = x when i = 0; f is the activation function, applied after each convolutional layer, with the expression:
f(x) = x for x ≥ 0,  f(x) = α(e^x − 1) for x < 0   (2)
In formula (2), α is a coefficient and may be taken as 1;
The input data x are mapped to the outputs G_s(x_s) and G_t(x_t) respectively; the new high-level feature outputs can be expressed as:
g_s^i = f(w_s^i·g_s^{i-1} + b_s^i),  i = 1, …, L   (3)
g_t^i = f(w_t^i·g_t^{i-1} + b_t^i),  i = 1, …, L   (4)
In formulas (3) and (4), g_s^i and g_t^i are the feature maps of the convolutional neural network, w_s^i and w_t^i are the weight matrices of the i-th layer of the source-domain and target-domain convolutional networks, and L is the number of layers of each convolutional network.
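As an illustration of the layered coding of formulas (1) and (2), the sketch below applies g_i = f(w_i·g_{i-1} + b_i) with an ELU-style activation (α = 1). Dense layers stand in for the one-dimensional convolutions, and the shrinking layer widths, random weights and sizes are illustrative assumptions, not values from the invention.

```python
import numpy as np

def elu(x, alpha=1.0):
    """ELU activation; the coefficient alpha is taken as 1 here."""
    return np.where(x >= 0, x, alpha * (np.exp(x) - 1.0))

def encode(x, weights, biases):
    """Apply g_i = f(w_i g_{i-1} + b_i) for i = 1..n, with g_0 = x."""
    g = x
    for w, b in zip(weights, biases):
        g = elu(w @ g + b)
    return g

rng = np.random.default_rng(0)
dims = [8, 4, 2]  # shrinking hidden sizes: fewer feature output mappings per layer
ws = [rng.standard_normal((dims[i + 1], dims[i])) * 0.1 for i in range(2)]
bs = [np.zeros(dims[i + 1]) for i in range(2)]
z = encode(rng.standard_normal(8), ws, bs)
print(z.shape)  # encoder output is lower-dimensional than the input
```

The decreasing widths mimic the asymmetric encoder's reduction of feature output mappings; no decoder is applied, matching the encoder-only structure.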
Further, the domain adversarial training comprises the following steps:
The adversarial domain-adaptation network comprises a feature generator G, a label classifier C and a domain discriminator D, whose parameters are θ_g, θ_c and θ_d respectively;
During training, the domain discriminator is trained to distinguish the source domain from the target domain, while the feature generator is trained to confuse the domain discriminator; meanwhile, the classifier is trained to minimize the classification loss on the source-domain data. The overall loss function of the domain adversarial network is:
L(θ_g, θ_c, θ_d) = (1/n_s)·Σ_{x_i∈D_s} J_y(C(G(x_i)), y_i) − λ·(1/(n_s+n_t))·Σ_{x_i∈D_s∪D_t} J_d(D(G(x_i)), d_i)   (5)
In formula (5), J_y is the cross-entropy loss function, d_i is the domain label, J_d is the domain classification loss, and λ is a trade-off parameter between the two losses;
In this optimization objective, the feature generator is trained to minimize the label prediction loss while maximizing the domain classification loss; the classifier is trained to minimize the label prediction loss, and the domain discriminator is trained to minimize the domain classification loss;
To reduce the joint distribution difference between the source domain and the target domain for domain adaptation, the maximum mean discrepancy is computed as follows:
MMD² = ‖ (1/n_s)·Σ_i φ(f_i^s, f_{c,i}^s) − (1/n_t)·Σ_j φ(f_j^t, f_{c,j}^t) ‖²_H   (6)
In formula (6), φ is the kernel feature mapping, and f and f_c denote the features at the global pooling level and at the classification level respectively.
Further, the domain adversarial training also comprises the following steps:
Besides the joint distribution difference, the marginal distribution difference is also considered, so as to perform more comprehensive domain alignment; therefore, the domain discriminator connected to the global pooling layer is optimized to form an adversarial adaptation loss that reduces the marginal distribution difference between the domains. The formula of this loss function is as follows:
L_adv = −(1/(n_s+n_t))·Σ_{x_i∈D_s∪D_t} J_d(D(G(x_i)), d_i)   (7)
In formula (7), J_d is the domain classification loss and d_i is the domain label, taking the value 0 or 1;
The expression of the final optimized loss function is therefore:
L = L_c + α·L_mmd + β·L_adv   (8)
In formula (8), α and β are weight coefficients, L_c is the source classification loss, L_mmd is the joint-distribution term of formula (6), and L_adv is the adversarial term of formula (7);
Given the labeled source-domain data and the unlabeled target data, the source task features G_s(x_s) and the target task features G_t(x_t) can be extracted from the corresponding encoder networks; all features are then fed back to the domain discriminator D for training. By optimizing the corresponding objective functions, the parameters satisfying the following conditions can be found:
(θ̂_g, θ̂_c) = argmin_{θ_g,θ_c} L(θ_g, θ_c, θ̂_d),  θ̂_d = argmax_{θ_d} L(θ̂_g, θ̂_c, θ_d)   (9)
The parameters of the domain discriminator D are updated according to the gradient of its binary classification loss, which can be computed by the standard back-propagation algorithm.
The invention has the beneficial effects that:
the invention provides a lithium battery pack fault diagnosis method based on a self-adaptive countermeasure network, which designs a common one-dimensional convolution network into an asymmetric convolution coding network of a deep convolution neural network and carries out layering and scaling on high-dimensional data; the method can not only learn the classification judgment to carry out accurate classification, but also optimize the classifier and the discriminator on the combined distribution difference and the edge distribution difference of the target functions of the classifier and the discriminator.
Drawings
FIG. 1 is a diagram of the diagnostic process in the embodiment;
FIG. 2 is a diagram of the asymmetric convolutional self-coding network in the embodiment;
FIG. 3 shows voltage data during 4.7 V overcharge in the embodiment;
FIG. 4 shows voltage data during battery overdischarge in the embodiment;
FIG. 5 shows voltage data during battery aging in the embodiment;
FIG. 6 shows the NASA PCoE laboratory battery test data used in the embodiment;
FIG. 7 shows the classification visualization results in the embodiment.
Detailed Description
This embodiment designs an asymmetric convolutional self-coding network based on a deep convolutional neural network for feature extraction, reducing the feature distribution difference between the training domain and the testing domain; the framework can learn class discrimination for accurate classification, and the objective functions of the classifier and the discriminator are optimized.
The present invention will be described in further detail with reference to the accompanying drawings.
A lithium battery pack fault diagnosis method based on an adaptive countermeasure network comprises the following steps:
Given source-domain data {x_s, y_s} labeled with K health conditions and unlabeled target-domain data x_t, layering the source-domain and target-domain data by a diagnostic program, reducing the number of feature output mappings, and optimizing the joint distribution difference and the marginal distribution difference of the objective functions;
the diagnostic program comprises an asymmetric convolutional self-coding network and domain adversarial training.
For the proposed fault diagnosis framework, we assume that source-domain data {x_s, y_s} labeled with K health conditions and unlabeled target-domain data x_t are available. The primary purpose is to learn a feature encoder model and a discrimination model so as to correctly identify the K fault classes in the target domain. The basic diagnostic procedure is shown in FIG. 1 and consists of two main components: an asymmetric convolutional self-coding network and domain adversarial training.
To learn high-level feature representations of the source domain and the target domain, a feature encoder network is first introduced, comprising a generator G and a classifier C. The generator G encodes the input data to obtain a high-level discriminative representation, and the classifier C finally classifies the source and target tasks. Because deep networks have good feature learning and classification capability, a one-dimensional neural network is constructed for feature extraction and fault classification.
This embodiment provides an asymmetric convolutional self-coding network: a convolutional autoencoder is built by adding convolution operations to a standard autoencoder. This combines the advantages of the convolutional neural network and the autoencoder, addressing the convolutional network's sensitivity to weights and its dependence on large-scale labeled data. The asymmetric convolutional self-coding network keeps only the encoder of the symmetric encoder–decoder structure; its main purpose is to reduce the number of feature output mappings during feature learning, so that the network screens out and preferentially outputs the optimal features, and the model structure learns the optimal features of each layer. With a correct learning structure, the amount of computation can be reduced and the accuracy of the model improved.
This embodiment uses the asymmetric convolutional self-coding network to layer and scale high-dimensional data. The training process is shown in FIG. 2, which compares symmetric and asymmetric convolutional self-coding networks; g denotes the dimension-reduced hidden layer, e the encoding stage, and d the decoding stage.
The asymmetric convolutional self-coding network comprises the following steps:
Let the input vector of the asymmetric convolutional self-coding network be x ∈ R, and let the i-th hidden layer learn a mapping g_i ∈ R of the input-layer data. The coding function is:
g_i = f(w_i·g_{i-1} + b_i),  i = 1, …, n   (1)
In formula (1), n is the number of hidden layers; w_i and b_i are the convolution kernel parameters of each layer, with g_0 = x when i = 0; f is the activation function, applied after each convolutional layer, with the expression:
f(x) = x for x ≥ 0,  f(x) = α(e^x − 1) for x < 0   (2)
In formula (2), α is a coefficient and may be taken as 1;
The input data x are mapped to the outputs G_s(x_s) and G_t(x_t) respectively; the new high-level feature outputs can be expressed as:
g_s^i = f(w_s^i·g_s^{i-1} + b_s^i),  i = 1, …, L   (3)
g_t^i = f(w_t^i·g_t^{i-1} + b_t^i),  i = 1, …, L   (4)
In formulas (3) and (4), g_s^i and g_t^i are the feature maps of the convolutional neural network, w_s^i and w_t^i are the weight matrices of the i-th layer of the source-domain and target-domain convolutional networks, and L is the number of layers of each convolutional network.
Formally, this network architecture includes a feature generator G shared by the source and target domains, a shared health classifier C, and a domain discriminator D. In order to extract features effectively and avoid designing complex signal preprocessing algorithms, the one-dimensional asymmetric convolutional self-coding network is designed as the feature generator to process the raw signal directly.
In the CNN architecture composed of G and C, convolutional layers and pooling layers are stacked to form a one-dimensional deep CNN. The input to the CNN is a raw signal of 2,000 data points. The size of the first convolution kernel is typically chosen between 16 and 128; a size of 32 with a stride of 16 is chosen here to obtain good noise immunity. Batch normalization (BN) is added after each convolutional layer to accelerate training convergence. The pooling size is chosen as 2 with a stride of 2. To classify the failure modes, the output corresponds to the K health conditions.
In the decision phase, the domain discriminator D is constructed to implement the adversarial network. The output of the feature generator is connected to the input of D, which outputs a probability estimating which domain the data come from. Two hidden layers with 200 nodes each are designed to obtain a nonlinear feature representation; the output is a binary classifier producing 0 or 1. The maximum number of training epochs is set to 200 and the batch size to 50. The Adam optimizer in TensorFlow is used to optimize the parameters of the proposed network.
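The feature-map sizes implied by these architecture choices can be checked with simple stride arithmetic (assuming no padding, which the text does not specify):

```python
# Output-length arithmetic for a 1-D conv/pool stack:
# 2000-point input, first conv kernel 32 with stride 16, then 2/2 max-pooling.

def conv1d_out(n, kernel, stride, pad=0):
    """Standard valid-convolution output length: floor((n + 2p - k)/s) + 1."""
    return (n + 2 * pad - kernel) // stride + 1

n = 2000
after_conv = conv1d_out(n, kernel=32, stride=16)         # first conv layer
after_pool = conv1d_out(after_conv, kernel=2, stride=2)  # 2/2 pooling
print(after_conv, after_pool)  # -> 124 62
```

So the 2,000-point signal is reduced to 124 positions after the first convolution and 62 after pooling, per feature map.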
Domain adversarial training addresses the fault diagnosis problem under unsupervised domain adaptation, in which labeled data exist only in the source domain and the target domain has no labeled data. Given a source domain D_s with n_s labeled examples and a target domain D_t with n_t unlabeled examples, where x and y denote a data example and its category label respectively, the label space is the same for the source and target domains while the data are drawn from different distributions. The aim is to build a deep intelligent network y = F(x) that learns domain-invariant and class-discriminative features so as to minimize the target classification risk.
The domain adversarial training comprises the following steps:
The adversarial domain-adaptation network comprises a feature generator G, a label classifier C and a domain discriminator D, whose parameters are θ_g, θ_c and θ_d respectively;
During training, the domain discriminator is trained to distinguish the source domain from the target domain, while the feature generator is trained to confuse the domain discriminator; meanwhile, the classifier is trained to minimize the classification loss on the source-domain data. The overall loss function of the domain adversarial network is:
L(θ_g, θ_c, θ_d) = (1/n_s)·Σ_{x_i∈D_s} J_y(C(G(x_i)), y_i) − λ·(1/(n_s+n_t))·Σ_{x_i∈D_s∪D_t} J_d(D(G(x_i)), d_i)   (5)
In formula (5), J_y is the cross-entropy loss function, d_i is the domain label, J_d is the domain classification loss, and λ is a trade-off parameter between the two losses;
In this optimization objective, the feature generator is trained to minimize the label prediction loss (i.e., features are discriminative) while maximizing the domain classification loss (i.e., features are domain-invariant); the classifier is trained to minimize the label prediction loss, and the domain discriminator is trained to minimize the domain classification loss;
To reduce the joint distribution difference between the source domain and the target domain for domain adaptation, the maximum mean discrepancy is computed as follows:
MMD² = ‖ (1/n_s)·Σ_i φ(f_i^s, f_{c,i}^s) − (1/n_t)·Σ_j φ(f_j^t, f_{c,j}^t) ‖²_H   (6)
In formula (6), φ is the kernel feature mapping, and f and f_c denote the features at the global pooling level and at the classification level respectively.
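A minimal empirical estimate of the maximum mean discrepancy of formula (6) can be sketched as follows, assuming a Gaussian (RBF) kernel, which the text does not specify; the feature batches are synthetic stand-ins for the pooled-level features.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Gaussian kernel matrix between two feature batches (rows = samples)."""
    d = a[:, None, :] - b[None, :, :]
    return np.exp(-gamma * np.sum(d * d, axis=-1))

def mmd2(fs, ft, gamma=1.0):
    """Biased estimate: E[k(s,s)] + E[k(t,t)] - 2 E[k(s,t)]."""
    return (rbf(fs, fs, gamma).mean() + rbf(ft, ft, gamma).mean()
            - 2.0 * rbf(fs, ft, gamma).mean())

rng = np.random.default_rng(1)
fs = rng.standard_normal((50, 4))        # source-domain features
ft = rng.standard_normal((50, 4)) + 2.0  # target-domain features, shifted mean
print(mmd2(fs, fs))  # exactly 0 for identical batches (biased estimator)
print(mmd2(fs, ft))  # clearly positive under a distribution shift
```

Minimizing this quantity over the encoder parameters pulls the two feature distributions together, which is the role of the L_mmd term.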
Besides the joint distribution difference, the marginal distribution difference is also considered, so as to perform more comprehensive domain alignment; therefore, the domain discriminator connected to the global pooling layer is optimized to form an adversarial adaptation loss that reduces the marginal distribution difference between the domains. The formula of this loss function is as follows:
L_adv = −(1/(n_s+n_t))·Σ_{x_i∈D_s∪D_t} J_d(D(G(x_i)), d_i)   (7)
In formula (7), J_d is the domain classification loss and d_i is the domain label, taking the value 0 or 1;
The expression of the final optimized loss function is therefore:
L = L_c + α·L_mmd + β·L_adv   (8)
In formula (8), α and β are weight coefficients, L_c is the source classification loss, L_mmd is the joint-distribution term of formula (6), and L_adv is the adversarial term of formula (7);
Given the labeled source-domain data and the unlabeled target data, the source task features G_s(x_s) and the target task features G_t(x_t) can be extracted from the corresponding encoder networks; all features are then fed back to the domain discriminator D for training. By optimizing the corresponding objective functions, the parameters satisfying the following conditions can be found:
(θ̂_g, θ̂_c) = argmin_{θ_g,θ_c} L(θ_g, θ_c, θ̂_d),  θ̂_d = argmax_{θ_d} L(θ̂_g, θ̂_c, θ_d)   (9)
The parameters of the domain discriminator D are updated according to the gradient of its binary classification loss, which can be computed by the standard back-propagation algorithm.
Application examples
To verify the correctness of the proposed lithium battery pack fault diagnosis method based on the asymmetric convolutional self-coding adaptive adversarial network, this application embodiment takes a Song 18650 lithium cobalt oxide battery as the research object. The battery capacity ranges from 2700 mAh to 2900 mAh, the normal operating voltage range is 2.5 V–4.2 V, the normal charging temperature is 0–45 °C, and the normal discharging temperature is −20–60 °C. Overcharge, overdischarge and aging tests were carried out at 25 °C to acquire data. The battery voltage data during 4.7 V overcharge are shown in FIG. 3, the voltage data during overdischarge in FIG. 4, and the voltage data during aging in FIG. 5. The fourth data set, a random battery test data set from the NASA PCoE laboratory, was used as the target-domain data set, as shown in FIG. 6. The method was applied to fault classification, i.e., overcharge, overdischarge and aging faults, and the results were visualized as shown in FIG. 7.
These results show that the method proposed in this embodiment brings the source-domain features and target-domain features close together; the method can aggregate features of the same health condition for accurate classification, yielding better classification performance. Domain-adaptation-based methods are thus of great significance for practical diagnostic needs.
The above disclosure describes only preferred embodiments of the present invention; the scope of protection of the invention is defined by the appended claims.
Claims (4)
1. A lithium battery pack fault diagnosis method based on an adaptive adversarial network, characterized by comprising the following steps:
given source-domain data {x_s, y_s} labeled with K health conditions and unlabeled target-domain data x_t, layering the source-domain and target-domain data by a diagnostic program, reducing the number of feature output mappings, and optimizing the joint distribution difference and the marginal distribution difference of the objective functions;
the diagnostic program comprising an asymmetric convolutional self-coding network and domain adversarial training.
2. The lithium battery pack fault diagnosis method based on the adaptive adversarial network according to claim 1, characterized in that the asymmetric convolutional self-coding network comprises the following steps:
letting the input vector of the asymmetric convolutional self-coding network be x ∈ R, and the i-th hidden layer learn a mapping g_i ∈ R of the input-layer data, the coding function being:
g_i = f(w_i·g_{i-1} + b_i),  i = 1, …, n   (1)
in formula (1), n being the number of hidden layers; w_i and b_i being the convolution kernel parameters of each layer, with g_0 = x when i = 0; f being the activation function, applied after each convolutional layer, with the expression:
f(x) = x for x ≥ 0,  f(x) = α(e^x − 1) for x < 0   (2)
in formula (2), α being a coefficient that may be taken as 1;
the input data x being mapped to the outputs G_s(x_s) and G_t(x_t) respectively, the new high-level feature outputs being expressed as:
g_s^i = f(w_s^i·g_s^{i-1} + b_s^i),  i = 1, …, L   (3)
g_t^i = f(w_t^i·g_t^{i-1} + b_t^i),  i = 1, …, L   (4)
3. The lithium battery pack fault diagnosis method based on the adaptive countermeasure network according to claim 1, wherein the domain countermeasure training comprises the following steps:
the robust domain adaptive network comprises a feature generator G, a label classifier C and a domain discriminator D, with parameters θ_g, θ_c and θ_d, respectively;
In the training process, the domain discriminator is trained to distinguish the source domain from the target domain, while the feature generator is trained to confuse the domain discriminator; meanwhile, the classifier is trained to minimize the classification loss on the source domain data; the overall loss function of the domain countermeasure network is:

L(θ_g, θ_c, θ_d) = (1/n_s)·Σ_{x_i∈D_s} J_y(C(G(x_i)), y_i) − λ·(1/n)·Σ_{x_i∈D_s∪D_t} J_d(D(G(x_i)), d_i) (5)
In formula (5), J_y represents the cross entropy loss function, d_i represents the domain label, J_d represents the domain classification loss, and λ represents a trade-off parameter between the two losses;
in the optimization objective, the feature generator is trained to minimize the label prediction loss while maximizing the domain classification loss; the classifier is trained to minimize the label prediction loss, and the domain discriminator is trained to minimize the domain classification loss;
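As a numerical illustration of these opposing objectives (a sketch with hypothetical probabilities, not the patent's network), the source classification loss J_y, the domain classification loss J_d, and the trade-off λ combine as in formula (5):

```python
import numpy as np

def cross_entropy(probs, labels):
    # J_y / J_d: mean negative log-likelihood of the true class.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

# Hypothetical classifier outputs on two source samples (2 health conditions).
label_probs = np.array([[0.8, 0.2], [0.3, 0.7]])
y = np.array([0, 1])                    # health-condition labels
# Hypothetical discriminator outputs on one source and one target sample.
domain_probs = np.array([[0.6, 0.4], [0.4, 0.6]])
d = np.array([0, 1])                    # domain labels: 0 = source, 1 = target

lam = 0.5                               # trade-off parameter lambda
J_y = cross_entropy(label_probs, y)
J_d = cross_entropy(domain_probs, d)
total = J_y - lam * J_d                 # classifier descends J_y; generator ascends J_d
print(total)
```

The classifier and generator minimize `total` over their own parameters, while the discriminator minimizes `J_d`, which corresponds to maximizing `total` because of the negative sign on the λ term.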
to reduce the joint distribution difference between the source domain and the target domain for domain adaptation, the maximum mean discrepancy is calculated as follows:
In formula (6), J_y represents the cross entropy loss function, and f and f_c represent the features at the global pooling layer and the classification layer, respectively.
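A minimal NumPy sketch of the maximum mean discrepancy, using a Gaussian kernel and synthetic feature matrices standing in for the pooled features f (kernel choice and bandwidth are assumptions, not taken from the patent), shows that closer feature distributions yield a smaller discrepancy:

```python
import numpy as np

def mmd2(xs, xt, sigma=1.0):
    """Squared maximum mean discrepancy between two feature samples with a Gaussian kernel."""
    def k(a, b):
        # Pairwise squared Euclidean distances, then the Gaussian kernel matrix.
        d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
        return np.exp(-d2 / (2 * sigma**2))
    return k(xs, xs).mean() + k(xt, xt).mean() - 2 * k(xs, xt).mean()

rng = np.random.default_rng(1)
src = rng.normal(0.0, 1.0, size=(200, 4))        # source-domain features
tgt_far = rng.normal(3.0, 1.0, size=(200, 4))    # target features, large shift
tgt_near = rng.normal(0.1, 1.0, size=(200, 4))   # target features, small shift
print(mmd2(src, tgt_near) < mmd2(src, tgt_far))  # True: closer distributions, smaller MMD
```

Minimizing such a discrepancy over the encoder parameters is what drives the source and target feature distributions together.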
4. The lithium battery pack fault diagnosis method based on the adaptive countermeasure network according to claim 3, wherein the domain countermeasure training further comprises the following steps:
besides the joint distribution difference, the marginal distribution difference is also considered for more comprehensive domain alignment; therefore, the domain discriminator connected to the global pooling layer is optimized to form an adversarial adaptation loss that reduces the marginal distribution difference between domains; the formula of this loss function is as follows:

L_a = (1/n)·Σ_{i=1}^{n} J_d(D(f_i), d_i) (7)
In formula (7), J_d represents the classification loss; d_i represents the domain label, with a value of 0 or 1;
the expression of the final optimization loss function is therefore:

L = L_c + α·L_j + β·L_a (8)

In formula (8), L_c is the source-domain classification loss, L_j is the joint distribution loss of formula (6), L_a is the adversarial adaptation loss of formula (7), and α and β are weight coefficients;
given the labeled source domain data and the unlabeled target data, the source task features G_s(x_s) and the target task features G_t(x_t) can be extracted from the corresponding encoder networks; all features are then fed to the domain discriminator D for training; by optimizing the corresponding objective function, all parameters satisfying the following conditions can be found:

(θ̂_g, θ̂_c) = argmin_{θ_g, θ_c} L(θ_g, θ_c, θ̂_d); θ̂_d = argmax_{θ_d} L(θ̂_g, θ̂_c, θ_d)
the parameters of the domain discriminator D are updated according to the discrimination gradient of the binary classifier, which can be performed by a standard back-propagation algorithm.
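The discriminator update described above is, in essence, a gradient step on a binary classifier. As a sketch under assumed simplifications (a scalar logistic discriminator on hypothetical 1-D features, not the patent's network), the back-propagated gradients of the binary cross-entropy loss can be computed and checked against finite differences:

```python
import numpy as np

def forward(w, b, f):
    # Logistic domain discriminator D(f) = sigmoid(w*f + b).
    return 1.0 / (1.0 + np.exp(-(w * f + b)))

def bce(p, d):
    # Binary cross-entropy J_d between predictions p and domain labels d.
    return -np.mean(d * np.log(p) + (1 - d) * np.log(1 - p))

rng = np.random.default_rng(3)
f = rng.normal(size=10)                      # features G(x) fed to D
d = (rng.random(10) > 0.5).astype(float)     # domain labels, 0 = source, 1 = target

w, b = 0.3, -0.1
p = forward(w, b, f)
grad_w = np.mean((p - d) * f)                # back-propagated gradient dJ_d/dw
grad_b = np.mean(p - d)                      # back-propagated gradient dJ_d/db

# Verify the analytic gradients against central finite differences.
eps = 1e-6
num_w = (bce(forward(w + eps, b, f), d) - bce(forward(w - eps, b, f), d)) / (2 * eps)
num_b = (bce(forward(w, b + eps, f), d) - bce(forward(w, b - eps, f), d)) / (2 * eps)

# One gradient-descent update of the discriminator parameters.
lr = 0.1
w, b = w - lr * grad_w, b - lr * grad_b
```

In the full method, the same loss drives opposite updates: descent for the discriminator parameters, ascent (via the λ-weighted term) for the generator parameters.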
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110348139.5A CN113033678A (en) | 2021-03-31 | 2021-03-31 | Lithium battery pack fault diagnosis method based on adaptive countermeasure network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113033678A true CN113033678A (en) | 2021-06-25 |
Family
ID=76453331
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110348139.5A Pending CN113033678A (en) | 2021-03-31 | 2021-03-31 | Lithium battery pack fault diagnosis method based on adaptive countermeasure network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113033678A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114034486A (en) * | 2021-10-11 | 2022-02-11 | 中国人民解放军92578部队 | Unsupervised transfer learning-based bearing fault diagnosis method for pump mechanical equipment |
CN114034486B (en) * | 2021-10-11 | 2024-04-23 | 中国人民解放军92578部队 | Pump mechanical equipment bearing fault diagnosis method based on unsupervised transfer learning |
CN117686937A (en) * | 2024-02-02 | 2024-03-12 | 河南科技学院 | Method for estimating health state of single battery in battery system |
CN117686937B (en) * | 2024-02-02 | 2024-04-12 | 河南科技学院 | Method for estimating health state of single battery in battery system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109753992B (en) | Unsupervised domain adaptive image classification method based on condition generation countermeasure network | |
CN109635928B (en) | Voltage sag reason identification method based on deep learning model fusion | |
CN108647716B (en) | Photovoltaic array fault diagnosis method based on composite information | |
CN112379269B (en) | Battery abnormality detection model training and detection method and device thereof | |
CN113033678A (en) | Lithium battery pack fault diagnosis method based on adaptive countermeasure network | |
CN109214460A (en) | Method for diagnosing fault of power transformer based on Relative Transformation Yu nuclear entropy constituent analysis | |
CN114676742A (en) | Power grid abnormal electricity utilization detection method based on attention mechanism and residual error network | |
CN116304905B (en) | Permanent magnet synchronous motor demagnetizing fault diagnosis method under multi-load working condition | |
CN116484299A (en) | Charging pile fault diagnosis method based on integration of gradient lifting tree and multi-layer perceptron | |
CN113283491A (en) | Fault diagnosis method of electric vehicle alternating current charging pile based on optimized deep confidence network | |
CN115631365A (en) | Cross-modal contrast zero sample learning method fusing knowledge graph | |
CN113743537A (en) | Deep sparse memory model-based highway electromechanical system fault classification method | |
CN115659254A (en) | Power quality disturbance analysis method for power distribution network with bimodal feature fusion | |
CN113538037B (en) | Method, system, equipment and storage medium for monitoring charging event of battery car | |
CN112327190B (en) | Method for identifying health state of energy storage battery | |
CN117272230A (en) | Non-invasive load monitoring method and system based on multi-task learning model | |
CN116317937A (en) | Distributed photovoltaic power station operation fault diagnosis method | |
CN114841266A (en) | Voltage sag identification method based on triple prototype network under small sample | |
Xia et al. | Smart substation network fault classification based on a hybrid optimization algorithm | |
CN116050583B (en) | Water environment quality deep learning prediction method coupled with space-time context information | |
CN114501525B (en) | Wireless network interruption detection method based on condition generation countermeasure network | |
CN115865627B (en) | Cellular network fault diagnosis method for carrying out characterization learning based on pattern extraction | |
CN117388716B (en) | Battery pack fault diagnosis method, system and storage medium based on time sequence data | |
CN117725529B (en) | Transformer fault diagnosis method based on multi-mode self-attention mechanism | |
Lu et al. | Anomaly Recognition Method for Massive Data of Power Internet of Things Based on Bayesian Belief Network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||