CN114966409A - Power lithium battery state of charge estimation method based on multi-layer perceptron algorithm - Google Patents
- Publication number
- CN114966409A (application number CN202210496363.3A)
- Authority
- CN
- China
- Prior art keywords
- layer
- activation function
- error
- model
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01R—MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
- G01R31/00—Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
- G01R31/36—Arrangements for testing, measuring or monitoring the electrical condition of accumulators or electric batteries, e.g. capacity or state of charge [SoC]
- G01R31/367—Software therefor, e.g. for battery testing using modelling or look-up tables
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/60—Other road transportation technologies with climate change mitigation effect
- Y02T10/70—Energy storage systems for electromobility, e.g. batteries
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Tests Of Electric Status Of Batteries (AREA)
Abstract
The invention discloses a method for estimating the state of charge of a power lithium battery based on a multilayer perceptron algorithm, the improvement being that the method comprises the following steps: step 1, data splitting; step 2, data preprocessing; step 3, designing a multilayer perceptron network; step 4, selecting hyper-parameters; step 5, parameter adjustment; step 6, model verification. Compared with a simple table look-up method and a model estimation method, the disclosed estimation method estimates and predicts more accurately, is insensitive to factors such as the environment, and has good stability. Compared with the LSTM method among data-driven methods, its training speed is faster.
Description
Technical Field
The invention belongs to the field of power lithium battery state-of-charge estimation methods, and particularly relates to a power lithium battery state-of-charge estimation method based on a multilayer perceptron algorithm in the field.
Background
Since the 21st century, problems such as the energy crisis and automobile exhaust pollution have become increasingly prominent, and reducing carbon emissions has become a consensus. The power lithium battery is an important direction for the development of new energy and is widely applied in fields such as new energy vehicles and energy storage. The state of charge (SOC) of a battery is an important measure of battery performance and one of the key parameters of a lithium-ion battery. The more accurate the estimation of the battery SOC, the better the performance of the battery management system; conversely, if the SOC estimation is inaccurate, energy is wasted, the battery is damaged, and in severe cases safety hazards can arise.
Battery SOC is conventionally defined as the ratio of the amount of electricity the battery can still discharge at a given discharge current to its rated capacity. Lithium-ion batteries exhibit typical nonlinear characteristics, and it is difficult to determine the total charge released by the battery with existing means or methods. According to ampere-hour integration theory, the battery SOC is particularly critical, as it can accurately reflect the energy condition. Current battery SOC estimation methods mainly comprise: the simple table look-up method, the model estimation method, and the data-driven estimation method.
The simple table look-up method generally performs a static test of the open-circuit voltage against the battery SOC using the open-circuit voltage method, then fits an OCV-SOC curve as a static correction. Dynamic estimation is then performed with the ampere-hour integration method, and finally the value of the battery state of charge is estimated. The most important defects of the simple table look-up method are that it is time-consuming, wastes energy, and cannot estimate in real time; more importantly, as an open-loop estimation method it accumulates error.
The model estimation method generally establishes a battery model, such as an electrochemical impedance model (EIM) or an equivalent circuit model (ECM), and estimates the state of charge of the power lithium battery through the state of charge of the model. Its main defects are that modeling is difficult: an estimation method based on a model requires a deep understanding of the battery, the electrochemical and material-performance parameters involved are difficult to identify, and the amount of calculation is large.
The data-driven estimation method is based on data accumulated during the application of lithium-ion batteries; it learns the internal mechanism of the state of charge through machine learning and estimates the state of charge using supervised learning. The data-driven estimation method describes battery performance from the perspective of whole-process test data and then analyzes the battery SOC; it requires professional lithium battery test equipment and a high-precision data acquisition circuit, and simulates the real working conditions of the lithium battery through testing. Its main disadvantages are a strong dependence on data completeness and on computer processing speed: the data requirements are high and the training time is long.
Disclosure of Invention
The invention aims to solve the technical problem of providing a method for estimating the state of charge of a power lithium battery based on a multilayer perceptron algorithm.
The invention adopts the following technical scheme:
the improvement of a method for estimating the state of charge of a power lithium battery based on a multilayer perceptron algorithm is that the method comprises the following steps:
step 1, data splitting:
total data set T for battery test ot Divided into two parts, 75% of which are training data set T ra 25% of the test data set T es ;
Step 2, data preprocessing:
for the training data set T_ra, the data are normalized with the formula:
x' = (x - x_min) / (x_max - x_min)
where x denotes a certain feature value, x_min denotes the minimum value of the feature, and x_max denotes the maximum value of the feature;
step 3, designing a multilayer perceptron network:
designing a fully-connected neural network:
z = W·X + b, m = f(z)
s = W·M + b, h = f(s)
in the above formulas, D denotes the scale (the number of features); x_1, x_2, …, x_d denote the d feature inputs; m_1, m_2, m_3, …, m_d denote the d features of the hidden layer; h_1, h_2, …, h_d denote the features of the 2nd hidden layer; o_1 denotes the output value of the multilayer perceptron network; a denotes the weight of each feature; b is the bias; W, X and M are matrices; f(·) denotes the activation function;
and 4, step 4: selecting hyper-parameters:
the loss function J(a) uses the root mean square error RMSE and the absolute error MAE:
E_tr = RMSE = sqrt((1/N) · Σ_{n=1..N} (Y_n - O_n(x))^2)
E_te = MAE = (1/N) · Σ_{n=1..N} |Y_n - O_n(x)|
in the above formulas, a denotes the weight of each feature, x denotes a certain feature, Y denotes the value of the SOC, N denotes the total number of training data set samples, Y_n denotes the true SOC value of each training data sample, and O_n(x) denotes the SOC value of each training data sample calculated by the multilayer perceptron algorithm;
in the model training process, the learning rate adopts a cosine annealing algorithm, and the initial learning rate is 0.00015;
and 5: parameter adjustment:
first adjust the network depth with the maximum network width fixed at 80; after the network depth has been trained, fix two network depths and optimize the network width; the number of training cycles is drawn at random from 30 to 200; finally select the optimal model with the best precision and accuracy;
step 6, model verification:
check the training error and the test error of the optimal model selected in step 5 as verification; when the training error E_tr is less than one in a thousand and the test error E_te is less than five in a thousand, select the model as the final algorithm model; if these requirements are not met, repeat the procedure of step 5.
Further, a layer normalization operation is added to each layer in step 3, and a Dropout pruning operation is added after each layer, the Dropout pruning operation cutting network connections at random with a certain probability P_d, P_d = 0.3.
Further, the activation function in step 3 uses a ReLU activation function and a Mish activation function, and the ReLU activation function and the Mish activation function are alternately used between layers, and the formula is as follows:
f(v) = ReLU(v) = max(0, v)
f(v) = Mish(v) = v · tanh(ln(1 + e^v))
in the above formulas, v denotes a feature value trained by the previous layer of the neural network, and tanh is the hyperbolic tangent function.
Further, the fully-connected neural network in step 3 is a nine-layer fully-connected neural network: the first layer has 15 input features, 80 output features, and a Mish activation function; the second layer has 80 input features, 100 output features, and a ReLU activation function; the third layer has 100 input features, 100 output features, and a Mish activation function; the fourth layer has 100 input features, 70 output features, and a ReLU activation function; the fifth layer has 70 input features, 70 output features, and a Mish activation function; the sixth layer has 70 input features, 50 output features, and a ReLU activation function; the seventh layer has 50 input features, 50 output features, and a Mish activation function; the eighth layer has 50 input features, 20 output features, and a ReLU activation function; the ninth layer is the prediction layer, with 20 input features, 1 output feature, and a Mish activation function.
Further, in the loss function of step 4, the training error uses the root mean square error RMSE as its judgment basis, with the RMSE error E_tr less than one in a thousand, and the test error uses the absolute error MAE as its judgment basis, with the MAE error E_te less than five in a thousand.
The invention has the beneficial effects that:
the estimation method disclosed by the invention is more accurate in estimation and prediction compared with a simple table look-up method and a model estimation method, is insensitive to factors such as environment and the like, and has good stability. The training speed is faster compared to the LSTM method of the data-driven method.
The estimation method disclosed by the invention estimates the value of the state of charge more accurately, with a training error (MSE) of less than 2%. The more accurate the state of charge, the more accurate the driving range that can be provided, and the better the safety of the lithium battery and the new energy vehicle can be ensured. The method is low in implementation cost, simple and easy to use, suitable for practical engineering, and has a broad market prospect and popularization value.
Drawings
FIG. 1 is a schematic diagram of a fully-connected neural network in step 3 of the estimation method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In embodiment 1, the present embodiment discloses a method for estimating the state of charge of a power lithium battery based on a multilayer perceptron algorithm. Through deep analysis of each parameter in a public test data set of an 18650 lithium battery, the method learns the internal mechanism of the state of charge with a nine-layer neural network. The activation functions interleave Mish and ReLU to provide nonlinearity. 75% of the data set is used to train the model parameters and 25% is used to test and verify the generalization of the model. The finally designed and trained model can accurately estimate the state of charge (SOC); the comprehensive performance of the lithium battery is judged through the state of charge, and the state of charge is finally used to determine the driving range of the new energy vehicle. The method specifically comprises the following steps:
step 1, data splitting:
total data set T for battery test ot (Total DataSet is divided into two parts, 75% of which is the training data set T ra (Training DataSet), 25% is the test data set T es (Testing DataSet);
Step 2, data preprocessing:
In the data set, some features have a large value range and others a small one; a neural network algorithm may mistake features with a large value range for important features, and a large range of variation also leads to poorly trained weights. Min-max normalization is therefore used here. Note that the normalization operation is fitted only on the training data set.
For the training data set T_ra, the data are normalized with the formula:
x' = (x - x_min) / (x_max - x_min)
where x denotes a certain feature value, x_min denotes the minimum value of the feature, and x_max denotes the maximum value of the feature;
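Step 2 amounts to min-max scaling per feature column, with x_min and x_max learned from the training set only (a sketch; `fit_minmax` and `normalize` are illustrative names, not from the patent):

```python
def fit_minmax(column):
    """Learn x_min and x_max of one feature from the training data only."""
    return min(column), max(column)

def normalize(column, x_min, x_max):
    """Min-max normalization: x' = (x - x_min) / (x_max - x_min)."""
    span = x_max - x_min
    return [(x - x_min) / span for x in column]
```

The same (x_min, x_max) learned on T_ra would later be applied to test samples, which keeps the test set unseen during fitting.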
step 3, designing a multilayer perceptron network:
as shown in fig. 1, a fully-connected neural network is designed, and the fully-connected neural network applies all input parameters to the hidden layer:
z = W·X + b, m = f(z)
s = W·M + b, h = f(s)
in the above formulas, D denotes the scale; different network layers have different values of D, indicating how many features there are. x_1, x_2, …, x_d denote the d feature inputs; m_1, m_2, m_3, …, m_d denote the d features of the hidden layer, with different feature values in different network layers; h_1, h_2, …, h_d denote the features of the 2nd hidden layer; o_1 denotes the output value of the multilayer perceptron network; a denotes the weight of each feature; b is the bias; W, X and M are matrices; f(·) denotes the activation function, which is typically a nonlinear function;
the selection of the activation function mainly uses a ReLU activation function and a Mish activation function, and the ReLU activation function and the Mish activation function are alternately used between layers, and the formula is as follows:
f(v) = ReLU(v) = max(0, v)
f(v) = Mish(v) = v · tanh(ln(1 + e^v))
in the above formulas, v denotes a feature value trained by the previous layer of the neural network, and tanh is the hyperbolic tangent function.
The nonlinearity of the ReLU function arises from keeping the linear transfer of the input feature value when the value is greater than 0, and substituting 0 when the input feature value is less than 0. The nonlinearity of the Mish activation function arises from multiplying the feature value v by the hyperbolic tangent term.
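The two activation functions can be written out directly. Note that the printed Mish formula above appears to have lost a logarithm in reproduction; the standard definition v·tanh(softplus(v)) is assumed here:

```python
import math

def relu(v):
    """ReLU: linear pass-through for v > 0, zero otherwise."""
    return max(0.0, v)

def mish(v):
    """Mish: v * tanh(softplus(v)), with softplus(v) = ln(1 + e^v)."""
    return v * math.tanh(math.log(1.0 + math.exp(v)))
```

Both are zero-preserving, and Mish approaches the identity for large positive v while remaining smooth through the origin.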
A layer normalization operation is added to each layer, and a Dropout pruning operation is added after each layer, the Dropout pruning operation cutting network connections at random with a certain probability P_d, P_d = 0.3. By designing different depths and different widths of the multilayer perceptron network, and by using different Dropout pruning weights, the expressive power of the network is optimized.
In this embodiment, the fully-connected neural network in this step is a nine-layer fully-connected neural network: the first layer has 15 input features, 80 output features, and a Mish activation function; the second layer has 80 input features, 100 output features, and a ReLU activation function; the third layer has 100 input features, 100 output features, and a Mish activation function; the fourth layer has 100 input features, 70 output features, and a ReLU activation function; the fifth layer has 70 input features, 70 output features, and a Mish activation function; the sixth layer has 70 input features, 50 output features, and a ReLU activation function; the seventh layer has 50 input features, 50 output features, and a Mish activation function; the eighth layer has 50 input features, 20 output features, and a ReLU activation function; the ninth layer is the prediction layer, with 20 input features, 1 output feature, and a Mish activation function.
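The nine-layer architecture above can be captured as a plain spec, which makes the dimension chain easy to verify. This is only a sketch of the stated sizes; a real implementation (for example in PyTorch) would build one linear layer per entry, each followed by the layer normalization and Dropout(p=0.3) described in the text:

```python
# (in_features, out_features, activation) for each of the nine layers,
# sizes taken from the description above.
LAYERS = [
    (15, 80, "mish"),
    (80, 100, "relu"),
    (100, 100, "mish"),
    (100, 70, "relu"),
    (70, 70, "mish"),
    (70, 50, "relu"),
    (50, 50, "mish"),
    (50, 20, "relu"),
    (20, 1, "mish"),  # prediction layer
]

def check_spec(layers):
    """Check that consecutive layers agree on feature counts and that
    Mish and ReLU alternate layer by layer; return the depth."""
    for (_, out_a, _), (in_b, _, _) in zip(layers, layers[1:]):
        assert out_a == in_b
    acts = [a for _, _, a in layers]
    assert all(x != y for x, y in zip(acts, acts[1:]))
    return len(layers)
```

Running `check_spec(LAYERS)` confirms the 15 → 80 → 100 → 100 → 70 → 70 → 50 → 50 → 20 → 1 chain is internally consistent.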
And 4, step 4: selecting hyper-parameters:
the loss function J(a) uses the root mean square error RMSE and the absolute error MAE:
E_tr = RMSE = sqrt((1/N) · Σ_{n=1..N} (Y_n - O_n(x))^2)
E_te = MAE = (1/N) · Σ_{n=1..N} |Y_n - O_n(x)|
in the above formulas, a denotes the weight of each feature, x denotes a certain feature, Y denotes the true value of the label, i.e. the value of the SOC, N denotes the total number of training data set samples, Y_n denotes the true SOC value of each training data sample, and O_n(x) denotes the SOC value of each training data sample calculated by the multilayer perceptron algorithm;
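The two error measures are straightforward to compute; a minimal sketch over paired lists of true and predicted SOC values:

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error E_tr over N samples."""
    n = len(y_true)
    return math.sqrt(sum((y - o) ** 2 for y, o in zip(y_true, y_pred)) / n)

def mae(y_true, y_pred):
    """Mean absolute error E_te over N samples."""
    n = len(y_true)
    return sum(abs(y - o) for y, o in zip(y_true, y_pred)) / n
```

RMSE penalizes large deviations more heavily than MAE, which fits its use as the training criterion here while MAE serves as the test-time check.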
Regarding the learning rate among the selected hyper-parameters: after the multilayer perceptron is designed and improved, a cosine annealing algorithm (cosine annealing) is adopted while training the multilayer perceptron model to accelerate its convergence; as the number of training iterations increases, the learning rate of the algorithm tends to rise first and then fall. The initial learning rate is 0.00015.
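A cosine annealing schedule starting from the stated initial rate of 0.00015 might look as follows. Note this sketch shows only the cosine decay; the "rise first, then fall" behaviour described above suggests an additional warm-up phase, which the patent does not specify and is therefore omitted:

```python
import math

def cosine_annealing_lr(step, total_steps, lr_max=0.00015, lr_min=0.0):
    """Cosine annealing from lr_max down to lr_min over total_steps:
    eta_t = lr_min + (lr_max - lr_min) * (1 + cos(pi * t / T)) / 2."""
    return lr_min + (lr_max - lr_min) * (1 + math.cos(math.pi * step / total_steps)) / 2
```

The rate starts at 0.00015, passes half that value at the midpoint, and decays smoothly to zero at the end of training.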
In the loss function of this step, the training error uses the root mean square error RMSE as its judgment basis, with the RMSE error E_tr less than one in a thousand, and the test error uses the absolute error MAE as its judgment basis, with the MAE error E_te less than five in a thousand.
And 5: parameter adjustment:
During training, differences in activation functions and in hyper-parameters lead to different widths and different depths of the multilayer perceptron network, all of which can reduce the accuracy and precision of the trained model.
Firstly, adjusting the depth of a network, fixing the maximum network width to 80, after training a better network depth, fixing two network depths to optimize the network width, randomly taking values from 30 to 200 for measurement of the number of training cycles (EPOCH), and finally selecting an optimal model with the best precision and accuracy;
step 6, model verification:
check the training error and the test error of the optimal model selected in step 5 as verification; when the training error E_tr is less than one in a thousand and the test error E_te is less than five in a thousand, select the model as the final algorithm model; if these requirements are not met, repeat the procedure of step 5.
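The acceptance test of step 6 reduces to two threshold checks, which can be stated directly (the function name is illustrative):

```python
def accept_model(train_rmse, test_mae):
    """Step-6 acceptance: training RMSE E_tr below one in a thousand
    and test MAE E_te below five in a thousand."""
    return train_rmse < 0.001 and test_mae < 0.005
```

A model failing either threshold is sent back through the step-5 parameter adjustment; only a model passing both becomes the final algorithm model.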
Claims (5)
1. A method for estimating the state of charge of a power lithium battery based on a multilayer perceptron algorithm is characterized by comprising the following steps:
step 1, data splitting:
total data set T for battery test ot Divided into two parts, 75% of which are training data set T ra 25% of the test data set T es ;
Step 2, data preprocessing:
for training data set T ra The data in (1) is normalized, and the formula is as follows:
where x denotes a certain feature, x min Represents the smallest value, x, in the feature max Represents the maximum value in the feature;
step 3, designing a multilayer perceptron network:
designing a fully-connected neural network:
z = W·X + b, m = f(z)
s = W·M + b, h = f(s)
in the above formulas, D denotes the scale (the number of features); x_1, x_2, …, x_d denote the d feature inputs; m_1, m_2, m_3, …, m_d denote the d features of the hidden layer; h_1, h_2, …, h_d denote the features of the 2nd hidden layer; o_1 denotes the output value of the multilayer perceptron network; a denotes the weight of each feature; b is the bias; W, X and M are matrices; f(·) denotes the activation function;
and 4, step 4: selecting hyper-parameters:
the loss function J(a) uses the root mean square error RMSE and the absolute error MAE:
E_tr = RMSE = sqrt((1/N) · Σ_{n=1..N} (Y_n - O_n(x))^2)
E_te = MAE = (1/N) · Σ_{n=1..N} |Y_n - O_n(x)|
in the above formulas, a denotes the weight of each feature, x denotes a certain feature, Y denotes the value of the SOC, N denotes the total number of training data set samples, Y_n denotes the true SOC value of each training data sample, and O_n(x) denotes the SOC value of each training data sample calculated by the multilayer perceptron algorithm;
in the model training process, the learning rate adopts a cosine annealing algorithm, and the initial learning rate is 0.00015;
and 5: parameter adjustment:
first adjust the network depth with the maximum network width fixed at 80; after the network depth has been trained, fix two network depths and optimize the network width; the number of training cycles is drawn at random from 30 to 200; finally select the optimal model with the best precision and accuracy;
step 6, model verification:
check the training error and the test error of the optimal model selected in step 5 as verification; when the training error E_tr is less than one in a thousand and the test error E_te is less than five in a thousand, select the model as the final algorithm model; if these requirements are not met, repeat the procedure of step 5.
2. The method for estimating the state of charge of the power lithium battery based on the multilayer perceptron algorithm according to claim 1, characterized in that: a layer normalization operation is added to each layer in step 3, and a Dropout pruning operation is added after each layer, the Dropout pruning operation cutting network connections at random with a certain probability P_d, P_d = 0.3.
3. The method for estimating the state of charge of the power lithium battery based on the multi-layer perceptron algorithm according to claim 1, characterized in that: the activation function in step 3 uses a ReLU activation function and a Mish activation function, and the ReLU activation function and the Mish activation function are alternately used between layers, and the formula is as follows:
f(v) = ReLU(v) = max(0, v)
f(v) = Mish(v) = v · tanh(ln(1 + e^v))
in the above formulas, v denotes a feature value trained by the previous layer of the neural network, and tanh is the hyperbolic tangent function.
4. The method for estimating the state of charge of the power lithium battery based on the multilayer perceptron algorithm according to claim 3, characterized in that: the fully-connected neural network in step 3 is a nine-layer fully-connected neural network: the first layer has 15 input features, 80 output features, and a Mish activation function; the second layer has 80 input features, 100 output features, and a ReLU activation function; the third layer has 100 input features, 100 output features, and a Mish activation function; the fourth layer has 100 input features, 70 output features, and a ReLU activation function; the fifth layer has 70 input features, 70 output features, and a Mish activation function; the sixth layer has 70 input features, 50 output features, and a ReLU activation function; the seventh layer has 50 input features, 50 output features, and a Mish activation function; the eighth layer has 50 input features, 20 output features, and a ReLU activation function; the ninth layer is the prediction layer, with 20 input features, 1 output feature, and a Mish activation function.
5. The method for estimating the state of charge of the power lithium battery based on the multilayer perceptron algorithm according to claim 1, characterized in that: in the loss function of step 4, the training error uses the root mean square error RMSE as its judgment basis, with the RMSE error E_tr less than one in a thousand, and the test error uses the absolute error MAE as its judgment basis, with the MAE error E_te less than five in a thousand.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210496363.3A CN114966409A (en) | 2022-05-09 | 2022-05-09 | Power lithium battery state of charge estimation method based on multi-layer perceptron algorithm |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210496363.3A CN114966409A (en) | 2022-05-09 | 2022-05-09 | Power lithium battery state of charge estimation method based on multi-layer perceptron algorithm |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114966409A true CN114966409A (en) | 2022-08-30 |
Family
ID=82980769
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210496363.3A Pending CN114966409A (en) | 2022-05-09 | 2022-05-09 | Power lithium battery state of charge estimation method based on multi-layer perceptron algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114966409A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115438588A * | 2022-09-29 | 2022-12-06 | 中南大学 | Temperature prediction method, system, equipment and storage medium of lithium battery |
CN115438588B * | 2022-09-29 | 2023-05-09 | 中南大学 | Temperature prediction method, system, equipment and storage medium for lithium battery |
- 2022-05-09: application CN202210496363.3A patent/CN114966409A/en, status Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110824363B (en) | Lithium battery SOC and SOE joint estimation method based on improved CKF | |
CN109061506A (en) | Lithium-ion-power cell SOC estimation method based on Neural Network Optimization EKF | |
CN112180274B (en) | Rapid detection and evaluation method for power battery pack | |
CN105911476B (en) | A kind of battery energy storage system SOC prediction techniques based on data mining | |
Zhang et al. | Modeling of back-propagation neural network based state-of-charge estimation for lithium-ion batteries with consideration of capacity attenuation | |
CN112557907A (en) | SOC estimation method of electric vehicle lithium ion battery based on GRU-RNN | |
CN112345939B (en) | Lithium ion battery model parameter identification method based on continuous impulse response | |
CN112630659A (en) | Lithium battery SOC estimation method based on improved BP-EKF algorithm | |
CN112163372B (en) | SOC estimation method of power battery | |
CN106777786A (en) | A kind of lithium ion battery SOC estimation method | |
CN112782594B (en) | Method for estimating SOC (state of charge) of lithium battery by data-driven algorithm considering internal resistance | |
CN112686380A (en) | Neural network-based echelon power cell consistency evaluation method and system | |
CN116224074A (en) | Soft package lithium ion battery state of charge estimation method, device and storage medium | |
CN113791351B (en) | Lithium battery life prediction method based on transfer learning and difference probability distribution | |
Chen et al. | State of health estimation for lithium-ion battery based on long short term memory networks | |
CN115219918A (en) | Lithium ion battery life prediction method based on capacity decline combined model | |
CN116699414A (en) | Lithium battery SOC estimation method and system based on UKF-LSTM algorithm | |
CN114966409A (en) | Power lithium battery state of charge estimation method based on multi-layer perceptron algorithm | |
Hu et al. | Performance evaluation strategy for battery pack of electric vehicles: Online estimation and offline evaluation | |
Xu et al. | A hybrid method for lithium-ion batteries state-of-charge estimation based on gated recurrent unit neural network and an adaptive unscented Kalman filter | |
Chen et al. | A bias correction based state-of-charge estimation method for multi-cell battery pack under different working conditions | |
CN112462281A (en) | SOC estimation method and system based on gas-liquid dynamic model belt parameter correction | |
CN116466250A (en) | Dynamic working condition model error characteristic-based power battery health state estimation method | |
CN116500480A (en) | Intelligent battery health monitoring method based on feature transfer learning hybrid model | |
Zhou et al. | Hyperparameter optimization for SOC estimation by LSTM with internal resistance |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||