CN114255371A - Small sample image classification method based on component supervision network - Google Patents
- Publication number: CN114255371A
- Application number: CN202111567708.1A
- Authority: CN (China)
- Prior art keywords: representing, component, loss, supervision, task
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/2415—Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06N3/047—Probabilistic or stochastic networks
- G06N3/048—Activation functions
- G06N3/08—Learning methods
Abstract
The invention relates to the technical field of pattern recognition, in particular to a small sample image classification method based on a component supervision network, and aims to solve the problem of ill-adapted feature extraction in the current small sample image classification field. The method provides a new scheme for improving small sample image classification performance: the component supervision network. Referring to WordNet, a hierarchical dictionary commonly used in natural language processing, component information of the samples is collected and multi-labels are generated; an auxiliary task based on component supervision is constructed; the standard classification task, the component supervision auxiliary task and a self-supervised auxiliary task jointly assist in training a feature extractor; the trained feature extractor is then used to extract the features of new-class data, and a linear classifier completes the final classification task, thereby improving the adaptability of the feature extractor.
Description
Technical Field
The invention relates to the technical field of pattern recognition, in particular to a small sample image classification method based on a component supervision network.
Background
The small sample image classification process generally comprises two stages: in the first stage, a feature extraction model is trained with base-class data; in the second stage, the trained feature extraction model extracts the features of new-class data and a designed classifier completes the classification task. The design of the feature extraction model and the classifier is a key link of a small sample image classification system and has long been a core research problem in the field of small sample image classification.
At present, the main methods for designing small sample image classifiers include the following. (1) Data-based small sample image classification: data-based small sample learning uses data augmentation to enhance the training set. Schwartz et al. used an autoencoder to learn the deformations between pairs of samples of the same class and then applied them to samples of other classes to generate new samples, finally training the classifier on the expanded data set. Hariharan and Girshick argued that the differences between samples within a class can be generalized to other classes, and designed a generator based on this idea to augment the training data. Kim et al. used an adversarial idea to augment the support set data: with the feature extractor and classifier fixed, one feature is generated adversarially for each support-set class, constrained so that its classification probabilities differ minimally, i.e., the generated data lies near the classification boundary. Yang et al. proposed to calibrate the distributions of classes with few samples by transferring statistics from classes with sufficient samples, and then to draw a sufficient number of samples from the calibrated distributions as additional classifier inputs. (2) Metric-based small sample image classification: metric-based small sample learning combines metric learning with small sample learning; its basic principle is to learn a metric distance function for different tasks. The prototype network proposed by Snell et al. is a simple and efficient small sample learning method that classifies a test sample by computing the Euclidean distance from its embedded features to each class prototype.
Oreshkin et al. found that metric scaling and task conditioning are important for improving the performance of small sample algorithms; they proposed a simple and effective approach in which the learner is conditioned on the task's sample set to learn a task-related metric space, together with an end-to-end optimization procedure based on auxiliary-task co-training. Ren et al. extended the prototype network approach to the semi-supervised setting. Wang et al. expanded sample diversity by generating additional data with a generative model and training with a prototype network; this data expansion effectively reduces the impact of sample scarcity on class-prototype estimation. The matching network proposed by Vinyals et al. maps samples to a feature space with a kernel function and then classifies them with a k-nearest-neighbor classifier.
The goal of small sample image classification is to solve a cross-domain classification problem with insufficient labeled data. Researchers typically pre-train a feature extractor on base-class data and then use it to extract and recognize the features of new-class data. However, the new data set has only a few annotated samples and its classes are entirely different from those of the base-class data set, so the pre-trained feature extractor does not adapt well to the new data; this is the feature-extraction mismatch problem.
Disclosure of Invention
The invention provides a small sample image classification method based on a component supervision network, aiming to solve the problem of ill-adapted feature extraction in the current small sample image classification field.
In order to achieve the above object, the present invention provides a small sample image classification method based on a component supervision network, the method comprising:
constructing a standard classification task, inputting an image into a pre-trained feature extractor to obtain a feature vector, inputting the feature vector into a base classifier to predict a soft label, projecting the feature vector into a label space, converting it into a probability distribution, introducing a categorical cross-entropy function, and calculating the standard classification loss, wherein the standard classification loss is:

$$\mathcal{L}_{sc}=\ell_{CE}\left(\mathbf{y}^{sc}_{x},\ \mathrm{softmax}\left(g_{sc}\left(f_{\theta}(x)\right)\right)\right)$$

wherein $x$ represents an image sample; $\mathbf{y}^{sc}_{x}$ represents the one-hot ground-truth label vector of sample $x$ in the standard classification task; $C_b$ represents the number of base classes; $\ell_{CE}$ represents the categorical cross-entropy function; $g_{sc}$ represents the base classifier; $f_{\theta}$ represents the pre-trained feature extractor; $\mathbf{p}^{sc}_{x}=\mathrm{softmax}(g_{sc}(f_{\theta}(x)))$ represents the predicted soft label of sample $x$ in the standard classification task; and $\mathbf{z}^{sc}_{x}=g_{sc}(f_{\theta}(x))\in\mathbb{R}^{C_b}$ represents the feature vector projected into the label space;
constructing a component-supervised auxiliary task, inputting an image into the pre-trained feature extractor to obtain a feature vector, inputting the feature vector into a base classifier to predict a soft label, projecting the feature vector into a component-based label space, converting it into a binomial distribution, introducing a binary cross-entropy function, and calculating the component supervision auxiliary loss, wherein the component supervision auxiliary loss is:

$$\mathcal{L}_{cs}=\ell_{BCE}\left(\mathbf{y}^{cs}_{x},\ \sigma\left(g_{cs}\left(f_{\theta}(x)\right)\right)\right)$$

wherein $x$ represents an image sample; $\mathbf{y}^{cs}_{x}$ represents the ground-truth multi-label of the components; $C_m$ represents the number of components; $\ell_{BCE}$ represents the binary cross-entropy function; $g_{cs}$ represents the base classifier; $f_{\theta}$ represents the pre-trained feature extractor; $\mathbf{p}^{cs}_{x}=\sigma(g_{cs}(f_{\theta}(x)))$ represents the predicted soft multi-label of the components, with $\sigma$ the sigmoid function; and $\mathbf{z}^{cs}_{x}=g_{cs}(f_{\theta}(x))\in\mathbb{R}^{C_m}$ represents the feature vector projected into the component-based label space;
constructing a self-supervised auxiliary task, mapping the rotated base data into a rotation-based label space, and then calculating the self-supervised auxiliary loss with a rotation-based probability distribution, wherein the self-supervised auxiliary loss is:

$$\mathcal{L}_{ss}=\ell_{CE}\left(\mathbf{y}^{ss}_{x},\ \mathbf{p}^{ss}_{x}\right)$$

wherein $x$ represents an image sample; $\mathbf{y}^{ss}_{x}$ represents the rotation-based label vector; and $\mathbf{p}^{ss}_{x}=\mathrm{softmax}(\mathbf{z}^{ss}_{x})$ represents the rotation-based probability distribution;
calculating the overall loss of the component supervision network by superposing the standard classification loss, the component supervision auxiliary loss and the self-supervision auxiliary loss, wherein the overall loss of the component supervision network is:

$$\mathcal{L}=\mathcal{L}_{sc}+\alpha\,\mathcal{L}_{cs}+\mathcal{L}_{ss}$$

where $\alpha$ is an empirical parameter used to control the effect of the component supervision auxiliary loss, determined by the accuracy of the component-based multi-labels.
Taking the overall loss function of the component supervision network as the loss function for training the feature extractor, updating the feature extractor's parameters by gradient descent according to the loss value in each training cycle, and selecting the parameters with the highest accuracy during training as the final parameters of the feature extractor;
extracting the features of the new-class data with the trained feature extractor, obtaining a classifier by directly differentiating a linear regression objective, and completing the final classification task, wherein the linear regression objective is:

$$\min_{W}\ \left\|WV_s-Y_s\right\|_{F}^{2}+\beta\left\|W\right\|_{F}^{2}$$

wherein the classifier is:

$$W=Y_sV_s^{\mathsf T}\left(V_sV_s^{\mathsf T}+\beta I\right)^{-1}$$

wherein $\|\cdot\|_{F}$ represents the Frobenius norm; $Y_s\in\{0,1\}^{C_n\times N_s}$ represents the one-hot ground-truth label matrix of the support data; $W\in\mathbb{R}^{C_n\times dim}$ represents the classifier to be learned; $C_n$ represents the number of new classes; $N_s$ represents the number of support samples; $V_s\in\mathbb{R}^{dim\times N_s}$ represents the labeled support features; $dim$ represents the feature dimension; $\beta$ represents a regularization hyper-parameter that prevents overfitting; and $I$ represents the identity matrix;
using the classifier $W$ to classify $V_q$:

$$Y_q=WV_q$$

wherein $Y_q\in\mathbb{R}^{C_n\times N_q}$ represents the soft label matrix generated for the query samples; $V_q\in\mathbb{R}^{dim\times N_q}$ represents the unlabeled query features; and $N_q$ represents the number of query samples.
In the above small sample image classification method based on the component supervision network, optionally, in the standard classification task, a softmax activation function is adopted as the base classifier.
In the above small sample image classification method based on the component supervision network, optionally, in the component supervision auxiliary task, a sigmoid activation function is adopted as a base classifier.
The invention provides a small sample image classification method based on a component supervision network, offering a new scheme for improving small sample image classification performance: the component supervision network. Referring to WordNet, a hierarchical dictionary commonly used in natural language processing, component information of the samples is collected and multi-labels are generated; an auxiliary task based on component supervision is constructed; the standard classification task, the component supervision auxiliary task and a self-supervised auxiliary task are used to assist in training the feature extractor; the trained feature extractor extracts the features of new-class data; and a linear classifier completes the final classification task, thereby improving the adaptability of the feature extractor.
The construction of the present invention and other objects and advantages thereof will be more apparent from the following description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a small sample image classification method based on a component supervision network according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be described in more detail below with reference to the accompanying drawings in the preferred embodiments of the present invention. In the drawings, the same or similar reference numerals denote the same or similar components or components having the same or similar functions throughout. The described embodiments are only some, but not all embodiments of the invention. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
As shown in fig. 1, the present invention provides a small sample image classification method based on a component surveillance network, the method comprising:
Step 110: constructing a standard classification task. The image $x$ is input into the pre-trained feature extractor $f_{\theta}$ to obtain a feature vector $v=f_{\theta}(x)\in\mathbb{R}^{dim}$, where $dim$ represents the dimension of the feature vector; the feature vector is input into the base classifier $g_{sc}$ to predict a soft label. In the standard classification task, the softmax activation function is adopted as the base classifier.

The feature vector is projected into the label space, $x\to\mathbf{z}^{sc}_{x}=g_{sc}(f_{\theta}(x))$, where $\mathbf{z}^{sc}_{x}\in\mathbb{R}^{C_b}$ and $C_b$ represents the number of base classes; it is then converted into a probability distribution:

$$\mathbf{p}^{sc}_{x}=\mathrm{softmax}\left(\mathbf{z}^{sc}_{x}\right)$$

wherein $\mathbf{p}^{sc}_{x}$ represents the predicted soft label of sample $x$ in the standard classification task.
Finally, a categorical cross-entropy function $\ell_{CE}$ is introduced, and the standard classification loss is calculated as:

$$\mathcal{L}_{sc}=\ell_{CE}\left(\mathbf{y}^{sc}_{x},\ \mathbf{p}^{sc}_{x}\right)$$

wherein $x$ represents an image sample and $\mathbf{y}^{sc}_{x}$ represents the one-hot ground-truth label vector of sample $x$ in the standard classification task.
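For illustration, the standard classification branch (a linear head followed by softmax and cross-entropy) can be sketched in NumPy; the linear head `W_sc` and the toy shapes below are assumptions for the example, not values fixed by the patent:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the class axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def standard_classification_loss(v, W_sc, y_onehot):
    """L_sc = CE(y, softmax(g_sc(v))) for a single feature vector v."""
    z_sc = W_sc @ v       # project the feature vector into the label space (C_b logits)
    p_sc = softmax(z_sc)  # predicted soft label
    return -np.sum(y_onehot * np.log(p_sc + 1e-12))  # categorical cross-entropy

# toy usage: dim=4 feature vector, C_b=3 base classes
rng = np.random.default_rng(0)
v = rng.normal(size=4)
W_sc = rng.normal(size=(3, 4))
y = np.array([0.0, 1.0, 0.0])  # one-hot ground-truth label
loss = standard_classification_loss(v, W_sc, y)
```

In a full implementation this head would sit on top of the trained feature extractor rather than a random projection.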
Step 120: although the classes of the base-class data set and the new data set differ, the composition of the samples is similar (for example, both cats and dogs contain leg and head components). Such entity components are in fact stable within a class and generalize well across classes, including new classes. Therefore, referring to WordNet, a hierarchical dictionary commonly used in natural language processing, the component information of the samples is collected and multi-labels are generated, and a component-based auxiliary task is constructed to improve the adaptability of the feature extractor. The components contained in each class of objects can be obtained through WordNet, and each component corresponds to one entry of the multi-label.
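As a sketch of the multi-label generation, the class-to-component mapping below is hypothetical; the patent derives this information from the WordNet hierarchy rather than from a hard-coded table:

```python
# Hypothetical class -> component mapping; in the patent this information
# is collected from WordNet, not hard-coded as here.
CLASS_COMPONENTS = {
    "cat": ["leg", "head", "tail"],
    "dog": ["leg", "head", "tail"],
    "car": ["wheel", "door"],
}

# Fixed component vocabulary of size C_m, shared across all classes.
COMPONENTS = sorted({c for parts in CLASS_COMPONENTS.values() for c in parts})

def component_multilabel(cls):
    """Return the C_m-dimensional 0/1 multi-label for one class."""
    parts = set(CLASS_COMPONENTS[cls])
    return [1.0 if c in parts else 0.0 for c in COMPONENTS]

print(COMPONENTS)                   # ['door', 'head', 'leg', 'tail', 'wheel']
print(component_multilabel("cat"))  # [0.0, 1.0, 1.0, 1.0, 0.0]
```

Note that "cat" and "dog" share the same multi-label here, which is exactly the cross-class stability the auxiliary task exploits.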
A component-supervised auxiliary task is constructed. The image is input into the pre-trained feature extractor $f_{\theta}$ to obtain a feature vector $v=f_{\theta}(x)\in\mathbb{R}^{dim}$, where $dim$ represents the dimension of the feature vector; the feature vector is input into the base classifier $g_{cs}$ to predict soft labels. In the component supervision auxiliary task, the sigmoid activation function is adopted as the base classifier.

The feature vector is projected into the component-based label space, $x\to\mathbf{z}^{cs}_{x}=g_{cs}(f_{\theta}(x))$, where $\mathbf{z}^{cs}_{x}\in\mathbb{R}^{C_m}$ and $C_m$ represents the number of components; this is then converted into a binomial distribution:

$$\mathbf{p}^{cs}_{x}=\sigma\left(\mathbf{z}^{cs}_{x}\right)$$

wherein $\mathbf{p}^{cs}_{x}$ represents the predicted soft multi-label of the components and $\sigma$ is the sigmoid function.
Finally, a binary cross-entropy function $\ell_{BCE}$ is introduced, and the component supervision auxiliary loss is calculated as:

$$\mathcal{L}_{cs}=\ell_{BCE}\left(\mathbf{y}^{cs}_{x},\ \mathbf{p}^{cs}_{x}\right)$$

wherein $\mathbf{y}^{cs}_{x}$ represents the ground-truth multi-label of the components.
it should be noted that, the auxiliary task of component supervision generates a multi-label based on the component for the base class data, and introduces the multi-label component supervision assistance loss as the assistance loss for updating the network.
Step 130: to strengthen the rotation invariance of the network, self-supervised learning is introduced into the network model, which helps capture general label-free features. Specifically, the base data are rotated to the four angles 0°, 90°, 180° and 270°, and the network predicts the rotation angle of each image.
A self-supervised auxiliary task is constructed: the rotated base data are mapped into a rotation-based label space, $x\to\mathbf{z}^{ss}_{x}$, where $\mathbf{z}^{ss}_{x}\in\mathbb{R}^{4}$; then, using the rotation-based probability distribution $\mathbf{p}^{ss}_{x}=\mathrm{softmax}(\mathbf{z}^{ss}_{x})$, the self-supervised auxiliary loss is calculated as:

$$\mathcal{L}_{ss}=\ell_{CE}\left(\mathbf{y}^{ss}_{x},\ \mathbf{p}^{ss}_{x}\right)$$

wherein $\mathbf{y}^{ss}_{x}$ represents the rotation-based label vector.
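Generating the rotated copies and their pseudo-labels for this task can be sketched as follows; feeding each copy through the feature extractor and a 4-way head is left abstract:

```python
import numpy as np

ANGLES = [0, 90, 180, 270]  # the four rotation classes used in the self-supervised task

def make_rotation_task(image):
    """Return the four rotated copies of `image` and their rotation-class labels 0..3."""
    rotated = [np.rot90(image, k) for k in range(4)]  # k quarter-turns = 0/90/180/270 degrees
    labels = list(range(4))                           # index into ANGLES
    return rotated, labels

img = np.arange(16).reshape(4, 4)
rotated, labels = make_rotation_task(img)
# each rotated copy is fed through f_theta and a 4-way softmax head,
# and the self-supervised loss is the cross-entropy against `labels`
```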
step 140: and calculating the total loss of the component supervision network by superposing the standard classification loss, the component supervision auxiliary loss and the self-supervision auxiliary loss to obtain:
where α is an empirical parameter used to control the effect of component supervision assistance loss, determined by the accuracy of the component-based multi-tag.
It should be noted that the loss function reflects how well the network fits the training data: to a certain extent, the smaller the computed loss, the better the fit. In neural network training, the loss function is an indispensable link that directly influences and guides the direction in which the network parameters are updated.
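The superposition of step 140 reduces to a weighted sum. In this sketch, the value of α and the unit weights on the other two terms are assumptions consistent with the description, not values stated in the patent:

```python
def total_loss(l_sc, l_cs, l_ss, alpha=0.5):
    """Overall loss: standard classification + alpha * component supervision + self-supervision.
    alpha = 0.5 is an illustrative value only; the patent calls it an empirical
    parameter set according to the accuracy of the component multi-labels."""
    return l_sc + alpha * l_cs + l_ss

overall = total_loss(1.0, 2.0, 3.0, alpha=0.5)  # 1.0 + 0.5*2.0 + 3.0
```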
Step 150: the overall loss function of the component supervision network is taken as the loss function for training the feature extractor; the feature extractor's parameters are updated by gradient descent according to the loss value in each training iteration, and the parameters with the highest accuracy during training are selected as the final parameters of the feature extractor for extracting sample features in the next step.
It should be noted that a ResNet-12 network is used as the feature extractor, the overall loss function of the component supervision network is used as its training loss, and the trained feature extraction network is then used to extract features of the new-class data.
Step 160: the features of the new-class data are extracted with the feature extractor trained in step 150, a classifier is obtained by directly differentiating a linear regression objective, and the final classification task is completed.
The new-class data are fed into the feature extractor trained in step 150 to obtain the new-class features $V=[V_s,V_q]\in\mathbb{R}^{dim\times(N_s+N_q)}$, where $V_s\in\mathbb{R}^{dim\times N_s}$ represents the labeled support features, $V_q\in\mathbb{R}^{dim\times N_q}$ represents the unlabeled query features, $N_s$ and $N_q$ represent the numbers of support and query samples, and $dim$ represents the feature dimension.
Next, $V_s$ is used to train a new classifier. Specifically, a linear regression function is selected as the classifier, with the objective:

$$\min_{W}\ \left\|WV_s-Y_s\right\|_{F}^{2}+\beta\left\|W\right\|_{F}^{2}$$

wherein $\|\cdot\|_{F}$ represents the Frobenius norm; $Y_s\in\{0,1\}^{C_n\times N_s}$ represents the one-hot ground-truth label matrix of the support data; $W\in\mathbb{R}^{C_n\times dim}$ represents the classifier to be learned; $C_n$ represents the number of new classes; and $\beta$ represents a regularization hyper-parameter that prevents overfitting.
The objective function is optimized in closed form by direct differentiation, yielding the classifier $W$:

$$W=Y_sV_s^{\mathsf T}\left(V_sV_s^{\mathsf T}+\beta I\right)^{-1}$$

wherein $I$ represents the identity matrix.
The classifier $W$ is then used to classify $V_q$:

$$Y_q=WV_q$$

wherein $Y_q\in\mathbb{R}^{C_n\times N_q}$ represents the soft label matrix generated for the query samples.
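The closed-form support classifier and query classification of steps 150 and 160 can be sketched with NumPy; the synthetic class-clustered features stand in for the ResNet-12 features, and all shapes follow the text (features as dim x N column matrices):

```python
import numpy as np

def fit_linear_classifier(V_s, Y_s, beta=1.0):
    """Closed-form ridge solution W = Y_s V_s^T (V_s V_s^T + beta I)^-1."""
    dim = V_s.shape[0]
    return Y_s @ V_s.T @ np.linalg.inv(V_s @ V_s.T + beta * np.eye(dim))

def classify(W, V_q):
    """Soft label matrix Y_q = W V_q; the column-wise argmax gives the predicted class."""
    return W @ V_q

# hypothetical toy features: samples of class c cluster around a class-specific mean
rng = np.random.default_rng(2)
dim, C_n, N_s = 8, 3, 15
means = rng.normal(scale=3.0, size=(C_n, dim))
labels = np.repeat(np.arange(C_n), N_s // C_n)
V_s = (means[labels] + 0.1 * rng.normal(size=(N_s, dim))).T  # dim x N_s support features
Y_s = np.eye(C_n)[labels].T                                  # C_n x N_s one-hot labels

W = fit_linear_classifier(V_s, Y_s, beta=0.1)
Y_q = classify(W, V_s)     # reuse the support set as a sanity-check query set
pred = Y_q.argmax(axis=0)  # predicted class per sample
```

The matrix inverse is fine at these sizes; for large `dim`, `np.linalg.solve` on the normal equations would be the more stable choice.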
The invention provides a small sample image classification method based on a component supervision network, offering a new scheme for improving small sample image classification performance: the component supervision network. Referring to WordNet, a hierarchical dictionary commonly used in natural language processing, component information of the samples is collected and multi-labels are generated; an auxiliary task based on component supervision is constructed; the standard classification task, the component supervision auxiliary task and a self-supervised auxiliary task are used to assist in training the feature extractor; the trained feature extractor extracts the features of new-class data; and a linear classifier completes the final classification task, thereby improving the adaptability of the feature extractor.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (3)
1. A small sample image classification method based on a component supervision network is characterized by comprising the following steps:
constructing a standard classification task, inputting an image into a pre-trained feature extractor to obtain a feature vector, inputting the feature vector into a base classifier to predict a soft label, projecting the feature vector into a label space, converting it into a probability distribution, introducing a categorical cross-entropy function, and calculating the standard classification loss, wherein the standard classification loss is:

$$\mathcal{L}_{sc}=\ell_{CE}\left(\mathbf{y}^{sc}_{x},\ \mathrm{softmax}\left(g_{sc}\left(f_{\theta}(x)\right)\right)\right)$$

wherein $x$ represents an image sample; $\mathbf{y}^{sc}_{x}$ represents the one-hot ground-truth label vector of sample $x$ in the standard classification task; $C_b$ represents the number of base classes; $\ell_{CE}$ represents the categorical cross-entropy function; $g_{sc}$ represents the base classifier; $f_{\theta}$ represents the pre-trained feature extractor; $\mathbf{p}^{sc}_{x}=\mathrm{softmax}(g_{sc}(f_{\theta}(x)))$ represents the predicted soft label of sample $x$ in the standard classification task; and $\mathbf{z}^{sc}_{x}=g_{sc}(f_{\theta}(x))\in\mathbb{R}^{C_b}$ represents the feature vector projected into the label space.
Constructing a component-supervised auxiliary task, inputting an image into the pre-trained feature extractor to obtain a feature vector, inputting the feature vector into a base classifier to predict a soft label, projecting the feature vector into a component-based label space, converting it into a binomial distribution, introducing a binary cross-entropy function, and calculating the component supervision auxiliary loss, wherein the component supervision auxiliary loss is:

$$\mathcal{L}_{cs}=\ell_{BCE}\left(\mathbf{y}^{cs}_{x},\ \sigma\left(g_{cs}\left(f_{\theta}(x)\right)\right)\right)$$

wherein $x$ represents an image sample; $\mathbf{y}^{cs}_{x}$ represents the ground-truth multi-label of the components; $C_m$ represents the number of components; $\ell_{BCE}$ represents the binary cross-entropy function; $g_{cs}$ represents the base classifier; $f_{\theta}$ represents the pre-trained feature extractor; $\mathbf{p}^{cs}_{x}=\sigma(g_{cs}(f_{\theta}(x)))$ represents the predicted soft multi-label of the components, with $\sigma$ the sigmoid function; and $\mathbf{z}^{cs}_{x}=g_{cs}(f_{\theta}(x))\in\mathbb{R}^{C_m}$ represents the feature vector projected into the component-based label space.
Constructing a self-supervised auxiliary task, mapping the rotated base data into a rotation-based label space, and then calculating the self-supervised auxiliary loss with a rotation-based probability distribution, wherein the self-supervised auxiliary loss is:

$$\mathcal{L}_{ss}=\ell_{CE}\left(\mathbf{y}^{ss}_{x},\ \mathbf{p}^{ss}_{x}\right)$$

wherein $x$ represents an image sample; $\mathbf{y}^{ss}_{x}$ represents the rotation-based label vector; and $\mathbf{p}^{ss}_{x}=\mathrm{softmax}(\mathbf{z}^{ss}_{x})$ represents the rotation-based probability distribution.
Calculating the overall loss of the component supervision network by superposing the standard classification loss, the component supervision auxiliary loss and the self-supervision auxiliary loss, wherein the overall loss of the component supervision network is:

$$\mathcal{L}=\mathcal{L}_{sc}+\alpha\,\mathcal{L}_{cs}+\mathcal{L}_{ss}$$

where $\alpha$ is an empirical parameter used to control the effect of the component supervision auxiliary loss, determined by the accuracy of the component-based multi-labels.
And taking the total loss function of the component supervision network as the loss function of the training feature extractor, updating the parameters of the feature extractor in a gradient manner according to the loss value in each training, and selecting the parameter with the highest accuracy in the training process as the final parameter of the feature extractor.
Extracting the features of the new-class data with the trained feature extractor, obtaining a classifier by directly differentiating a linear regression objective, and completing the final classification task, wherein the linear regression objective is:

$$\min_{W}\ \left\|WV_s-Y_s\right\|_{F}^{2}+\beta\left\|W\right\|_{F}^{2}$$

wherein the classifier is:

$$W=Y_sV_s^{\mathsf T}\left(V_sV_s^{\mathsf T}+\beta I\right)^{-1}$$

wherein $\|\cdot\|_{F}$ represents the Frobenius norm; $Y_s\in\{0,1\}^{C_n\times N_s}$ represents the one-hot ground-truth label matrix of the support data; $W\in\mathbb{R}^{C_n\times dim}$ represents the classifier to be learned; $C_n$ represents the number of new classes; $N_s$ represents the number of support samples; $V_s\in\mathbb{R}^{dim\times N_s}$ represents the labeled support features; $dim$ represents the feature dimension; $\beta$ represents a regularization hyper-parameter that prevents overfitting; and $I$ represents the identity matrix;

using the classifier $W$ to classify $V_q$:

$$Y_q=WV_q$$

wherein $Y_q\in\mathbb{R}^{C_n\times N_q}$ represents the soft label matrix generated for the query samples; $V_q\in\mathbb{R}^{dim\times N_q}$ represents the unlabeled query features; and $N_q$ represents the number of query samples.
2. The component surveillance network-based small sample image classification method according to claim 1, characterized in that in a standard classification task, a softmax activation function is used as a base classifier.
3. The small sample image classification method based on the component supervision network according to claim 1 or 2, characterized in that in the component supervision assistance task, a sigmoid activation function is adopted as a base classifier.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111567708.1A CN114255371A (en) | 2021-12-21 | 2021-12-21 | Small sample image classification method based on component supervision network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114255371A (en) | 2022-03-29 |
Family
ID=80793418
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114580571A (en) * | 2022-04-01 | 2022-06-03 | 南通大学 | Small sample power equipment image classification method based on migration mutual learning |
CN114580571B (en) * | 2022-04-01 | 2023-05-23 | 南通大学 | Small sample power equipment image classification method based on migration mutual learning |
CN116071609A (en) * | 2023-03-29 | 2023-05-05 | 中国科学技术大学 | Small sample image classification method based on dynamic self-adaptive extraction of target features |
CN116071609B (en) * | 2023-03-29 | 2023-07-18 | 中国科学技术大学 | Small sample image classification method based on dynamic self-adaptive extraction of target features |
CN116863327A (en) * | 2023-06-05 | 2023-10-10 | 中国石油大学(华东) | Cross-domain small sample classification method based on cooperative antagonism of double-domain classifier |
CN116863327B (en) * | 2023-06-05 | 2023-12-15 | 中国石油大学(华东) | Cross-domain small sample classification method based on cooperative antagonism of double-domain classifier |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |