CN110472533B - Face recognition method based on semi-supervised training - Google Patents

Face recognition method based on semi-supervised training

Info

Publication number
CN110472533B
CN110472533B CN201910698243.XA
Authority
CN
China
Prior art keywords
data
label
pictures
loss function
loss
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910698243.XA
Other languages
Chinese (zh)
Other versions
CN110472533A (en)
Inventor
宋丹丹
陈科宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN201910698243.XA priority Critical patent/CN110472533B/en
Publication of CN110472533A publication Critical patent/CN110472533A/en
Application granted granted Critical
Publication of CN110472533B publication Critical patent/CN110472533B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/2155 - Generating training patterns characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 - Feature extraction; Face representation

Abstract

The invention relates to a face recognition method based on semi-supervised training, belonging to the field of computer vision. First, a face recognition data set is used as labeled data and face pictures crawled from the Internet are used as unlabeled data; face detection and alignment are performed on both to obtain training data. A loss function based on the unlabeled pictures is introduced and trained, in a semi-supervised manner, together with the loss function of the labeled pictures; a task balance factor α and a data balance factor β are introduced to balance the supervised and unsupervised tasks. Compared with other face recognition methods that use unlabeled data, the method does not need to cluster the unlabeled pictures, uses them more efficiently, and yields better model performance; it achieves good performance improvements on multiple face recognition test sets and generalizes well.

Description

Face recognition method based on semi-supervised training
Technical Field
The invention relates to a face recognition method based on semi-supervised training, in particular to a face recognition method trained semi-supervisedly on labeled and unlabeled pictures, and belongs to the technical field of computer vision.
Background
With the development of deep learning, the accuracy of face recognition models has improved greatly. Face recognition technology is widely applied in intelligent security, financial payment, access control and attendance check-in, and other fields, and has extremely high commercial value. Current face recognition data sets already exceed tens of millions of pictures covering over one hundred thousand face identities. Collecting more labeled data requires a large expenditure of manpower and material resources, while a large amount of unlabeled data on the Internet remains unexploited.
At present, researchers use unlabeled data to optimize face recognition models, but these methods need to cluster the unlabeled data: the accuracy of clustering algorithms is low, large-scale clustering requires large amounts of memory and time, and the clustered categories have inconsistent sizes, resulting in serious class imbalance.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a face recognition method based on semi-supervised training that uses unlabeled data more reasonably and further improves the performance of a face recognition model.
The purpose of the invention is realized by the following technical scheme.
A face recognition method based on semi-supervised training comprises the following steps:
Step 1: preprocessing training data;
carrying out face detection and alignment on the labeled data and the unlabeled data, so that the aligned pictures have size m × n, matching the input picture size required by the network structure model;
Preferably, m = n = 112.
Preferably, a face recognition data set is used as the labeled data, and face pictures crawled from the Internet are used as the unlabeled data.
Step 2: designing a network structure model;
the network structure model comprises two parts: a backbone network and a loss function; the backbone network is responsible for feature extraction, while the loss function determines what the network optimizes and is divided into a supervised loss function based on labeled pictures and an unsupervised loss function based on unlabeled pictures;
Step 3: designing a loss function for training the network;
The method combines a plurality of loss functions; the loss function of the network comprises two parts: a supervised loss L_label and an unsupervised loss L_unlabel. The overall loss function L_total is:
L_total = L_label + α · L_unlabel
wherein α controls the weight of the unsupervised loss;
Preferably, the supervised loss L_label uses the cross-entropy loss, expressed as follows:
L_label = -(1/N) · Σ_{i=1}^{N} log( exp(f_{y_i}) / Σ_{j=1}^{M} exp(f_j) )
wherein N represents the number of labeled sample pictures, M represents the number of face classes, f is the activation vector obtained after a face picture passes through the backbone network, with dimensionality M; y_i is the label of picture i; f_j, j = 1, 2, 3, …, M, is the activation value of the face picture for class j; and f_{y_i} is the activation value of the face picture for its labeled class.
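By way of illustration, a minimal PyTorch sketch of this supervised loss (the function name and tensor shapes are illustrative assumptions, not part of the patent):

```python
import torch.nn.functional as F

def supervised_loss(logits, labels):
    # logits: (N, M) activation vectors f produced by the backbone network
    # and the fully connected layer; labels: (N,) integer labels y_i.
    # F.cross_entropy computes -1/N * sum_i log(exp(f_{y_i}) / sum_j exp(f_j)),
    # matching the formula above.
    return F.cross_entropy(logits, labels)
```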
Preferably, the unsupervised loss L_unlabel uses the Euclidean loss, expressed as follows:
L_unlabel = (1/U) · Σ_{i=1}^{U} Σ_{j=1}^{M} s_{ij}²
wherein U is the number of unlabeled pictures, M is the number of face classes in the training set, and s_{ij} represents the similarity between unlabeled picture i and face class j in the training set;
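A matching sketch of the unsupervised and overall losses, under one plausible reading in which the similarities s_ij are materialized as a (U, M) matrix and the Euclidean loss drives them toward zero; it reuses supervised_loss from the sketch above:

```python
def unsupervised_loss(similarities):
    # similarities: (U, M) matrix of s_ij between each unlabeled picture and
    # each labeled face class; a squared-error (Euclidean) loss against a
    # zero target, so every s_ij is pushed toward 0.
    return similarities.pow(2).sum(dim=1).mean()

def total_loss(logits, labels, similarities, alpha=0.1):
    # L_total = L_label + alpha * L_unlabel; alpha = 0.1 matches the example
    # value given in the embodiment below.
    return supervised_loss(logits, labels) + alpha * unsupervised_loss(similarities)
```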
and 4, step 4: training the network structure model in the step2 by using the labeled training data in the step1 and the supervised loss function in the step3 to obtain model parameters Paramslabel
And 5: further training the network structure model in the step2 by using the labeled training data and the unlabeled training data in the step1 and the overall loss function in the step 3; wherein, the parameters of the network use the model parameters Params obtained by training in the step4labelCarrying out initialization;
Preferably, this step is achieved by the following process (a code sketch of the loop follows step 7):
Step 1: initializing the network structure model with the model parameters Params_label from step 4;
Step 2: inputting the labeled and unlabeled pictures into the backbone network in batches to obtain picture features, wherein the ratio of labeled to unlabeled pictures in each batch is determined by the data balance factor β;
Step 3: passing the picture features through a fully connected layer to obtain an activation value for each picture; the activation values are divided into labeled and unlabeled activation values;
Step 4: obtaining the probability distribution of the labeled activation values through a softmax function and connecting the cross-entropy loss function L_label;
Step 5: connecting the unlabeled activation values to the Euclidean loss function L_unlabel and multiplying by the unsupervised loss weight α;
Step 6: computing the final loss according to the overall loss function L_total of step 3, then back-propagating to compute gradients and updating the parameter values of the backbone network and the fully connected layer;
step 7: repeating the steps 2-6 until the overall loss is stable.
Step 6: carrying out the face comparison application to judge whether two pictures show the same person: extracting features of the two preprocessed face pictures through the backbone network respectively, calculating the similarity of the two features, and considering them the same person when the similarity is larger than a threshold; otherwise considering them different persons.
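A minimal sketch of this comparison step, assuming cosine similarity (as in the embodiment below) and an illustrative threshold value:

```python
import torch.nn.functional as F

def same_person(img1, img2, threshold=0.5):
    # img1, img2: preprocessed (1, 3, 112, 112) face crops; the cosine
    # similarity and the 0.5 threshold are illustrative assumptions.
    f1, f2 = backbone(img1)[0], backbone(img2)[0]
    return F.cosine_similarity(f1, f2, dim=0).item() > threshold
```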
Advantageous effects
Compared with the prior art, the method of the invention has the following beneficial effects:
the invention introduces an unsupervised loss function, and does not need to cluster the unlabeled pictures, because the used labeled data are mostly pictures of celebrities, and the crawled unlabeled data are picture of the plain person, namely the two are not the same person. Therefore, the similarity between each unlabeled picture and the labeled class in the training set can be directly minimized. By the method, more label-free data can be used for optimizing the model, so that the performance of the model is better; because clustering is not needed, the method is free from defects caused by a clustering algorithm, such as introduction of noise and data imbalance problems, and when the noise of training data is increased and the data imbalance problem is more serious, the performance of a trained model is reduced.
The invention introduces a semi-supervised training scheme: a model is first trained with labeled data; on that basis, unlabeled and labeled pictures are combined for semi-supervised training, and a task balance factor α and a data balance factor β adjust the relationship between the supervised and unsupervised tasks. This training scheme further improves the performance of the model.
The invention achieves good performance improvements on multiple face recognition test sets and generalizes well.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a flow chart of training data preprocessing of the method of the present invention;
fig. 3 is a diagram of the overall network architecture of the method of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments.
Examples
The embodiment is an overall process and a network structure of a face recognition model using semi-supervised training.
A face recognition method based on semi-supervised training, as shown in fig. 1, includes the following steps:
Step 1: acquiring and preprocessing training data;
As shown in fig. 2, for the unlabeled data, face pictures are crawled from the Internet and the MTCNN face detector is applied to obtain a face rectangle and five key points; from these, face alignment is performed with OpenCV's warpAffine function, and the aligned picture size is 112 × 112. For the labeled data, an existing face recognition data set is selected, such as MS1M published by Microsoft, and the same face detection and alignment are applied, again yielding 112 × 112 pictures.
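A sketch of this alignment step with OpenCV; the five reference key-point coordinates for the 112 × 112 crop are an assumption (values commonly paired with this crop size), not something the patent specifies:

```python
import cv2
import numpy as np

# assumed reference positions (eyes, nose tip, mouth corners) in a 112x112 crop
REF_5PTS = np.float32([[38.30, 51.70], [73.53, 51.50], [56.03, 71.74],
                       [41.55, 92.37], [70.73, 92.20]])

def align_face(img, landmarks5):
    # landmarks5: (5, 2) key points from the MTCNN detector
    M, _ = cv2.estimateAffinePartial2D(np.float32(landmarks5), REF_5PTS)
    return cv2.warpAffine(img, M, (112, 112))  # aligned 112x112 face picture
```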
Step 2: designing a network structure model;
As shown in fig. 3, the network structure model comprises two parts: a backbone network and a loss function. The backbone network is responsible for feature extraction and may be any network model commonly used in deep learning, such as ResNet or MobileNet; ResNet50 is selected in this embodiment. The loss function determines what the network optimizes and is divided into a supervised loss function based on labeled pictures and an unsupervised loss function based on unlabeled pictures.
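Such a model might be sketched in PyTorch as follows, assuming a 512-dimensional feature vector (the embedding size and class names are illustrative):

```python
import torch.nn as nn
from torchvision.models import resnet50

class FaceRecModel(nn.Module):
    def __init__(self, num_classes, embed_dim=512):
        super().__init__()
        self.backbone = resnet50()                     # feature extraction
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, embed_dim)
        self.fc = nn.Linear(embed_dim, num_classes)    # activation values f

    def forward(self, x):          # x: (B, 3, 112, 112) aligned face pictures
        feats = self.backbone(x)
        return feats, self.fc(feats)
```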
Step 3: designing a loss function for training the network;
The two loss functions are combined; the loss function of the network comprises the supervised loss L_label and the unsupervised loss L_unlabel. The overall loss function L_total is:
L_total = L_label + α · L_unlabel
where α controls the weight of the unsupervised loss, e.g., α = 0.1.
The supervised loss L_label uses the cross-entropy loss, expressed as follows:
L_label = -(1/N) · Σ_{i=1}^{N} log( exp(f_{y_i}) / Σ_{j=1}^{M} exp(f_j) )
wherein N represents the number of labeled sample pictures, M represents the number of face classes, f is the activation vector obtained after a face picture passes through the backbone network, with dimensionality M; y_i is the label of picture i; f_j, j = 1, 2, 3, …, M, is the activation value of the face picture for class j; and f_{y_i} is the activation value of the face picture for its labeled class.
The unsupervised loss L_unlabel uses the Euclidean loss, expressed as follows:
L_unlabel = (1/U) · Σ_{i=1}^{U} Σ_{j=1}^{M} s_{ij}²
wherein U is the number of unlabeled pictures, M is the number of face classes in the training set, and s_{ij} represents the similarity between unlabeled picture i and face class j in the training set;
Of course, those skilled in the art will recognize that the supervised loss L_label is not limited to the cross-entropy loss; other loss functions may be used, such as the triplet loss or a cross-entropy loss with margin. Likewise, the unsupervised loss L_unlabel is not limited to the Euclidean loss; other loss functions may be used, such as the L1 loss or the SmoothL1 loss.
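For reference, the named alternatives as they appear in PyTorch (the margin value is illustrative):

```python
import torch.nn as nn

triplet = nn.TripletMarginLoss(margin=0.2)   # alternative supervised loss
l1_loss = nn.L1Loss()                        # alternative unsupervised loss
smooth_l1 = nn.SmoothL1Loss()                # alternative unsupervised loss
# a cross entropy with margin (e.g., an ArcFace-style head) would modify the
# activation values before the softmax rather than replace the loss itself
```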
Step 4: training the network model of step 2 with the labeled training data of step 1 and the supervised loss function of step 3. The initial learning rate is 0.1 and is divided by 10 at 100,000, 160,000, 220,000 and 240,000 iterations; the weight decay is 5e-4; the momentum is 0.9; the optimization method is stochastic gradient descent; the batch size on each graphics card is 128, with four P40 graphics cards in total. This yields the model parameters Params_label.
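These hyperparameters map directly onto a PyTorch optimizer and scheduler; stepping the scheduler once per iteration is an implementation assumption:

```python
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[100_000, 160_000, 220_000, 240_000], gamma=0.1)
# call scheduler.step() once per iteration (not per epoch) to match the
# iteration-count milestones above
```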
Step 5: further training the network model of step 2 with the labeled and unlabeled training data of step 1 and the overall loss function of step 3;
Preferably, this step is achieved by the following process:
Step 1: initializing the model with the model parameters Params_label from step 4, then training the network with 5,000,000 labeled pictures and 2,300,000 unlabeled pictures. The initial learning rate is 0.0001 and is divided by 10 at 30,000 and 60,000 iterations; the weight decay is 5e-4; the momentum is 0.9; the optimization method is stochastic gradient descent; the batch size on each graphics card is 128, with four P40 graphics cards in total.
Step 2: each batch contains both labeled and unlabeled pictures; their ratio is determined by the data balance factor β, e.g., unlabeled : labeled = 1 : 3 (see the sketch after step 7). All pictures pass through the backbone network to obtain picture features;
Step 3: passing the picture features through a fully connected layer to obtain an activation value for each picture; the activation values are divided into labeled and unlabeled activation values;
Step 4: obtaining the probability distribution of the labeled activation values through a softmax function and connecting the cross-entropy loss function;
Step 5: connecting the unlabeled activation values to the Euclidean loss function and multiplying by the unsupervised loss weight α;
Step 6: computing the final loss according to the overall loss function L_total of step 3, then back-propagating to compute gradients and updating the parameter values of the backbone network and the fully connected layer;
step 7: repeating the steps 2-6 until the loss function is stable.
Step 6: performing the face recognition application: extracting features of the two preprocessed face pictures through the backbone network respectively, and judging whether they show the same person using the cosine similarity of the two features.
The model trained by the method of the invention achieves good performance improvements on multiple face recognition test sets and generalizes well. Specifically, on the IJB-C test set, at a false positive rate of 1e-6, the true positive rate of the semi-supervised model is 88.01%, about 5% higher than a model trained only with labeled data; on the IQVID test set, at a false positive rate of 1e-4, the true positive rate of the semi-supervised model is 48.3%, about 8% higher than a model trained only with labeled data.
The foregoing description of specific embodiments is presented for purposes of illustration and description. It should be understood by those skilled in the art that the invention is not limited to the above preferred embodiments; anyone informed by the invention may derive various other forms of the product, and any change in shape or structure embodying the same or a similar technical solution falls within the protection scope of the invention.

Claims (4)

1. A face recognition method based on semi-supervised training is characterized by comprising the following steps:
Step 1: preprocessing training data;
training data comprises labeled data and unlabeled data, and face detection and alignment are carried out on the labeled data and the unlabeled data;
Step 2: designing a network structure model;
the network structure model comprises two parts: a backbone network and a loss function; the backbone network is responsible for feature extraction, the loss function determines what the network optimizes, and the loss function is divided into a supervised loss function based on labeled pictures and an unsupervised loss function based on unlabeled pictures;
Step 3: designing a loss function for training the network;
the method combines a plurality of loss functions; the loss function of the network comprises two parts: a supervised loss L_label and an unsupervised loss L_unlabel; the overall loss function L_total is:
L_total = L_label + α · L_unlabel
wherein α is the unsupervised loss weight;
and 4, step 4: training the network model in the step2 by using the labeled training data in the step1 and the supervised loss function in the step 3; obtaining model parameters Paramslabel
And 5: further training the network model in the step2 by using the labeled training data and the unlabeled training data in the step1 and the overall loss function in the step 3; wherein the parameters of the network use ParamslabelCarrying out initialization;
step 6: carrying out face comparison application, and judging whether two persons are the same person: extracting features of the two preprocessed face pictures respectively through the backbone network of the network model trained in the step5, calculating the similarity of the two features, and considering the two preprocessed face pictures as a same person when the value of the similarity is larger than a threshold value; otherwise the same person is considered.
2. The method of claim 1, wherein the supervised loss L_label uses the cross-entropy loss, expressed as follows:
L_label = -(1/N) · Σ_{i=1}^{N} log( exp(f_{y_i}) / Σ_{j=1}^{M} exp(f_j) )
wherein N represents the number of labeled sample pictures, f is the activation vector obtained after a face picture passes through the backbone network, with dimensionality M; f_{y_i} is the activation value of picture i for its label y_i; M is the number of face classes in the training set; and f_j is the activation value of picture i for class j.
3. The method of claim 1, wherein the unsupervised loss L_unlabel uses the Euclidean loss, expressed as follows:
L_unlabel = (1/U) · Σ_{i=1}^{U} Σ_{j=1}^{M} s_{ij}²
wherein U is the number of unlabeled pictures, M is the number of face classes in the training set, and s_{ij} represents the similarity between unlabeled picture i and face class j in the training set.
4. The method according to any one of claims 1 to 3, wherein step 5 is achieved by the following process:
Step 1: initializing the network structure model with Params_label;
Step 2: inputting the labeled and unlabeled pictures into the backbone network in batches to obtain picture features, wherein the ratio of labeled to unlabeled pictures in each batch is determined by the data balance factor β;
Step 3: passing the picture features through a fully connected layer to obtain an activation value for each picture; the activation values are divided into labeled and unlabeled activation values;
Step 4: obtaining the probability distribution of the labeled activation values through a softmax function and connecting the cross-entropy loss function L_label;
Step 5: connecting the unlabeled activation values to the Euclidean loss function and multiplying by the unsupervised loss weight α;
Step 6: computing the final loss according to the overall loss function L_total of step 3, then back-propagating to compute gradients and updating the parameter values of the backbone network and the fully connected layer;
Step 7: repeating steps 2-6 until the overall loss is stable.
CN201910698243.XA 2019-07-31 2019-07-31 Face recognition method based on semi-supervised training Active CN110472533B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910698243.XA CN110472533B (en) 2019-07-31 2019-07-31 Face recognition method based on semi-supervised training

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910698243.XA CN110472533B (en) 2019-07-31 2019-07-31 Face recognition method based on semi-supervised training

Publications (2)

Publication Number Publication Date
CN110472533A CN110472533A (en) 2019-11-19
CN110472533B true CN110472533B (en) 2021-11-09

Family

ID=68509246

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910698243.XA Active CN110472533B (en) 2019-07-31 2019-07-31 Face recognition method based on semi-supervised training

Country Status (1)

Country Link
CN (1) CN110472533B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111222648B (en) * 2020-01-15 2023-09-26 深圳前海微众银行股份有限公司 Semi-supervised machine learning optimization method, device, equipment and storage medium
CN111461002B (en) * 2020-03-31 2023-05-26 华南理工大学 Sample processing method for thermal imaging pedestrian detection
CN111553267B (en) * 2020-04-27 2023-12-01 腾讯科技(深圳)有限公司 Image processing method, image processing model training method and device
CN111522958A (en) * 2020-05-28 2020-08-11 泰康保险集团股份有限公司 Text classification method and device
CN111723756B (en) * 2020-06-24 2022-09-06 中国科学技术大学 Facial feature point tracking method based on self-supervision and semi-supervision learning
CN111797935B (en) * 2020-07-13 2023-10-31 扬州大学 Semi-supervised depth network picture classification method based on group intelligence
CN111860669A (en) * 2020-07-27 2020-10-30 平安科技(深圳)有限公司 Training method and device of OCR recognition model and computer equipment
CN112417986B (en) * 2020-10-30 2023-03-10 四川天翼网络股份有限公司 Semi-supervised online face recognition method and system based on deep neural network model
CN112329735B (en) * 2020-11-30 2022-05-10 深圳市海洋网络科技有限公司 Training method of face recognition model and online education system
CN113128620B (en) * 2021-05-11 2022-10-21 北京理工大学 Semi-supervised domain self-adaptive picture classification method based on hierarchical relationship
CN113591914A (en) * 2021-06-28 2021-11-02 中国平安人寿保险股份有限公司 Data classification method and device, computer equipment and storage medium
CN113627366B (en) * 2021-08-16 2023-04-07 电子科技大学 Face recognition method based on incremental clustering

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102332094A (en) * 2011-10-24 2012-01-25 西安电子科技大学 Semi-supervised online study face detection method
US8571272B2 (en) * 2006-03-12 2013-10-29 Google Inc. Techniques for enabling or establishing the use of face recognition algorithms
CN106845336A (en) * 2016-12-02 2017-06-13 厦门理工学院 A kind of semi-supervised face identification method based on local message and group sparse constraint
CN109829433A (en) * 2019-01-31 2019-05-31 北京市商汤科技开发有限公司 Facial image recognition method, device, electronic equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9390370B2 (en) * 2012-08-28 2016-07-12 International Business Machines Corporation Training deep neural network acoustic models using distributed hessian-free optimization
CN110046583A (en) * 2019-04-18 2019-07-23 南京信息工程大学 Color face recognition method based on semi-supervised multiple view increment dictionary learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8571272B2 (en) * 2006-03-12 2013-10-29 Google Inc. Techniques for enabling or establishing the use of face recognition algorithms
CN102332094A (en) * 2011-10-24 2012-01-25 西安电子科技大学 Semi-supervised online study face detection method
CN106845336A (en) * 2016-12-02 2017-06-13 厦门理工学院 A kind of semi-supervised face identification method based on local message and group sparse constraint
CN109829433A (en) * 2019-01-31 2019-05-31 北京市商汤科技开发有限公司 Facial image recognition method, device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A novel semi-supervised face recognition for video; Ke L. et al.; 2010 International Conference on Intelligent Control and Information Processing; 2010-09-09; pp. 313-316 *
Research on Face Recognition Algorithm Based on Semi-Supervised Learning; Lu Xiaoling; China Masters' Theses Full-text Database, Information Science and Technology; 2016-03-15; pp. I138-5866 *

Also Published As

Publication number Publication date
CN110472533A (en) 2019-11-19

Similar Documents

Publication Publication Date Title
CN110472533B (en) Face recognition method based on semi-supervised training
CN113326764B (en) Method and device for training image recognition model and image recognition
CN109165950B (en) Financial time series characteristic-based abnormal transaction identification method, device and readable storage medium
CN111985310B (en) Training method of deep convolutional neural network for face recognition
CN111125358B (en) Text classification method based on hypergraph
CN110929848B (en) Training and tracking method based on multi-challenge perception learning model
CN105335756A (en) Robust learning model and image classification system
CN112668579A (en) Weak supervision semantic segmentation method based on self-adaptive affinity and class distribution
CN109460508B (en) Efficient spam comment user group detection method
CN103632160A (en) Combination-kernel-function RVM (Relevance Vector Machine) hyperspectral classification method integrated with multi-scale morphological characteristics
CN109871749B (en) Pedestrian re-identification method and device based on deep hash and computer system
CN110008699B (en) Software vulnerability detection method and device based on neural network
CN114021799A (en) Day-ahead wind power prediction method and system for wind power plant
CN110472652A (en) A small amount of sample classification method based on semanteme guidance
CN112529638B (en) Service demand dynamic prediction method and system based on user classification and deep learning
Pellegrini et al. An analytic theory of shallow networks dynamics for hinge loss classification
CN106529604A (en) Adaptive image tag robust prediction method and system
CN113222072A (en) Lung X-ray image classification method based on K-means clustering and GAN
CN116910013A (en) System log anomaly detection method based on semantic flowsheet mining
CN108491719A (en) A kind of Android malware detection methods improving NB Algorithm
CN115062727A (en) Graph node classification method and system based on multi-order hypergraph convolutional network
CN108388918B (en) Data feature selection method with structure retention characteristics
CN111783688B (en) Remote sensing image scene classification method based on convolutional neural network
CN106295688B (en) A kind of fuzzy clustering method based on sparse mean value
CN116227939A (en) Enterprise credit rating method and device based on graph convolution neural network and EM algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant