CN108268872B - Robust nonnegative matrix factorization method based on incremental learning


Info

Publication number
CN108268872B
CN108268872B (application CN201810166689.3A)
Authority
CN
China
Prior art keywords
matrix
sample
training
feature extraction
projection
Prior art date
Legal status
Active
Application number
CN201810166689.3A
Other languages
Chinese (zh)
Other versions
CN108268872A (en)
Inventor
曹宗杰 (Cao Zongjie)
曹昌杰 (Cao Changjie)
崔宗勇 (Cui Zongyong)
闵锐 (Min Rui)
皮亦鸣 (Pi Yiming)
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201810166689.3A
Publication of CN108268872A
Application granted
Publication of CN108268872B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Character Discrimination (AREA)

Abstract

The invention relates to the field of image recognition, in particular to a robust nonnegative matrix factorization method based on incremental learning. Building on the traditional robust nonnegative matrix factorization method, the invention provides a general robust nonnegative matrix factorization method with incremental learning capability and applies it to feature extraction for image recognition. As an incremental non-negative matrix factorization method, IRNMF retains the incremental property, so the image recognition model can update itself, repeated training is avoided and recognition efficiency is improved, while the per-sample feature extraction time cost is only 53.7% of that of traditional INMF. At the same time, as a feature extraction method, IRNMF produces more stable feature extraction results than traditional methods such as INMF and NMF.

Description

Robust nonnegative matrix factorization method based on incremental learning
Technical Field
The invention relates to the field of image recognition, in particular to a robust nonnegative matrix factorization method based on incremental learning.
Background
In image recognition, reducing the redundant part of high-dimensional image data through feature extraction is an important step for improving recognition accuracy and reducing recognition time. Among conventional feature extraction methods, Principal Component Analysis (PCA) can reduce the data dimensionality and find directions that represent the image data effectively, but while reducing redundancy it inevitably discards some discriminative information that matters for recognition, which lowers recognition accuracy. Linear Discriminant Analysis (LDA) can find effective discriminative directions for high-dimensional image data, but it easily runs into the "small sample problem": when the number of training samples is smaller than the dimensionality of the image feature space, the trained model overfits and loses generalization ability. In contrast to these methods, Non-negative Matrix Factorization (NMF) performs dimensionality reduction under non-negativity constraints and describes the decomposed data as a purely additive combination of parts, which better matches how objects are perceived and composed in the real world and makes it widely applicable to image data. However, although the traditional NMF feature extraction method achieves effective dimensionality reduction of high-dimensional image data, its result is very sensitive to abnormal image samples caused by noise, so the feature extraction model is unstable. Robust Non-negative Matrix Factorization (RNMF) solves this problem by imposing a new norm constraint on top of NMF, making it an extremely effective feature extraction approach.
However, as high-dimensional image data resources keep growing, the number of training samples for the feature extraction model also increases rapidly, and the training time of the model is directly tied to that number. In traditional feature extraction training, newly added samples are simply appended to the original sample set and the whole set is trained again, which means repeated training of the existing samples: the computational cost rises, recognition efficiency drops, and a larger storage space must be spent on keeping the existing training samples.
One way to solve this problem is to give the feature extraction model the ability to update itself and learn incrementally, completing feature extraction with an incremental learning method. In recent years, many incremental feature extraction methods have been proposed in fields such as video surveillance and face recognition, for example Incremental Principal Component Analysis (IPCA), Incremental Linear Discriminant Analysis (ILDA) and Incremental Non-negative Matrix Factorization (INMF). Although these methods give the underlying feature extraction models incremental learning capability, each of them still inherits the intrinsic drawbacks of its base method. Among the existing traditional feature extraction methods, RNMF is the more robust one; therefore, applying incremental learning to RNMF can greatly reduce training time and the storage space needed for training samples, and keep the feature extraction stable, while still letting the high-dimensional image data retain as much discriminative information as possible. This makes for a better incremental feature extraction method.
Disclosure of Invention
Aiming at the above problems and defects, the invention provides an Incremental Robust Non-negative Matrix Factorization (IRNMF) method based on incremental learning. It overcomes the defect that traditional feature extraction methods must be retrained whenever training samples are added, so that the feature extraction method can update itself and keep learning as data grows, extract as much discriminative information from the high-dimensional image data as possible, and keep the model stable.
The invention is realized by the following steps; the feature extraction flow is shown in FIG. 1.
Step 1. Perform Robust Nonnegative Matrix Factorization (RNMF) initialization on the existing image training data to obtain the initial projection matrix W, the coding matrix H and the diagonal weighting matrix D. In RNMF, each column of the sample matrix V ∈ R^(m×n) is a training sample with m pixels, and there are n training samples in total. V is factorized into a basis matrix W ∈ R^(m×r) and a coding matrix H ∈ R^(r×n), and a diagonal weighting matrix D ∈ R^(n×n) is obtained, i.e.:
V_(m×n) = W_(m×r) H_(r×n)
D = diag(D_11, D_22, ..., D_nn)
where W ≥ 0, H ≥ 0 [a further constraint is given as an equation image in the original], r is the dimensionality after dimensionality reduction, and D_ii is the i-th diagonal element of the diagonal matrix, i.e. the reciprocal of the reconstruction error of the i-th sample:
D_ii = 1 / ||v_i − W h_i||_2
The objective function of RNMF is defined as:
min_{W≥0, H≥0} ||V − W H||_{2,1}
where ||·||_{2,1} denotes the newly defined 2,1-norm, i.e. the sum of the 2-norms of the columns of the residual. Applying a gradient descent derivation yields the RNMF iteration rules:
W_ir ← W_ir · (V D Hᵀ)_ir / (W H D Hᵀ)_ir
H_rj ← H_rj · (Wᵀ V D)_rj / (Wᵀ W H D)_rj
where i = 1, …, m and j = 1, …, n.
Iterating these formulas until convergence yields every element W_ir of the initial projection matrix and every element H_rj of the coding matrix, from which the diagonal matrix D is computed.
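For illustration, the batch RNMF initialization of Step 1 can be sketched in NumPy as below. This is a minimal sketch assuming the standard 2,1-norm multiplicative update rules; the function name, the random initialization and the fixed iteration count (instead of a convergence test) are illustrative choices, not details taken from the patent.

```python
import numpy as np

def rnmf_init(V, r, n_iter=200, eps=1e-10):
    """Batch RNMF initialization (Step 1): factor V (m x n) into W (m x r) and
    H (r x n) under the 2,1-norm objective, with a diagonal weighting matrix D
    whose i-th diagonal entry is 1 / ||v_i - W h_i||_2."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        # per-sample weights: D_ii = 1 / ||v_i - W h_i||_2
        d = 1.0 / np.maximum(np.linalg.norm(V - W @ H, axis=0), eps)
        D = np.diag(d)
        # multiplicative updates derived from the 2,1-norm objective
        H *= (W.T @ V @ D) / np.maximum(W.T @ W @ H @ D, eps)
        W *= (V @ D @ H.T) / np.maximum(W @ H @ D @ H.T, eps)
    d = 1.0 / np.maximum(np.linalg.norm(V - W @ H, axis=0), eps)
    return W, H, np.diag(d)
```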
Step 2. After the robust nonnegative matrix factorization initialization on the initial training samples has produced the initial projection matrix W, coding matrix H and diagonal matrix D, each newly added training sample is used to compute the IRNMF update: the new sample information drives a global update of the projection matrix W and a local update of the diagonal matrix D, realizing incremental learning.
Let the number of initial training samples be k, with cost function:
F_k = ||V_k − W_k H_k||_{2,1} = Σ_{j=1}^{k} ||v_j − W_k h_j||_2
When a training sample v_{k+1} is newly added, i.e. when the number of training samples becomes k+1, the cost function is:
F_{k+1} = Σ_{j=1}^{k+1} ||v_j − W_{k+1} h_j||_2
It therefore holds that:
F_{k+1} ≈ F_k + ||v_{k+1} − W_{k+1} h_{k+1}||_2 = F_k + f_{k+1}
where F_{k+1} is the cost function over k+1 training samples, W_{k+1} the projection matrix for k+1 training samples, H_{k+1} the coding matrix for k+1 training samples, h_{k+1} the (k+1)-th column of H (the coding vector of the newly added sample), v_{k+1} the (k+1)-th column of V (the newly added training sample), and f_{k+1} the cost function of the incremental part.
During incremental learning, the independent variables of the cost function F_{k+1} are the projection matrix W_{k+1}, the coding vector h_{k+1} of the newly added sample, and the newly added diagonal element d_{k+1}. A gradient descent method is first used to solve for the new coding vector h_{k+1}; the element-wise iteration rule is:
(h_{k+1})_α ← (h_{k+1})_α − μ_α · ∂F_{k+1}/∂(h_{k+1})_α
The step size μ_α is chosen as:
[equation image in the original]
Initializing the diagonal element d_{k+1} = D_{k,k} (the k-th diagonal element of the current D), the multiplicative iteration rule for the new coding vector h_{k+1} is obtained:
[equation image in the original]
Subsequently, the update of the last diagonal element d_{k+1} of the diagonal matrix is carried out:
d_{k+1} = 1 / ||v_{k+1} − W_{k+1} h_{k+1}||_2
Setting D_{k+1,k+1} = d_{k+1} completes the local, single-sample update of the diagonal matrix D_{k+1}.
Finally, a gradient descent method is used for each element (W_{k+1})_{iα} of the new projection matrix; the iteration rule is:
(W_{k+1})_{iα} ← (W_{k+1})_{iα} − μ_{iα} · ∂F_{k+1}/∂(W_{k+1})_{iα}
where the step size μ_{iα} chosen for the gradient descent on each element (W_{k+1})_{iα} is:
[equation image in the original]
from which the multiplicative iteration rule for the new projection matrix W_{k+1} is obtained:
[equation image in the original]
Iterating to convergence yields the new projection matrix W_{k+1}, completing the single-sample update of W_{k+1}.
Step 3. After the projection matrix W has been updated, the training samples and the samples to be identified are projected into the feature space.
First, all training samples are re-projected:
V'_train = (Wᵀ W)^(−1) Wᵀ V_train
where V'_train ∈ R^(r×n) is the projection of the training sample matrix V_train ∈ R^(m×n) into the feature space W.
Then, the sample to be identified is projected:
h'_test = (Wᵀ W)^(−1) Wᵀ h_test
where h'_test ∈ R^r is the projection of the sample vector to be identified, h_test ∈ R^m, into the feature space W.
step 4, carrying out classification and identification after feature extraction, and carrying out feature V 'on the training sample'trainTraining is carried out, and a sample h 'to be recognized is treated'testAnd carrying out classification and identification.
The invention thus provides a general incremental learning extension of the traditional robust nonnegative matrix factorization method, namely robust non-negative matrix factorization based on incremental learning, applied to image recognition. As an effective incremental feature extraction method, IRNMF not only greatly reduces training time and the storage space required for training samples, but also keeps the feature extraction stable while allowing the high-dimensional image data to retain as much discriminative information as possible, so that the image recognition model can meet higher performance requirements.
In conclusion, compared with existing feature extraction methods, the method has the capability of incremental online learning, needs no repeated training, and greatly improves recognition efficiency; after feature extraction of the image information, the stability of the feature extraction model is guaranteed while as much effective discriminative information as possible is retained; and on top of the improved efficiency, the recognition rate of the proposed incremental robust NMF is higher than that of traditional methods while the training time is greatly reduced.
Drawings
FIG. 1 is a flow chart of the feature extraction method of the present invention
FIG. 2 shows example images read from the MSTAR target slices
FIG. 3 shows the recognition rate statistics of the three methods on the SAR three-class target recognition incremental learning task
FIG. 4 compares the time cost of the three methods on the SAR three-class target recognition incremental learning task
Detailed Description
The invention is further explained by taking the MSTAR three-class target image recognition task as an example, simulating a practical incremental learning application.
The samples used in the experiment are MSTAR three-class target slices in 64 × 64 RAW format. The training samples are targets at a 17° depression angle and the test samples are targets at a 15° depression angle. Table 1 shows the MSTAR three-class target distribution. Example images read from the target slices are shown in FIG. 2.
TABLE 1 MSTAR three classes target distribution
[Table 1 is given as an image in the original and is not reproduced here]
Because IRNMF has incremental learning capability, the training samples are divided into an initial set and newly added samples, and the newly added samples are further split into several batches to simulate the batch-wise acquisition of samples in practical applications. The test samples have unknown labels and do not participate in training. Each time a batch of new samples is obtained, the per-sample feature extraction time of the new training samples is recorded, and the effect of the feature extraction is reflected by the recognition accuracy on the test samples, with all other comparison conditions kept identical.
The experiment sets the number of initial training samples to 100; new training samples are then added in batches of 50 (a final batch of fewer than 50 samples is still trained as one incremental group), acquired over 12 batches. For three methods, Non-negative Matrix Factorization (NMF), Incremental Non-negative Matrix Factorization (INMF) and Incremental Robust Non-negative Matrix Factorization (IRNMF), the recognition accuracy after each sample acquisition and the time consumed by each feature extraction are recorded (because the time required for each RNMF feature extraction grows with the number of training samples, batch RNMF is not included in the comparison). NMF is a traditional method that requires retraining. The recognition accuracy of the three methods as the number of samples grows is plotted in FIG. 3. As the figure shows, the recognition accuracy of IRNMF is higher than that of the traditional NMF and of INMF, reaching a final recognition accuracy of 96.2637%. In addition, during the incremental process the learning effect of IRNMF increases steadily as the number of training samples grows, whereas the learning effects of INMF and NMF fluctuate to different degrees as the number of samples increases.
The time cost of the three feature extraction methods in each training round, as the number of samples grows, is shown in FIG. 4. The training time cost of the non-incremental NMF method increases linearly with the number of samples. INMF greatly reduces the time spent on feature extraction during training by avoiding repeated training, and IRNMF further reduces the feature extraction time per training sample from 0.0255 s to 0.0137 s relative to INMF, i.e. only 53.7% of the INMF feature extraction time cost. The feature extraction task in the image recognition process is thus completed in a faster and more stable manner.
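For concreteness, the incremental protocol of the embodiment (100 initial samples, then 12 batches of up to 50 new samples each) could be driven by a loop like the sketch below, reusing the rnmf_init and irnmf_increment sketches above; load_mstar_slices and the reduced dimensionality r=30 are hypothetical placeholders, not values taken from the patent.

```python
import time
import numpy as np

# Hypothetical loader returning flattened 64x64 MSTAR slices (columns) and labels.
V_all, y_all = load_mstar_slices("train_17deg")          # assumed helper, not from the patent
W, H, D = rnmf_init(V_all[:, :100], r=30)                # Step 1 on the 100 initial samples
A, B = V_all[:, :100] @ D @ H.T, H @ D @ H.T             # history terms (assumed form)
for start in range(100, V_all.shape[1], 50):             # 12 batches of up to 50 new samples
    batch = V_all[:, start:start + 50]
    t0 = time.time()
    for j in range(batch.shape[1]):                      # Step 2, one new sample at a time
        W, h, d, A, B = irnmf_increment(W, A, B, batch[:, j])
    print("per-sample feature extraction time:",
          (time.time() - t0) / batch.shape[1])
    # Steps 3-4 would re-project all samples with the updated W and
    # record the recognition rate on the 15-degree test set here.
```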

Claims (2)

1. A robust nonnegative matrix factorization method based on incremental learning is characterized by comprising the following steps:
step 1, performing RNMF initialization on the existing image sample training data to obtain an initial projection matrix W, a coding matrix H and a diagonal element matrix D; the method specifically comprises the following steps:
in RNMF, each column of the sample matrix V ∈ R^(m×n) is a training sample with m pixels, and there are n training samples in total; V is decomposed into a basis matrix W ∈ R^(m×r) and a coding matrix H ∈ R^(r×n), and a diagonal weighting matrix D ∈ R^(n×n) is obtained, namely:
V_(m×n) = W_(m×r) H_(r×n)
D = diag(D_11, ..., D_ii, ..., D_nn)
where W ≥ 0, H ≥ 0 [a further constraint is given as an equation image in the original], r represents the dimensionality after dimensionality reduction, and D_ii is the i-th diagonal element of the diagonal matrix, i.e.:
D_ii = 1 / ||v_i − W h_i||_2
the objective function of RNMF is defined as:
min_{W≥0, H≥0} ||V − W H||_{2,1}
where ||·||_{2,1} represents the newly defined 2,1-norm, i.e. the sum of the 2-norms of the columns of the residual; a gradient descent method yields the RNMF iteration rules:
W_ir ← W_ir · (V D Hᵀ)_ir / (W H D Hᵀ)_ir
H_rj ← H_rj · (Wᵀ V D)_rj / (Wᵀ W H D)_rj
where i = 1, …, m and j = 1, …, n;
iterating these formulas until convergence yields every element W_ir of the initial projection matrix and every element H_rj of the coding matrix, from which the diagonal matrix D is computed;
step 2, when a new training sample is added to the model training, the incremental-learning robust nonnegative matrix factorization algorithm is computed from the new sample information, completing a local update of the coding matrix H, a local update of the diagonal matrix D and a global update of the projection matrix W, thereby realizing incremental learning;
when a training sample v_{k+1} is newly added, i.e. when the number of training samples becomes k+1, the cost function is:
F_{k+1} = Σ_{j=1}^{k+1} ||v_j − W_{k+1} h_j||_2
it therefore holds that:
F_{k+1} ≈ F_k + ||v_{k+1} − W_{k+1} h_{k+1}||_2 = F_k + f_{k+1}
where F_{k+1} is the cost function over k+1 training samples, F_k the cost function over the first k training samples, W_{k+1} the projection matrix for k+1 training samples, H_{k+1} the coding matrix for k+1 training samples, h_{k+1} the (k+1)-th column of H (the coding vector of the newly added sample), v_{k+1} the (k+1)-th column of V (the newly added training sample), and f_{k+1} the cost function of the incremental part;
a gradient descent method is adopted, first solving for the coding vector h_{k+1} of the newly added sample; the iteration rule for each of its elements (h_{k+1})_α is:
(h_{k+1})_α ← (h_{k+1})_α − μ_α · ∂F_{k+1}/∂(h_{k+1})_α
where the step size μ_α is chosen as:
[equation image in the original]
iterating to convergence yields the new coding matrix H_{k+1}, completing the local single-sample update of H;
then, the update of the last diagonal element d_{k+1} of the diagonal matrix is carried out:
d_{k+1} = 1 / ||v_{k+1} − W_{k+1} h_{k+1}||_2
setting D_{k+1,k+1} = d_{k+1} completes the local, single-sample update of the diagonal matrix D_{k+1};
the iteration rule for each element (W_{k+1})_{iα} of the new projection matrix is:
(W_{k+1})_{iα} ← (W_{k+1})_{iα} − μ_{iα} · ∂F_{k+1}/∂(W_{k+1})_{iα}
where the step size μ_{iα} chosen for the gradient descent on each element (W_{k+1})_{iα} is:
[equation image in the original]
iterating to convergence yields the new projection matrix W_{k+1}, completing the single-sample update of W;
step 3, after the projection matrix W has been updated, projecting the training samples and the sample to be identified into the feature space;
all training samples are re-projected:
V'_train = (Wᵀ W)^(−1) Wᵀ V_train
where V'_train ∈ R^(r×n) is the projection of the training sample matrix V_train ∈ R^(m×n) into the feature space W;
the sample to be identified is projected:
h'_test = (Wᵀ W)^(−1) Wᵀ h_test
where h'_test ∈ R^r is the projection of the sample vector to be identified, h_test ∈ R^m, into the feature space W;
step 4, performing classification and recognition after feature extraction: a classifier is trained on the training-sample features V'_train and is then used to classify and recognize the sample to be recognized h'_test.
2. The robust nonnegative matrix factorization method based on incremental learning as claimed in claim 1, wherein: in step 2, after the update for each new sample, besides the current iteration results h_{k+1}, d_{k+1} and W_{k+1}, the following history information must also be stored for the next update:
[two equation images in the original define the stored history terms]
CN201810166689.3A 2018-02-28 2018-02-28 Robust nonnegative matrix factorization method based on incremental learning Active CN108268872B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810166689.3A CN108268872B (en) 2018-02-28 2018-02-28 Robust nonnegative matrix factorization method based on incremental learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810166689.3A CN108268872B (en) 2018-02-28 2018-02-28 Robust nonnegative matrix factorization method based on incremental learning

Publications (2)

Publication Number Publication Date
CN108268872A CN108268872A (en) 2018-07-10
CN108268872B true CN108268872B (en) 2021-06-08

Family

ID=62774578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810166689.3A Active CN108268872B (en) 2018-02-28 2018-02-28 Robust nonnegative matrix factorization method based on incremental learning

Country Status (1)

Country Link
CN (1) CN108268872B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109918616B (en) * 2019-01-23 2020-01-31 中国人民解放军32801部队 visual media processing method based on semantic index precision enhancement
CN110334761B (en) * 2019-07-03 2021-05-04 北京林业大学 Supervised image identification method based on orthogonality constraint increment non-negative matrix factorization
CN110569879B (en) * 2019-08-09 2024-03-15 平安科技(深圳)有限公司 Tongue image extraction method, tongue image extraction device and computer readable storage medium
CN110673206B (en) * 2019-08-26 2020-12-29 吉林大学 Satellite magnetic field data earthquake abnormity detection method based on non-negative matrix factorization
CN113285758B (en) * 2021-05-18 2022-06-14 成都信息工程大学 Optical fiber nonlinear equalization method based on IPCA-DNN algorithm

Citations (2)

Publication number Priority date Publication date Assignee Title
CN103413117A (en) * 2013-07-17 2013-11-27 浙江工业大学 Incremental learning and face recognition method based on locality preserving nonnegative matrix factorization (LPNMF)
CN106597439A (en) * 2016-12-12 2017-04-26 电子科技大学 Synthetic aperture radar target identification method based on incremental learning

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9626566B2 (en) * 2014-03-19 2017-04-18 Neurala, Inc. Methods and apparatus for autonomous robotic control

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN103413117A (en) * 2013-07-17 2013-11-27 浙江工业大学 Incremental learning and face recognition method based on locality preserving nonnegative matrix factorization (LPNMF)
CN106597439A (en) * 2016-12-12 2017-04-26 电子科技大学 Synthetic aperture radar target identification method based on incremental learning

Non-Patent Citations (3)

Title
Incremental robust nonnegative matrix factorization for object tracking; Fanghui Liu et al.; International Conference on Neural Information Processing; 2016-09-30; pp. 611-619 *
The feasibility analysis of applying NMF in SAR target recognition; Zongjie Cao et al.; 2015 IEEE International Conference on Digital Signal Processing (DSP); 2015-09-10; pp. 1376-1385 *
Two-dimensional projection non-negative matrix factorization algorithm and its application in face recognition; Weitao Fang et al.; Acta Automatica Sinica; 2012-12-31; Vol. 38, No. 9, pp. 1503-1512 *

Also Published As

Publication number Publication date
CN108268872A (en) 2018-07-10

Similar Documents

Publication Publication Date Title
CN108268872B (en) Robust nonnegative matrix factorization method based on incremental learning
CN108416370B (en) Image classification method and device based on semi-supervised deep learning and storage medium
Xie et al. Unsupervised deep embedding for clustering analysis
Jojic et al. Stel component analysis: Modeling spatial correlations in image class structure
CN114930352A (en) Method for training image classification model
CN109389166A (en) The depth migration insertion cluster machine learning method saved based on partial structurtes
CN111461238B (en) Model training method, character recognition method, device, equipment and storage medium
CN112836671B (en) Data dimension reduction method based on maximized ratio and linear discriminant analysis
Cai et al. Unsupervised embedded feature learning for deep clustering with stacked sparse auto-encoder
CN106597439A (en) Synthetic aperture radar target identification method based on incremental learning
CN116701725B (en) Engineer personnel data portrait processing method based on deep learning
CN111401156A (en) Image identification method based on Gabor convolution neural network
CN111383732B (en) Medicine auditing method, device, computer system and readable storage medium based on mutual exclusion identification
CN109063750B (en) SAR target classification method based on CNN and SVM decision fusion
CN116310462B (en) Image clustering method and device based on rank constraint self-expression
CN114692809A (en) Data processing method and device based on neural cluster, storage medium and processor
CN114463646B (en) Remote sensing scene classification method based on multi-head self-attention convolution neural network
CN109325140B (en) Method and device for extracting hash code from image and image retrieval method and device
Wang et al. Conscience online learning: an efficient approach for robust kernel-based clustering
CN116523877A (en) Brain MRI image tumor block segmentation method based on convolutional neural network
Ribeiro et al. Extracting discriminative features using non-negative matrix factorization in financial distress data
Shadvar Dimension reduction by mutual information feature extraction
CN113177587B (en) Generalized zero sample target classification method based on active learning and variational self-encoder
Liu et al. Fast tracking via spatio-temporal context learning based on multi-color attributes and pca
CN111797732B (en) Video motion identification anti-attack method insensitive to sampling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant