CN110119811B - Convolution kernel cutting method based on entropy importance criterion model - Google Patents


Info

Publication number
CN110119811B
CN110119811B (application CN201910400922.4A)
Authority
CN
China
Prior art keywords
model
convolution
layer
entropy
cutting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910400922.4A
Other languages
Chinese (zh)
Other versions
CN110119811A (en)
Inventor
Min Rui (闵锐)
Jiang Ting (蒋霆)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electric Coreda Chengdu Technology Co ltd
Original Assignee
Electric Coreda Chengdu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electric Coreda Chengdu Technology Co ltd filed Critical Electric Coreda Chengdu Technology Co ltd
Priority to CN201910400922.4A priority Critical patent/CN110119811B/en
Publication of CN110119811A publication Critical patent/CN110119811A/en
Application granted granted Critical
Publication of CN110119811B publication Critical patent/CN110119811B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections

Abstract

The invention belongs to the technical field of neural networks and relates to a convolution kernel pruning method based on an entropy importance criterion model. The method is proposed so that convolutional neural network models with large parameter counts, heavy computation, and excellent performance can meet real-time requirements in practical applications.

Description

Convolution kernel cutting method based on entropy importance criterion model
Technical Field
The invention belongs to the technical field of neural networks, and relates to a convolution kernel cutting method based on an entropy importance criterion model.
Background
In recent years, convolutional neural networks have developed very rapidly, making great progress with the continuous refinement of theory and the support of modern large-scale computing platforms. They have been applied in many different fields and show very good performance across applications. Compared with traditional feature-extraction methods, a convolutional neural network acts as a hierarchical feature extractor: the features it extracts are more diverse and more abstract, overcoming the inadequacy of hand-crafted features. This diversity and abstraction enable convolutional neural networks to be used in a large number of applications, such as classification, segmentation, and detection tasks. Their superior performance in engineering applications has likewise driven their widespread adoption and research.
The convolutional neural network is a computation-intensive model: its superior performance depends on networks with millions of parameters, and training such models involves a large number of matrix operations, placing high demands on the computing platform. For example, in the 2012 ImageNet challenge, the AlexNet network achieved the best results; AlexNet comprises 5 convolutional layers and 3 fully-connected layers, contains roughly sixty million parameters in total, and its training on the ImageNet dataset took three days on NVIDIA K40 machines. Likewise, in the 2014 ImageNet challenge, the VGGNet series achieved very good performance; the VGG16 model, for instance, comprises 13 convolutional layers and 3 fully-connected layers and contains over one hundred million parameters. Such a huge parameter count improves performance but requires a great deal of training time, and inference is also slow. Increasing a model's parameter count can raise its performance, but it is unsuitable for embedded devices with low power, low storage, and low bandwidth; an oversized model certainly limits engineering applications. To compensate for this drawback, some later networks consider not only the network's performance but also its parameter size and computational cost.
Disclosure of Invention
In view of the above problems and deficiencies, the invention aims to solve the problem that convolutional neural network models, with their large parameter counts and heavy computation, cannot be applied in scenarios with strict real-time requirements. The invention provides a model convolution kernel pruning method based on an entropy importance criterion: the information content of each convolution kernel is analyzed by computing image entropy, and redundant structures of the convolutional neural network are pruned, thereby achieving model compression and acceleration. The invention comprises the following steps; a schematic diagram of the algorithm is shown in figure 1.
S1, obtaining a training sample: acquiring original optical image data, and performing data normalization and data enhancement processing to obtain a training sample;
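The normalization and data enhancement in S1 can be sketched as follows; this is an illustrative NumPy version (the patent does not fix the exact transforms), using per-channel standardization plus the random flip and padded random crop commonly applied to Cifar10:

```python
import numpy as np

def normalize(img, mean=0.5, std=0.25):
    # Per-channel standardization: scale to [0, 1], then (x - mean) / std.
    # mean/std here are illustrative; in practice use dataset statistics.
    return (img.astype(np.float32) / 255.0 - mean) / std

def augment(img, pad=4, rng=None):
    # Random horizontal flip plus a random crop from a zero-padded image,
    # a standard Cifar10 data-enhancement recipe.
    rng = rng if rng is not None else np.random.default_rng(0)
    h, w, _ = img.shape
    if rng.random() < 0.5:
        img = img[:, ::-1, :]                      # horizontal flip
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)))
    top = int(rng.integers(0, 2 * pad + 1))
    left = int(rng.integers(0, 2 * pad + 1))
    return padded[top:top + h, left:left + w, :]   # crop back to h x w

raw = np.random.default_rng(42).integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
sample = normalize(augment(raw))
print(sample.shape)  # (32, 32, 3)
```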
S2, constructing a convolutional neural network model:
constructing a convolutional neural network formed by cascading convolution filters and pooling filters, wherein the convolution filters extract features from the input data (the number of convolution filters determines the number of extracted features) and the pooling filters reduce the dimensionality of the input data; the constructed network has a large parameter count and performs well in current practical applications.
S3, training a convolutional neural network model:
S31, initializing parameters, including: the learning rate α; the total number of pruning iterations n; the number m of fine-tuning iterations of the current model after each pruning step; the Mini-batch size M; the thresholds T1, T2, T3 and T4; the number K of local regions into which an image is divided in the information-entropy formula; the number X of convolution kernels pruned per iteration during global pruning; and the number x of convolution kernels pruned per iteration during single-layer pruning. Stochastic gradient descent (SGD) is adopted as the optimization algorithm;
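The parameters initialized in S31 can be collected into a configuration structure such as the one below; every value shown is a hypothetical placeholder for illustration, since the patent leaves the concrete settings to the practitioner:

```python
# All values below are hypothetical placeholders, not taken from the patent.
config = {
    "alpha": 0.01,   # learning rate for SGD
    "n": 20,         # total number of pruning iterations
    "m": 1000,       # fine-tuning iterations after each pruning step
    "M": 128,        # Mini-batch size
    "T1": 0.85,      # global stage: minimum accepted test accuracy
    "T2": 64,        # global stage: minimum remaining kernel count
    "T3": 0.85,      # layer-wise stage: minimum accepted test accuracy
    "T4": 8,         # layer-wise stage: minimum kernels per layer
    "K": 16,         # regions per image in the entropy formula
    "X": 64,         # kernels pruned per global iteration
    "x": 4,          # kernels pruned per single-layer iteration
}
print(len(config))  # 11
```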
S32, randomly extracting a Mini-batch containing M samples from the training samples each time as training data; and performing the convolution kernel pruning operation based on the entropy importance criterion model, in the following specific manner:
The evaluation criterion of image entropy is adopted as the entropy importance criterion model: during training of the convolutional neural network model, the image entropy of each convolutional layer's activation channels is computed; each channel of each convolutional layer is numbered, the activation channels of all convolutional layers are gathered together, and they are ranked according to their image entropy values.
the size of the image entropy is used for reflecting the average information amount in an image, and the expression of the image entropy is as follows:
Figure GDA0003108271780000021
where K denotes the division of an image into several parts, the choice of K has an impact on the overall performance. p is a radical ofnRepresenting the probability that the number of pixels in each portion is greater than the total image.
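A minimal sketch of the image-entropy computation, assuming p_n is each region's share of the channel's total activation mass (one plausible reading of the formula above):

```python
import numpy as np

def image_entropy(channel, K=16):
    # H = -sum_n p_n * log2(p_n) over K regions of the activation channel.
    # p_n is taken as region n's share of the channel's total activation
    # mass -- an illustrative interpretation of the patent's p_n.
    flat = np.abs(channel).ravel()
    regions = np.array_split(flat, K)
    mass = np.array([r.sum() for r in regions], dtype=np.float64)
    total = mass.sum()
    if total == 0.0:
        return 0.0                   # an all-zero channel is uninformative
    p = mass / total
    p = p[p > 0]                     # skip empty regions (0 * log 0 = 0)
    return float(-(p * np.log2(p)).sum())

# A uniform channel spreads mass evenly over all K regions, so its entropy
# reaches the maximum log2(K); concentrated activations score lower.
print(image_entropy(np.ones((32, 32)), K=16))  # 4.0
```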
The convolution kernel pruning proceeds as follows. For the i-th convolution layer of the model, define I_i ∈ R^{N×C×H×W} as the input tensor, where N is the batch size, C is the number of input activation channels, and H, W are the input height and width; define O_i ∈ R^{N×D×H×W} as the output tensor, where D is the number of output activation channels; and define W_i ∈ R^{D×C×B×B} as the convolution parameter matrix, where B is the kernel size of the current layer. The pruning target is to cut convolution kernels ω_i. Because convolution kernels correspond one-to-one to activation channels, the image entropy value is used to evaluate the importance of the corresponding kernel. At each pruning step, the X kernels with the smallest entropy values across the whole model are pruned, so the number of pruned kernels grows with the number of pruning iterations. X is a hyper-parameter set according to the total number of kernels in the model, and its value affects the pruning result.
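The global ranking step can be sketched as follows; `entropy_by_layer` and the scoring layout are illustrative assumptions, not the patent's exact data structures:

```python
import numpy as np

def global_prune_order(entropy_by_layer, X):
    # Pool every (layer, kernel) pair, rank by ascending image entropy, and
    # return the X least-informative kernels to prune in this iteration.
    scored = [(float(h), layer, k)
              for layer, hs in entropy_by_layer.items()
              for k, h in enumerate(hs)]
    scored.sort()                       # smallest entropy first
    return [(layer, k) for _, layer, k in scored[:X]]

# Illustrative per-channel entropies for a two-layer model (not real data).
entropies = {0: np.array([3.2, 0.1, 2.8]), 1: np.array([0.5, 3.9])}
print(global_prune_order(entropies, X=2))  # [(0, 1), (1, 0)]
```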
S33, after each pruning step, continue fine-tuning for m iterations, then test the current model on the test set. As convolution kernels are pruned, the accuracy on the test set gradually decreases; after each test, the accuracy is compared with the preset minimum accuracy threshold T1. If the current accuracy on the test set is less than or equal to T1, or the number of kernels remaining in the pruned model reaches the preset minimum kernel-count threshold T2, go to step S34; if the accuracy is above T1, the kernel count is above T2, and the maximum number of pruning iterations n has not been reached, return to step S32 and continue training.
S34, continue pruning each convolution layer of the model obtained by the global pruning in the preceding steps, layer by layer, in the following specific manner:
Traverse from the first convolution layer of the model; let i be the currently traversed layer. Compute the image entropy of each activation channel remaining in layer i, rank the corresponding convolution kernels by entropy value, and prune the x kernels with the smallest values according to the ranking. After pruning, continue fine-tuning for m iterations, then test the current model on the test set; as kernels of the single layer continue to be pruned, the accuracy on the test set gradually decreases. After each test, compare the accuracy with the preset minimum accuracy threshold T3. If the current accuracy on the test set is less than or equal to T3, or the number of kernels remaining in the pruned model reaches the preset minimum kernel-count threshold T4, move to the next convolution layer, until all convolution layers of the model have been traversed; otherwise, continue pruning the current layer i.
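The layer-by-layer stage (S34) can be sketched as the following control loop; the `evaluate`, `prune`, and `finetune` callbacks are hypothetical stand-ins for the training machinery described above:

```python
def layerwise_prune(layers, x, T3, T4, evaluate, prune, finetune):
    # Walk the convolution layers in order; keep cutting the x
    # lowest-entropy kernels of the current layer until accuracy falls
    # to T3 or the layer is down to T4 kernels, then move on.
    for i in layers:
        while True:
            prune(i, x)                   # cut x lowest-entropy kernels
            finetune()                    # m fine-tuning iterations
            acc, remaining = evaluate(i)  # test on the held-out set
            if acc <= T3 or remaining <= T4:
                break                     # done with layer i

# Toy stand-ins: accuracy drifts down slightly with every pruning step.
state = {"kernels": {0: 16, 1: 16}, "acc": 0.90}
def demo_prune(i, x):
    state["kernels"][i] -= x
def demo_tune():
    state["acc"] -= 0.01
def demo_eval(i):
    return state["acc"], state["kernels"][i]

layerwise_prune([0, 1], x=4, T3=0.80, T4=4,
                evaluate=demo_eval, prune=demo_prune, finetune=demo_tune)
print(state["kernels"])  # {0: 4, 1: 4}
```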
In summary, the advantage of the invention is as follows: by computing the image entropy of each activation channel of the convolutional neural network model and using the entropy value as the criterion for evaluating the importance of the corresponding convolution kernel, the kernels corresponding to small entropy values are pruned, achieving compression and acceleration of the model. As a model pruning method, the small pruned model can meet the real-time and accuracy requirements of applications in real scenarios.
Drawings
FIG. 1 is a schematic diagram of the algorithm of the present invention;
FIG. 2 is a presentation of Cifar10 image data;
FIG. 3 is a schematic diagram of structured pruning;
FIG. 4 is a diagram of the VGG model structure;
FIG. 5 is a diagram of the ResNet18 model structure.
Detailed Description
In order to make the objectives, technical solutions and advantages of the invention clearer, the invention is further described by taking a target recognition task on the Cifar10 dataset as an example, with the VGG16 and ResNet18 models respectively as baselines; the structure of the VGG16 model is shown in figure 4, and the structure of the ResNet18 model is shown in figure 5.
The Cifar10 training samples are 32 × 32 optical images; example images from the dataset are shown in figure 2.
(1) VGG model experiments on Cifar10
As can be seen from table 1, three methods were tested in comparative experiments based on the VGG16 model.
TABLE 1. Comparative experiments of VGG16 on the Cifar10 dataset

Model        Acc (%)   Params (M)   FLOPS (M)   Compression   Acceleration
VGG16        88.39     14.73        313         1x            1x
Pruned-GAP   86.60     2.41         152         6.1x          2.1x
Taylor       86.03     2.65         80          5.6x          3.9x
Pruned-EIC   87.64     2.62         106         5.6x          3.0x
The VGG16 model serves as the baseline. This VGG16 differs from the originally proposed VGG16 model, which comprises 13 convolutional layers and 3 fully-connected layers: given the limitations of the current experiment's dataset, whose images are only 32 × 32, the baseline model removes the last three fully-connected layers and replaces them with global average pooling. Verification shows that, compared with the original fully-connected layers, this replacement loses no performance while reducing the parameter count and computation.
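A quick parameter count illustrates why replacing the fully-connected head with global average pooling saves parameters; the 4096-unit layer widths below are those of the original VGG16 and are used here only for illustration:

```python
def fc_params(dims):
    # Weights plus biases for a chain of fully-connected layers.
    return sum(i * o + o for i, o in zip(dims, dims[1:]))

# Hypothetical head on 512-d conv features: an original VGG16-style
# 512 -> 4096 -> 4096 -> 10 classifier versus GAP plus one linear layer.
fc_head = fc_params([512, 4096, 4096, 10])
gap_head = fc_params([512, 10])  # global average pooling itself has no weights
print(fc_head, gap_head)         # 18923530 5130
```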
Pruned-GAP and Taylor are other model pruning methods; Pruned-EIC is the method proposed in this patent.
From the experimental results it can be seen that the accuracy (Acc) of the proposed method drops somewhat compared with the original unpruned network, which is a reasonable loss during pruning; compared with the two methods from other papers, however, the proposed method attains better accuracy. Relative to the original model, the pruned model's parameter count is reduced 5.6-fold and its computation (FLOPS) is accelerated 3-fold.
(2) ResNet18 model experiments on Cifar10
As shown in table 2, on the Cifar10 dataset the models obtained by all three pruning methods achieve higher accuracy than the original ResNet18; this is consistent with the possibility that discarding redundant channels can yield performance above the original model. The accuracy of the proposed method is not the best among the compared methods, which may be attributed to experimental variance; on the whole, however, as a relatively new method it is effective for model compression and acceleration. From the experimental results, the pruned model obtained by the proposed method reduces the parameter count 3.4-fold and accelerates computation 1.8-fold compared with the original.
TABLE 2. Comparative experiments of ResNet18 on the Cifar10 dataset

Model        Acc (%)   Params (M)   FLOPS (M)   Compression   Acceleration
ResNet18     87.83     11.17        555         1x            1x
Pruned-GAP   88.20     2.65         361         4.2x          1.5x
Taylor       88.52     3.67         277         3.0x          2.0x
Pruned-EIC   88.35     3.30         307         3.4x          1.8x
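The compression and acceleration ratios follow directly from the parameter and FLOPS columns; for instance, the Pruned-EIC row of Table 2:

```python
def ratios(base_params, pruned_params, base_flops, pruned_flops):
    # Compression = parameter reduction factor; acceleration = FLOPS factor.
    return base_params / pruned_params, base_flops / pruned_flops

# ResNet18 baseline vs the pruned model, figures from Table 2 (M units).
compression, acceleration = ratios(11.17, 3.30, 555, 307)
print(f"{compression:.1f}x compression, {acceleration:.1f}x acceleration")
# 3.4x compression, 1.8x acceleration
```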

Claims (1)

1. A convolution kernel cutting method based on an entropy importance criterion model is characterized by comprising the following steps:
s1, obtaining a training sample: acquiring original optical image data, and performing data normalization and data enhancement processing to obtain a training sample;
s2, constructing a convolutional neural network model:
constructing a convolution neural network formed by cascading a convolution filter and a pooling filter, wherein the convolution filter is used for extracting the characteristics of input data, and the number of the convolution filters represents the number of the extracted characteristics; the pooling filter is used for reducing the dimension of input data;
s3, training a convolutional neural network model:
S31, initializing parameters, including: the learning rate α; the total number of pruning iterations n; the number m of fine-tuning iterations of the current model after each pruning step; the Mini-batch size M; the thresholds T1, T2, T3 and T4; the number K of local regions into which an image is divided in the information-entropy formula; the number X of convolution kernels pruned per iteration during global pruning; and the number x of convolution kernels pruned per iteration during layer-by-layer pruning; stochastic gradient descent (SGD) is adopted as the optimization algorithm;
S32, randomly extracting a Mini-batch containing M samples from the training samples each time as training data; and performing the convolution kernel pruning operation based on the entropy importance criterion model, in the following specific manner:
the evaluation criterion of image entropy is adopted as the entropy importance criterion model: during training of the convolutional neural network model, the image entropy of each convolutional layer's activation channels is computed; each channel of each convolutional layer is numbered, the activation channels of all convolutional layers are gathered together, and they are ranked according to their image entropy values;
the image entropy reflects the average amount of information in an image; its expression is:

H = -∑_{n=1}^{K} p_n · log2(p_n)

wherein K denotes the number of parts into which the image is divided, and the choice of K affects the overall performance; p_n denotes the proportion of pixels falling in part n relative to the whole image;
the convolution kernel clipping method comprises the following steps: defining I on the I-th convolution layer of the modeli∈RN×C×H×WIs an input tensor, where N is the size of one data Batch, C is the dimension of the input activation channel, H, W is the width and height dimension of the input, and defines Oi∈RN×D×H×WFor the output tensor, where D is the dimension of the active channel of the output, define wi=RN×C×B×BIs a parameter matrix of convolution, B is the size of a convolution kernel of the current layer, and the cutting target of the convolution kernel is to cut off the convolution kernel omegai(ii) a Because the convolution kernels correspond to the single activation channels one by one, the value of the image entropy is used for evaluating the importance of the corresponding convolution kernel, X convolution kernels with small entropy values in the whole model are cut as the cutting result at this time during each cutting, the number of the cut convolution kernels is increased along with the increase of the cutting times, the value of X is set according to the number of the convolution kernels of the whole model, the X is a hyper-parameter, and the set size can influence the cutting result;
S33, after each pruning step, continuing fine-tuning for m iterations and then testing the current model on the test set; as the model's convolution kernels are pruned, the accuracy on the test set gradually decreases; the accuracy of each test is compared with the preset minimum accuracy threshold T1; if the current accuracy on the test set is less than or equal to T1, or the number of kernels remaining in the pruned model reaches the preset minimum kernel-count threshold T2, proceeding to step S34; if the accuracy is above T1, the kernel count is above T2, and the maximum number of pruning iterations n has not been reached, returning to step S32 and continuing training;
S34, continuing to prune each convolution layer of the model obtained by the global pruning in the preceding steps, layer by layer, in the following specific manner:
traversing from the first convolution layer of the model, letting i be the currently traversed layer; computing the image entropy of each activation channel remaining in layer i, ranking the corresponding convolution kernels by entropy value, and pruning the x kernels with the smallest values according to the ranking; after pruning, continuing fine-tuning for m iterations and then testing the current model on the test set; as kernels of the single layer continue to be pruned, the accuracy on the test set gradually decreases; the accuracy of each test is compared with the preset minimum accuracy threshold T3; if the current accuracy on the test set is less than or equal to T3, or the number of kernels remaining in the pruned model reaches the preset minimum kernel-count threshold T4, traversing the next convolution layer, until all convolution layers of the model have been traversed; otherwise, continuing to prune the current layer i.
CN201910400922.4A 2019-05-15 2019-05-15 Convolution kernel cutting method based on entropy importance criterion model Active CN110119811B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910400922.4A CN110119811B (en) 2019-05-15 2019-05-15 Convolution kernel cutting method based on entropy importance criterion model


Publications (2)

Publication Number Publication Date
CN110119811A CN110119811A (en) 2019-08-13
CN110119811B true CN110119811B (en) 2021-07-27

Family

ID=67522453

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910400922.4A Active CN110119811B (en) 2019-05-15 2019-05-15 Convolution kernel cutting method based on entropy importance criterion model

Country Status (1)

Country Link
CN (1) CN110119811B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110619385B (en) * 2019-08-31 2022-07-29 电子科技大学 Structured network model compression acceleration method based on multi-stage pruning
CN110796251A (en) * 2019-10-28 2020-02-14 天津大学 Image compression optimization method based on convolutional neural network
CN111062477B (en) * 2019-12-17 2023-12-08 腾讯云计算(北京)有限责任公司 Data processing method, device and storage medium
CN111291637A (en) * 2020-01-19 2020-06-16 中国科学院上海微系统与信息技术研究所 Face detection method, device and equipment based on convolutional neural network
CN111291814B (en) * 2020-02-15 2023-06-02 河北工业大学 Crack identification algorithm based on convolutional neural network and information entropy data fusion strategy
CN111753786A (en) * 2020-06-30 2020-10-09 中国矿业大学 Pedestrian re-identification method based on full-scale feature fusion and lightweight generation type countermeasure network
CN112734010B (en) * 2021-01-04 2024-04-16 暨南大学 Convolutional neural network model compression method suitable for image recognition
CN112766491A (en) * 2021-01-18 2021-05-07 电子科技大学 Neural network compression method based on Taylor expansion and data driving
CN112766364A (en) * 2021-01-18 2021-05-07 南京信息工程大学 Tomato leaf disease classification method for improving VGG19
CN113033804B (en) * 2021-03-29 2022-07-01 北京理工大学重庆创新中心 Convolution neural network compression method for remote sensing image

Citations (6)

Publication number Priority date Publication date Assignee Title
CN107368891A (en) * 2017-05-27 2017-11-21 深圳市深网视界科技有限公司 A kind of compression method and device of deep learning model
CN108334934A (en) * 2017-06-07 2018-07-27 北京深鉴智能科技有限公司 Convolutional neural networks compression method based on beta pruning and distillation
CN108416187A (en) * 2018-05-21 2018-08-17 济南浪潮高新科技投资发展有限公司 A kind of method and device of determining pruning threshold, model pruning method and device
CN108764471A (en) * 2018-05-17 2018-11-06 西安电子科技大学 The neural network cross-layer pruning method of feature based redundancy analysis
CN109472352A (en) * 2018-11-29 2019-03-15 湘潭大学 A kind of deep neural network model method of cutting out based on characteristic pattern statistical nature
CN109685780A (en) * 2018-12-17 2019-04-26 河海大学 A kind of Retail commodity recognition methods based on convolutional neural networks

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US9349101B2 (en) * 2014-08-29 2016-05-24 Salesforce.Com, Inc. Systems and methods for partitioning sets of features for a bayesian classifier
US20190065961A1 (en) * 2017-02-23 2019-02-28 Harold Szu Unsupervised Deep Learning Biological Neural Networks


Non-Patent Citations (3)

Title
An Entropy-based Pruning Method for CNN Compression; Jian-Hao Luo et al.; https://arxiv.org/abs/1706.05791; 2017-06-19; pp. 1-10 *
Entropy-based pruning method for convolutional neural networks; Cheonghwan Hur et al.; The Journal of Supercomputing; 2018-11-10; pp. 2950-2963 *
A hybrid pruning method for convolutional neural network compression; Jin Lilei et al.; Journal of Chinese Computer Systems (小型微型计算机系统); 2018-12-11; vol. 39, no. 12, pp. 2596-2601 *

Also Published As

Publication number Publication date
CN110119811A (en) 2019-08-13

Similar Documents

Publication Publication Date Title
CN110119811B (en) Convolution kernel cutting method based on entropy importance criterion model
US20210049423A1 (en) Efficient image classification method based on structured pruning
Lym et al. Prunetrain: fast neural network training by dynamic sparse model reconfiguration
Singh et al. Play and prune: Adaptive filter pruning for deep model compression
US20190228268A1 (en) Method and system for cell image segmentation using multi-stage convolutional neural networks
CN110874631A (en) Convolutional neural network pruning method based on feature map sparsification
CN109934826A (en) A kind of characteristics of image dividing method based on figure convolutional network
CN111931914A (en) Convolutional neural network channel pruning method based on model fine tuning
CN111882040A (en) Convolutional neural network compression method based on channel number search
CN113222138A (en) Convolutional neural network compression method combining layer pruning and channel pruning
CN110322445B (en) Semantic segmentation method based on maximum prediction and inter-label correlation loss function
CN111723915A (en) Pruning method of deep convolutional neural network, computer equipment and application method
Chang et al. Automatic channel pruning via clustering and swarm intelligence optimization for CNN
CN112101364B (en) Semantic segmentation method based on parameter importance increment learning
CN113111889A (en) Target detection network processing method for edge computing terminal
CN110263917B (en) Neural network compression method and device
CN114882278A (en) Tire pattern classification method and device based on attention mechanism and transfer learning
CN114972753A (en) Lightweight semantic segmentation method and system based on context information aggregation and assisted learning
Jiang et al. Pruning-aware sparse regularization for network pruning
CN113780550A (en) Convolutional neural network pruning method and device for quantizing feature map similarity
Ding et al. Manipulating identical filter redundancy for efficient pruning on deep and complicated CNN
CN111582442A (en) Image identification method based on optimized deep neural network model
CN112381108A (en) Bullet trace similarity recognition method and system based on graph convolution neural network deep learning
Lee et al. Efficient decoupled neural architecture search by structure and operation sampling
CN112200275B (en) Artificial neural network quantification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant