CN112487187A - News text classification method based on graph network pooling - Google Patents
News text classification method based on graph network pooling
- Publication number
- CN112487187A CN112487187A CN202011386651.0A CN202011386651A CN112487187A CN 112487187 A CN112487187 A CN 112487187A CN 202011386651 A CN202011386651 A CN 202011386651A CN 112487187 A CN112487187 A CN 112487187A
- Authority
- CN
- China
- Prior art keywords
- node
- graph
- score
- cluster
- pooling
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/194—Calculation of difference between files
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computational Linguistics (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Biomedical Technology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Databases & Information Systems (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a news text classification method based on graph network pooling, which comprises the following steps: S1, combining structural information and feature information in an attention mechanism, and calculating similarity scores between nodes within a first-order neighborhood of a graph neural network to obtain an attention value for each pair of similar nodes; S2, sparsifying the obtained attention values with the sparsemax sparse probability activation function to obtain the cluster corresponding to each node; S3, calculating the score of each cluster by local aggregation convolution, and judging the information content of each cluster from its score; S4, selecting the top-scoring clusters with a TopK operation and re-connecting edges among the selected clusters to obtain the final pooled graph.
Description
Technical Field
The invention relates to the technical field of news text classification, in particular to a news text classification method based on graph network pooling.
Background
With the rapid development of the big-data era, text data on the Internet has grown explosively, and mining effective information from massive data is of great significance. Because news text has no fixed format, comes in many varieties, and is updated quickly, traditional manual classification is inefficient and highly subjective. Graph neural networks have therefore been introduced into news text classification: a news text is regarded as a graph whose nodes are words. News text classification focuses mainly on the overall characteristics of the text, i.e. the object of study is the entire graph itself. Graph neural networks generally consist of convolutional and pooling layers. Research on graph convolution, whose main purpose is to extract graph features, is very rich, but it is difficult for a model to learn the information critical to graph representation and classification by stacking graph convolutional layers alone. Pooling, on the one hand, reduces the number of learnable parameters and, on the other hand, reflects the structure of the graph at different scales.
Existing graph pooling methods include TopK, DiffPool, SAGPool, and ASAP. TopK accomplishes pooling by adaptively selecting a subset of nodes: all node features are projected into one dimension with a learnable vector, and the top k nodes with the largest scalar projection values are kept. However, since the structure of the graph is not considered, the importance of a node is evaluated only from its features, which is too simplistic. DiffPool uses two graph neural networks to cluster and pool nodes respectively, but is not suitable for large graphs because its soft assignment matrix is dense. SAGPool learns a scalar for each node from structural and attribute information through an attention mechanism, uses that scalar to represent the importance of the corresponding node to the whole graph, and sorts and pools accordingly; it neither aggregates node information nor computes soft edge weights, so node and edge information cannot be preserved effectively. ASAP improves on these methods, but when aggregating node information the node features easily become over-smoothed and much information is lost, so the news text classification effect is poor. At present, no pooling method preserves node information and edge information in the graph while avoiding over-smoothing of node features.
Disclosure of Invention
The invention aims to provide a news text classification method based on graph network pooling aiming at the defects of the prior art.
In order to achieve the purpose, the invention adopts the following technical scheme:
a news text classification method based on graph network pooling comprises the following steps:
s1, combining structural information and characteristic information in an attention mechanism, and calculating a similarity score between nodes in a first-order neighborhood in a graph neural network to obtain the attention mechanism with similarity nodes;
s2, thinning the obtained attention mechanism by adopting a sparse probability activation function sparsemax algorithm to obtain a cluster corresponding to the node;
s3, calculating the score of each cluster by adopting local aggregation convolution, and judging the information content of the clusters according to the score;
s4, selecting the front with the highest score by adopting topkAnd repeating the selected clusters to obtain the final pooled neural network.
Further, the attention value between similar nodes obtained in step S1 is expressed as:
e_{i,j} = σ(w_e [x_i || x_j]^T) + λ · a_{i,j}
where σ represents an activation function; w_e represents a learnable weight vector; x_i and x_j are the feature vectors of node i and node j, respectively; || represents the concatenation operation; A represents the adjacency matrix of the current graph; a_{i,j} is the value in row i, column j of A; and λ weighs the structural term against the feature term.
Further, after sparsifying the obtained attention values with the sparsemax sparse probability activation function in step S2, a sparse probability distribution is obtained, expressed as:

s_i = sparsemax(e_i) = argmin_{s ∈ Δ^{N−1}} ||s − e_i||²

where s_i is the normalized vector of e_i; e_i denotes the attention values of node i, e_i = [e_{i,1}, e_{i,2}, ..., e_{i,N}]; and Δ^{N−1} represents the simplex of dimension N−1.
Further, the obtaining of the sparse probability distribution further includes:
defining a Lagrange function:

L(e_i; μ, τ) = (1/2)||s − e_i||² − μ^T s + τ(1^T s − 1)

The dual form of the sparse probability distribution is expressed as:

s_{i,j} = sparsemax(e_{i,j}) = [e_{i,j} − τ(e_i)]_+

where [x]_+ = max{0, x}; τ(·) represents a threshold function.
Further, the local aggregation convolution used in step S3 to calculate the score of each cluster is expressed as:

φ_i = σ( Θ_1^T x_i + Σ_{j∈N(i)} Θ_2^T (x_i − x_j) )

where the activation function σ is taken to be sigmoid; N(i) and N(j) represent the neighborhoods of node i and node j, respectively; Θ_1 and Θ_2 represent learnable parameters; the global and local importance of the clusters are both considered, and the score of each cluster is obtained comprehensively; the first term Θ_1^T x_i represents the feature transformation of the i-th node; the second term Σ_{j∈N(i)} Θ_2^T (x_i − x_j) represents the difference between the current node and its first-order neighborhood.
Further, step S4 specifically comprises:
multiplying the fitness vector Φ = [φ_1, φ_2, ..., φ_N]^T with the cluster representation matrix SX so that the fitness function f_φ is learnable:

X̂ = Φ ⊙ SX

where ⊙ represents the Hadamard product; S = [s_1, s_2, ..., s_N] represents the cluster assignment matrix; X = [x_1, x_2, ..., x_N]^T represents the feature matrix;
the function TOP_k sorts the fitness scores and retains a fraction k of them, yielding the indices î of the top ⌈kN⌉ selected clusters in G_c, expressed as:

î = TOP_k(X̂, ⌈kN⌉)

the selected top ⌈kN⌉ clusters form the pooled graph G_p, and the corresponding assignment matrix Ŝ and node feature matrix X̂^p are expressed as:

Ŝ = S(:, î),  X̂^p = X̂(î, :)

after sampling the clusters, Ŝ and the adjacency matrix A of the original graph are used in the pooled graph G_p to obtain the new adjacency matrix A^p:

A^p = Ŝ^T (A + I) Ŝ
Compared with the prior art, the invention has the beneficial effects that:
1. An attention mechanism is used, combining the structural information of the graph with the feature information of the nodes, so that the similarity between nodes can be calculated more accurately.
2. The sparsemax algorithm is used to sparsify the attention values of the first-order neighborhood, so that nodes with high similarity form a cluster, providing a new method for cluster formation.
3. The local aggregation convolution is combined with the TopK algorithm, which effectively alleviates the over-smoothing of node features, realizes an adaptive pooling operation, and achieves higher accuracy than traditional graph pooling methods.
Drawings
Fig. 1 is a flowchart of a news text classification method based on graph network pooling according to an embodiment.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
The invention aims to provide a news text classification method based on graph network pooling aiming at the defects of the prior art.
Example one
The embodiment provides a news text classification method based on graph network pooling, as shown in fig. 1, including the steps of:
s1, combining structural information and characteristic information in an attention mechanism, and calculating a similarity score between nodes in a first-order neighborhood in a graph neural network to obtain the attention mechanism with similarity nodes;
s2, thinning the obtained attention mechanism by adopting a sparse probability activation function sparsemax algorithm to obtain a cluster corresponding to the node;
s3, calculating the score of each cluster by adopting local aggregation convolution, and judging the information content of the clusters according to the score;
s4, selecting the front with the highest score by adopting topkAnd repeating the selected clusters to obtain the final pooled neural network.
The embodiment specifically includes: first, within the graph, nodes with high similarity are adaptively selected to form clusters using sparse attention. A structural-information weight is added to the attention mechanism so that it is combined with the node features, which facilitates structure learning; the similarity scores between nodes within each first-order neighborhood are calculated, the attention values are sparsified with sparsemax to obtain an assignment matrix, and finally the node composition of each cluster is obtained. Then, nodes are aggregated with a local aggregation convolution function to obtain the cluster representations, the information content of each cluster is calculated, the clusters with higher scores are selected with TopK, and the adjacency matrix of the graph is recalculated with the assignment matrix, giving the feature matrix and adjacency matrix of the finally pooled graph.
In step S1, structural information and feature information are combined in the attention mechanism, and the similarity scores between nodes within a first-order neighborhood of the graph neural network are calculated, to obtain an attention value for each pair of similar nodes.
Similarity scores between nodes within a first-order neighborhood in the graph are calculated using an attention mechanism that combines structural information with feature information.
To make the node similarity within a cluster high, the calculation is restricted to the first-order range. The similarity between the nodes in each first-order neighborhood is calculated through an attention mechanism, so as to find which node information should be attended to in the current neighborhood. In addition, in order to preserve the structure of the graph, the graph structure is also taken into consideration, so the similarity attention between node i and node j is:
e_{i,j} = σ(w_e [x_i || x_j]^T) + λ · a_{i,j}
where σ is the activation function, w_e is a learnable weight vector, x_i and x_j are the feature vectors of node i and node j respectively, || is the concatenation operation, and λ weighs the structural term. A represents the adjacency matrix of the current graph, and a_{i,j} is the value in row i, column j of A: when node i and node j are directly connected, a_{i,j} ≠ 0; when the two nodes are not directly connected, a_{i,j} = 0.
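The attention score above can be sketched in NumPy as follows. This is a minimal illustration rather than the patented implementation: the function name `attention_scores`, the choice of sigmoid for σ, and the parameter `lam` standing in for λ are assumptions.

```python
import numpy as np

def attention_scores(X, A, w_e, lam=1.0):
    """Compute e[i, j] = sigmoid(w_e · [x_i || x_j]) + lam * a[i, j]
    for first-order neighbors only (entries with a[i, j] != 0)."""
    N = X.shape[0]
    E = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if A[i, j] != 0:                              # first-order neighborhood
                concat = np.concatenate([X[i], X[j]])     # [x_i || x_j]
                feat = 1.0 / (1.0 + np.exp(-(w_e @ concat)))  # sigmoid as sigma
                E[i, j] = feat + lam * A[i, j]            # add structural term
    return E
```

Restricting the computation to neighbors keeps the score matrix as sparse as the graph itself, which is what makes the later sparsemax step meaningful.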
In step S2, the sparsemax sparse probability activation function is used to sparsify the obtained attention values, so as to obtain the cluster corresponding to each node.
The obtained attention values are sparsified with the sparsemax (sparse probability activation function) algorithm, i.e. small similarities between node features are directly assigned 0.
Let s_i be the normalized vector of e_i, where e_i is the attention values of node i, i.e. e_i = [e_{i,1}, e_{i,2}, ..., e_{i,N}], and Δ^{N−1} represents the simplex of dimension N−1. Sparsemax is used here, characterized by producing a sparse probability distribution:

s_i = sparsemax(e_i) = argmin_{s ∈ Δ^{N−1}} ||s − e_i||²

Sparsemax directly projects e_i onto the simplex, which yields a sparse output. This functional form cannot be solved directly without knowing the true distribution, so the Lagrange function is first defined:

L(e_i; μ, τ) = (1/2)||s − e_i||² − μ^T s + τ(1^T s − 1)
By the KKT conditions, for j ∈ {1, ..., N}, if s*_{i,j} > 0 then μ*_{i,j} = 0, and s*_{i,j} = e_{i,j} − τ*. Let c(e_i) = {z ∈ {1, ..., N} | s*_{i,z} > 0}; then Σ_{z∈c(e_i)} (e_{i,z} − τ*) = 1, and hence

τ(e_i) = ( Σ_{z∈c(e_i)} e_{i,z} − 1 ) / |c(e_i)|

Combining the above, the dual form is

s_{i,j} = sparsemax(e_{i,j}) = [e_{i,j} − τ(e_i)]_+

where [x]_+ = max{0, x}, τ(·) is a threshold function, and sparsemax(·) retains the values above the threshold and sets those below it to zero.
In step S3, the score of each cluster is calculated using local aggregation convolution, and the amount of information contained in the cluster is determined by the level of the score.
Calculating the score of each cluster by using local aggregation convolution, and judging the information content in the cluster according to the score;
A fitness function f_φ is used to sample the clusters according to their fitness scores φ_i. To calculate the amount of information contained in a cluster, an aggregation mode is adopted for the cluster representation and its local information is calculated, i.e. the local aggregation convolution:

φ_i = σ( Θ_1^T x_i + Σ_{j∈N(i)} Θ_2^T (x_i − x_j) )

where the activation function σ is taken to be sigmoid, N(i) and N(j) represent the neighborhoods of node i and node j respectively, and Θ_1 and Θ_2 are learnable parameters. The global and local importance of the clusters are considered simultaneously, so the score of each cluster is obtained comprehensively. The first term Θ_1^T x_i is the feature transformation of the i-th node; the second term Σ_{j∈N(i)} Θ_2^T (x_i − x_j) measures how the current node representation differs from the node representations in its first-order neighborhood. If the neighboring nodes can represent a node well, the score of the second term is low, i.e. the node can be discarded with little influence on the whole. Because the earlier attention sparsification does not contain the information of every first-order neighbor node, the difference between the current node and its neighbors is enlarged, and thus the nodes with more information content are screened out.
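The scoring rule above, a feature-transformation term plus a neighborhood-difference term passed through a sigmoid, can be sketched as follows. The vectors `theta1` and `theta2` stand in for the learnable parameters Θ_1 and Θ_2; the equation's exact form is reconstructed from the description, so treat this as an assumption-laden sketch.

```python
import numpy as np

def cluster_scores(Xc, A, theta1, theta2):
    """phi_i = sigmoid(theta1 · x_i + sum over neighbors j of theta2 · (x_i - x_j)):
    a global feature-transformation term plus the local difference between
    node i and its first-order neighborhood."""
    N = Xc.shape[0]
    phi = np.zeros(N)
    for i in range(N):
        s = theta1 @ Xc[i]                          # global term: feature transformation
        for j in np.nonzero(A[i])[0]:               # first-order neighbors of i
            s += theta2 @ (Xc[i] - Xc[j])           # local term: difference from neighbor
        phi[i] = 1.0 / (1.0 + np.exp(-s))           # sigmoid activation
    return phi
```

Note the local term vanishes when a node's neighbors already represent it perfectly, matching the intuition in the text that such a node can be dropped cheaply.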
In step S4, the top-scoring clusters are selected with TopK, and the selected clusters are re-connected to obtain the final pooled graph.
The top-scoring clusters are selected with TopK and edge re-connection is performed among them to obtain the finally pooled graph;
The fitness vector Φ = [φ_1, φ_2, ..., φ_N]^T is multiplied with the cluster representation matrix SX so that the fitness function f_φ is learnable:

X̂ = Φ ⊙ SX

where ⊙ is the Hadamard product, S = [s_1, s_2, ..., s_N] is the cluster assignment matrix, and X = [x_1, x_2, ..., x_N]^T is the feature matrix. The function TOP_k sorts the fitness scores and retains a fraction k of them, yielding the indices î of the top ⌈kN⌉ selected clusters in G_c, as follows:

î = TOP_k(X̂, ⌈kN⌉)

Selecting these top ⌈kN⌉ clusters forms the pooled graph G_p. The corresponding assignment matrix Ŝ and node feature matrix X̂^p are given by:

Ŝ = S(:, î),  X̂^p = X̂(î, :)

After sampling the clusters, Ŝ and the adjacency matrix A of the original graph are used in the pooled graph G_p to obtain the new adjacency matrix A^p:

A^p = Ŝ^T (A + I) Ŝ

where I is an identity matrix. This formula ensures that if two clusters have common nodes, or contain nodes that are connected in the original graph G, then clusters i and j are connected in the pooled graph G_p, which strengthens the connectivity of the graph and reduces isolated nodes.
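The selection and edge re-connection step can be sketched as follows. The function name `topk_pool` and the `ratio` parameter (playing the role of k) are illustrative; the adjacency reconstruction A_p = S_p^T (A + I) S_p follows the reconstructed formula above, so it should be read as a sketch under that assumption.

```python
import numpy as np

def topk_pool(phi, X_hat, S, A, ratio):
    """Keep the ceil(ratio * N) clusters with the highest fitness scores phi,
    slice the assignment and feature matrices accordingly, and rebuild the
    adjacency as A_p = S_p^T (A + I) S_p so clusters sharing a node, or
    containing nodes joined in the original graph, remain connected."""
    N = len(phi)
    m = int(np.ceil(ratio * N))
    idx = np.argsort(phi)[::-1][:m]             # indices of the top-m clusters
    S_p = S[:, idx]                             # pooled assignment matrix (N x m)
    X_p = X_hat[idx, :]                         # pooled feature matrix (m x d)
    A_p = S_p.T @ (A + np.eye(N)) @ S_p         # new adjacency among kept clusters
    return idx, S_p, X_p, A_p
```

Adding the identity before projecting is what turns "two clusters share a node" into a nonzero entry of A_p, which is the connectivity guarantee the text describes.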
Compared with the existing news text classification method, the method has the beneficial effects that:
1. An attention mechanism is used, combining the structural information of the graph with the feature information of the nodes, so that the similarity between nodes can be calculated more accurately.
2. The sparsemax algorithm is used to sparsify the attention values of the first-order neighborhood, so that nodes with high similarity form a cluster, providing a new method for cluster formation.
3. The local aggregation convolution is combined with the TopK algorithm, which effectively alleviates the over-smoothing of node features, realizes an adaptive pooling operation, and achieves higher accuracy than traditional graph pooling methods.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Claims (6)
1. A news text classification method based on graph network pooling, characterized by comprising the steps of:
S1, combining structural information and feature information in an attention mechanism, and calculating similarity scores between nodes within a first-order neighborhood of a graph neural network to obtain an attention value for each pair of similar nodes;
S2, sparsifying the obtained attention values with the sparsemax sparse probability activation function to obtain the cluster corresponding to each node;
S3, calculating the score of each cluster by local aggregation convolution, and judging the information content of each cluster from its score;
S4, selecting the top-scoring clusters with a TopK operation and re-connecting edges among the selected clusters to obtain the final pooled graph.
2. The news text classification method based on graph network pooling according to claim 1, characterized in that the attention value between similar nodes obtained in step S1 is expressed as:

e_{i,j} = σ(w_e [x_i || x_j]^T) + λ · a_{i,j}
3. The news text classification method based on graph network pooling according to claim 2, characterized in that after sparsifying the obtained attention values with the sparsemax sparse probability activation function in step S2, a sparse probability distribution is obtained, expressed as:

s_i = sparsemax(e_i) = argmin_{s ∈ Δ^{N−1}} ||s − e_i||²
4. The method of claim 3, characterized in that the obtaining of the sparse probability distribution further comprises:
defining a Lagrange function:

L(e_i; μ, τ) = (1/2)||s − e_i||² − μ^T s + τ(1^T s − 1)

The dual form of the sparse probability distribution is expressed as:

s_{i,j} = sparsemax(e_{i,j}) = [e_{i,j} − τ(e_i)]_+

where [x]_+ = max{0, x}; τ(·) represents a threshold function.
5. The news text classification method based on graph network pooling according to claim 4, characterized in that the local aggregation convolution used in step S3 to calculate the score of each cluster is expressed as:

φ_i = σ( Θ_1^T x_i + Σ_{j∈N(i)} Θ_2^T (x_i − x_j) )

where the activation function σ is sigmoid; N(i) and N(j) represent the neighborhoods of node i and node j, respectively; Θ_1 and Θ_2 represent the learnable parameters; the global and local importance of the clusters are both considered, and the score of each cluster is obtained comprehensively; Θ_1^T x_i represents the feature transformation of the i-th node; Σ_{j∈N(i)} Θ_2^T (x_i − x_j) represents the difference between the current node and its first-order neighborhood.
6. The news text classification method based on graph network pooling according to claim 5, characterized in that step S4 specifically comprises:
multiplying the fitness vector Φ = [φ_1, φ_2, ..., φ_N]^T with the cluster representation matrix SX so that the fitness function f_φ is learnable:

X̂ = Φ ⊙ SX

where ⊙ denotes the Hadamard product; S = [s_1, s_2, ..., s_N] denotes the cluster assignment matrix; X = [x_1, x_2, ..., x_N]^T denotes the feature matrix;
the function TOP_k sorts the fitness scores and retains a fraction k of them, yielding the indices î of the top ⌈kN⌉ selected clusters in G_c, expressed as:

î = TOP_k(X̂, ⌈kN⌉)

the selected top ⌈kN⌉ clusters form the pooled graph G_p, and the corresponding assignment matrix Ŝ and node feature matrix X̂^p are expressed as:

Ŝ = S(:, î),  X̂^p = X̂(î, :)

after sampling the clusters, Ŝ and the adjacency matrix A of the original graph are used in the pooled graph G_p to obtain the new adjacency matrix A^p:

A^p = Ŝ^T (A + I) Ŝ
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011386651.0A CN112487187B (en) | 2020-12-02 | 2020-12-02 | News text classification method based on graph network pooling |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011386651.0A CN112487187B (en) | 2020-12-02 | 2020-12-02 | News text classification method based on graph network pooling |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112487187A true CN112487187A (en) | 2021-03-12 |
CN112487187B CN112487187B (en) | 2022-06-10 |
Family
ID=74938664
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011386651.0A Active CN112487187B (en) | 2020-12-02 | 2020-12-02 | News text classification method based on graph network pooling |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112487187B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170083747A1 (en) * | 2015-09-21 | 2017-03-23 | The Climate Corporation | Ponding water detection on satellite imagery |
CN109389055A (en) * | 2018-09-21 | 2019-02-26 | 西安电子科技大学 | Video classification methods based on mixing convolution sum attention mechanism |
US20200250139A1 (en) * | 2018-12-31 | 2020-08-06 | Dathena Science Pte Ltd | Methods, personal data analysis system for sensitive personal information detection, linking and purposes of personal data usage prediction |
CN111563164A (en) * | 2020-05-07 | 2020-08-21 | 成都信息工程大学 | Specific target emotion classification method based on graph neural network |
CN111709518A (en) * | 2020-06-16 | 2020-09-25 | 重庆大学 | Method for enhancing network representation learning based on community perception and relationship attention |
CN111985369A (en) * | 2020-08-07 | 2020-11-24 | 西北工业大学 | Course field multi-modal document classification method based on cross-modal attention convolution neural network |
- 2020-12-02 CN CN202011386651.0A patent/CN112487187B/en active Active
Non-Patent Citations (2)
Title |
---|
ZHEN ZHANG et al.: "Hierarchical graph pooling with structure learning", arXiv:1911.05954 *
YIN Yijun: "Research on high-throughput remote sensing visual object detection and recognition based on lightweight networks", China Master's Theses Full-text Database, Engineering Science and Technology II *
Also Published As
Publication number | Publication date |
---|---|
CN112487187B (en) | 2022-06-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109271522B (en) | Comment emotion classification method and system based on deep hybrid model transfer learning | |
CN108319987B (en) | Filtering-packaging type combined flow characteristic selection method based on support vector machine | |
CN110598061A (en) | Multi-element graph fused heterogeneous information network embedding method | |
CN113868366B (en) | Streaming data-oriented online cross-modal retrieval method and system | |
CN112100514B (en) | Friend recommendation method based on global attention mechanism representation learning | |
CN113378913A (en) | Semi-supervised node classification method based on self-supervised learning | |
CN111738303A (en) | Long-tail distribution image identification method based on hierarchical learning | |
CN110674865A (en) | Rule learning classifier integration method oriented to software defect class distribution unbalance | |
CN110598848A (en) | Migration learning acceleration method based on channel pruning | |
CN114299362A (en) | Small sample image classification method based on k-means clustering | |
CN114117945B (en) | Deep learning cloud service QoS prediction method based on user-service interaction graph | |
CN113283473A (en) | Rapid underwater target identification method based on CNN feature mapping pruning | |
CN108614932B (en) | Edge graph-based linear flow overlapping community discovery method, system and storage medium | |
CN113887698A (en) | Overall knowledge distillation method and system based on graph neural network | |
CN111161282B (en) | Target scale selection method for image multi-level segmentation based on depth seeds | |
CN112487187B (en) | News text classification method based on graph network pooling | |
CN113255892A (en) | Method and device for searching decoupled network structure and readable storage medium | |
Zhan et al. | Field programmable gate array‐based all‐layer accelerator with quantization neural networks for sustainable cyber‐physical systems | |
CN111639751A (en) | Non-zero padding training method for binary convolutional neural network | |
CN111291193A (en) | Application method of knowledge graph in zero-time learning | |
CN110674333A (en) | Large-scale image high-speed retrieval method based on multi-view enhanced depth hashing | |
CN113592013B (en) | Three-dimensional point cloud classification method based on graph attention network | |
CN112836511B (en) | Knowledge graph context embedding method based on cooperative relationship | |
CN112990336B (en) | Deep three-dimensional point cloud classification network construction method based on competitive attention fusion | |
CN108898227A (en) | Learning rate calculation method and device, disaggregated model calculation method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||