CN110674291A - Chinese patent text effect category classification method based on multivariate neural network fusion - Google Patents
Chinese patent text effect category classification method based on multivariate neural network fusion

- Publication number: CN110674291A
- Application number: CN201910776137.9A
- Authority: CN (China)
- Prior art keywords: layer, effect, text, neural network, vector
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
- G06F16/353—Clustering; Classification into predefined classes
Abstract
The invention relates to a Chinese patent text effect category classification method based on multivariate neural network fusion, and belongs to the technical field of text multi-classification. Starting from the perspective of serving the entity industry on demand with patent resource texts, the invention classifies patent texts using effect knowledge (the scientific principles underlying the technologies behind a product or existing among its components) as the classification standard. It proposes a bidirectional gated recurrent neural network that fuses a convolutional neural network and introduces an Attention mechanism, forming a multivariate neural network channel comprising a word embedding layer, a convolutional layer, a BIGRU layer, an Attention layer and a Softmax layer. The method mines patent text features more comprehensively, acquires correct effect knowledge for users, and helps users generate new innovative principles or technical schemes during product research and development. Experimental results show that, under the same conditions, the proposed classification method achieves the best effect and the highest accuracy compared with a CNN model, a GRU model, or their combination.
Description
Technical Field
The invention relates to a Chinese patent text effect category classification method based on multivariate neural network fusion, and belongs to the technical field of text multi-classification.
Background
Scientific and technological resources are complex and diverse in composition and distribution, scientific and technological service systems are numerous, and the relationships among these systems, the entity economic industry and their internal components are intricate, forming a typical distributed giant resource system. Under the "Internet Plus" model, enterprises seek comprehensive integration of machines, facilities and system networks through intelligent equipment, intelligent systems and intelligent decision-making; this new connection among machines, data and people urgently requires combining scientific and technological resources with business processes to form scientific and technological services. Patent literature, an important carrier of knowledge development and innovation among these resources, contains effect knowledge that reflects the technologies realizing product functions or the scientific principles existing among components, and is a driving force for innovative design in the entity industry. Obtaining effect knowledge for realizing product functions from patent resource texts and integrating it into enterprise technical schemes, as intermediate products or as knowledge innovation, is of great significance for the transformation and upgrading of traditional industries and for promoting high-quality development.
At present, text classification models based on deep learning have gradually become mainstream and are widely applied to unstructured data processing. Compared with artificial feature engineering and shallow classification models based on traditional machine learning, their greatest advantage is that valuable features in texts can be learned automatically without supervision. However, in patent resources, certain features do not appear adjacently in the text yet are cohesive and jointly carry associated information; meanwhile, large amounts of unstructured and even fragmented patent resources come from diverse sources, and their local and global semantic features take different forms. Existing text classification models, such as convolutional neural networks and fully connected neural networks, cannot effectively mine the deeper text features in patent data sets, so the identification of important resource information is ambiguous and the classification effect is not ideal.
Disclosure of Invention
The invention provides a Chinese patent text effect category classification method based on multivariate neural network fusion, which is used for classifying effect categories of patents.
The technical scheme of the invention is as follows: a Chinese patent text effect classification method based on multivariate neural network fusion comprises the following steps:
step 1: preprocessing a Chinese patent text;
step 2: the word embedding layer takes the preprocessed Chinese patent text as input, converts the words in the text into vectors of the same dimension according to a pre-trained vector dictionary, and organizes each vector-represented Chinese patent text into an embedding matrix I_{l×n}; the vector dictionary is computed separately from all preprocessed resource texts by the Word2vec algorithm, and every word in the text has a corresponding vector of the same dimension in the dictionary, where l denotes the text length and n the word vector dimension;
step 3: the convolutional layer takes the embedding matrix I_{l×n} output by the word embedding layer as input, performs convolution with multiple n×p kernels, and automatically extracts different effect features of sentences using kernels of different window sizes; finally, the effect features captured by each kernel are concatenated to obtain the convolutional layer output M_CNN, which is passed to the next BIGRU layer; p denotes the kernel window size;
step 4: the BIGRU layer takes M_CNN as input, uses GRU units with two different transfer directions to control the forgetting and selective memory of the input data, and obtains the output h_j, where j ∈ [1, l-p+1];
step 5: the Attention layer introduces an Attention mechanism over the BIGRU outputs h_1, h_2, h_3, ..., h_{l-p+1} to compute the attention weights a_1, a_2, a_3, ..., a_{l-p+1} that each effect feature vector should be assigned, representing the ability to identify important effect information; it then computes a context attention effect feature vector C and passes C to the Softmax layer for effect classification;
step 6: the Softmax layer receives the effect feature vector C output by the Attention layer, maps it into the (0, 1) interval with the Softmax function, and classifies the patent effect category.
The preprocessing comprises Chinese word segmentation and stop-word removal from the segmented Chinese patent text.
Chinese word segmentation is performed in the precise mode of the Jieba segmentation system.
The invention has the beneficial effects that:
1. the classification method provided by the invention adopts a deep learning technology, can automatically acquire the text features in patent resources, effectively avoids the complex feature extraction process in the traditional machine learning method, and greatly reduces the dependence on manpower.
2. Compared with the conventional text classification method, the patent classification method provided by the invention selects effect knowledge as a classification standard, better meets the requirements of the patent resources of the entity industry, can provide the patent text with correct effect knowledge for users, and helps the users to generate a new innovative principle or technical scheme in the product research and development process.
3. The classification method provided by the invention mainly solves the problems that the local and global semantic features of patent resources take many forms, that long-distance dependency features in the text are prominent, and that important resource information is difficult to identify accurately; it provides a new idea and means for mining patent text features more comprehensively and serving the entity industry on demand.
4. Compared with a CNN model, a GRU model or a combination model of the CNN model and the GRU model, the classification method provided by the invention has better effect and higher accuracy on classifying the patent resource texts under the same experimental condition.
Drawings
FIG. 1 is a flow chart of a patent classification based on multivariate neural network fusion;
FIG. 2 is a diagram of a patent classification model based on multivariate neural network fusion;
FIG. 3 is a graph comparing accuracy of different patent classification models.
Detailed Description
Example 1: as shown in Figs. 1-3, a Chinese patent text effect classification method based on multivariate neural network fusion comprises the following steps:
step 1: preprocessing a Chinese patent text;
step 2: the word embedding layer takes the preprocessed Chinese patent text as input, converts the words in the text into vectors of the same dimension by referring to a pre-trained vector dictionary, and organizes each vector-represented Chinese patent text into an embedding matrix I_{l×n}; the vector dictionary is computed separately from all preprocessed resource texts by the Word2vec algorithm, and every word in the text has a corresponding vector of the same dimension in the dictionary, where l denotes the text length and n the word vector dimension. Because the resource texts differ in length, this embodiment takes at most 1500 characters from each resource text preprocessed in step 1, and pads with 0 when a text is shorter than 1500.
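The embedding step can be sketched as follows (a minimal numpy sketch; the function name, toy dimensions and the two-word vocabulary are hypothetical, and the real vector dictionary comes from Word2vec training with l = 1500 in the embodiment):

```python
import numpy as np

def embed_text(tokens, vec_dict, l, n):
    """Look up each word in the pretrained vector dictionary and organize
    the text into an l x n embedding matrix, zero-padded to length l."""
    mat = np.zeros((l, n))                       # rows past the text stay 0
    for i, tok in enumerate(tokens[:l]):         # truncate texts longer than l
        mat[i] = vec_dict.get(tok, np.zeros(n))  # unknown word -> zero vector
    return mat

# Toy dictionary with n = 4 dimensions; the embodiment trains it via Word2vec.
vec_dict = {"离心": np.ones(4), "效应": np.full(4, 0.5)}
I = embed_text(["离心", "效应"], vec_dict, l=6, n=4)
print(I.shape)  # (6, 4): an l x n matrix, zero-padded from row 2 onward
```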
step 3: the convolutional layer takes the embedding matrix I_{l×n} output by the word embedding layer as input, performs convolution with multiple n×p kernels, and automatically extracts different effect features of sentences using kernels of different window sizes; finally, the effect features captured by each kernel are concatenated to obtain the convolutional layer output M_CNN, which is passed to the next BIGRU layer; p denotes the kernel window size. The specific formulas are as follows:

m_i = σ(W · I_{i:i+p-1} + b)

M_CNN = [m_1, m_2, m_3, ..., m_{l-p+1}]

where M_CNN is the final output of the convolutional layer, m_i is the i-th effect feature obtained by convolution, I_{i:i+p-1} is the i-th input embedded matrix block, l is the text length, p is the kernel window size, σ is the Sigmoid nonlinear activation function, W is a weight matrix, and b is a bias term.
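The per-kernel computation above can be sketched as follows (a minimal numpy sketch with toy sizes; a trained model learns W and b and runs many kernels in parallel):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv_effect_features(I, W, b):
    """Slide one kernel of window size p over the l x n embedding matrix:
    m_i = sigmoid(W . I[i:i+p-1] + b), yielding l - p + 1 effect features."""
    l, n = I.shape
    p = W.shape[0]
    return np.array([sigmoid(np.sum(W * I[i:i + p]) + b)
                     for i in range(l - p + 1)])

rng = np.random.default_rng(0)
I = rng.normal(size=(6, 4))      # toy embedding matrix, l=6, n=4
W = rng.normal(size=(2, 4))      # one kernel spanning p=2 word vectors
M_cnn = conv_effect_features(I, W, b=0.1)
print(M_cnn.shape)  # (5,): l - p + 1 features from this kernel
```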
step 4: the BIGRU layer takes the convolutional output M_CNN as input, uses GRU units with two different transfer directions to control the forgetting and selective memory of the input data, and obtains the output h_j, where j ∈ [1, l-p+1]. h_j is given by:

h_j = [→h_j ; ←h_j]

where h_j is the effect feature obtained from the BIGRU layer at step j; it is the concatenation of the output →h_j of the forward GRU unit, which learns the sequence from its starting point, and the output ←h_j of the backward GRU unit, which learns the sequence from its end.
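The bidirectional pass can be sketched with a minimal GRU cell (an illustrative numpy sketch with toy dimensions and randomly initialized weights rather than the trained model; biases are omitted for brevity):

```python
import numpy as np

def gru_step(x, h, Wz, Wr, Wh):
    """One GRU step: update gate z and reset gate r control how much of the
    previous state h is forgotten versus selectively remembered."""
    hx = np.concatenate([h, x])
    z = 1 / (1 + np.exp(-Wz @ hx))                      # update gate
    r = 1 / (1 + np.exp(-Wr @ hx))                      # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([r * h, x]))  # candidate state
    return (1 - z) * h + z * h_tilde

def bigru(seq, params_f, params_b, hidden):
    """Run a forward GRU from the sequence start and a backward GRU from the
    sequence end, then concatenate their outputs at each step j."""
    fwd, bwd, h = [], [], np.zeros(hidden)
    for x in seq:                                       # forward direction
        h = gru_step(x, h, *params_f)
        fwd.append(h)
    h = np.zeros(hidden)
    for x in reversed(seq):                             # backward direction
        h = gru_step(x, h, *params_b)
        bwd.append(h)
    bwd.reverse()
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

rng = np.random.default_rng(1)
hidden, feat = 3, 1
mk = lambda: tuple(rng.normal(size=(hidden, hidden + feat)) for _ in range(3))
seq = [rng.normal(size=feat) for _ in range(5)]  # the l - p + 1 conv features
outs = bigru(seq, mk(), mk(), hidden)
print(len(outs), outs[0].shape)  # 5 steps, each h_j of size 2 * hidden
```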
step 5: the Attention layer introduces an Attention mechanism over the BIGRU outputs h_1, h_2, h_3, ..., h_{l-p+1} to compute the attention weights a_1, a_2, a_3, ..., a_{l-p+1} that each effect feature vector should be assigned, representing the ability to identify important effect information; it then computes a context attention effect feature vector C and passes C to the Softmax layer for effect classification. The weights are computed by the formulas:

e_i = v · tanh(W h_i + U u + b)

a_i = exp(e_i) / Σ_{k=1}^{T} exp(e_k)

C = Σ_{i=1}^{T} a_i h_i

where k ∈ [1, T] and T is the number of elements of the input sequence; a_i is the attention weight assigned to the effect feature at step i; u is the text characterization vector; e_i scores the i-th output vector h_i of the BIGRU layer against the text characterization u, and the larger the score, the more attention the input effect feature at that step receives; v, W and U are weight matrices, b is a bias term, and tanh is the nonlinear activation function.
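The attention step can be sketched as follows (a numpy sketch; the exact form of the scoring function combining v, W, U, b and tanh is an assumption reconstructed from the symbols named in the text, and all sizes and weights are toy values):

```python
import numpy as np

def attention(H, v, W, U, u, b):
    """Score each BIGRU output h_i against a text characterization vector u,
    softmax-normalize the scores into weights a_i, and build the context
    attention effect feature vector C as the weighted sum of the h_i."""
    scores = np.array([v @ np.tanh(W @ h + U @ u + b) for h in H])
    a = np.exp(scores - scores.max())
    a = a / a.sum()                              # attention weights sum to 1
    C = sum(ai * hi for ai, hi in zip(a, H))     # context vector C
    return a, C

rng = np.random.default_rng(2)
d = 4
H = [rng.normal(size=d) for _ in range(5)]       # BIGRU outputs h_1..h_5
v, b, u = (rng.normal(size=d) for _ in range(3)) # u: text characterization
W, U = rng.normal(size=(d, d)), rng.normal(size=(d, d))
a, C = attention(H, v, W, U, u, b)
print(round(float(a.sum()), 6), C.shape)  # weights sum to 1.0; C is d-dim
```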
step 6: the Softmax layer receives the effect feature vector C output by the Attention layer, maps it into the (0, 1) interval with the Softmax function, and classifies the patent effect category.
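The final mapping can be sketched as follows (the five labels follow the experiment below; the logit values are hypothetical stand-ins for the layer's linear output on C):

```python
import numpy as np

def softmax(z):
    """Map raw class scores into the (0, 1) interval so they sum to 1."""
    e = np.exp(z - z.max())        # subtract the max for numerical stability
    return e / e.sum()

labels = ["centrifugal effect", "swing effect", "siphon effect",
          "elastic deformation", "thermal effect"]
logits = np.array([2.0, 0.5, 0.1, -1.0, 0.3])  # hypothetical scores for C
probs = softmax(logits)
print(labels[int(np.argmax(probs))])  # -> centrifugal effect
```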
The preprocessing comprises Chinese word segmentation and stop-word removal from the segmented Chinese patent text (stop-word removal deletes high-frequency words that are meaningless for text classification, with the aim of filtering textual redundancy and improving classification accuracy).
Chinese word segmentation is performed in the precise mode of the Jieba segmentation system (segmentation avoids the loss of n-gram information caused by character-level feature granularity, which is why text data must be segmented before text representation).
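The stop-word step can be sketched as follows (segmentation itself is assumed already done, e.g. tokens = jieba.lcut(text) in precise mode; the token list and the stop-word set here are toy stand-ins, and a real system would load a full stop-word list from a file):

```python
# Toy set of high-frequency Chinese function words with no classification value.
STOP_WORDS = {"的", "是", "在", "和", "了"}

def remove_stop_words(tokens):
    """Drop segmented words that carry no meaning for text classification."""
    return [t for t in tokens if t not in STOP_WORDS]

tokens = ["本", "发明", "的", "离心", "效应", "是", "显著", "的"]
print(remove_stop_words(tokens))  # -> ['本', '发明', '离心', '效应', '显著']
```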
With the above steps in place, the application gives the following experimental data. The patent corpus used in the experiment is a subset of 5320 Chinese patent documents from the Wanfang scientific and technological resources, dated 2017 to 2019, with five effects selected as classification labels: centrifugal effect, swing effect, siphon effect, elastic deformation and thermal effect. To ensure stable classification results, all corpus data were randomly shuffled and the data set was then divided into a training set and a test set in a 7:3 ratio: 3724 patent documents were used for training and 1596 for testing.
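The shuffle-and-split procedure can be sketched as follows (the seed and function name are arbitrary choices; integer arithmetic keeps the 7:3 cut exact):

```python
import random

def split_corpus(docs, seed=42):
    """Randomly shuffle the corpus for stability, then split it into a
    training set and a test set in a 7:3 ratio."""
    docs = list(docs)
    random.Random(seed).shuffle(docs)
    cut = len(docs) * 7 // 10        # 7:3 split via integer arithmetic
    return docs[:cut], docs[cut:]

train, test = split_corpus(range(5320))   # the 5320 patent documents
print(len(train), len(test))  # 3724 1596
```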
To evaluate the performance of the proposed model, experimental comparisons were carried out on the same data set among a 3CNN model, a GRU model, a 3CNN-BIGRU model, a BIGRU-ATT model and a 3C-BGA model. The 3CNN model combines three convolution kernels with window sizes 3, 4 and 5; the 3CNN-BIGRU model splices and fuses one 3CNN channel with one bidirectional GRU channel; the BIGRU-ATT model is a BIGRU network channel with an attention mechanism; and 3C-BGA is the multivariate neural network fusion model proposed in this application, integrating one 3CNN channel and one BIGRU-ATT channel. The accuracy comparison is shown in Table 1, and the relationship between accuracy and the number of iterations (epochs) of the different models on the test set is shown in Fig. 3.
Table 1. Accuracy of each classification model
The experimental results show that, on the same data set, the accuracy of the 3CNN model is 2.8% higher than that of the GRU model: one-dimensional convolutions over different fields can filter out useful data features at a much lower computational cost than a GRU, so the classification effect is relatively good. The classification accuracy of the BIGRU model is about 3% higher than that of 3CNN; from a modeling perspective, this is because BIGRU can fuse the correlated features of forward and backward information, whereas 3CNN captures local sentence features but lacks the ability to capture global information. The BIGRU model fused with the attention mechanism assigns attention weights of different sizes to obtain more detailed information about the targets to be focused on, improving text recognition; accordingly, the BIGRU-ATT model is about 2% more accurate than the BIGRU model. The 3CNN-BIGRU model first collects position-invariant local features from the text data through convolution and then captures long-distance dependency information with the BIGRU; its accuracy surpasses the single neural network models, reaching 87.25%. Compared with the 3CNN-BIGRU model under the same conditions, the multivariate neural network fusion (3C-BGA) model proposed in this application improves performance by about 2%, showing that fusing multiple neural networks can learn deeper effect knowledge features in the data set and effectively improve text recognition.
In summary, starting from the perspective of serving the entity industry on demand with patent resource texts, the invention classifies patent texts using effect knowledge (the scientific principles reflecting the technologies behind the products described in a patent or existing among their components) as the classification standard, proposes a bidirectional gated recurrent neural network (BIGRU) that fuses a convolutional neural network (CNN) and introduces an Attention mechanism, and forms a multivariate neural network channel comprising a word embedding layer, a convolutional layer, a BIGRU layer, an Attention layer and a Softmax layer. The method mines patent text features more comprehensively, acquires correct effect knowledge for users, and helps users generate new innovative principles or technical schemes during product research and development. The experimental results prove that, under the same conditions, the proposed classification method achieves the best effect and the highest accuracy compared with a CNN model, a GRU model, or their combination.
While the present invention has been described in detail with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art.
Claims (3)
1. A Chinese patent text effect category classification method based on multivariate neural network fusion, characterized in that the method comprises the following steps:
step 1: preprocessing a Chinese patent text;
step 2: the word embedding layer takes the preprocessed Chinese patent text as input, converts the words in the text into vectors of the same dimension according to a pre-trained vector dictionary, and organizes each vector-represented Chinese patent text into an embedding matrix I_{l×n}; the vector dictionary is computed separately from all preprocessed resource texts by the Word2vec algorithm, and every word in the text has a corresponding vector of the same dimension in the dictionary, where l denotes the text length and n the word vector dimension;
step 3: the convolutional layer takes the embedding matrix I_{l×n} output by the word embedding layer as input, performs convolution with multiple n×p kernels, and automatically extracts different effect features of sentences using kernels of different window sizes; finally, the effect features captured by each kernel are concatenated to obtain the convolutional layer output M_CNN, which is passed to the next BIGRU layer; p denotes the kernel window size;
step 4: the BIGRU layer takes M_CNN as input, uses GRU units with forward and backward transfer directions to control the forgetting and selective memory of the input data, and obtains the output h_j, where j ∈ [1, l-p+1];
step 5: the Attention layer introduces an Attention mechanism over the BIGRU outputs h_1, h_2, h_3, ..., h_{l-p+1} to compute the attention weights a_1, a_2, a_3, ..., a_{l-p+1} that each effect feature vector should be assigned, representing the ability to identify important effect information; it then computes a context attention effect feature vector C and passes C to the Softmax layer for effect classification;
step 6: the Softmax layer receives the effect feature vector C output by the Attention layer, maps it into the (0, 1) interval with the Softmax function, and classifies the patent effect category.
2. The method for classifying Chinese patent text effect categories based on multivariate neural network fusion according to claim 1, characterized in that: the preprocessing comprises Chinese word segmentation and stop-word removal from the segmented Chinese patent text.
3. The method for classifying Chinese patent text effect categories based on multivariate neural network fusion according to claim 2, characterized in that: Chinese word segmentation is performed in the precise mode of the Jieba segmentation system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910776137.9A CN110674291A (en) | 2019-08-22 | 2019-08-22 | Chinese patent text effect category classification method based on multivariate neural network fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910776137.9A CN110674291A (en) | 2019-08-22 | 2019-08-22 | Chinese patent text effect category classification method based on multivariate neural network fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110674291A (en) | 2020-01-10 |
Family
ID=69075448
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910776137.9A | CN110674291A (en), withdrawn | 2019-08-22 | 2019-08-22 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110674291A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112632217A (en) * | 2020-12-10 | 2021-04-09 | 国网江苏省电力有限公司电力科学研究院 | Patent system dividing method, device and storage medium suitable for power industry |
CN113033212A (en) * | 2021-03-31 | 2021-06-25 | 中国邮政储蓄银行股份有限公司 | Text data processing method and device |
CN116069760A (en) * | 2023-01-09 | 2023-05-05 | 青岛中投创新技术转移有限公司 | Patent management data processing system, device and method |
- 2019-08-22: CN application CN201910776137.9A published as CN110674291A (en), status not active (withdrawn)
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112632217A (en) * | 2020-12-10 | 2021-04-09 | 国网江苏省电力有限公司电力科学研究院 | Patent system dividing method, device and storage medium suitable for power industry |
CN112632217B (en) * | 2020-12-10 | 2022-12-20 | 国网江苏省电力有限公司电力科学研究院 | Patent system dividing method, device and storage medium suitable for power industry |
CN113033212A (en) * | 2021-03-31 | 2021-06-25 | 中国邮政储蓄银行股份有限公司 | Text data processing method and device |
CN113033212B (en) * | 2021-03-31 | 2024-04-30 | 中国邮政储蓄银行股份有限公司 | Text data processing method and device |
CN116069760A (en) * | 2023-01-09 | 2023-05-05 | 青岛中投创新技术转移有限公司 | Patent management data processing system, device and method |
CN116069760B (en) * | 2023-01-09 | 2023-12-15 | 青岛华慧泽知识产权代理有限公司 | Patent management data processing system, device and method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Xing et al. | SelfMatch: Robust semisupervised time‐series classification with self‐distillation | |
US20210216723A1 (en) | Classification model training method, classification method, device, and medium | |
CN110334705B (en) | Language identification method of scene text image combining global and local information | |
Cao et al. | Cross-modal hamming hashing | |
CN112749274B (en) | Chinese text classification method based on attention mechanism and interference word deletion | |
CN110826337A (en) | Short text semantic training model obtaining method and similarity matching algorithm | |
CN111709242B (en) | Chinese punctuation mark adding method based on named entity recognition | |
CN109471946B (en) | Chinese text classification method and system | |
CN110175221B (en) | Junk short message identification method by combining word vector with machine learning | |
CN106294593A (en) | In conjunction with subordinate clause level remote supervisory and the Relation extraction method of semi-supervised integrated study | |
CN112732916A (en) | BERT-based multi-feature fusion fuzzy text classification model | |
CN106383877A (en) | On-line short text clustering and topic detection method of social media | |
CN113806547B (en) | Deep learning multi-label text classification method based on graph model | |
CN113806746A (en) | Malicious code detection method based on improved CNN network | |
CN110674291A (en) | Chinese patent text effect category classification method based on multivariate neural network fusion | |
CN114092742B (en) | Multi-angle-based small sample image classification device and method | |
CN111859936B (en) | Cross-domain establishment oriented legal document professional jurisdiction identification method based on deep hybrid network | |
CN112749556B (en) | Multi-language model training method and device, storage medium and electronic equipment | |
CN111061837A (en) | Topic identification method, device, equipment and medium | |
CN111651566B (en) | Multi-task small sample learning-based referee document dispute focus extraction method | |
CN113051914A (en) | Enterprise hidden label extraction method and device based on multi-feature dynamic portrait | |
CN115114409B (en) | Civil aviation unsafe event combined extraction method based on soft parameter sharing | |
CN114417851B (en) | Emotion analysis method based on keyword weighted information | |
CN110705272A (en) | Named entity identification method for automobile engine fault diagnosis | |
Wu et al. | Document layout analysis via dynamic residual feature fusion |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
2020-01-10 | WW01 | Invention patent application withdrawn after publication | Application publication date: 20200110 |