CN110362807A - Variant word recognition method and system based on self-encoding encoder - Google Patents
- Publication number
- CN110362807A (application CN201810252275.2A)
- Authority
- CN
- China
- Prior art keywords
- self
- variant word
- coding unit
- encoding encoder
- word
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/284—Lexical analysis, e.g. tokenisation or collocates
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/061—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using biological neurons, e.g. biological neurons connected to an integrated circuit
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- General Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Neurology (AREA)
- Microelectronics & Electronic Packaging (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Machine Translation (AREA)
Abstract
The present invention provides a variant word recognition method based on an autoencoder. Its steps include: taking a corpus related to variant words as a data set, segmenting it into words and vectorizing them; generating batches of samples from the data set and feeding them into the individual coding units of the autoencoder for unsupervised pre-training, which yields the neuron parameters of each coding unit's network; generating batches of positive and negative samples from the data set and feeding them into the autoencoder, initialized with the pre-trained coding-unit parameters, for supervised training, which yields the neuron parameters of the entire network; and vectorizing a known variant word contained in a document together with its context, feeding it into the autoencoder with the trained parameters of the entire network, and identifying candidate words associated with that variant word. The present invention also provides a variant word recognition system based on an autoencoder.
Description
Technical field
The present invention relates to the field of artificial-intelligence text analysis, and in particular to a variant word recognition method and system based on an autoencoder.
Background art
Variant words are a striking feature of internet language as a form of non-standard language. To evade censorship, express emotion, satirize, or entertain, people often replace relatively formal, standard, or sensitive words with non-standard, insensitive ones; a new word coined to replace an original word is called a variant word (morph). A variant word and its original word (the target entity word) coexist, mainly in non-standard text and standard text respectively, and variant words can even seep into standard text. Variant words make text more vivid and help related events and messages spread more widely. But because a variant word is usually a metaphor whose meaning is no longer that of its surface form, the style of online text differs greatly from that of standard documents such as news. Identifying these variant words and the target entity words they correspond to is therefore of great importance to downstream natural language processing.
Deep learning is a branch of machine learning that attempts to abstract data at a high level using multiple processing layers with complex structures, or composed of multiple non-linear transformations. Its benefit is that unsupervised or semi-supervised feature learning and hierarchical feature extraction replace laborious hand-crafted features, which is why it is widely applied across the fields of artificial intelligence.
At present there are two main kinds of methods for recognizing variant words:
1. Rule-based recognition and normalization, which uses exact matching, classifiers and similar means to construct rules for recognizing variant words. Typical techniques include converting certain special characters into visually similar letters and then detecting them; replacing keywords with homophones or with pinyin, or splitting keywords; computing the similarity of abusive text variants from phonetic and glyph similarity; and building a phonetic mapping model of Chinese characters from a standard Chinese corpus, extending the source/channel model (eXtended Source Channel Model, XSCM), and then substituting words based on pinyin-character similarity.
2. Recognition and normalization based on statistics plus rules, i.e. first extracting statistical and rule-based features, then normalizing Chinese non-standard words by classification. Typical technical solutions include normalizing Chinese non-standard words by classification, text normalization based on hidden Markov models, and building normalization dictionaries for text normalization tasks. The rule-driven features extracted include the Levenshtein distance between the pinyin of the two words, the number of differing characters between their pinyin, and whether the non-standard word is a pinyin abbreviation of a standard word.
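One of the features named above, the Levenshtein distance, can be made concrete with a short sketch. The pure-Python function below is illustrative only (the patent supplies no code); applying it to pinyin strings is the intended use.

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings, e.g. between the pinyin of a
    non-standard word and that of a standard word."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

# e.g. comparing the pinyin of two hypothetical word forms
assert levenshtein("shengao", "shenggao") == 1
```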
The rule-based recognition and normalization methods above are limited by hand-defined rules; they are inefficient and narrowly applicable. The methods based on statistics plus rules, although they use statistical learning, still rely on a large amount of manual feature engineering; their effectiveness is limited and their flexibility low. Specifically, because variant words are inherently "anti-rule" and "ironic", rules change far faster than they can be analyzed by hand. In addition, many variant words are derived from the deep semantics of the target word, and rule-based or statistical processing can hardly handle their normalization. The statistics-plus-rules methods are still essentially rule-driven: they require extensive manual feature engineering, their effectiveness is limited, their flexibility and robustness are poor, and in the long run the burden of maintenance and upgrading is enormous.
Summary of the invention
The purpose of the present invention is to propose a variant word recognition method and system based on an autoencoder. The method first performs unsupervised pre-training of the autoencoder and then supervised training, so that the required features can be extracted automatically from a large corpus with very little manual intervention and then used for discrimination. This solves the problem caused by scarce corpora and makes a neural network usable for the variant word recognition task.
To achieve the above objective, the present invention adopts the following technical scheme:
A variant word recognition method based on an autoencoder, whose steps include:
taking a corpus related to variant words as a data set, segmenting it to obtain lexical items, and vectorizing the lexical items;
generating batches of samples from the data set, each sample comprising a vectorized lexical item and its context;
feeding the batch samples into the individual neural-network coding units of the autoencoder for unsupervised pre-training, obtaining the neuron parameters of each coding unit's network;
generating batches of positive and negative samples from the data set, a positive sample comprising a vectorized variant word/target word pair and its context, a negative sample comprising a vectorized random pair of lexical items and its context;
feeding the batch positive and negative samples into the autoencoder initialized with the pre-trained coding-unit parameters for supervised training, obtaining the neuron parameters of the entire network;
vectorizing a known variant word contained in a document together with its context, feeding it into the autoencoder with the trained parameters of the entire network, and identifying candidate words associated with that known variant word.
Further, the segmentation tools used include jieba and NLPIR.
Further, the vectorization methods used include Word2Vec and WordRank.
Further, before pre-training, the neuron parameters of each coding unit's network are initialized with random values drawn from a standard Gaussian distribution.
Further, during pre-training, the loss function is a function of the difference between each coding unit's reconstruction layer and input layer; the first coding unit is trained first, and the remaining coding units are then trained in turn by gradient descent.
Further, the label of a positive sample is "associated" and the label of a negative sample is "not associated".
Further, the identified candidate words are ranked by the magnitude of their association probability with the known variant word.
A variant word recognition system based on an autoencoder, comprising a storage module for storing the variant-word-related corpus data set and an autoencoder for recognizing variant words; the autoencoder is based on a neural network and comprises:
two coding modules, each consisting of multiple coding units in series, for processing vectorized lexical items and their contexts into compressed encodings;
a supervised training module, comprising a softmax unit, for receiving the compressed encodings of the two coding modules and recognizing them, outputting the variant word recognition result through the softmax unit.
Further, each coding unit comprises an input layer, a coding layer and a reconstruction layer; the input layer takes as input the long vector to be encoded and the context vector, the coding layer compresses them into a short vector, and the reconstruction layer rebuilds the long vector and context vector from the short vector.
Further, each coding module comprises 3 to 10 layers of coding units; starting from the first layer, the dimensionality decreases layer by layer, the first layer being 100 to 100,000 dimensions and the last layer 10 to 100 dimensions.
Further, the neural network used by the supervised training module has 1 to 100 layers, with a dimensionality of 10 to 1,000.
Traditional statistical learning or rule-based methods, first, need a great deal of feature engineering and heavy manual intervention; second, they lack generality and flexibility and are insufficient for recognizing variant words across a wide range of network text. A neural-network-based model can automatically extract the required features from a large corpus with very little manual intervention and then perform discrimination. But the effectiveness of a neural network depends mainly on the scale of its data, and the scarcity of variant-word-related corpora means a neural network cannot be applied to this task directly. The present method uses an autoencoder that is first pre-trained without supervision and then trained with supervision, solving the problem caused by scarce corpora so that a neural network can be used for the variant word recognition task: a smaller corpus suffices to achieve variant word recognition.
Brief description of the drawings
Fig. 1 is a flowchart of the variant word recognition method based on an autoencoder according to the invention.
Fig. 2 is a schematic diagram of the structure of a coding module.
Specific embodiment
To enable features described above and advantage of the invention to be clearer and more comprehensible, special embodiment below, and institute's attached drawing is cooperated to make
Detailed description are as follows.
This embodiment provides a variant word recognition method based on an autoencoder; as shown in Fig. 1, the steps include:
1. Collect a related corpus and construct a data set.
Training the autoencoder requires a data set, so a corpus related to variant words must be collected in advance. The corpus can be drawn from social networks such as Weibo and Twitter; after collection it is stored in a database and indexed.
2. Implement the autoencoder model.
Following the modular design of the autoencoder, implement the model on a neural network platform or deep learning framework such as TensorFlow, Theano or Caffe. The concrete model is as follows:
1) Coding unit.
The coding unit is the basic building block of the whole autoencoder model; the entire model is composed of multiple layers of coding units. A coding unit comprises an input layer, a coding layer and a reconstruction layer. The input layer holds the long vector to be encoded and the context vector; the coding layer holds the short vector obtained by compressing the encoding; the reconstruction layer rebuilds the long vector and the context vector from the encoded short vector. The dimensionality of the input layer varies with the unit's level in the model; the coding layer's dimensionality is reduced relative to the input layer's, and the reconstruction layer's dimensionality equals the input layer's. The layers are fully connected and activated with functions such as ReLU, sigmoid or tanh. The input of a coding unit is its input layer, and its output is its coding layer.
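The unit just described can be sketched as follows. This is an illustrative NumPy implementation under assumed dimensions (a 300-dimensional input, a 100-dimensional coding layer), not the patent's own code.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

class CodingUnit:
    """One coding unit: input layer -> coding layer -> reconstruction
    layer, fully connected; the reconstruction layer has the input
    layer's dimensionality."""
    def __init__(self, in_dim, code_dim):
        # neuron parameters, randomly initialised (scaled standard Gaussian)
        self.W_enc = rng.standard_normal((in_dim, code_dim)) * 0.1
        self.b_enc = np.zeros(code_dim)
        self.W_dec = rng.standard_normal((code_dim, in_dim)) * 0.1
        self.b_dec = np.zeros(in_dim)

    def encode(self, x):
        # coding layer: the unit's output, a compressed short vector
        return relu(x @ self.W_enc + self.b_enc)

    def reconstruct(self, x):
        # reconstruction layer: rebuilds the input from the short vector
        return self.encode(x) @ self.W_dec + self.b_dec

    def loss(self, x):
        # quadratic difference between reconstruction layer and input layer
        return float(np.mean((self.reconstruct(x) - x) ** 2))

# a batch of 8 concatenated word+context vectors; dimensions illustrative
unit = CodingUnit(in_dim=300, code_dim=100)
x = rng.standard_normal((8, 300))
code = unit.encode(x)
assert code.shape == (8, 100) and unit.reconstruct(x).shape == x.shape
```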
2) Coding module.
A coding module is composed of multiple coding units in series: the input of each coding unit is the output of the previous one, the input of the first unit is the input of the entire model, namely the vectorized lexical item and its context, and the output of the last unit is the output of the coding module.
The number of coding-unit layers can range from 3 to 10 according to circumstances. The dimensionality of each unit's coding layer decreases layer by layer: depending on the input dimensionality of the whole model, it can start from 100 to 100,000 dimensions and shrink step by step to a final encoded output of 10 to 100 dimensions.
3) Supervised training module.
So that the vectors encoded by the autoencoder can better serve the actual task, namely associating a variant word with its target word, two coding modules are established. From the two vectorized lexical items, the variant word and a candidate target word, together with their contexts, they output the compressed encodings of the two items. The two compressed encodings are fed into the supervised training module, whose softmax unit produces the classification result of the task, i.e. the final output. The multilayer fully connected network used by the supervised training module can have 1 to 100 layers of 10 to 1,000 dimensions, according to circumstances.
3. Train the autoencoder on the data set.
Before the autoencoder model can be used, it must be trained. The model is trained with the data set collected in step 1; first the data set and the model are initialized. The corpus in the data set needs preprocessing such as word segmentation, for which common tools include jieba and NLPIR. Because the corpus consists of lexical items while the model's input is vectors, the lexical items must first be vectorized, for example with Word2Vec or WordRank.
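A sketch of this preprocessing step follows. The specification names jieba/NLPIR for segmentation and Word2Vec/WordRank for vectorization; to stay self-contained, the sketch substitutes a toy whitespace tokenizer and a random embedding table for those libraries, so every name below is a stand-in rather than a real API.

```python
import numpy as np

rng = np.random.default_rng(1)
embedding_dim = 50          # illustrative vector size
embeddings = {}

def tokenize(sentence):
    # stand-in for jieba/NLPIR segmentation of Chinese text
    return sentence.split()

def vectorize(token):
    # stand-in for a trained Word2Vec lookup: one fixed random
    # vector per lexical item
    if token not in embeddings:
        embeddings[token] = rng.standard_normal(embedding_dim)
    return embeddings[token]

tokens = tokenize("the variant word and its context")
vectors = [vectorize(t) for t in tokens]
assert len(vectors) == len(tokens)
assert vectors[0].shape == (embedding_dim,)
```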
1) The first training step is pre-training.
Pre-training does not use the supervised training module; only the coding modules are trained, with an unsupervised method. Concretely: first initialize the neuron parameters of each coding unit's network with random values drawn from a standard Gaussian distribution. Then generate batches of samples from the data set, each sample comprising a vectorized lexical item and its context, and feed them into the coding module. The first coding unit is trained first: its input layer is the vectorized lexical item and its context, and the loss function is a function of the difference between the reconstruction layer and the input layer, such as a quadratic loss or a cross-entropy loss. Training uses gradient descent. The remaining coding units are then trained in turn, each unit's input layer receiving the previous unit's coding layer. This finally yields the neuron parameters of every unit network in the coding module, completing pre-training.
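The greedy layer-by-layer procedure can be sketched in NumPy as follows. The layer dimensions, learning rate, tanh activation and quadratic loss are illustrative choices within the ranges the text allows, not the patent's prescribed values.

```python
import numpy as np

rng = np.random.default_rng(0)

class Unit:
    """One coding unit trained on its own reconstruction loss."""
    def __init__(self, in_dim, code_dim):
        # standard-Gaussian random initialisation, as the text prescribes
        self.We = rng.standard_normal((in_dim, code_dim)) * 0.05
        self.Wd = rng.standard_normal((code_dim, in_dim)) * 0.05

    def encode(self, x):
        return np.tanh(x @ self.We)

    def train_step(self, x, lr=0.01):
        h = np.tanh(x @ self.We)
        x_hat = h @ self.Wd                 # reconstruction layer
        err = x_hat - x                     # difference from input layer
        # gradient descent on the quadratic reconstruction loss
        gWd = h.T @ err / len(x)
        gh = err @ self.Wd.T * (1.0 - h * h)
        gWe = x.T @ gh / len(x)
        self.Wd -= lr * gWd
        self.We -= lr * gWe
        return float(np.mean(err ** 2))

# train the first unit first, then feed its coding layer to the next
# unit, and so on; dimensions here are illustrative, not the patent's
dims = [64, 32, 16]
units = [Unit(dims[i], dims[i + 1]) for i in range(len(dims) - 1)]
x = rng.standard_normal((128, dims[0]))    # batch of vectorized samples
for unit in units:
    losses = [unit.train_step(x) for _ in range(200)]
    assert losses[-1] < losses[0]          # reconstruction improves
    x = unit.encode(x)                     # next unit's input layer
```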
2) The second training step is training the supervised module.
After pre-training has produced the coding-module parameters, the coding modules and the supervised training module are trained as a whole on the actual task data. Batches of samples are generated from the corpus; each sample comprises a vectorized pair of lexical items, i.e. a variant word and a candidate target word, their contexts, and a label. Positive samples use known variant word and target word pairs, labeled "associated"; negative samples use randomly sampled word pairs, labeled "not associated". The sample batches are fed into the autoencoder model and trained by gradient descent, finally yielding every neuron parameter of the entire model and completing supervised training.
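A toy sketch of the supervised step follows. It stands in for the real compressed encodings with random vectors, uses a single linear softmax layer (the text allows 1 to 100 layers), and labels pairs by a made-up linearly separable rule, so the data, dimensions and labels are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

code_dim = 32                              # illustrative encoding size
n = 256
# stand-in for the two coding modules' compressed encodings of the
# (variant word, candidate target word) pair, concatenated
pairs = rng.standard_normal((n, 2 * code_dim))
# labels: 1 = "associated" (positive sample), 0 = "not associated";
# defined here by a made-up linear rule so the toy data is separable
labels = (pairs[:, 0] + pairs[:, code_dim] > 0).astype(int)

W = np.zeros((2 * code_dim, 2))            # single-layer softmax module
lr = 0.5
for _ in range(300):
    p = softmax(pairs @ W)                 # association probabilities
    onehot = np.eye(2)[labels]
    W -= lr * pairs.T @ (p - onehot) / n   # cross-entropy gradient step

acc = float(((pairs @ W).argmax(axis=1) == labels).mean())
# accuracy is high on this separable toy data
```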
4. Recognize new data with the autoencoder.
Once training is complete, the working model can recognize new data. For a document containing a given variant word, the variant word and its context are first vectorized and fed into the model, and the lexical items with the highest association probabilities are found as the recognition result. For the two lexical items, the variant word and a candidate target word, the model's final softmax layer yields their association probability after they are fed in; ranking by the magnitude of the association probability produces the candidate target word items, completing the recognition task.
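The final step amounts to sorting candidates by their softmax association probability and keeping the top few; a minimal sketch with made-up scores and hypothetical candidate names:

```python
def rank_candidates(assoc_prob, k=3):
    """Sort candidate target words by association probability,
    descending, and keep the top k as the recognition result."""
    return sorted(assoc_prob, key=assoc_prob.get, reverse=True)[:k]

# made-up softmax outputs for four hypothetical candidate words
scores = {"cand_a": 0.91, "cand_b": 0.40, "cand_c": 0.73, "cand_d": 0.05}
top = rank_candidates(scores)
assert top == ["cand_a", "cand_c", "cand_b"]
```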
The above embodiments merely illustrate the technical solution of the present invention and do not limit it. Those of ordinary skill in the art may modify the technical solution of the present invention or replace it with equivalents without departing from the spirit and scope of the present invention; the scope of protection of the present invention shall be determined by the claims.
Claims (10)
1. A variant word recognition method based on an autoencoder, whose steps include:
taking a corpus related to variant words as a data set, segmenting it to obtain lexical items, and vectorizing the lexical items;
generating batches of samples from the data set, each sample comprising a vectorized lexical item and its context;
feeding the batch samples into the individual neural-network coding units of the autoencoder for unsupervised pre-training, obtaining the neuron parameters of each coding unit's network;
generating batches of positive and negative samples from the data set, a positive sample comprising a vectorized variant word/target word pair and its context, a negative sample comprising a vectorized random pair of lexical items and its context;
feeding the batch positive and negative samples into the autoencoder initialized with the pre-trained coding-unit parameters for supervised training, obtaining the neuron parameters of the entire network;
vectorizing a known variant word contained in a document together with its context, feeding it into the autoencoder with the trained parameters of the entire network, and identifying candidate words associated with that known variant word.
2. The method according to claim 1, wherein the segmentation tools used include jieba and NLPIR, and the vectorization methods used include Word2Vec and WordRank.
3. The method according to claim 1, wherein before pre-training the neuron parameters of each coding unit's network are initialized with random values drawn from a standard Gaussian distribution.
4. The method according to claim 1, wherein during pre-training the loss function is a function of the difference between each coding unit's reconstruction layer and input layer, the first coding unit is trained first, and the remaining coding units are then trained in turn by gradient descent.
5. The method according to claim 1, wherein the label of the positive sample is "associated" and the label of the negative sample is "not associated".
6. The method according to claim 1, wherein the identified candidate words are ranked by the magnitude of their association probability with the known variant word.
7. A variant word recognition system based on an autoencoder, comprising a storage module for storing the variant-word-related corpus data set and an autoencoder for recognizing variant words, the autoencoder being based on a neural network and comprising:
two coding modules, each consisting of multiple coding units in series, for processing vectorized lexical items and their contexts into compressed encodings;
a supervised training module, comprising a softmax unit, for receiving the compressed encodings of the two coding modules, recognizing them, and outputting the variant word recognition result through the softmax unit.
8. The system according to claim 7, wherein each coding unit comprises an input layer, a coding layer and a reconstruction layer; the input layer takes as input the long vector to be encoded and the context vector, the coding layer compresses them into a short vector, and the reconstruction layer rebuilds the long vector and context vector from the short vector.
9. The system according to claim 7, wherein each coding module comprises 3 to 10 layers of coding units; starting from the first layer, the dimensionality decreases layer by layer, the first layer being 100 to 100,000 dimensions and the last layer 10 to 100 dimensions.
10. The system according to claim 7, wherein the neural network used by the supervised training module has 1 to 100 layers, with a dimensionality of 10 to 1,000.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810252275.2A CN110362807A (en) | 2018-03-26 | 2018-03-26 | Variant word recognition method and system based on self-encoding encoder |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110362807A true CN110362807A (en) | 2019-10-22 |
Family
ID=68212164
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810252275.2A Pending CN110362807A (en) | 2018-03-26 | 2018-03-26 | Variant word recognition method and system based on self-encoding encoder |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110362807A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117312864A (en) * | 2023-11-30 | 2023-12-29 | 国家计算机网络与信息安全管理中心 | Training method and device for deformed word generation model based on multi-modal information |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2894317A1 (en) * | 2015-06-15 | 2016-12-15 | Deep Genomics Incorporated | Systems and methods for classifying, prioritizing and interpreting genetic variants and therapies using a deep neural network |
CN107133969A (en) * | 2017-05-02 | 2017-09-05 | 中国人民解放军火箭军工程大学 | A kind of mobile platform moving target detecting method based on background back projection |
CN107315734A (en) * | 2017-05-04 | 2017-11-03 | 中国科学院信息工程研究所 | A kind of method and system for becoming pronouns, general term for nouns, numerals and measure words standardization based on time window and semanteme |
CN107423371A (en) * | 2017-07-03 | 2017-12-01 | 湖北师范大学 | A kind of positive and negative class sensibility classification method of text |
- 2018-03-26 CN CN201810252275.2A patent/CN110362807A/en active Pending
Non-Patent Citations (4)
Title |
---|
AMIRI HADI et al.: "Learning text pair similarity with context-sensitive autoencoders", Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics *
施振辉 (SHI Zhenhui) et al.: "Research on variant word normalization based on joint character and word features" (基于字词联合的变体词规范化研究), 《计算机系统应用》 (Computer Systems & Applications) *
沙灜 (SHA Ying) et al.: "A survey of Chinese variant word recognition and normalization" (中文变体词的识别与规范化综述), 《信息安全学报》 (Journal of Cyber Security) *
胡新辰 (HU Xinchen): "Research on semantic relation classification based on LSTM" (基于LSTM的语义关系分类研究), 《中国优秀硕士学位论文全文数据库 信息科技辑》 (China Master's Theses Full-text Database, Information Science and Technology) *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110298037A (en) | The matched text recognition method of convolutional neural networks based on enhancing attention mechanism | |
CN109670177A (en) | One kind realizing the semantic normalized control method of medicine and control device based on LSTM | |
CN112883738A (en) | Medical entity relation extraction method based on neural network and self-attention mechanism | |
CN110442684A (en) | A kind of class case recommended method based on content of text | |
CN109858032A (en) | Merge more granularity sentences interaction natural language inference model of Attention mechanism | |
CN109697232A (en) | A kind of Chinese text sentiment analysis method based on deep learning | |
CN110413783B (en) | Attention mechanism-based judicial text classification method and system | |
CN108829818A (en) | A kind of file classification method | |
CN111144448A (en) | Video barrage emotion analysis method based on multi-scale attention convolutional coding network | |
CN104217226B (en) | Conversation activity recognition methods based on deep neural network Yu condition random field | |
CN108399230A (en) | A kind of Chinese financial and economic news file classification method based on convolutional neural networks | |
CN110083700A (en) | A kind of enterprise's public sentiment sensibility classification method and system based on convolutional neural networks | |
CN106778882B (en) | A kind of intelligent contract automatic classification method based on feedforward neural network | |
CN110287323B (en) | Target-oriented emotion classification method | |
CN110134946A (en) | A kind of machine reading understanding method for complex data | |
CN107679110A (en) | The method and device of knowledge mapping is improved with reference to text classification and picture attribute extraction | |
CN110688862A (en) | Mongolian-Chinese inter-translation method based on transfer learning | |
CN109857871A (en) | A kind of customer relationship discovery method based on social networks magnanimity context data | |
CN112883197B (en) | Knowledge graph construction method and system for closed switch equipment | |
CN107273358A (en) | A kind of end-to-end English structure of an article automatic analysis method based on pipe modes | |
CN110046356B (en) | Label-embedded microblog text emotion multi-label classification method | |
CN110188195A (en) | A kind of text intension recognizing method, device and equipment based on deep learning | |
CN111476036A (en) | Word embedding learning method based on Chinese word feature substrings | |
CN103049490B (en) | Between knowledge network node, attribute generates system and the method for generation | |
CN110263174A (en) | - subject categories the analysis method based on focus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20191022