CN112651243B - Abbreviated project name identification method based on integrated structured entity information and electronic device - Google Patents

Abbreviated project name identification method based on integrated structured entity information and electronic device

Info

Publication number
CN112651243B
Authority
CN
China
Prior art keywords
encoder
belonging
classification probability
entity
abbreviated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011481330.9A
Other languages
Chinese (zh)
Other versions
CN112651243A (en)
Inventor
王玉斌
柳厅文
薛梦鸽
李全刚
苏涛宇
崔诗尧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Information Engineering of CAS
Original Assignee
Institute of Information Engineering of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Information Engineering of CAS
Priority to CN202011481330.9A
Publication of CN112651243A
Application granted
Publication of CN112651243B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/279 Recognition of textual entities
    • G06F 40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G06F 40/295 Named entity recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/35 Clustering; Classification
    • G06F 16/355 Class or cluster creation or modification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/205 Parsing
    • G06F 40/216 Parsing using statistical methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/237 Lexical tools
    • G06F 40/242 Dictionaries

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Probability & Statistics with Applications (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses an abbreviated project name identification method based on integrating structured entity information, and an electronic device, comprising the following steps: acquiring knowledge base anchor text, a common project name dictionary and abbreviated project names, and training a pre-training encoder in combination with an entity boundary recognition module, a named entity extraction module and an abbreviated project name recognition module to obtain an abbreviated project name recognition model; inputting a test text into the abbreviated project name recognition model, and recognizing the abbreviated project names in the test text. The invention improves the recognition rate and recall rate of abbreviated project names in text.

Description

Abbreviated project name identification method based on integrated structured entity information and electronic device
Technical Field
The invention belongs to the field of natural language processing, and in particular relates to an abbreviated project name identification method based on integrating structured entity information, and an electronic device.
Background
The process by which humans understand natural language can be viewed as extracting information from natural language text and then summarizing it. This corresponds to the information extraction task in natural language processing, i.e., a text processing technology that extracts factual information of specified types, such as entities, relations and events, from natural language text and outputs it as structured data. Named entity recognition is a sub-problem in the information extraction field; it aims to identify named entities from unstructured input text and generally comprises identifying entity boundaries and determining entity types. Abbreviated project name recognition is a named entity recognition task in which the entity type is restricted to project names and abbreviations may occur within the entity sequence.
Existing named entity recognition methods are mainly based on deep learning, with a framework comprising a distributed representation input layer, a context encoding layer and a decoding layer. Because named entity recognition data requires expert annotation, the labeling workload is large and labeled data is hard to obtain, so researchers have proposed a series of deep learning methods that incorporate external knowledge into existing named entity recognition methods in order to maximize model performance on limited training data. For example, BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (BERT for short) incorporates a large amount of unlabeled text during pre-training, so that the model better understands the semantic and structural information present in natural language. A Unified MRC Framework for Named Entity Recognition (MRC-NER for short) incorporates the category description of each entity label, so that the model better understands the semantics of the current entity category.
However, in the abbreviated project name recognition scenario, the abbreviation phenomenon in the entity sequence means that the syntactic and semantic information the entity sequence can provide is incomplete, which makes boundary recognition of abbreviated project names considerably harder. Existing knowledge-incorporating deep learning methods such as BERT and MRC-NER cannot use external knowledge to strengthen boundary recognition, and therefore cannot be directly applied to the abbreviated project name recognition scenario.
Disclosure of Invention
In order to overcome the weakness of existing knowledge-incorporating deep learning named entity recognition methods on the boundary recognition subtask in the abbreviated project name recognition scenario, the invention provides an abbreviated project name recognition method and device based on integrating structured entity information, which improve the recognition rate and recall rate of abbreviated project names through entity boundary recognition, named entity extraction and abbreviated project name recognition.
The technical scheme adopted by the invention for solving the technical problems mainly comprises the following steps:
An abbreviated project name recognition method based on integrating structured entity information comprises the following steps:
1) Marking entity sequences in an encyclopedic knowledge base to obtain knowledge base anchor text, and constructing a common project name dictionary from collected common project names;
2) Inputting the knowledge base anchor text into a pre-training encoder and classifying the character strings it outputs, so that the classification probability of character strings belonging to an entity sequence is as high as possible and the classification probability of character strings not belonging to an entity sequence is as low as possible, thereby training the pre-training encoder and obtaining a first encoder;
3) Matching the knowledge base anchor text against the common project names in the common project name dictionary, inputting the matching result into the first encoder, and classifying the character strings it outputs, so that the classification probability of character strings belonging to a project name entity sequence is as high as possible and the classification probability of character strings not belonging to a project name entity sequence is as low as possible, thereby training the first encoder and obtaining a second encoder;
4) Inputting labeled abbreviated project name recognition data into the second encoder and classifying the character strings it outputs, so that the classification probability of character strings belonging to an abbreviated project name entity sequence is as high as possible and the classification probability of character strings not belonging to an abbreviated project name entity sequence is as low as possible, thereby training the second encoder and obtaining an abbreviated project name recognition model;
5) Inputting a test text into the abbreviated project name recognition model, and recognizing the abbreviated project names in the test text.
Further, the encyclopedic knowledge base includes: the Wikipedia knowledge base.
Further, the entity sequence is obtained by:
1) Constructing a regular expression;
2) Matching the text of the encyclopedic knowledge base with the regular expression to obtain the entity sequences.
Further, the format of the regular expression includes: r'\[{2}(.*?)\]{2}'.
Further, the pre-training encoder is a pre-training BERT model.
Further, by using the softmax classifier, the strings output by the pre-training encoder, the strings output by the first encoder, and the strings output by the second encoder are classified.
Further, the softmax classifier includes: a softmax start position classifier and a softmax end position classifier.
Further, the softmax start position classifier and the softmax end position classifier classify the strings output by the pre-training encoder by the following strategy:
1) The softmax start position classifier classifies the character strings output by the pre-training encoder, so that the classification probability of a start character belonging to an entity sequence is as high as possible and the classification probability of a start character not belonging to an entity sequence is as low as possible;
2) The softmax end position classifier classifies the character strings output by the pre-training encoder, so that the classification probability of an end character belonging to an entity sequence is as high as possible and the classification probability of an end character not belonging to an entity sequence is as low as possible.
Further, the softmax start position classifier and the softmax end position classifier classify the character string output by the first encoder by the following strategy:
1) The softmax start position classifier classifies the character strings output by the first encoder, so that the classification probability of a start character belonging to a project name entity sequence is as high as possible and the classification probability of a start character not belonging to a project name entity sequence is as low as possible;
2) The softmax end position classifier classifies the character strings output by the first encoder, so that the classification probability of an end character belonging to a project name entity sequence is as high as possible and the classification probability of an end character not belonging to a project name entity sequence is as low as possible.
Further, the softmax start position classifier and the softmax end position classifier classify the character string output by the second encoder by the following strategy:
1) The softmax start position classifier classifies the character strings output by the second encoder, so that the classification probability of a start character belonging to an abbreviated project name entity sequence is as high as possible and the classification probability of a start character not belonging to an abbreviated project name entity sequence is as low as possible;
2) The softmax end position classifier classifies the character strings output by the second encoder, so that the classification probability of an end character belonging to an abbreviated project name entity sequence is as high as possible and the classification probability of an end character not belonging to an abbreviated project name entity sequence is as low as possible.
A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the method described above when run.
An electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the method described above.
Compared with the prior art, the method improves the recognition rate and recall rate of abbreviated project names in text through entity boundary recognition, named entity extraction and abbreviated project name recognition.
Drawings
FIG. 1 is a flow chart of the abbreviated project name recognition model training of the present invention.
Fig. 2 is a schematic diagram of a structured information collection module.
Fig. 3 is a schematic diagram of an entity boundary identification module.
FIG. 4 is a schematic diagram of the named entity extraction module.
FIG. 5 is a schematic diagram of the abbreviated project name recognition module.
Detailed Description
The present invention will be further described with reference to the following examples, in order to make the above objects, features and advantages of the present invention more comprehensible.
As shown in FIG. 1, the training process of the abbreviated project name recognition model of the present invention is composed of four modules:
Structured information collection module: collects knowledge base anchor texts and common project names, and forms the common project name dictionary.
Entity boundary identification module: uses the knowledge base anchor texts to warm up the entity boundary recognition algorithm.
Named entity extraction module: uses the common project name dictionary to pre-train the named entity extraction algorithm.
Abbreviated project name recognition module: trains the algorithm on expert-labeled abbreviated project name recognition data and packages it as the abbreviated project name recognition model.
First part, structured information collection module (FIG. 2)
The structured information collection module of the invention mainly comprises the following two steps:
a) Collect knowledge base anchor text. Download and decompress the packaged Wikipedia knowledge base, and mark each anchor text as an entity sequence. An "anchor text" here is a string of the form "[[arbitrary string]]", so the invention designs the regular expression r'\[{2}(.*?)\]{2}' to match anchor texts in the knowledge base text (a small sketch of this matching is given after this list).
b) Collect common project names. Crawl designated web pages containing project names, then manually screen and denoise the results to obtain the common project name dictionary.
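As an illustration only, the anchor-text marking of step a) can be sketched in Python; only the double-bracket regular expression comes from the description above, while the helper name and the example string are hypothetical:

```python
import re

# Regular expression from the description: an anchor text is any string
# wrapped in double square brackets, i.e. "[[arbitrary string]]".
ANCHOR_RE = re.compile(r'\[{2}(.*?)\]{2}')

def mark_anchor_texts(wiki_text):
    """Strip the brackets and return the plain text together with the
    (start, end) character spans of anchor texts, treated as entity sequences."""
    plain_parts, spans, cursor = [], [], 0
    for m in ANCHOR_RE.finditer(wiki_text):
        plain_parts.append(wiki_text[cursor:m.start()])
        start = sum(len(p) for p in plain_parts)
        plain_parts.append(m.group(1))          # keep only the anchor string itself
        spans.append((start, start + len(m.group(1))))
        cursor = m.end()
    plain_parts.append(wiki_text[cursor:])
    return "".join(plain_parts), spans

text, entity_spans = mark_anchor_texts("The system was built for the [[XX project]] in 2019.")
# text == "The system was built for the XX project in 2019."
# entity_spans == [(29, 39)], marking "XX project" as an entity sequence.
```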
Second part, entity boundary identification module (FIG. 3)
Knowledge base anchor text labeled with entity sequences is used as the input of this module, together with the query "identify entities in the text". The two segments are concatenated and fed into the entity boundary identification module. The main algorithms of the module are a pre-training encoder, which is a pre-trained BERT model, and a softmax classifier consisting of a "start position classifier" and an "end position classifier". The encodings produced by the encoder are classified by the softmax start position classifier, whose objective is to make the classification probability of a start character belonging to an entity sequence as high as possible and the classification probability of a start character not belonging to an entity sequence as low as possible. The encodings are likewise classified by the softmax end position classifier, whose objective is to make the classification probability of an end character belonging to an entity sequence as high as possible and the classification probability of an end character not belonging to an entity sequence as low as possible. Finally, the encoder obtained by the entity boundary identification module is passed on to the third part.
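For concreteness, the encoder-plus-classifier architecture described above could look roughly like the following PyTorch/transformers sketch; the checkpoint name, the class name and the 0/1 label convention (1 = boundary character) are assumptions not specified in the patent:

```python
import torch.nn as nn
from transformers import BertModel

class BoundaryModel(nn.Module):
    """Pre-training encoder (a pre-trained BERT model) with two token-level
    softmax heads: a start position classifier and an end position classifier."""
    def __init__(self, checkpoint="bert-base-chinese"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(checkpoint)
        hidden = self.encoder.config.hidden_size
        self.start_head = nn.Linear(hidden, 2)   # softmax start position classifier
        self.end_head = nn.Linear(hidden, 2)     # softmax end position classifier

    def forward(self, input_ids, attention_mask, start_labels=None, end_labels=None):
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state
        start_logits = self.start_head(h)
        end_logits = self.end_head(h)
        loss = None
        if start_labels is not None and end_labels is not None:
            # Cross-entropy pushes the probability of true boundary characters up
            # and the probability of non-boundary characters down.
            ce = nn.CrossEntropyLoss()
            loss = ce(start_logits.view(-1, 2), start_labels.view(-1)) + \
                   ce(end_logits.view(-1, 2), end_labels.view(-1))
        return loss, start_logits.softmax(-1), end_logits.softmax(-1)
```

In this sketch the query "identify entities in the text" and the anchor-text sentence would be concatenated into a single tokenized input, and after this warm-up stage the `encoder` weights are what is handed on to the third part.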
Third part, named entity extraction module (FIG. 4)
The knowledge base anchor text obtained in the first part is matched against the common project names; the character strings obtained by matching are labeled as project name entities and used as input, together with the query "identify the project names in the text". The two segments are concatenated and fed into the named entity extraction module. The main algorithms of the module are a pre-training encoder and a softmax classifier, where the pre-training encoder is initialized with the encoder obtained in the second part. The softmax classifier consists of a "start position classifier" and an "end position classifier". The encodings produced by the encoder are classified by the softmax start position classifier, whose objective is to make the classification probability of a start character belonging to a project name entity sequence as high as possible and the classification probability of a start character not belonging to a project name entity sequence as low as possible. The encodings are likewise classified by the softmax end position classifier, whose objective is to make the classification probability of an end character belonging to a project name entity sequence as high as possible and the classification probability of an end character not belonging to a project name entity sequence as low as possible. After each training iteration, the model re-predicts the inputs and the predictions are fed back into the model. The iteration stops when the model's recognition rate for project names peaks on the validation set, and the encoder from that round is passed on to the fourth part.
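A hedged sketch of the dictionary matching that produces the project-name labels for this module; the function name and the character-level 0/1 labeling scheme are assumptions, and overlap handling and the re-prediction loop are omitted:

```python
def distant_labels(text, project_dict):
    """Mark every occurrence of a common project name in `text`:
    1 at its start character and at its end character, 0 elsewhere."""
    start = [0] * len(text)
    end = [0] * len(text)
    for name in project_dict:
        pos = text.find(name)
        while pos != -1:
            start[pos] = 1
            end[pos + len(name) - 1] = 1
            pos = text.find(name, pos + 1)
    return start, end

s, e = distant_labels("The XX project was approved in 2020.", {"XX project"})
# s marks index 4 ("X") and e marks index 13 ("t") as project-name boundaries.
assert s.index(1) == 4 and e.index(1) == 13
```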
Fourth part, abbreviated project name recognition module (FIG. 5)
The expert-labeled abbreviated project name recognition data are input into the abbreviated project name recognition module. The main algorithms of the module are a pre-training encoder and a softmax classifier, where the pre-training encoder is initialized with the encoder obtained in the third part. The softmax classifier consists of a "start position classifier" and an "end position classifier". The encodings produced by the encoder are classified by the softmax start position classifier, whose objective is to make the classification probability of a start character belonging to an abbreviated project name entity sequence as high as possible and the classification probability of a start character not belonging to an abbreviated project name entity sequence as low as possible. The encodings are likewise classified by the softmax end position classifier, whose objective is to make the classification probability of an end character belonging to an abbreviated project name entity sequence as high as possible and the classification probability of an end character not belonging to an abbreviated project name entity sequence as low as possible. Finally, the encoder of the abbreviated project name recognition module is packaged as the abbreviated project name recognition model.
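The staged training of parts two to four could be wired together roughly as below; the optimizer, learning rate, data loader, batch keys and save path are assumptions, the intermediate training loops of parts two and three are omitted, and `BoundaryModel` refers to the sketch in the second part:

```python
import torch

# Each stage reuses the encoder trained by the previous one.
boundary_model = BoundaryModel()                         # part 2: entity boundaries
ner_model = BoundaryModel()                              # part 3: project names
ner_model.encoder.load_state_dict(boundary_model.encoder.state_dict())

abbrev_model = BoundaryModel()                           # part 4: abbreviated project names
abbrev_model.encoder.load_state_dict(ner_model.encoder.state_dict())

optimizer = torch.optim.AdamW(abbrev_model.parameters(), lr=2e-5)
for batch in expert_labeled_loader:                      # assumed DataLoader over expert labels
    loss, _, _ = abbrev_model(batch["input_ids"], batch["attention_mask"],
                              batch["start_labels"], batch["end_labels"])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Package the trained module as the abbreviated project name recognition model.
torch.save(abbrev_model.state_dict(), "abbrev_project_name_recognizer.pt")
```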
In practical application, only the test text needs to be input into the abbreviated project name recognition model to recognize the abbreviated project names it contains.
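A usage sketch for this inference step, assuming the `BoundaryModel` class above and a threshold-based span decoding rule (the tokenizer checkpoint and the 0.5 threshold are assumptions):

```python
import torch
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")

def recognize_abbreviated_names(model, text):
    """Return the abbreviated project name spans predicted in `text`."""
    enc = tokenizer(text, return_tensors="pt", return_offsets_mapping=True)
    offsets = enc.pop("offset_mapping")[0].tolist()
    with torch.no_grad():
        _, p_start, p_end = model(enc["input_ids"], enc["attention_mask"])
    starts = [i for i, p in enumerate(p_start[0]) if p[1] > 0.5]
    ends = [i for i, p in enumerate(p_end[0]) if p[1] > 0.5]
    spans = []
    for s in starts:
        e = next((e for e in ends if e >= s), None)   # pair with the nearest end position
        if e is not None:
            spans.append(text[offsets[s][0]:offsets[e][1]])
    return spans
```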
Experimental data
The abbreviated project name recognition model obtained by the invention reaches a recognition rate of 98.43% and a recall rate of 97.41% on project names. Compared with the common knowledge-incorporating deep learning methods BERT and MRC-NER, the abbreviated project name recognition model based on integrating structured entity information improves the recognition rate by 3.58% and 1.46% respectively, and the recall rate by 2.07% and 1.04% respectively.
The above examples are provided for the purpose of describing the present invention only and are not intended to limit the scope of the present invention. The scope of the invention is defined by the appended claims. Various equivalents and modifications that do not depart from the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (6)

1. An abbreviated project name recognition method based on integrating structured entity information, comprising the following steps:
1) Marking entity sequences in an encyclopedic knowledge base to obtain knowledge base anchor text, and constructing a common project name dictionary from collected common project names;
2) Inputting the knowledge base anchor text into a pre-training encoder and classifying the character strings it outputs, so that the classification probability of character strings belonging to an entity sequence is as high as possible and the classification probability of character strings not belonging to an entity sequence is as low as possible, thereby training the pre-training encoder and obtaining a first encoder;
3) Matching the knowledge base anchor text against the common project names in the common project name dictionary, inputting the matching result into the first encoder, and classifying the character strings it outputs, so that the classification probability of character strings belonging to a project name entity sequence is as high as possible and the classification probability of character strings not belonging to a project name entity sequence is as low as possible, thereby training the first encoder and obtaining a second encoder;
4) Inputting labeled abbreviated project name recognition data into the second encoder and classifying the character strings it outputs, so that the classification probability of character strings belonging to an abbreviated project name entity sequence is as high as possible and the classification probability of character strings not belonging to an abbreviated project name entity sequence is as low as possible, thereby training the second encoder and obtaining an abbreviated project name recognition model;
5) Inputting a test text into the abbreviated project name recognition model, and recognizing the abbreviated project names in the test text;
wherein a softmax classifier is used to classify the character strings output by the pre-training encoder, the character strings output by the first encoder and the character strings output by the second encoder; the softmax classifier comprises a softmax start position classifier and a softmax end position classifier;
the softmax start position classifier and the softmax end position classifier classify the strings output by the pre-training encoder by the following strategy:
the softmax start position classifier classifies the character strings output by the pre-training encoder, so that the classification probability of a start character belonging to an entity sequence is as high as possible and the classification probability of a start character not belonging to an entity sequence is as low as possible;
the softmax end position classifier classifies the character strings output by the pre-training encoder, so that the classification probability of an end character belonging to an entity sequence is as high as possible and the classification probability of an end character not belonging to an entity sequence is as low as possible;
the softmax start position classifier and the softmax end position classifier classify the character string output by the first encoder by the following strategy:
the softmax start position classifier classifies the character strings output by the first encoder, so that the classification probability of a start character belonging to a project name entity sequence is as high as possible and the classification probability of a start character not belonging to a project name entity sequence is as low as possible;
the softmax end position classifier classifies the character strings output by the first encoder, so that the classification probability of an end character belonging to a project name entity sequence is as high as possible and the classification probability of an end character not belonging to a project name entity sequence is as low as possible;
the softmax start position classifier and the softmax end position classifier classify the character string output by the second encoder by the following strategy:
the softmax start position classifier classifies the character strings output by the second encoder, so that the classification probability of a start character belonging to an abbreviated project name entity sequence is as high as possible and the classification probability of a start character not belonging to an abbreviated project name entity sequence is as low as possible;
the softmax end position classifier classifies the character strings output by the second encoder, so that the classification probability of an end character belonging to an abbreviated project name entity sequence is as high as possible and the classification probability of an end character not belonging to an abbreviated project name entity sequence is as low as possible.
2. The method of claim 1, wherein the entity sequence is obtained by:
1) Constructing a regular expression;
2) Matching the text of the encyclopedic knowledge base with the regular expression to obtain the entity sequences.
3. The method of claim 2, wherein the format of the regular expression comprises: r'\[{2}(.*?)\]{2}'.
4. The method of claim 1, wherein the pre-training encoder is a pre-training BERT model.
5. A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the method of any of claims 1-4 when run.
6. An electronic device comprising a memory, in which a computer program is stored, and a processor arranged to run the computer program to perform the method of any of claims 1-4.
CN202011481330.9A 2020-12-15 2020-12-15 Abbreviated project name identification method based on integrated structured entity information and electronic device Active CN112651243B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011481330.9A CN112651243B (en) 2020-12-15 2020-12-15 Abbreviated project name identification method based on integrated structured entity information and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011481330.9A CN112651243B (en) 2020-12-15 2020-12-15 Abbreviated project name identification method based on integrated structured entity information and electronic device

Publications (2)

Publication Number Publication Date
CN112651243A CN112651243A (en) 2021-04-13
CN112651243B (en) 2023-11-03

Family

ID=75354167

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011481330.9A Active CN112651243B (en) 2020-12-15 2020-12-15 Abbreviated project name identification method based on integrated structured entity information and electronic device

Country Status (1)

Country Link
CN (1) CN112651243B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11899786B2 (en) * 2019-04-15 2024-02-13 Crowdstrike, Inc. Detecting security-violation-associated event data

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10388274B1 (en) * 2016-03-31 2019-08-20 Amazon Technologies, Inc. Confidence checking for speech processing and query answering
CN111078875A (en) * 2019-12-03 2020-04-28 哈尔滨工程大学 Method for extracting question-answer pairs from semi-structured document based on machine learning
CN111626063A (en) * 2020-07-28 2020-09-04 浙江大学 Text intention identification method and system based on projection gradient descent and label smoothing
CN112000791A (en) * 2020-08-26 2020-11-27 哈电发电设备国家工程研究中心有限公司 Motor fault knowledge extraction system and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Chinese entity linking model based on CNN and deep semantic matching; Wu Xiaochong; Duan Yuexing; Zhang Yueqin; Yan Xiong; Computer Engineering & Science (08); pp. 187-193 *
Named entity recognition of patient safety events based on deep learning; Zhou Liangjie; Ma Jingdong; Chinese Journal of Medical Library and Information Science (06); pp. 5-10 *

Also Published As

Publication number Publication date
CN112651243A (en) 2021-04-13

Similar Documents

Publication Publication Date Title
CN113177124B (en) Method and system for constructing knowledge graph in vertical field
CN110334213B (en) Method for identifying time sequence relation of Hanyue news events based on bidirectional cross attention mechanism
CN113468888A (en) Entity relation joint extraction method and device based on neural network
CN109241330A (en) The method, apparatus, equipment and medium of key phrase in audio for identification
CN113946677B (en) Event identification and classification method based on bidirectional cyclic neural network and attention mechanism
CN112749562A (en) Named entity identification method, device, storage medium and electronic equipment
CN113191148A (en) Rail transit entity identification method based on semi-supervised learning and clustering
CN116416480B (en) Visual classification method and device based on multi-template prompt learning
CN116127090B (en) Aviation system knowledge graph construction method based on fusion and semi-supervision information extraction
CN116956929B (en) Multi-feature fusion named entity recognition method and device for bridge management text data
CN114881043B (en) Deep learning model-based legal document semantic similarity evaluation method and system
CN113051887A (en) Method, system and device for extracting announcement information elements
CN110866172B (en) Data analysis method for block chain system
CN116089610A (en) Label identification method and device based on industry knowledge
CN113901813A (en) Event extraction method based on topic features and implicit sentence structure
CN112148879B (en) Computer readable storage medium for automatically labeling code with data structure
CN112651243B (en) Abbreviated project name identification method based on integrated structured entity information and electronic device
CN115408506B (en) NL2SQL method combining semantic analysis and semantic component matching
CN116595023A (en) Address information updating method and device, electronic equipment and storage medium
CN116451691A (en) Small sample named entity identification method for entity hierarchy information enhanced prototype characterization
CN114595338A (en) Entity relation joint extraction system and method based on mixed feature representation
CN114677526A (en) Image classification method, device, equipment and medium
CN112015891A (en) Method and system for classifying messages of network inquiry platform based on deep neural network
CN110543560A (en) Long text classification and identification method, device and medium based on convolutional neural network
CN113313184B (en) Heterogeneous integrated self-bearing technology liability automatic detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant