CN110781290A - Extraction method of structured text abstract of long chapter - Google Patents


Info

Publication number
CN110781290A
CN110781290A
Authority
CN
China
Prior art keywords
sentence
abstract
text
paragraph
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910957415.0A
Other languages
Chinese (zh)
Inventor
杨理想
王云甘
周亚
黄家君
徐慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Xingyao Intelligent Technology Co.,Ltd.
Original Assignee
Nanjing Shixing Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Shixing Intelligent Technology Co Ltd filed Critical Nanjing Shixing Intelligent Technology Co Ltd
Priority to CN201910957415.0A priority Critical patent/CN110781290A/en
Publication of CN110781290A publication Critical patent/CN110781290A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/34Browsing; Visualisation therefor
    • G06F16/345Summarisation for human users


Abstract

According to the method for extracting a structured text abstract from a long chapter provided by the invention, a dynamic word embedding method obtains each word vector dynamically from the surrounding words, which solves the problem of ambiguous words in the text; discourse structure analysis divides the text into paragraphs according to the recognized relations between sentences, enabling the computer to understand the text from a global perspective; abstract extraction based on both a model and rules extracts an abstract from each paragraph on the basis of the chapter structure analysis, which avoids the direct truncation used in traditional long-text abstract extraction; and the problem of extracting abstracts from texts in multiple fields is solved.

Description

Extraction method of structured text abstract of long chapter
Technical Field
The invention belongs to the technical field of natural language processing, and particularly relates to a method for extracting a structured text abstract of a long chapter.
Background
At present, summarizing a long text generally involves word embedding, text abstract extraction, and chapter structure analysis. Word embedding converts the words in text data into numerical vectors that a machine can learn from. Traditional word embedding first applies one-hot coding to the words in the text, then feeds them into a Word2Vec model for learning, finally completing the mapping from text to numerical vectors.
Text abstract extraction is the process by which a machine, having learned text features, extracts the important sentences of a text as its abstract. It is in fact a binary classification problem: each sentence is classified as important or not, and the important sentences form the abstract. The current mainstream extraction methods are based on neural network models and consist of two parts, encoding and decoding. The encoding process is the process by which the machine learns text features, including sentence encoding, position encoding, article encoding and so on, using methods such as CNN, RNN and BERT; the decoding process is mainly a classification process, in which a classifier is trained on the output of the encoder and the given labels.
However, current text abstract extraction mainly has the following problems. (1) Existing abstract extraction models do not handle long texts well during encoding: the prior art mostly truncates the long text directly and encodes the truncated data, which loses a great deal of important information. Some techniques add a coded representation between paragraphs during encoding, but these have limitations, for example when the input text is not segmented or when adjacent paragraphs are unrelated. (2) The publicly available data for Chinese abstract extraction covers only a few fields and consists of short texts, which makes it poorly suited to training long-text abstract extraction for specialized fields.
The analysis of the structure of the chapters is used for identifying semantic relationships between different text blocks, so that the text can be understood from a global perspective, and further, automatic abstract extraction of the text can be further optimized. In the automatic abstract extraction system for the long text, the relations of cause and effect, turning and the like among sentences in the text are analyzed and identified for the chapter structure, and the primary and secondary relations are distinguished.
The open problems in current chapter structure analysis are: how can the chapter structure be analyzed when no discourse connectives are present, and how can chapter structure analysis be applied to the downstream task of automatic summarization? In view of this situation, many problems remain to be solved.
Disclosure of Invention
In order to solve the problems in the prior art of ambiguous words, of direct truncation without chapter structure analysis in long-text abstract extraction, and of abstract extraction from long texts in multiple fields, the invention provides a method for extracting a structured text abstract of a long chapter, comprising the following steps:
(1) Conversion into numerical information
Split the input long text into sentences according to punctuation, and use BERT Word Embedding dynamic word embedding to convert each sentence into a sentence vector matrix, i.e. numerical information that the computer can learn;
(2) analysis of chapter structure
Carrying out implicit discourse relation analysis on every two sentences, namely putting every two adjacent clauses into two bidirectional GRU models for processing, splicing hidden layer information of the two models, putting spliced results into a multilayer perceptron for classification to obtain predicted class probability, taking a class label with the highest probability as a corresponding label, and reasonably segmenting the long text according to the identified class of the label;
(3) abstract extraction
And (3) performing abstract extraction on each paragraph obtained in the step (2) according to two modes based on a model and a rule, wherein the final abstract result output is an output result fusing the two modes.
As an improvement, in the step (3), model-based abstract extraction inputs each paragraph into the model; the model encodes each sentence of the paragraph, i.e. learns its features, and then decodes the learned features, i.e. performs binary classification on each sentence, thereby completing the extraction of abstract sentences.
The encoding consists of a two-layer bidirectional GRU model. The first layer takes the sentence vector matrix as input; after processing by the forward and backward GRUs, the hidden-layer vectors of the two directions are concatenated and max pooling is applied, and the result serves as the input of the second layer. The hidden-layer information w_i of the first layer represents the position information of each word, where i denotes the i-th word in the sentence; the second layer operates in the same way as the first, and its concatenated hidden-layer information h_j represents each sentence of the paragraph, where j denotes the j-th sentence. The whole paragraph p is represented by formula (1):

p = tanh((1/N_p) · Σ_{j=1}^{N_p} (W_p·h_j + b))  (1)

where W_p and b denote the weight and bias of each sentence, N_p denotes the number of sentences in the paragraph, and i and j are positive integers 1, 2, 3, … .
As an improvement, the decoding layer further calculates, from the information obtained in the encoding process, the probability that a sentence in the text belongs to the abstract, expressed by formula (2):

P(y_j = 1 | h_j, s_j, p) = σ(W_1·h_j + h_j^T·W_2·p − h_j^T·W_3·s_j)  (2)

where y_j = 1 indicates that the j-th sentence of the paragraph is an abstract sentence, W_1, W_2 and W_3 are model parameters, and s_j is the dynamic abstract representation: a weighted sum of the hidden layers of the sentences already visited, each weight being the probability that the corresponding sentence belongs to the abstract, expressed by formula (3):

s_j = Σ_{n=1}^{j−1} P(y_n = 1)·h_n  (3)

In formula (3), n and j denote the n-th and j-th sentences of the paragraph, n and j are positive integers 1, 2, 3, …, and P(y_n = 1) denotes the probability that the visited sentence n belongs to the abstract, computed as in formula (2).
As an improvement, in the step (3), rule-based abstract extraction formulates corresponding rules according to the text characteristics of different fields, matches keywords and specific patterns characteristic of the field, recalls the sentences containing the matched keywords and the words around the specific patterns, and takes the recalled sentences as the rule-extracted abstract.
Has the advantages that: according to the method for extracting a structured text abstract from a long chapter provided by the invention, a dynamic word embedding method obtains each word vector dynamically from the surrounding words, which solves the problem of ambiguous words in the text; discourse structure analysis divides the text into paragraphs according to the recognized relations between sentences, enabling the computer to understand the text from a global perspective; model-based abstract extraction extracts an abstract from each paragraph on the basis of the chapter structure analysis, which avoids the direct truncation used in traditional long-text abstract extraction; and rule-based abstract extraction matches features and recalls abstract sentences according to the text characteristics of each field, which solves the problem of abstract extraction from multi-field texts.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a structural diagram of the analysis of the discourse structure of the present invention;
FIG. 3 is a diagram illustrating an abstract structure of the present invention.
Detailed Description
The invention is further described below with reference to the figures and embodiments. The invention provides a method for extracting a structured text abstract of a long chapter; a flow chart of the method is shown in FIG. 1. The specific implementation process is as follows:
First, the input long text is split into sentences according to punctuation marks, and each sentence is converted into a sentence vector matrix using BERT Word Embedding dynamic word embedding.
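The splitting step can be sketched in a few lines of Python; the function name and the exact punctuation set below are illustrative assumptions, since the text only says the split is done according to punctuation marks:

```python
import re

def split_sentences(text: str) -> list[str]:
    """Split raw text into sentences at terminal punctuation.

    The punctuation set (Chinese and ASCII full stops, question and
    exclamation marks, semicolons) is an assumption, not taken from
    the patent text.
    """
    # Split AFTER each terminal punctuation mark, keeping the mark.
    parts = re.split(r"(?<=[。！？；.!?;])", text)
    return [p.strip() for p in parts if p.strip()]

sentences = split_sentences("今天下雨。因此比赛取消了！明天再说.")
```

Each resulting sentence would then be passed through a BERT encoder to obtain its context-dependent (dynamic) vector matrix; that step is omitted here because it requires a pretrained model.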
Second, chapter structure analysis is performed on the text; the model structure of this part is shown in FIG. 2. Every two adjacent clauses are put into two bidirectional GRU models for processing, the hidden-layer information of the two models is concatenated, the concatenated result is put into a multilayer perceptron for classification to obtain predicted class probabilities, and the class label with the highest probability is taken as the label of the pair. The dataset adopted by the invention is the PDTB, and the relation labels studied are expansion (Expansion), temporal sequence (Temporal), comparison (Comparison), and cause and effect (Contingency). The final output of this part is the chapter structure analysis result for the input long text, according to which the text is reasonably segmented. The specific segmentation rule is: cut the text at sentence pairs with expansion and comparison relations, and do not cut at sentence pairs with temporal and causal relations.
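The segmentation rule stated above (cut at expansion and comparison relations, keep temporal and causal pairs together) can be sketched as follows; the relation classifier itself is not shown, and the function name and toy inputs are illustrative:

```python
def segment_paragraphs(sentences, relations):
    """Group sentences into paragraphs from predicted adjacent-pair relations.

    relations[k] is the predicted discourse label between sentences[k]
    and sentences[k+1]. Per the rule in the text: cut the paragraph at
    Expansion and Comparison relations; keep Temporal and Contingency
    pairs in the same paragraph. (The patent predicts these labels with
    two bidirectional GRUs plus a multilayer perceptron; that model is
    not reproduced here.)
    """
    assert len(relations) == len(sentences) - 1
    paragraphs, current = [], [sentences[0]]
    for sent, rel in zip(sentences[1:], relations):
        if rel in ("Expansion", "Comparison"):
            paragraphs.append(current)   # cut: start a new paragraph
            current = [sent]
        else:                            # "Temporal" or "Contingency"
            current.append(sent)
    paragraphs.append(current)
    return paragraphs

paras = segment_paragraphs(
    ["s1", "s2", "s3", "s4"],
    ["Contingency", "Expansion", "Temporal"],
)
# paras == [["s1", "s2"], ["s3", "s4"]]
```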
Next, abstract extraction is performed paragraph by paragraph on the segmented text; the structure of this part of the model is shown in FIG. 3. The vector matrix of each sentence in the paragraph is input into the first-layer bidirectional GRU model, the forward and backward hidden-layer information is concatenated and max-pooling is applied, and the result serves as the input of the next layer. The concatenated hidden-layer information w_i of the first layer represents the position information of each word, where i denotes the i-th word in the sentence; the second layer operates in the same way as the first, and its concatenated hidden-layer information h_j represents each sentence of the paragraph, where j denotes the j-th sentence. The paragraph is represented by passing the concatenated second-layer hidden-layer information h_j through a nonlinear activation function, formula (1):

p = tanh((1/N_p) · Σ_{j=1}^{N_p} (W_p·h_j + b))  (1)

where W_p and b represent the weight and bias of each sentence, which are learned model parameters, and N_p represents the number of sentences in the paragraph.
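Formula (1) can be checked numerically. The sketch below implements it in pure Python over toy two-dimensional sentence vectors; the concrete values of H, W and b are illustrative, not taken from the text:

```python
import math

def paragraph_repr(H, W, b):
    """p = tanh((1/N_p) * sum_j (W_p h_j + b)), i.e. formula (1).

    H: list of sentence hidden vectors h_j (each a list of floats)
    W: weight matrix W_p (rows x dim of h_j), b: bias vector.
    """
    n_p, dim = len(H), len(W)
    acc = [0.0] * dim
    for h in H:                          # sum over sentences j = 1..N_p
        for r in range(dim):
            acc[r] += sum(W[r][c] * h[c] for c in range(len(h))) + b[r]
    return [math.tanh(a / n_p) for a in acc]

H = [[1.0, 0.0], [0.0, 1.0]]   # two toy sentence vectors h_1, h_2
W = [[0.5, 0.5], [0.5, -0.5]]  # toy W_p
b = [0.0, 0.0]
p = paragraph_repr(H, W, b)    # p == [tanh(0.5), 0.0]
```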
The decoding process further calculates the probability that the sentence in the paragraph belongs to the abstract sentence according to the information obtained in the encoding process, and the formula (2) is as follows:
P(y_j = 1 | h_j, s_j, p) = σ(W_1·h_j + h_j^T·W_2·p − h_j^T·W_3·s_j)  (2)

where y_j = 1 indicates that the j-th sentence of the paragraph is an abstract sentence, W_1, W_2 and W_3 are model parameters, and s_j is the dynamic abstract representation: a weighted sum of the hidden layers of the sentences already visited, each weight being the probability that the corresponding sentence belongs to the abstract, expressed by formula (3):

s_j = Σ_{n=1}^{j−1} P(y_n = 1)·h_n  (3)

where n and j denote the n-th and j-th sentences of the paragraph, n and j are positive integers 1, 2, 3, …, and P(y_n = 1) denotes the probability that the visited sentence n belongs to the abstract, computed as in formula (2).
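Formulas (2) and (3) together define a sequential decode: for each sentence j, the dynamic abstract representation s_j is accumulated from the already-scored sentences, and then sentence j itself is scored. The sketch below uses scalars in place of the vector and matrix quantities, and all numeric values are illustrative:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decode(h, p, w1, w2, w3):
    """Sequentially score sentences with formulas (2) and (3).

    h: list of (scalar) sentence encodings h_j; p: paragraph encoding.
    s_j = sum_{n<j} P(y_n = 1) * h_n                        -- formula (3)
    P(y_j = 1) = sigmoid(w1*h_j + h_j*w2*p - h_j*w3*s_j)    -- formula (2)
    Scalars stand in for the vector/matrix quantities of the text.
    """
    probs, s = [], 0.0
    for hj in h:
        pj = sigmoid(w1 * hj + hj * w2 * p - hj * w3 * s)
        probs.append(pj)
        s += pj * hj   # accumulate the dynamic abstract representation
    return probs

probs = decode(h=[1.0, 0.5, 2.0], p=1.0, w1=0.2, w2=0.3, w3=0.1)
```

Note that the third term penalizes sentences similar to content already scored as abstract-worthy, which discourages redundant picks.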
The method is completed on the basis of chapter structure analysis, and compared with the traditional method for directly extracting the abstract of the long text, the method provided by the invention can be used for extracting the abstract of the text from global and local angles, so that the extraction precision of the abstract of the long text is improved.
The loss function during training of the abstract extraction model is the cross-entropy function, and the optimizer is the Adam optimization function. The final output of this part is the sentences predicted by the model to be the abstract. The method of the invention finally applies rule-based abstract extraction, formulating corresponding rules according to the text characteristics of different fields. First, keywords and specific patterns characteristic of the field are matched; then the matched keywords and the words around the specific patterns are recalled; finally, the recalled sentences are taken as the rule-extracted abstract. The invention takes the fusion of the model-extracted and rule-extracted abstracts as the final abstract extraction result.
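The rule-based branch and the fusion step can be sketched as follows; the domain name, keyword patterns and union-style fusion are illustrative assumptions, since the text does not list concrete rules or a fusion formula:

```python
import re

# Hypothetical domain rules: keyword patterns that force-recall a
# sentence into the abstract (the text does not list concrete rules).
DOMAIN_PATTERNS = {
    "finance": [r"[Nn]et profit", r"year-on-year", r"\d+(\.\d+)?%"],
}

def rule_based_abstract(sentences, domain):
    """Recall every sentence matching a keyword or pattern of the field."""
    patterns = [re.compile(p) for p in DOMAIN_PATTERNS.get(domain, [])]
    return [s for s in sentences if any(p.search(s) for p in patterns)]

def fuse(model_abstract, rule_abstract, sentences):
    """Final abstract = union of model and rule picks, in document order."""
    picked = set(model_abstract) | set(rule_abstract)
    return [s for s in sentences if s in picked]

sents = ["Revenue grew 12% year-on-year.",
         "The weather was fine.",
         "Net profit details follow."]
rules = rule_based_abstract(sents, "finance")
final = fuse(["The weather was fine."], rules, sents)
```

Taking the union is one simple fusion choice; a real system might instead rank or deduplicate the combined sentences.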
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (5)

1. A method for extracting a structured text abstract of a long chapter is characterized by comprising the following steps: inputting long text information, and the abstract extraction step comprises:
(1) Conversion into numerical information
Carrying out sentence dividing processing on input long text information according to punctuation marks, and adopting Bert Word Embedding dynamic Word Embedding processing to convert each sentence into a vector matrix of the sentence, namely numerical value information learned by a computer;
(2) analysis of chapter structure
Carrying out implicit discourse relation analysis on every two sentences, namely putting every two adjacent clauses into two bidirectional GRU models for processing, splicing hidden layer information of the two models, putting spliced results into a multilayer perceptron for classification to obtain predicted class probability, taking a class label with the highest probability as a corresponding label, and reasonably segmenting the long text according to the identified class of the label;
(3) abstract extraction
And (3) performing abstract extraction on each paragraph obtained in the step (2) according to two modes based on a model and a rule, wherein the final abstract result output is an output result fusing the two modes.
2. The method for extracting the abstract of the long chapter structured text according to claim 1, wherein: in the step (3), model-based abstract extraction inputs each paragraph into the model; the model encodes each sentence of the paragraph, i.e. learns its features, and then decodes the learned features, i.e. performs binary classification on each sentence, thereby extracting the abstract sentences.
3. The method for extracting the abstract of the long chapter structured text according to claim 2, wherein: the encoding consists of a two-layer bidirectional GRU model, the input of the first layer being the sentence vector matrix; after processing by the forward and backward GRUs, the hidden-layer vectors of the two directions are concatenated and max pooling is applied, and the result serves as the input of the second layer; the hidden-layer information w_i of the first layer represents the position information of each word, i denoting the i-th word in the sentence; the second layer operates in the same way as the first, and its concatenated hidden-layer information h_j represents each sentence of the paragraph, j denoting the j-th sentence; the whole paragraph p is represented by formula (1):

p = tanh((1/N_p) · Σ_{j=1}^{N_p} (W_p·h_j + b))  (1)

where W_p and b denote the weight and bias of each sentence, N_p denotes the number of sentences in the paragraph, and i and j are positive integers 1, 2, 3, … .
4. The method for extracting the abstract of the long chapter structured text according to claim 3, wherein: the decoding layer further calculates, from the information obtained in the encoding process, the probability that a sentence in the text belongs to the abstract, expressed by formula (2):

P(y_j = 1 | h_j, s_j, p) = σ(W_1·h_j + h_j^T·W_2·p − h_j^T·W_3·s_j)  (2)

where y_j = 1 indicates that the j-th sentence of the paragraph is an abstract sentence, W_1, W_2 and W_3 are model parameters, and s_j is the dynamic abstract representation: a weighted sum of the hidden layers of the sentences already visited, each weight being the probability that the corresponding sentence belongs to the abstract, expressed by formula (3):

s_j = Σ_{n=1}^{j−1} P(y_n = 1)·h_n  (3)

where n and j denote the n-th and j-th sentences of the paragraph, n and j are positive integers 1, 2, 3, …, and P(y_n = 1) denotes the probability that the visited sentence belongs to the abstract, computed as in formula (2).
5. The method for extracting the abstract of the long chapter structured text according to claim 1, wherein: and (3) in the rule-based abstract extraction, corresponding rules are formulated according to the text characteristics of different fields, keywords with characteristics in the field and specific patterns are matched, the matched keywords and words around the specific patterns are recalled, and the recalled sentences are used as the abstracts of the rule extraction.
CN201910957415.0A 2019-10-10 2019-10-10 Extraction method of structured text abstract of long chapter Pending CN110781290A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910957415.0A CN110781290A (en) 2019-10-10 2019-10-10 Extraction method of structured text abstract of long chapter


Publications (1)

Publication Number Publication Date
CN110781290A true CN110781290A (en) 2020-02-11

Family

ID=69384923

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910957415.0A Pending CN110781290A (en) 2019-10-10 2019-10-10 Extraction method of structured text abstract of long chapter

Country Status (1)

Country Link
CN (1) CN110781290A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106407178A (en) * 2016-08-25 2017-02-15 中国科学院计算技术研究所 Session abstract generation method and device
CN107510452A (en) * 2017-09-30 2017-12-26 扬美慧普(北京)科技有限公司 A kind of ECG detecting method based on multiple dimensioned deep learning neutral net
CN110032638A (en) * 2019-04-19 2019-07-19 中山大学 A kind of production abstract extraction method based on coder-decoder
CN110298033A (en) * 2019-05-29 2019-10-01 西南电子技术研究所(中国电子科技集团公司第十研究所) Keyword corpus labeling trains extracting tool


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HOU, Shengluan et al.: "A Taxonomy of Rhetorical Structure Relations for Chinese and an Unambiguous Annotation Method" (面向中文的修辞结构关系分类体系及无歧义标注方法) *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111428525B (en) * 2020-06-15 2020-09-15 华东交通大学 Implicit discourse relation identification method and system and readable storage medium
CN111428525A (en) * 2020-06-15 2020-07-17 华东交通大学 Implicit discourse relation identification method and system and readable storage medium
CN112307175A (en) * 2020-12-02 2021-02-02 龙马智芯(珠海横琴)科技有限公司 Text processing method, text processing device, server and computer readable storage medium
CN112307175B (en) * 2020-12-02 2021-11-02 龙马智芯(珠海横琴)科技有限公司 Text processing method, text processing device, server and computer readable storage medium
CN112732899A (en) * 2020-12-31 2021-04-30 平安科技(深圳)有限公司 Abstract statement extraction method, device, server and computer readable storage medium
CN112836501A (en) * 2021-01-18 2021-05-25 同方知网(北京)技术有限公司 Automatic knowledge element extraction method based on Bert + BiLSTM + CRF
CN113076720B (en) * 2021-04-29 2022-01-28 新声科技(深圳)有限公司 Long text segmentation method and device, storage medium and electronic device
CN113076720A (en) * 2021-04-29 2021-07-06 新声科技(深圳)有限公司 Long text segmentation method and device, storage medium and electronic device
CN113361261A (en) * 2021-05-19 2021-09-07 重庆邮电大学 Method and device for selecting legal case candidate paragraphs based on enhance matrix
CN114265929A (en) * 2021-12-24 2022-04-01 河南大学 Method and device for automatically generating text abstract fusing multilevel theme characteristics
CN115952279A (en) * 2022-12-02 2023-04-11 杭州瑞成信息技术股份有限公司 Text outline extraction method and device, electronic device and storage medium
CN115952279B (en) * 2022-12-02 2023-09-12 杭州瑞成信息技术股份有限公司 Text outline extraction method and device, electronic device and storage medium
CN116432752A (en) * 2023-04-27 2023-07-14 华中科技大学 Construction method and application of implicit chapter relation recognition model
CN116432752B (en) * 2023-04-27 2024-02-02 华中科技大学 Construction method and application of implicit chapter relation recognition model


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210316

Address after: 210000 rooms 1201 and 1209, building C, Xingzhi Science Park, Qixia Economic and Technological Development Zone, Nanjing, Jiangsu Province

Applicant after: Nanjing Xingyao Intelligent Technology Co.,Ltd.

Address before: Room 1211, building C, Xingzhi Science Park, 6 Xingzhi Road, Nanjing Economic and Technological Development Zone, Jiangsu Province, 210000

Applicant before: Nanjing Shixing Intelligent Technology Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20200211