CN111581350A - Multi-task learning reading comprehension method based on a pre-trained language model - Google Patents

Multi-task learning reading comprehension method based on a pre-trained language model

Info

Publication number
CN111581350A
Authority
CN
China
Prior art keywords
question
language model
answer
answered
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010365779.2A
Other languages
Chinese (zh)
Inventor
王春辉
胡勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Knowledge Intelligence Technology Beijing Co ltd
Original Assignee
Knowledge Intelligence Technology Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Knowledge Intelligence Technology Beijing Co ltd filed Critical Knowledge Intelligence Technology Beijing Co ltd
Priority to CN202010365779.2A priority Critical patent/CN111581350A/en
Publication of CN111581350A publication Critical patent/CN111581350A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/334Query execution
    • G06F16/3344Query execution using natural language analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/205Parsing
    • G06F40/211Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis

Abstract

The invention discloses a multi-task learning reading comprehension method based on a pre-trained language model. The method comprises the following steps: training on a corpus to establish a pre-trained language model, and using it to obtain context-aware representations of the input document and question; fusing semantic information between the question and the document through an interaction layer composed of attention networks to obtain a vector representation of each word; and performing multi-task learning on the task of predicting whether the question can be answered and the task of extracting the answer, obtaining both the answerability result and the answer to the question. By establishing the pre-trained language model, the entailment relationship between sentence pairs can be captured; by setting up the interaction layer, the semantic information between the question and the document can be fully fused, giving the model better expressive power; and by performing multi-task learning, whether a question can be answered is predicted adaptively and the answer to the question is obtained.

Description

Multi-task learning reading comprehension method based on a pre-trained language model
Technical Field
The invention belongs to the technical field of natural language understanding, and particularly relates to a multi-task learning reading comprehension method based on a pre-trained language model.
Background
Large-scale datasets have made machine reading comprehension a key task in natural language understanding. Current machine reading comprehension tasks can be divided into cloze-style and span-extraction tasks. Span-extraction machine reading comprehension requires extracting a continuous piece of text from the input document as the answer. However, most span-extraction approaches rely on a strong assumption that every question has an answer in the article. Under this assumption, only the boundaries of the answer need to be found by simple pattern matching, and whether the question can really be answered is ignored; true natural language understanding is therefore still not achieved, and the ability to predict whether a question can be answered is lacking. In the real world, however, unanswerable questions are ubiquitous.
Currently, there are two main methods for predicting whether a question can be answered. The first uses a simple classifier to classify the question as answerable or unanswerable; its disadvantage is that it lacks the interaction and entailment relationships between the question and the document. The second uses a verification mechanism: a plausible answer is first extracted, and verification is then carried out on that answer to judge whether the question can be answered. However, the plausible answer may be wrong; for example, when the question is actually unanswerable, the plausible answer extracted by the model is a wrong answer, and it is not reasonable to verify against a wrong answer.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a multi-task learning reading comprehension method based on a pre-trained language model.
In order to achieve the purpose, the invention adopts the following technical scheme:
a multi-task learning, reading and understanding method based on a pre-training language model comprises the following steps:
step 1, training based on a corpus to establish a pre-training language model, and obtaining context perception representation of input documents and questions by using the pre-training language model, wherein the input documents and questions are represented by word vectors, position vectors and paragraph vectors;
step 2, obtaining the vector representation of each word by setting semantic information between interaction layer fusion problems formed by attention networks and documents;
and 3, performing multi-task learning based on the question answering prediction task and the answer obtaining task to obtain the question answering result and the question answer.
Compared with the prior art, the invention has the following beneficial effects:
the pre-training language model is established by training based on the corpus, so that the implication relation between sentence pairs can be obtained; semantic information between the problems and the documents can be fully fused by setting an interaction layer, so that the model has better expression capability; by performing multitask learning, whether a question is answered or not can be adaptively predicted, and an answer to the question can be acquired.
Drawings
Fig. 1 is a flowchart of the multi-task learning reading comprehension method based on a pre-trained language model according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
The embodiment of the invention provides a multi-task learning reading comprehension method based on a pre-trained language model. As shown in the flowchart of Fig. 1, the method comprises the following steps:
S101, training a pre-trained language model on a corpus, and using it to obtain context-aware representations of the input document and question, where the input document and question are represented by word vectors, position vectors, and paragraph vectors;
S102, fusing semantic information between the question and the document through an interaction layer composed of attention networks, obtaining a vector representation of each word;
S103, performing multi-task learning on the task of predicting whether the question can be answered and the task of extracting the answer, obtaining the answerability result and the answer to the question.
In this embodiment, step S101 is mainly used to establish the pre-trained language model. The inputs to the pre-trained language model are the document and the question, represented as word vectors, position vectors, and paragraph vectors; the output is a context-aware representation of the input document and question. A [CLS] vector representing the entailment relationship between the sentence pair is added at the first position of the context, and a [SEP] vector distinguishing the two sentences is added between them. Because the pre-trained language model is built by training on a large-scale corpus, it can fully capture external common knowledge as well as lexical, syntactic, and grammatical relationships, and can learn the entailment relationship between sentence pairs.
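For illustration only (not part of the original disclosure), a minimal sketch of how such a context-aware representation could be obtained with a publicly available BERT encoder is given below; the HuggingFace transformers API, the "bert-base-chinese" checkpoint, and the example texts are assumptions, not the patent's prescribed implementation.

```python
# Sketch only: encode a (question, document) pair with a pre-trained BERT encoder.
# The checkpoint name and the HuggingFace API are illustrative assumptions.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
encoder = BertModel.from_pretrained("bert-base-chinese")

question = "谁发明了电话？"            # hypothetical question
document = "贝尔于1876年发明了电话。"   # hypothetical document

# The tokenizer builds the sequence [CLS] question [SEP] document [SEP]; the encoder
# internally adds word (token), position, and segment (paragraph) embeddings.
inputs = tokenizer(question, document, return_tensors="pt")

with torch.no_grad():
    outputs = encoder(**inputs)

hidden = outputs.last_hidden_state   # context-aware vector for every token
cls_vec = hidden[:, 0, :]            # [CLS] vector carrying the sentence-pair relation
```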
In this embodiment, step S102 is mainly used to fuse the question and the document more deeply. An attention network is set up, and attention operations are repeatedly performed between the question and the document; after, for example, 3 attention operations, the question and the document are fully fused, and a hidden state (represented as a vector) is obtained for each word in the document.
In this embodiment, step S103 is mainly used to predict, via multi-task learning, whether the question can be answered and to output the answer to the question. Multi-task learning is a sub-field of machine learning that uses the useful information contained in several learning tasks to help each task obtain a more accurate learner; by exploiting the similarity between tasks, a more general representation is learned and model performance is further improved. Both experiments and theory show that when all or some of the tasks are related, jointly learning multiple tasks yields better performance than learning each task separately. In this embodiment, there are 2 tasks: one predicts whether the question can be answered, and the other obtains the answer to the question. If the prediction is that the question cannot be answered, "No Answer" is output; if the prediction is that it can be answered, the answer to the question is output. Clearly, the two tasks of predicting answerability and obtaining the answer are related, so multi-task learning can improve model performance.
As an alternative embodiment, the pre-trained language model is a multi-layered bi-directional Transformer encoder.
This embodiment gives a specific structure of the pre-trained language model. A pre-trained language model using a multi-layer bidirectional Transformer encoder is also known as BERT (Bidirectional Encoder Representations from Transformers), which achieved the best results on 11 natural language processing tasks. The multi-layer bidirectional Transformer encoder is formed by stacking multiple Transformer layers, where each layer consists of a self-attention network and a feed-forward network connected through residual connections and layer normalization. During training, the model parameters are optimized mainly through masked language modeling together with next-sentence prediction. This training method captures the co-occurrence relationships of the context and learns the entailment relationship between sentence pairs.
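As an illustrative sketch of the layer structure just described (self-attention and feed-forward sub-layers, each wrapped with a residual connection and layer normalization), one Transformer encoder layer might look as follows; the dimensions and PyTorch modules are assumptions, not the patent's implementation.

```python
# Sketch of one bidirectional Transformer encoder layer; dimensions are illustrative.
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    def __init__(self, d_model=768, n_heads=12, d_ff=3072):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        a, _ = self.attn(x, x, x)          # bidirectional self-attention
        x = self.norm1(x + a)              # residual connection + layer normalization
        x = self.norm2(x + self.ffn(x))    # residual connection + layer normalization
        return x

# The multi-layer bidirectional encoder is a stack of such layers.
layers = nn.Sequential(*[EncoderLayer() for _ in range(12)])
out = layers(torch.randn(1, 32, 768))      # (batch, sequence length, hidden size)
```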
As an alternative embodiment, the method of predicting whether the question can be answered and obtaining the answer in step S103 includes:
performing binary classification using the entailment relationship between the question and the document, scoring based on the interaction information and the entailment relationship between the question and the document, and considering the question answerable if the score is higher than a set threshold, and unanswerable otherwise;
scoring each word and then normalizing to obtain the probability distributions of the start and end positions, where the content between the indices corresponding to the maxima of the two probability distributions is the answer.
This embodiment provides a specific scheme for predicting whether a question can be answered and obtaining the answer through multi-task learning. To facilitate understanding, a simple example is given below, assuming that the input document D and the corresponding question Q are represented as:
D = ([CLS], q_1, q_2, …, q_m, [SEP], x_1, x_2, …, x_n, [SEP])    (1)
Q = (q_1, q_2, …, q_m)    (2)
where m is the length of the question, n is the length of the article, [CLS] represents the entailment relationship between the sentence pair, and [SEP] is the separator between the two sentences;
the input documents and questions are pre-trained with the language model BERT, resulting in intermediate representations d BERT (d) and q BERT (q). dtIs the t-th word vector in d, t is 1,2, …, m + n +3, d1Is [ CLS ]]The corresponding vector. Repeatedly performing attention operation (multiple iterations) on the problems and the documents through an interaction layer to obtain:
d′_t = q * Softmax(RELU(d_t * q^T))    (3)
wherein RELU and Softmax represent activation functions.
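A minimal sketch of formula (3), iterating the attention operation between the document-side vectors and the question representation, is shown below; the tensor shapes, the number of iterations, and the use of PyTorch are assumptions for illustration.

```python
# Sketch of the interaction layer in formula (3); shapes are illustrative assumptions.
import torch
import torch.nn.functional as F

def interaction_layer(d, q, iterations=3):
    """d: (m+n+3, h) vectors of [CLS] question [SEP] document [SEP]; q: (m, h) question vectors."""
    for _ in range(iterations):
        weights = F.softmax(F.relu(d @ q.t()), dim=-1)   # Softmax(RELU(d_t * q^T)), shape (m+n+3, m)
        d = weights @ q                                  # d'_t = q * Softmax(RELU(d_t * q^T))
    return d

d = torch.randn(20, 768)   # hypothetical BERT outputs for D
q = torch.randn(5, 768)    # hypothetical BERT outputs for Q
d_prime = interaction_layer(d, q)
```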
In the task of predicting whether the question can be answered, the entailment relationship between the question and the document is used. d′_1 is d_1 (the vector corresponding to [CLS]) after processing by the interaction layer, i.e., formula (3); d′_1 therefore contains both the interaction information between the question and the document and the entailment relationship between them. A binary classification is performed on d′_1, producing a score. If the score is above a set threshold, the question is considered answerable; otherwise the question is considered unanswerable. The scoring formula is:
score = σ(w_p * RELU(d′_1 w_c + b_c))    (4)
where w_p, w_c, and b_c are trainable parameters, and σ is the sigmoid activation function, which outputs a value between 0 and 1.
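A minimal sketch of the answerability head in formula (4) is given below; the hidden size, the 0.5 threshold, and the PyTorch layers are assumptions for illustration.

```python
# Sketch of the answerability classifier in formula (4); sizes and threshold are assumptions.
import torch
import torch.nn as nn

class AnswerableHead(nn.Module):
    def __init__(self, hidden=768, threshold=0.5):
        super().__init__()
        self.wc = nn.Linear(hidden, hidden)            # d'_1 * w_c + b_c
        self.wp = nn.Linear(hidden, 1, bias=False)     # w_p
        self.threshold = threshold

    def forward(self, d1_prime):
        score = torch.sigmoid(self.wp(torch.relu(self.wc(d1_prime))))   # formula (4)
        return score, score > self.threshold           # score in (0, 1) and answerable flag

head = AnswerableHead()
score, answerable = head(torch.randn(1, 768))          # d'_1 from the interaction layer
```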
In the answer extraction task, a score is computed for each word vector after the interaction layer and then normalized with Softmax, giving the probability distribution s_t of the start position and the probability distribution e_t of the end position:
s_t = Softmax(RELU(w_s(d′_t * q^T) + b_s))    (5)
e_t = Softmax(RELU(w_e(d′_t * q^T) + b_e))    (6)
where w_s, b_s, w_e, and b_e are trainable parameters.
The text span between the two indices t corresponding to the maxima of s and e is the answer to the question.
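A minimal sketch of the span-extraction head in formulas (5) and (6) follows; as a simplifying assumption, the start and end scorers are applied directly to the interacted token vectors, and shapes and layer sizes are illustrative.

```python
# Sketch of the answer-span head in formulas (5) and (6); the scorers are applied
# directly to the interacted token vectors as a simplifying assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpanHead(nn.Module):
    def __init__(self, hidden=768):
        super().__init__()
        self.ws = nn.Linear(hidden, 1)   # start scorer (w_s, b_s)
        self.we = nn.Linear(hidden, 1)   # end scorer (w_e, b_e)

    def forward(self, d_prime):
        """d_prime: (seq_len, hidden) token vectors after the interaction layer."""
        s = F.softmax(F.relu(self.ws(d_prime)).squeeze(-1), dim=-1)   # formula (5)
        e = F.softmax(F.relu(self.we(d_prime)).squeeze(-1), dim=-1)   # formula (6)
        return s, e

head = SpanHead()
s, e = head(torch.randn(20, 768))
start, end = int(s.argmax()), int(e.argmax())
answer_indices = list(range(start, end + 1))   # token indices forming the answer span
```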
During multi-task learning, the loss function Loss is a weighted sum of the two task losses Loss_1 and Loss_2:
Loss = μ·Loss_1 + λ·Loss_2    (7)
where μ and λ are the weights of the two task losses, with μ + λ = 1.
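A minimal sketch of the joint objective in formula (7) is shown below; the choice of binary cross-entropy for the answerability task, negative log-likelihood of the gold start/end positions for the span task, and equal weights μ = λ = 0.5 are assumptions for illustration.

```python
# Sketch of the multi-task loss in formula (7); loss choices and weights are assumptions.
import torch
import torch.nn.functional as F

def multitask_loss(score, answerable, s, e, start_idx, end_idx, mu=0.5, lam=0.5):
    # Loss_1: answerability (binary cross-entropy on the score from formula (4))
    loss1 = F.binary_cross_entropy(score.view(-1), answerable.view(-1))
    # Loss_2: answer span (negative log-likelihood of the gold start and end positions)
    loss2 = -(torch.log(s[start_idx]) + torch.log(e[end_idx]))
    return mu * loss1 + lam * loss2            # Loss = mu * Loss_1 + lambda * Loss_2

score = torch.sigmoid(torch.randn(1))          # hypothetical output of formula (4)
s = F.softmax(torch.randn(20), dim=-1)         # hypothetical start distribution, formula (5)
e = F.softmax(torch.randn(20), dim=-1)         # hypothetical end distribution, formula (6)
loss = multitask_loss(score, torch.tensor([1.0]), s, e, start_idx=3, end_idx=7)
```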
A set of experimental results on the SQuAD 2.0 dataset, comparing the method of the present invention with the prior art, is given below. The SQuAD 2.0 dataset contains 150,000 questions, about one third of which are unanswerable. The evaluation metrics are EM (Exact Match) and F1 (F1 score): EM measures whether the predicted answer exactly matches the standard answer, and F1 measures the similarity between the predicted answer and the standard answer. The results are shown in Table 1. "Prior art 1" is the scheme that uses a simple classifier for binary answerability classification, "Prior art 2" is the verification-mechanism scheme, "BERT" uses the pre-trained language model BERT alone, "No interaction layer" is the present invention without the interaction layer, and "No multi-task" is the present invention without multi-task learning.
TABLE 1 Comparison of experimental results

      Prior art 1   Prior art 2   BERT      No interaction layer   No multi-task   The invention
EM    65.10%        72.30%        76.06%    76.80%                 76.56%          77.12%
F1    67.60%        74.80%        80.07%    80.78%                 80.49%          81.11%
As can be seen from Table 1, the latter four schemes improve both EM and F1 over the two prior-art schemes, and the scheme of the present invention achieves the best results.
The above description illustrates only a few embodiments of the present invention and should not be construed as limiting its scope; all equivalent changes and modifications made in accordance with the spirit of the present invention shall fall within the scope of the present invention.

Claims (3)

1. A multi-task learning reading comprehension method based on a pre-trained language model, characterized by comprising the following steps:
step 1, training a pre-trained language model on a corpus, and using it to obtain context-aware representations of the input document and question, where the input document and question are represented by word vectors, position vectors, and paragraph vectors;
step 2, fusing semantic information between the question and the document through an interaction layer composed of attention networks, obtaining a vector representation of each word;
step 3, performing multi-task learning on the task of predicting whether the question can be answered and the task of extracting the answer, obtaining the answerability result and the answer to the question.
2. The method of claim 1, wherein the pre-trained language model is a multi-layered bi-directional Transformer encoder.
3. The multi-task learning reading comprehension method based on a pre-trained language model as claimed in claim 1, wherein the method of predicting whether the question can be answered and obtaining the answer in step 3 comprises:
performing binary classification using the entailment relationship between the question and the document, scoring based on the interaction information and the entailment relationship between the question and the document, and considering the question answerable if the score is higher than a set threshold, and unanswerable otherwise;
scoring each word and then normalizing to obtain the probability distributions of the start and end positions, where the content between the indices corresponding to the maxima of the two probability distributions is the answer.
CN202010365779.2A 2020-04-30 2020-04-30 Multi-task learning reading comprehension method based on a pre-trained language model Pending CN111581350A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010365779.2A CN111581350A (en) 2020-04-30 2020-04-30 Multi-task learning reading comprehension method based on a pre-trained language model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010365779.2A CN111581350A (en) 2020-04-30 2020-04-30 Multi-task learning reading comprehension method based on a pre-trained language model

Publications (1)

Publication Number Publication Date
CN111581350A true CN111581350A (en) 2020-08-25

Family

ID=72113324

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010365779.2A Pending CN111581350A (en) 2020-04-30 2020-04-30 Multi-task learning reading comprehension method based on a pre-trained language model

Country Status (1)

Country Link
CN (1) CN111581350A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA3089830A1 (en) * 2018-01-29 2019-08-01 EmergeX, LLC System and method for facilitating affective-state-based artificial intelligence
WO2019173280A1 (en) * 2018-03-06 2019-09-12 The Regents Of The University Of California Compositions and methods for the diagnosis and detection of tumors and cancer prognosis
CN109977199A (en) * 2019-01-14 2019-07-05 浙江大学 A kind of reading understanding method based on attention pond mechanism
CN110647629A (en) * 2019-09-20 2020-01-03 北京理工大学 Multi-document machine reading understanding method for multi-granularity answer sorting
CN110688491A (en) * 2019-09-25 2020-01-14 暨南大学 Machine reading understanding method, system, device and medium based on deep learning
CN110969014A (en) * 2019-11-18 2020-04-07 南开大学 Opinion binary group extraction method based on synchronous neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
杨荣钦: "基于多任务学习的密集注意力网络在机器阅读上的应用" *
苏立新;郭嘉丰;范意兴;兰艳艳;徐君: "面向多片段答案的抽取式阅读理解模型" *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112182151B (en) * 2020-09-23 2021-08-17 清华大学 Reading understanding task identification method and device based on multiple languages
CN112182151A (en) * 2020-09-23 2021-01-05 清华大学 Reading understanding task identification method and device based on multiple languages
CN112256765A (en) * 2020-10-29 2021-01-22 浙江大华技术股份有限公司 Data mining method, system and computer readable storage medium
CN112613316A (en) * 2020-12-31 2021-04-06 北京师范大学 Method and system for generating ancient Chinese marking model
CN113254575A (en) * 2021-04-23 2021-08-13 中国科学院信息工程研究所 Machine reading understanding method and system based on multi-step evidence reasoning
CN113254575B (en) * 2021-04-23 2022-07-22 中国科学院信息工程研究所 Machine reading understanding method and system based on multi-step evidence reasoning
CN113360606A (en) * 2021-06-24 2021-09-07 哈尔滨工业大学 Knowledge graph question-answer joint training method based on Filter
CN113688876B (en) * 2021-07-30 2023-08-22 华东师范大学 Financial text machine reading and understanding method based on LDA and BERT
CN113688876A (en) * 2021-07-30 2021-11-23 华东师范大学 Financial text machine reading understanding method based on LDA and BERT
CN114218379A (en) * 2021-11-23 2022-03-22 中国人民解放军国防科技大学 Intelligent question-answering system-oriented method for attributing questions which cannot be answered
CN114218379B (en) * 2021-11-23 2024-02-06 中国人民解放军国防科技大学 Attribution method for question answering incapacity of intelligent question answering system
CN114444488A (en) * 2022-01-26 2022-05-06 中国科学技术大学 Reading understanding method, system, device and storage medium for few-sample machine
CN115080715A (en) * 2022-05-30 2022-09-20 重庆理工大学 Span extraction reading understanding method based on residual error structure and bidirectional fusion attention
CN115080715B (en) * 2022-05-30 2023-05-30 重庆理工大学 Span extraction reading understanding method based on residual structure and bidirectional fusion attention
WO2024074100A1 (en) * 2022-10-04 2024-04-11 阿里巴巴达摩院(杭州)科技有限公司 Method and apparatus for natural language processing and model training, device and storage medium
CN115587175B (en) * 2022-12-08 2023-03-14 阿里巴巴达摩院(杭州)科技有限公司 Man-machine conversation and pre-training language model training method and system and electronic equipment
CN115587175A (en) * 2022-12-08 2023-01-10 阿里巴巴达摩院(杭州)科技有限公司 Man-machine conversation and pre-training language model training method and system and electronic equipment
CN116074317A (en) * 2023-02-20 2023-05-05 王春辉 Service resource sharing method and server based on big data
CN116074317B (en) * 2023-02-20 2024-03-26 新疆八达科技发展有限公司 Service resource sharing method and server based on big data
CN116072119A (en) * 2023-03-31 2023-05-05 北京华录高诚科技有限公司 Voice control system, method, electronic equipment and medium for emergency command

Similar Documents

Publication Publication Date Title
CN111581350A (en) Multi-task learning reading comprehension method based on a pre-trained language model
CN110134771B (en) Implementation method of multi-attention-machine-based fusion network question-answering system
CN107798140B (en) Dialog system construction method, semantic controlled response method and device
Mairesse et al. Controlling user perceptions of linguistic style: Trainable generation of personality traits
Wu et al. Emotion recognition from text using semantic labels and separable mixture models
CN110032635A (en) One kind being based on the problem of depth characteristic fused neural network to matching process and device
CN110390049B (en) Automatic answer generation method for software development questions
CN113239666B (en) Text similarity calculation method and system
CN113987147A (en) Sample processing method and device
Johannsen et al. More or less supervised supersense tagging of Twitter
CN108256968A (en) A kind of electric business platform commodity comment of experts generation method
CN107679225A (en) A kind of reply generation method based on keyword
CN111339772B (en) Russian text emotion analysis method, electronic device and storage medium
CN111666400A (en) Message acquisition method and device, computer equipment and storage medium
CN116010581A (en) Knowledge graph question-answering method and system based on power grid hidden trouble shooting scene
CN115905487A (en) Document question and answer method, system, electronic equipment and storage medium
CN115062139A (en) Automatic searching method for dialogue text abstract model
CN114282592A (en) Deep learning-based industry text matching model method and device
Bharathi et al. Machine Learning Based Approach for Sentiment Analysis on Multilingual Code Mixing Text.
CN114429121A (en) Method for extracting emotion and reason sentence pairs of test corpus
CN114021658A (en) Training method, application method and system of named entity recognition model
CN114239565A (en) Deep learning-based emotion reason identification method and system
Cui et al. Aspect level sentiment classification based on double attention mechanism
CN113641778A (en) Topic identification method for dialog text
Gao et al. Chinese short text classification method based on word embedding and Long Short-Term Memory Neural Network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination