CN111125316B - Knowledge base question-answering method integrating multiple loss functions and attention mechanism - Google Patents

Knowledge base question-answering method integrating multiple loss functions and attention mechanism Download PDF

Info

Publication number
CN111125316B
Authority
CN
China
Prior art keywords
question
relation
candidate
answer
knowledge base
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911369897.4A
Other languages
Chinese (zh)
Other versions
CN111125316A (en
Inventor
杨新武
张煜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201911369897.4A priority Critical patent/CN111125316B/en
Publication of CN111125316A publication Critical patent/CN111125316A/en
Application granted granted Critical
Publication of CN111125316B publication Critical patent/CN111125316B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/334Query execution
    • G06F16/3344Query execution using natural language analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/02Knowledge representation; Symbolic representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention discloses a knowledge base question-answering method that fuses multiple loss functions and an attention mechanism. The method takes a question and candidate answers as input, adopts Bi-LSTM and Bi-GRU as the main feature extractors, integrates an attention mechanism, optimizes the model with two loss functions, updates model parameters by back-propagating the computed loss values, and trains the network model until convergence. Through network training, the question and the candidate answers are finally mapped into a feature space of the same dimension; the semantic similarity between the question and a candidate answer is computed as the inner product of their feature vectors, while the cosine similarity between candidate answers is used to widen the differences between different answers. Tests on the SimpleQuestions data set show that the model has strong feature-mapping capability and high accuracy, demonstrating the superiority of the method.

Description

Knowledge base question-answering method integrating multiple loss functions and attention mechanism
Technical Field
The invention relates to the fields of deep learning, natural language processing, and knowledge base question answering, and in particular to a knowledge base question-answering method that fuses multiple loss functions and an attention mechanism.
Background
A knowledge base question-answering system can answer users' questions quickly and accurately, and retrieval is more efficient than with a search engine, so knowledge base question answering has recently become a new research hotspot in the field of natural language processing. The knowledge base question-answering process is divided into several modules, among which text matching of answers is a key step.
In earlier knowledge base question-answering methods, a semantic parser is constructed using expert knowledge of semantics: a natural-language question is converted into a logical form, which is then converted into a corresponding database query expression to obtain the final answer. Although this approach works well, it requires a large number of manually crafted rules for the data set, and the semantic parser must be built by semantics experts, so its generalization capability is poor.
In recent years, deep learning methods have achieved very good results in natural language processing tasks such as text classification, sentiment classification, and named entity recognition, mainly because the feature extractors used in deep learning can learn strong feature-mapping capabilities. This makes it possible to design complex, high-precision text matching models.
Two main methods currently exist for the answer-matching step in knowledge base question answering. The first is based on semantic parsing: phrases in the question are mapped against a pre-constructed vocabulary and a syntax tree is built to query for answers. However, this approach requires manually defined rules and is difficult to generalize. The second is based on deep learning: questions and answers are mapped to vectors such that a question lies closer to its correct answer and farther from wrong answers. The method proposed here additionally measures the distance between answers, pushing similar answers farther apart, which widens the differences between answer representations and improves model performance. An attention mechanism is also incorporated: the vector representation of the question is dynamically updated according to different aspects of the answer representation, further improving performance. In experiments on the SimpleQuestions data set, the end-to-end accuracy of the method exceeds 77.2%, which is strongly competitive with other methods.
Disclosure of Invention
The invention addresses the problems that existing knowledge base question-answering techniques have low precision, require a large number of manual rules, and generalize poorly.
The technical scheme adopted by the invention is a text matching method based on a neural network fused with multiple loss functions. The method uses the network to map the user's question description and the candidate answers into feature vectors, each represented by a one-dimensional array, in a specific space S, and selects the answer with the highest score as the final answer by computing the inner-product score of the two vectors.
The method comprises three processes, namely data preparation, model training, and text matching, and specifically comprises the following steps:
step S1, data preparation process:
step S1.1, entity identification:
and identifying the entity in the problem to obtain an entity sequence.
Step S1.2, obtaining candidate answers:
and querying a knowledge base according to the result of the entity identification to obtain a candidate relation corresponding to the entity. Taking the corresponding relation of the correct answer as a positive example; the other relationships are negative examples.
Step S1.3, training set preparation:
for each problem sample in the data set, the above processing is carried out, and for the negative examples which are less than 50, other relations are randomly up-sampled from the relation pool to form the negative examples, so that the proportion of the positive samples to the negative samples is 1: 50.
Step S2, model training process:
and step S2.1, inputting the user question and the correct relation and the candidate relation obtained in the step S1 into the neural network proposed by the method, and respectively obtaining the feature representation q of the question, the feature representation r of the correct relation and the feature representation r' of the error relation through network mapping.
And step S2.2, forming a triad pair of (q, r, r') by the feature vectors obtained by the processing of the step S2.1, and calculating an inner product score1 of the question and the correct answer and an inner product score2 of the question and the wrong answer.
S2.3, calculating the loss value of the triad pair obtained in the step S2.2 according to a loss calculation formula provided by the method, and optimizing the neural network model by taking a loss function value as a target;
the formula is as follows
loss1 = max(0, -y*(S(r;q) - S(r';q)) + margin)
loss2 = y*|cos(r,r')|*max(0, cos(r,r') - γ)
where y in loss1 indicates whether the candidate answer is correct and takes the value +1 or -1; since r is specified as the correct relation, y is always 1. S is the function computing the inner product of its two arguments, and margin is a hyper-parameter that defaults to 0. loss2 is a weighted cosine-similarity loss, where cos denotes the cosine similarity of the two representations and y indicates whether the two inputs belong to the same class; because r and r' are never the same, y is constantly 1. γ is a hyper-parameter that defaults to 0.
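As an illustration of the two loss terms, a minimal PyTorch sketch follows; it assumes q, r, and r' are already the batched feature vectors produced by the network, and the name multi_loss is illustrative:

```python
import torch
import torch.nn.functional as F

def multi_loss(q, r, r_neg, margin=0.0, gamma=0.0):
    """loss1 + loss2 as defined above, with y fixed to 1 in both terms.

    q, r, r_neg: (batch, dim) feature vectors of the question, the correct
    relation, and an incorrect relation.
    """
    score_pos = (q * r).sum(dim=-1)        # S(r; q), inner product
    score_neg = (q * r_neg).sum(dim=-1)    # S(r'; q)
    loss1 = torch.clamp(-(score_pos - score_neg) + margin, min=0.0)

    cos = F.cosine_similarity(r, r_neg, dim=-1)
    loss2 = cos.abs() * torch.clamp(cos - gamma, min=0.0)  # weighted cosine loss
    return (loss1 + loss2).mean()
```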
Step S2.4, repeating the step S2.1 to the step S2.3 until the neural network model converges;
step S3, an identification process;
s3.1, obtaining a candidate relation set of the user questions through the user questions;
and S3.2, taking the candidate relation set and the problem obtained in the step S3.1 as the input of the neural network, and obtaining the score of each candidate relation through network calculation.
And S3.3, sequencing the scores of the candidate relations, and selecting the answer with the maximum score as the final correct relation. The input text of the network is guaranteed to be noiseless and aligned.
Each training sample is subjected to network mapping to obtain a feature vector represented by a one-dimensional array, and the dimension of the feature is limited to 128-dimension to 512-dimension.
Text matching score calculation formula: S = Σ_i f(q_i, r_i), where q_i is a feature encoding of the question and r_i is the feature encoding of one aspect of the answer. The attention mechanism is used to adjust the question representation according to the different aspects of the answer; the specific formulas are:
w_ij = f_att(h_j, r_i)    (1)
a_ij = exp(w_ij) / Σ_j exp(w_ij)    (2)
q_i = Σ_j a_ij × h_j    (3)
where h_j denotes the hidden state of the jth word in the question, r_i denotes the hidden state of the ith part of the relation, and w_ij is the computed similarity between the current word and the current relation representation. The similarity between each word in the question and the relation is computed and then normalized to obtain a_ij, which indicates which words in the question receive more attention for the current candidate answer. Finally, the weights are combined with the original question representation in a weighted sum to obtain the final question representation for the current answer. The similarity function f_att can be computed in many ways: the traditional approach is the inner product between vectors, but a weight matrix can also be introduced, or a fully connected neural network applied to the concatenated vectors, among others.
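The attention computation of equations (1) to (3) can be sketched in PyTorch as follows, taking f_att to be the inner product (one of the options named above); shapes and names are illustrative:

```python
import torch
import torch.nn.functional as F

def relation_aware_question(h, r_i):
    """Re-weight the question words for one answer aspect.

    h:   (seq_len, hidden) hidden states of the question words (h_j).
    r_i: (hidden,) hidden state of the ith part of the relation.
    """
    w = h @ r_i                               # (1) w_ij = f_att(h_j, r_i)
    a = F.softmax(w, dim=0)                   # (2) normalize over question words
    q_i = (a.unsqueeze(-1) * h).sum(dim=0)    # (3) weighted sum of word states
    return q_i
```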
Triple loss calculation formulas:
loss1 = max(0, -y*(S(r_pos; q) - S(r_neg; q)) + margin)    (4)
loss2 = y*|cos(r_pos, r_neg)|*max(0, cos(r_pos, r_neg) - γ)    (5)
The text matching score formula makes the final selection by computing the degree of match between the question and each answer. The loss function of equation (4) is used in this process; its goal is to increase the match between the question and the correct answer and decrease the match between the question and wrong answers. For a fixed hyper-parameter margin, however, similar answers receive similar scores, and the gap between the correct answer and a wrong answer is driven only toward the value of margin; the model may therefore place similar answers too close together in vector space, which makes it harder to distinguish the correct answer. The method therefore adds the loss function of equation (5) as an additional objective, so that the farther apart the representations of different answers lie, the better.
The method of the invention was evaluated end-to-end on the SimpleQuestions data set, and the experimental results are superior to other recent methods evaluated on this data set.
Drawings
Fig. 1 is an overall flow chart according to the present invention.
Fig. 2 is a model structure diagram according to the present invention.
Fig. 3 is a flowchart of step S2 according to the present invention.
Fig. 4 is a flowchart of step S3 according to the present invention.
FIG. 5 is a flow chart of the question answering method of the present invention.
Detailed Description
For the purpose of promoting a better understanding of the objects, features and advantages of the invention, reference is made to the following detailed description taken in conjunction with the accompanying drawings.
Step S1, the entity in the question is obtained.
Step S1.1, firstly processing the training data set: the data in the questions are labeled using the knowledge base, converting the entity recognition problem into a sequence labeling task.
Step S1.2, training an existing sequence labeling network model, such as Bi-LSTM + CRF, on the processed data set.
Step S1.3, applying the trained model to the test set to label its entities.
Step S1.4, for some questions, the entity labeled in the question is not exactly identical to the entity in the knowledge base; therefore alias (name) lists are used to link the entity in the question to the entity in the knowledge base, obtaining the index of the question's entity in the knowledge base.
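As an illustration of the alias-based linking in step S1.4, a small sketch follows; the alias_table structure and the Freebase-style entity id are hypothetical examples, not data from the patent:

```python
def link_entity(mention, alias_table):
    """Map a labeled entity mention to knowledge-base entity ids.

    alias_table: dict from a normalized surface form to a list of entity
    ids, built beforehand from the knowledge base's name/alias lists.
    """
    key = mention.strip().lower()
    return alias_table.get(key, [])  # empty list if the mention is unlinked

# Hypothetical usage:
alias_table = {"new york": ["m.059rby"], "ny": ["m.059rby"]}
print(link_entity("New York", alias_table))  # ['m.059rby']
```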
Step S2, training a text matching model fused with the multi-loss function:
and S2.1, obtaining entity recognition results of the training set and the test set in the step 1, and inquiring a knowledge base by using the entity recognition results to obtain all answers corresponding to the entity in the knowledge base, wherein the answers are called as candidate answers. If the candidate answers corresponding to an entity in the training set are less, the wrong candidate answers can be added in an up-sampling mode for training.
Step S2.2, in each training iteration, acquiring a number of triples (q, r, r') from the training set, i.e., a question, the correct answer corresponding to the question, and a wrong answer corresponding to the question. The question q is encoded by the Bi-LSTM and then passed through the attention mechanism together with the answer representation, yielding question representations for three different aspects, namely q1, q2, and q3. Similarly, the answer is split and represented as r1, r2, and r3, and the matching score of answer and question is finally computed as S = Σ_i f(q_i, r_i).
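A compact PyTorch sketch of this encoding-and-scoring step is given below; the class name, layer sizes, and the way the three relation-aspect vectors are passed in are all illustrative assumptions:

```python
import torch
import torch.nn as nn

class Matcher(nn.Module):
    """Bi-LSTM question encoder with per-aspect attention scoring."""

    def __init__(self, vocab_size, emb_dim=300, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, bidirectional=True,
                               batch_first=True)

    def score(self, question_ids, aspect_vectors):
        """question_ids: (1, seq_len); aspect_vectors: three (2*hidden,) tensors."""
        h, _ = self.encoder(self.emb(question_ids))   # (1, seq_len, 2*hidden)
        h = h.squeeze(0)
        s = torch.zeros(())
        for r_i in aspect_vectors:                    # S = sum_i f(q_i, r_i)
            a = torch.softmax(h @ r_i, dim=0)         # attention over words
            q_i = (a.unsqueeze(-1) * h).sum(dim=0)    # aspect-aware question
            s = s + q_i @ r_i                         # inner-product match
        return s
```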
Step S2.3, after forward propagation, obtaining the scores of the question against the different answers. These scores are used to compute the loss values loss1 and loss2 for this iteration, and the model parameters are optimized by back-propagation with an Adam optimizer, taking loss1 and loss2 as the objective.
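A minimal training-step sketch for this optimization, assuming the multi_loss and Matcher sketches above (how the feature vectors are batched is illustrative):

```python
import torch

model = Matcher(vocab_size=30000)   # vocabulary size is an illustrative assumption
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

def train_step(q_vec, r_pos_vec, r_neg_vec):
    """One iteration: compute loss1 + loss2, back-propagate, Adam update."""
    optimizer.zero_grad()
    loss = multi_loss(q_vec, r_pos_vec, r_neg_vec, margin=0.0, gamma=0.0)
    loss.backward()      # back-propagate the combined loss
    optimizer.step()     # update model parameters
    return loss.item()
```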
Step S2.4, repeating steps S2.2 to S2.3 until the model converges.
Step S3: and (3) prediction process:
and S3.1, utilizing the pre-trained entity recognition model to perform entity recognition on the question of the user.
S3.2, inquiring the entity identification result into a knowledge base to obtain the correspondence of the problem entity in the knowledge base; entity-related triples are then obtained as candidate answers.
And step S3.3, calculating the candidate answers and the question matching scores acquired in the step S3.2 in sequence.
Step S3.4, the most correct answer with the highest score is selected and pushed to the user.
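The prediction flow of steps S3.1 to S3.4 can be summarized in the following sketch; ner_model, matcher, and kb stand for the trained entity recognizer, the matching network, and the knowledge base, and their interfaces are hypothetical:

```python
def answer_question(question, ner_model, matcher, kb):
    """Return the answer pushed to the user for one question."""
    entity = ner_model.recognize(question)               # step S3.1
    candidates = kb.candidate_relations(entity)          # step S3.2
    scored = [(matcher.match_score(question, rel), rel)  # step S3.3
              for rel in candidates]
    _, best_relation = max(scored, key=lambda t: t[0])   # step S3.4
    return kb.lookup_answer(entity, best_relation)
```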
The method adopts the SimpleQuestions data set. All questions in this data set are first-order (single-relation) questions, with 75910 samples in the training set, 10845 in the validation set, and 21687 in the test set. The hyper-parameters that yielded the highest model accuracy during training are shown in the following table:

Parameter          Meaning                                    Value
max_nrof_epochs    Number of training epochs                  32
epoch_size         Number of training iterations per epoch    64
neg_ans            Negative answers upsampled per question    50
Optimizer          Selected optimizer                         Adam
learning_rate      Learning rate                              0.001
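The same settings can be collected in a plain configuration dictionary, as in the sketch below (key names mirror the table and are not tied to any particular framework):

```python
config = {
    "max_nrof_epochs": 32,    # number of training epochs
    "epoch_size": 64,         # training iterations per epoch
    "neg_ans": 50,            # negative answers upsampled per question
    "optimizer": "Adam",      # selected optimizer
    "learning_rate": 0.001,   # learning rate
}
```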
The end-to-end experimental results are as follows, where the base model is the result without the multiple loss functions and attention mechanism proposed by this method:

Model                           End-to-end accuracy (%)
base model                      76.31
multi-loss + attention model    77.23
The method constructs a text matching model that fuses multiple loss functions, and further improves model accuracy by incorporating an attention mechanism and loss functions with multiple optimization objectives. The method can be applied in fields such as search engines and intelligent customer service.
The above description is only one embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any modification or substitution readily conceivable by a person skilled in the art falls within the scope of the present invention. Therefore, the protection scope of the present invention shall be determined by the protection scope of the claims.

Claims (3)

1. A knowledge base question-answering method fusing multiple loss functions and an attention mechanism, characterized in that: the method comprises three processes of data preparation, neural network model training, and text matching, and specifically comprises the following steps;
in step S1, the data preparation process is as follows:
step S1.1, entity identification:
identifying the entities in the question to obtain an entity sequence;
step S1.2, obtaining candidate answers:
querying a knowledge base according to the entity recognition result to obtain the candidate relations corresponding to the entity; taking the relation corresponding to the correct answer as a positive example; the other relations are negative examples;
step S1.3, training set preparation:
performing the above processing on each question sample in the data set, and, for samples with too few negative examples, randomly upsampling other relations from the relation pool to form negative examples;
step S2, neural network model training process:
s2.1, processing the user question in step S1 to obtain an entity sequence of the question and a corresponding candidate relation sequence, inputting the sequences into Word Embedding to obtain vectors of each sequence, and then extracting sentence characteristics from two directions of positive sequence and negative sequence by using the vectors through Bi-LSTM; obtaining a hidden vector of each word in the question in a positive sequence network; obtaining a hidden vector of each word in a reverse order network, and then splicing the hidden vectors of each word obtained in the forward order network and the reverse order network; respectively obtaining a problem feature vector representation q, a correct feature vector representation r and an error feature vector representation r';
step S2.2, forming the feature vectors obtained in step S2.1 into a triple (q, r, r'), inputting the triple into an Attention Model to compute matching scores, and obtaining the inner product score S1 of the question representation with the correct answer and the inner product score S2 of the question with the wrong answer;
s2.3, according to the provided loss calculation formula, two loss functions are adopted for calculation, and the difference of expression between the relations is enlarged on the basis of enabling the problem and correct relation score to be higher and the error relation score to be lower;
calculating the loss value of the triad pair obtained in the step S2.2, and optimizing the neural network model by taking the loss function value as a target; the formula is as follows:
loss=max(0,-y*(S(r;q)-S(r’;q))+margin)
loss2=y*|cos(r,r’)|*max(0,cos(r,r’)-γ)
step S2.4, repeating the step S2.1 to the step S2.3 until the neural network model converges;
step S3, an identification process;
s3.1, obtaining a candidate relation set of the user questions through the user questions; the user problem is subjected to entity identification to obtain an identification result, the identification result is sent to a constructed knowledge base to be inquired, all relations corresponding to candidate entities are obtained, and the relations are used as a set of candidate relations; in the candidate relation set, the relation matched with the user problem is used as a positive example, and other relations are used as negative examples;
s3.2, taking the candidate relation set and the problem obtained in the step S3.1 as the input of a neural network, and obtaining the score of each candidate relation through network calculation;
s3.3, sorting the scores of the candidate relations, and selecting the relation with the largest score as the final correct relation; ensuring that the input text of the network is noiseless and aligned;
each training sample is subjected to network mapping to obtain a feature vector represented by a one-dimensional array, and the dimension of the feature is limited between 128 dimensions and 512 dimensions;
text matching score calculation formula: S = Σ_i f(q_i, r_i), where q_i is a feature encoding of the question and r_i is the feature encoding of one aspect of the answer; the attention mechanism adjusts the question representation according to the different aspects of the answer, with the specific formulas as follows:
w_ij = f_att(h_j, r_i)    (1)
a_ij = exp(w_ij) / Σ_j exp(w_ij)    (2)
q_i = Σ_j a_ij × h_j    (3)
where h_j denotes the hidden state of the jth word in the question, r_i denotes the hidden state of the ith part of the relation, and w_ij is the computed similarity between the current word and the current relation representation; the similarity between each word in the question and the relation is computed and then normalized to obtain a_ij, indicating which words in the question receive more attention for the current candidate answer; finally, the weights are combined with the original question representation in a weighted sum to obtain the final question representation for the current answer.
2. The method of claim 1, characterized in that: the output dimension of each training sample at the network mapping layer is the same, so that the vector inner product can be computed.
3. The method of claim 1, characterized in that: the data set is processed so that negative examples are sampled on the basis of the knowledge base used, in order to augment the training samples.
CN201911369897.4A 2019-12-26 2019-12-26 Knowledge base question-answering method integrating multiple loss functions and attention mechanism Active CN111125316B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911369897.4A CN111125316B (en) 2019-12-26 2019-12-26 Knowledge base question-answering method integrating multiple loss functions and attention mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911369897.4A CN111125316B (en) 2019-12-26 2019-12-26 Knowledge base question-answering method integrating multiple loss functions and attention mechanism

Publications (2)

Publication Number Publication Date
CN111125316A CN111125316A (en) 2020-05-08
CN111125316B true CN111125316B (en) 2022-04-22

Family

ID=70503491

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911369897.4A Active CN111125316B (en) 2019-12-26 2019-12-26 Knowledge base question-answering method integrating multiple loss functions and attention mechanism

Country Status (1)

Country Link
CN (1) CN111125316B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112256847B (en) * 2020-09-30 2023-04-07 昆明理工大学 Knowledge base question-answering method integrating fact texts
CN112487172B (en) * 2020-12-16 2023-07-18 北京航空航天大学 Active learning method oriented to deep answer recommendation model
CN112818808B (en) * 2021-01-27 2024-01-19 南京大学 High-precision gait recognition method combining two vector embedding spaces
CN113505209A (en) * 2021-07-09 2021-10-15 吉林大学 Intelligent question-answering system for automobile field

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109344244A (en) * 2018-10-29 2019-02-15 山东大学 A kind of the neural network relationship classification method and its realization system of fusion discrimination information
CN109766427A (en) * 2019-01-15 2019-05-17 重庆邮电大学 A kind of collaborative virtual learning environment intelligent answer method based on stacking Bi-LSTM network and collaboration attention
CN110134771A (en) * 2019-04-09 2019-08-16 广东工业大学 A kind of implementation method based on more attention mechanism converged network question answering systems
CN110188175A (en) * 2019-04-29 2019-08-30 厦门快商通信息咨询有限公司 A kind of question and answer based on BiLSTM-CRF model are to abstracting method, system and storage medium
CN110413704A (en) * 2019-06-27 2019-11-05 浙江大学 Entity alignment schemes based on weighting neighbor information coding

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11501076B2 (en) * 2018-02-09 2022-11-15 Salesforce.Com, Inc. Multitask learning as question answering

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109344244A (en) * 2018-10-29 2019-02-15 山东大学 A kind of the neural network relationship classification method and its realization system of fusion discrimination information
CN109766427A (en) * 2019-01-15 2019-05-17 重庆邮电大学 A kind of collaborative virtual learning environment intelligent answer method based on stacking Bi-LSTM network and collaboration attention
CN110134771A (en) * 2019-04-09 2019-08-16 广东工业大学 A kind of implementation method based on more attention mechanism converged network question answering systems
CN110188175A (en) * 2019-04-29 2019-08-30 厦门快商通信息咨询有限公司 A kind of question and answer based on BiLSTM-CRF model are to abstracting method, system and storage medium
CN110413704A (en) * 2019-06-27 2019-11-05 浙江大学 Entity alignment schemes based on weighting neighbor information coding

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
End-to-end Answer Selection via Attention-Based Bi-LSTM Network; Yuqi Ren et al.; 2018 1st IEEE International Conference on Hot Information-Centric Networking (HotICN); 2019-01-10; pp. 264-265 *
Inner Attention Based bi-LSTMs with Indexing for non-Factoid Question Answering; Akshay Sharma et al.; 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA); 2019-01-17; pp. 1-7 *

Also Published As

Publication number Publication date
CN111125316A (en) 2020-05-08

Similar Documents

Publication Publication Date Title
CN111125316B (en) Knowledge base question-answering method integrating multiple loss functions and attention mechanism
CN109271505B (en) Question-answering system implementation method based on question-answer pairs
CN109918491B (en) Intelligent customer service question matching method based on knowledge base self-learning
CN112015868B (en) Question-answering method based on knowledge graph completion
CN111563149B (en) Entity linking method for Chinese knowledge map question-answering system
CN112667794A (en) Intelligent question-answer matching method and system based on twin network BERT model
CN112417894B (en) Conversation intention identification method and system based on multi-task learning
CN111143539B (en) Knowledge graph-based teaching field question-answering method
CN111046155A (en) Semantic similarity calculation method based on FSM multi-turn question answering
CN110909116B (en) Entity set expansion method and system for social media
CN116127095A (en) Question-answering method combining sequence model and knowledge graph
CN113962219A (en) Semantic matching method and system for knowledge retrieval and question answering of power transformer
CN115599899B (en) Intelligent question-answering method, system, equipment and medium based on aircraft knowledge graph
CN114818703B (en) Multi-intention recognition method and system based on BERT language model and TextCNN model
CN116992007B (en) Limiting question-answering system based on question intention understanding
CN112115242A (en) Intelligent customer service question-answering system based on naive Bayes classification algorithm
CN111966810A (en) Question-answer pair ordering method for question-answer system
CN111782788A (en) Automatic emotion reply generation method for open domain dialogue system
CN113672720A (en) Power audit question and answer method based on knowledge graph and semantic similarity
CN112632250A (en) Question and answer method and system under multi-document scene
CN115080710A (en) Intelligent question-answering system adaptive to knowledge graphs in different fields and construction method thereof
CN110874392B (en) Text network information fusion embedding method based on depth bidirectional attention mechanism
CN111666374A (en) Method for integrating additional knowledge information into deep language model
CN111581365B (en) Predicate extraction method
CN116737911A (en) Deep learning-based hypertension question-answering method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant