CN111046155A - Semantic similarity calculation method based on FSM multi-turn question answering - Google Patents

Semantic similarity calculation method based on FSM multi-turn question answering

Info

Publication number
CN111046155A
CN111046155A (application CN201911183824.6A)
Authority
CN
China
Prior art keywords
question
user
semantic similarity
similarity calculation
knowledge base
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911183824.6A
Other languages
Chinese (zh)
Inventor
王黎成
高阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongbo Information Technology Institute Co ltd
Original Assignee
Zhongbo Information Technology Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongbo Information Technology Institute Co ltd filed Critical Zhongbo Information Technology Institute Co ltd
Priority to CN201911183824.6A priority Critical patent/CN111046155A/en
Publication of CN111046155A publication Critical patent/CN111046155A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/332 Query formulation
    • G06F16/3329 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G06F16/3344 Query execution using natural language analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures

Abstract

The invention discloses a semantic similarity calculation method based on FSM multi-turn question answering. According to the question input by the user, the method uses FSM-based multi-turn question answering together with a knowledge base to perform multiple rounds of matching, feeding the data into a Transformer-DSSM semantic similarity calculation model, and returns candidate answers to the user. This solves the problem that a traditional customer service system often consumes a large amount of human resources, being understaffed at service peaks and overstaffed in off-peak periods, and improves the efficiency with which users obtain answers to common questions in the relevant field.

Description

Semantic similarity calculation method based on FSM multi-turn question answering
Technical Field
The invention relates to a method for improving the matching speed and accuracy of question-answer pairs in a telecommunications intelligent customer service system, and in particular to a semantic similarity calculation method based on FSM multi-turn question answering.
Background
Natural language processing is an important direction in the fields of computer science and artificial intelligence. It studies theories and methods that enable effective communication between humans and computers using natural language, and lies at the intersection of computer science, artificial intelligence, and linguistics, focusing on the interaction between computers and human (natural) language. A traditional manual customer service system usually consumes a large amount of human resources, being understaffed at peak times and idle at off-peak times. Applying natural language processing in an intelligent customer service question-answering system therefore improves the efficiency with which users obtain answers to common questions in the relevant field on the one hand, and addresses the slow response and high cost of traditional manual customer service on the other.
Deep learning is a family of methods that perform representation learning on data, motivated by building neural networks that simulate how the human brain analyzes and learns, interpreting data such as images, sound, and text. It can discover distributed feature representations of data by combining lower-level features into more abstract higher-level representations of attribute classes or features. In the intelligent customer service question-answering system for the telecommunications field, a large number of common questions and their corresponding data exist, so these data are organized into a knowledge base. A deep-learning technique for calculating the semantic similarity of two sentences is then introduced: according to the question input by the user, a pre-trained semantic similarity calculation model, Transformer-DSSM, matches the most similar answer in the knowledge base and outputs it to the user.
The semantic similarity calculation method based on FSM multi-turn question answering differs from a traditional question-answering system mainly in that the system can communicate with the user repeatedly, with each exchange building on the result of the previous one. The aim is to gradually narrow the matching range between the user's question and the questions in the knowledge base and to improve the matching accuracy of question-answer pairs in each round, thereby guiding users who are unfamiliar with the service toward the answer they need. Therefore, besides performing similarity-based matching on the question given by the user, the method must also decide, according to the user's feedback, whether to enter the next round of question answering.
Disclosure of Invention
The purpose of the invention is as follows: the invention provides a novel method for improving the matching speed and accuracy of question-answer pairs in a traditional telecommunications FAQ question-answering system, namely, FSM-based multi-turn question answering is introduced on top of a semantic similarity calculation model.
The technical scheme is as follows: the semantic similarity calculation method based on FSM multi-turn question answering comprises two parts, namely a semantic similarity calculation model based on Transformer-DSSM and multi-turn question answering based on an FSM. The Transformer-DSSM semantic similarity calculation model part comprises the following steps: (1) replacing the traditional bidirectional RNN with a Transformer to extract features of the input sentence, thereby increasing the model training speed and the question-answer pair matching speed; (2) performing feature-vector similarity calculation with the DSSM (Deep Structured Semantic Model) and returning the top-k candidate answers to the user. The FSM-based multi-turn question-answering part comprises: (1) an FSM for multi-turn question answering constructed according to the specific service scenario; (2) a user question-type judgment and feedback function.
Beneficial effects: the obvious advantage of the invention is that the semantic similarity calculation method based on FSM multi-turn question answering solves the problem that a traditional customer service system usually consumes a large amount of human resources, being understaffed at service peaks and overstaffed in off-peak periods, and improves the efficiency with which users obtain answers to common questions in the relevant field. The method is suitable not only for the intelligent customer service system in the telecommunications field but also for other vertical fields that have a large knowledge base of question-answer pairs.
Drawings
Fig. 1 is a general structural view of the present invention.
FIG. 2 is a process flow diagram of the present invention.
Fig. 3 shows a structure diagram of a DSSM model.
FIG. 4 shows the structure of the Transformer encoder.
FIG. 5 is a block diagram of the global attention calculation.
Detailed Description
As can be seen from FIG. 1, the whole system is mainly divided into a question analysis module, a question retrieval module, a question matching module, a scoring module, a SimNet module, and an FAQ data set storage module. The functions and interactions of the modules are as follows:
(1) Question analysis module: according to the question input by the user, the scene of the question is judged (domain-related, encyclopedic knowledge, or chit-chat), question-answer pair data in the knowledge base are retrieved according to the corresponding question scene, and each character of the input question is converted into its index in a dictionary, which is used to look up word vectors pre-trained and stored via BERT.
(2) Question retrieval module: a first round of semantic similarity calculation is performed on the converted user question and the question-answer pair data extracted from the knowledge base; the retrieved answer is returned to the user, and the system waits for the user's feedback.
(3) Question matching module: if the user's feedback indicates the answer is correct, subsequent operations stop and the whole question-answering process ends; if the feedback indicates the answer is wrong, question-answer pair data of the type selected by the user are taken from the knowledge base for another round of semantic similarity calculation.
(4) Scoring module: the scoring model ranks the N answers matched by the semantic similarity calculation according to a preset top-k and threshold, returns the top-k candidate answers, and outputs a candidate answer to the user if its score is higher than the threshold.
(5) SimNet module: mainly used to store the Transformer model and to provide corresponding interfaces for training and invoking the model.
(6) FAQ data set storage module: used to store the question-answer pair data set collected in advance; besides vertical-field data, the data set may also contain chit-chat and encyclopedic question-answer pair data, so as to improve the user experience.
As can be seen from FIG. 2, the specific processing flow of the semantic similarity calculation method based on FSM multi-turn question answering is as follows:
(1) the user inputs a question to the system; the system judges the scene of the input question, calls the corresponding data set in the knowledge base according to the judgment result, performs semantic similarity calculation using the pre-trained word vectors and the DSSM model, and returns the calculation result to the user;
(2) the user judges whether the returned result is the required answer; if so, the question-answer pair matching process ends, otherwise the subsequent operations continue;
(3) if the returned answer is not what the user needs, the system asks whether the user wants to search within a specific question type; if so, keywords of that question type are output for the user to select; otherwise, several candidate question types are returned, and the user selects one and inputs a question keyword;
(4) according to the selected question type and keywords, the question-answer pair data to be matched are further narrowed, semantic similarity calculation is performed on the user's question, and candidate answers are returned to the user;
(5) if the returned answer is what the user needs, the question-answering process ends; otherwise, the flow returns to step 1 and continues, or the user exits.
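The five-step flow above can be sketched as a small finite state machine; the state and event names used here are hypothetical, since the patent describes the transitions but does not enumerate the FSM's states:

```python
# A minimal sketch of the multi-turn dialogue FSM described in steps (1)-(5).
# State and event names are illustrative, not taken from the patent.
TRANSITIONS = {
    ("match", "accepted"): "done",    # step 2: answer accepted, dialogue ends
    ("match", "rejected"): "narrow",  # step 3: ask for question type/keywords
    ("narrow", "accepted"): "done",   # step 5: narrowed answer accepted
    ("narrow", "rejected"): "match",  # step 5: restart from step 1
    ("narrow", "quit"): "done",       # step 5: user exits on their own
}

def next_state(state, event):
    """Return the next dialogue state; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```

Driving the table with user-feedback events reproduces the loop in the flow chart: a rejected answer moves the dialogue into the narrowing round, and a second rejection restarts matching.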
FIGS. 3 to 5 are structural diagrams of the semantic similarity calculation model, described below. The DSSM can be divided into three layers from bottom to top: the input layer, the representation layer, and the matching layer; its structure is shown in FIG. 3.
1. Input layer
The user's question and the knowledge-base questions to be matched are each converted into a three-dimensional array of word vectors (from BERT), which serves as the input of the representation layer. The word vectors are trained on publicly available online text data, and the vector space is 768-dimensional.
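A toy sketch of this character-to-index-to-vector lookup; the dictionary, the 4-dimensional embeddings, and the function name are illustrative stand-ins for the 768-dimensional BERT vectors the patent describes:

```python
# Hypothetical dictionary and embedding table; in the patent these come from
# BERT-pretrained, pre-stored 768-dimensional word vectors.
char_to_index = {"查": 0, "话": 1, "费": 2}
embedding_table = {0: [0.1] * 4, 1: [0.2] * 4, 2: [0.3] * 4}  # 4 dims for brevity

def encode_question(question):
    """Map each character to its dictionary index, then to its stored word vector."""
    indices = [char_to_index[ch] for ch in question if ch in char_to_index]
    return [embedding_table[i] for i in indices]
```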
2. Representation layer
The representation layer of the DSSM adopts the Transformer encoder in place of the traditional CNN/RNN structure. First, the user-question feature vectors output by the input layer and the feature vectors of the knowledge-base questions to be matched are encoded separately, extracting more abstract features of each word in the sentence. Then, through global attention, the sentence representation originally composed of the individual word vectors is converted into a new sentence feature represented by a single 768-dimensional vector. The overall structure of the Transformer encoder is shown in FIG. 4: it consists of 6 stacked encoder blocks, each containing a self-attention module and a fully connected (feed-forward) module. The global attention calculation is shown in FIG. 5.
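The patent does not spell out the exact form of its global attention; a common formulation, sketched here under that assumption, scores each word vector against a (normally learned) query vector and returns the softmax-weighted average as the sentence vector:

```python
import math

def global_attention_pool(word_vectors, query):
    """Collapse per-word vectors into one sentence vector by softmax-weighted averaging."""
    # Score each word vector against the query vector by dot product.
    scores = [sum(w * q for w, q in zip(vec, query)) for vec in word_vectors]
    # Softmax over the scores (shifted by the max for numerical stability).
    peak = max(scores)
    exps = [math.exp(s - peak) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Weighted sum of the word vectors yields the sentence vector.
    dim = len(word_vectors[0])
    return [sum(weights[i] * word_vectors[i][d] for i in range(len(word_vectors)))
            for d in range(dim)]
```

With equal scores the pooling degenerates to a plain average; a strongly matching word dominates the sentence vector.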
3. Matching layer
After the representation layer produces the sentence feature vectors of the user question and of the knowledge-base question to be matched, the semantic similarity between them can be represented by the cosine distance of the two 768-dimensional semantic vectors:
R(Q, S) = cos(y_Q, y_S) = (y_Q · y_S) / (||y_Q|| ||y_S||)
where Q denotes the user question and S denotes the knowledge-base question to be matched.
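The cosine formula above can be computed directly; this sketch uses plain Python lists in place of the model's 768-dimensional sentence vectors:

```python
import math

def cosine_similarity(y_q, y_s):
    """R(Q, S): cosine of the angle between the two sentence feature vectors."""
    dot = sum(a * b for a, b in zip(y_q, y_s))
    norm_q = math.sqrt(sum(a * a for a in y_q))
    norm_s = math.sqrt(sum(b * b for b in y_s))
    return dot / (norm_q * norm_s)
```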
Therefore, when the question input by the user must be compared against all the selected questions in the knowledge base, the first two layers first produce the feature vector representation of the user question and of the several knowledge-base questions to be matched; cosine similarity is then calculated between the user-question feature vector and each candidate feature vector in turn; finally, the semantic similarity between the user question and each matched question can be converted into a posterior probability through a softmax function:
P(S+ | Q) = exp(r · R(Q, S+)) / Σ_{S' ∈ S} exp(r · R(Q, S'))
where r is the smoothing factor of the softmax, S+ is a positive sample among the questions to be matched, S- is a negative sample among the questions to be matched (obtained by random negative sampling), and S is the whole sample space of questions to be matched.
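A sketch of this posterior computation; the default smoothing factor r = 10.0 is illustrative, as the patent does not give its value:

```python
import math

def posterior(similarities, positive_index, r=10.0):
    """P(S+ | Q): softmax over r * R(Q, S') across all candidate similarities."""
    peak = max(similarities)  # shift by the max for numerical stability
    exps = [math.exp(r * (s - peak)) for s in similarities]
    return exps[positive_index] / sum(exps)
```

A larger r sharpens the distribution, pushing nearly all probability mass onto the best-matching candidate.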
In the training phase, we minimize the loss function by maximum likelihood estimation:
L(Λ) = -log Π_{(Q, S+)} P(S+ | Q)
the residuals will propagate backward in the transform of the representation layer, and finally the model is converged by Stochastic Gradient Descent (SGD) to obtain the parameters { Wi, bi } of each network layer.
Although the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the details of those embodiments; various equivalent modifications can be made within the technical spirit of the present invention, and such modifications also fall within the scope of the present invention.

Claims (6)

1. A semantic similarity calculation method based on FSM multi-turn question answering, characterized in that the efficiency and accuracy of question matching are improved through the use of FSM multi-turn question answering and a DSSM semantic similarity calculation model.
2. The semantic similarity calculation method based on FSM multi-turn question answering according to claim 1, characterized in that model training and calculation speed is increased by replacing the bidirectional RNN of the traditional DSSM representation layer with a Transformer capable of processing the input data in parallel.
3. The semantic similarity calculation method based on FSM multi-turn question answering according to claim 1, characterized in that a more flexible migration framework is provided: the question-answering process of claim 1 can be used for intelligent customer service systems in different vertical fields, and during migration only the common question-answer pair data need to be collated in advance and stored in a knowledge base, with the DSSM model trained on the data in that knowledge base.
4. A semantic similarity calculation method based on FSM multi-turn question answering, characterized by comprising the following steps:
Step 1, the user inputs a question to the system; the system performs scene judgment on the input question, calls the corresponding data set in the knowledge base according to the judgment result, performs semantic similarity calculation using the pre-trained word vectors and the DSSM module, and returns the calculation result to the user;
Step 2, the user judges whether the returned result is the required answer; if so, the question-answer pair matching process ends, otherwise the subsequent operations continue;
Step 3, if the returned answer is not what the user needs, the system asks whether the user wants to search within a specific question type; if so, keywords of that question type are output for the user to select; otherwise, several candidate question types are returned, and the user selects one and inputs a question keyword;
Step 4, the question-answer pair data to be matched are further narrowed according to the selected question type and keywords, semantic similarity calculation is performed on the user's question, and candidate answers are returned to the user;
Step 5, if the returned answer is what the user needs, the question-answering process ends; otherwise, the flow returns to Step 1 and continues, or the user exits.
5. The semantic similarity calculation method based on FSM multi-turn question answering according to claim 4, wherein
the DSSM module comprises:
an input layer, used to convert the user question and the knowledge-base questions to be matched into three-dimensional arrays represented by word vectors, which serve as the input of the representation layer, wherein the word vectors are trained on publicly available online text data and the vector space is 768-dimensional;
a representation layer, which adopts the Transformer encoder to separately encode the user-question feature vectors output by the input layer and the feature vectors of the knowledge-base questions to be matched, so as to extract more abstract features of each word in the sentence, and converts, through global attention, the sentence representation originally composed of the individual word vectors into a new sentence feature represented by a single 768-dimensional vector; the Transformer encoder consists of 6 stacked encoder blocks, each containing a self-attention module and a fully connected (feed-forward) module;
and a matching layer: after the representation layer produces the sentence feature vectors of the user question and of the knowledge-base question to be matched, the semantic similarity between them can be represented by the cosine distance of the two 768-dimensional semantic vectors:
R(Q, S) = cos(y_Q, y_S) = (y_Q · y_S) / (||y_Q|| ||y_S||)
wherein Q denotes the user question and S denotes the knowledge-base question to be matched;
for the question input by the user, when semantic similarity calculation must be performed against all the selected questions in the knowledge base, the first two layers first produce the feature vector representation of the user question and of the several knowledge-base questions to be matched; cosine similarity is then calculated between the user-question feature vector and each candidate feature vector in turn; finally, the semantic similarity between the user question and each matched question can be converted into a posterior probability through a softmax function:
P(S+ | Q) = exp(r · R(Q, S+)) / Σ_{S' ∈ S} exp(r · R(Q, S'))
wherein r is the smoothing factor of the softmax, S+ is a positive sample among the questions to be matched, S- is a negative sample among the questions to be matched, and S is the whole sample space of questions to be matched;
in the training phase, the loss function is minimized by maximum likelihood estimation:
L(Λ) = -log Π_{(Q, S+)} P(S+ | Q)
the residuals propagate backward through the Transformer of the representation layer, and the model is finally converged by stochastic gradient descent (SGD) to obtain the parameters {Wi, bi} of each network layer.
6. A semantic similarity calculation system based on FSM multi-turn question answering is characterized by comprising the following modules:
a question analysis module: according to the question input by the user, the scene of the question is judged, question-answer pair data in the knowledge base are retrieved according to the corresponding question scene, and each character of the input question is converted into its index in a dictionary, which is used to look up word vectors pre-trained and stored via BERT;
a question retrieval module: a first round of semantic similarity calculation is performed on the converted user question and the question-answer pair data extracted from the knowledge base, the retrieved answer is returned to the user, and the system waits for the user's feedback;
a question matching module: if the user's feedback indicates the answer is correct, subsequent operations stop and the whole question-answering process ends; if the feedback indicates the answer is wrong, question-answer pair data of the type selected by the user are taken from the knowledge base for another round of semantic similarity calculation;
a scoring module: the scoring model ranks the N answers matched by the semantic similarity calculation according to a preset top-k and threshold, returns the top-k candidate answers, and outputs a candidate answer to the user if its score is higher than the threshold;
a SimNet module: mainly used to store the Transformer model and to provide corresponding interfaces for training and invoking the model;
and an FAQ data set storage module: used to store the question-answer pair data set collected in advance; besides vertical-field data, the data set may also contain chit-chat and encyclopedic question-answer pair data, so as to improve the user experience.
CN201911183824.6A 2019-11-27 2019-11-27 Semantic similarity calculation method based on FSM multi-turn question answering Pending CN111046155A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911183824.6A CN111046155A (en) 2019-11-27 2019-11-27 Semantic similarity calculation method based on FSM multi-turn question answering


Publications (1)

Publication Number Publication Date
CN111046155A (en) 2020-04-21

Family

ID=70233796

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911183824.6A Pending CN111046155A (en) 2019-11-27 2019-11-27 Semantic similarity calculation method based on FSM multi-turn question answering

Country Status (1)

Country Link
CN (1) CN111046155A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102637192A (en) * 2012-02-17 2012-08-15 清华大学 Method for answering with natural language
CN103902652A (en) * 2014-02-27 2014-07-02 深圳市智搜信息技术有限公司 Automatic question-answering system
CN106663129A (en) * 2016-06-29 2017-05-10 深圳狗尾草智能科技有限公司 A sensitive multi-round dialogue management system and method based on state machine context
CN108717433A (en) * 2018-05-14 2018-10-30 南京邮电大学 A kind of construction of knowledge base method and device of programming-oriented field question answering system


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111552781B (en) * 2020-04-29 2021-03-02 焦点科技股份有限公司 Method for retrieving and reading by combined machine
CN111552781A (en) * 2020-04-29 2020-08-18 焦点科技股份有限公司 Method for retrieving and reading by combined machine
CN111783428A (en) * 2020-07-07 2020-10-16 杭州叙简科技股份有限公司 Emergency management type objective question automatic generation system based on deep learning
CN111783428B (en) * 2020-07-07 2024-01-23 杭州叙简科技股份有限公司 Emergency management objective question automatic generation system based on deep learning
CN112784600B (en) * 2021-01-29 2024-01-16 北京百度网讯科技有限公司 Information ordering method, device, electronic equipment and storage medium
CN112784600A (en) * 2021-01-29 2021-05-11 北京百度网讯科技有限公司 Information sorting method and device, electronic equipment and storage medium
CN113283245A (en) * 2021-03-30 2021-08-20 中国科学院软件研究所 Text matching method and device based on double-tower structure model
CN113282733A (en) * 2021-06-11 2021-08-20 上海寻梦信息技术有限公司 Customer service problem matching method, system, device and storage medium
CN113282733B (en) * 2021-06-11 2024-04-09 上海寻梦信息技术有限公司 Customer service problem matching method, system, equipment and storage medium
CN113590790A (en) * 2021-07-30 2021-11-02 北京壹心壹翼科技有限公司 Question retrieval method, device, equipment and medium applied to multiple rounds of question answering
CN113590790B (en) * 2021-07-30 2023-11-28 北京壹心壹翼科技有限公司 Question retrieval method, device, equipment and medium applied to multi-round question and answer
CN114490965A (en) * 2021-12-23 2022-05-13 北京百度网讯科技有限公司 Question processing method and device, electronic equipment and storage medium
CN115795018A (en) * 2023-02-13 2023-03-14 广州海昇计算机科技有限公司 Multi-strategy intelligent searching question-answering method and system for power grid field

Similar Documents

Publication Publication Date Title
CN111046155A (en) Semantic similarity calculation method based on FSM multi-turn question answering
CN110795543B (en) Unstructured data extraction method, device and storage medium based on deep learning
CN109918491B (en) Intelligent customer service question matching method based on knowledge base self-learning
CN111259127B (en) Long text answer selection method based on transfer learning sentence vector
WO2023273170A1 (en) Welcoming robot conversation method
CN110516085A (en) The mutual search method of image text based on two-way attention
CN109101479A (en) A kind of clustering method and device for Chinese sentence
CN111414461B (en) Intelligent question-answering method and system fusing knowledge base and user modeling
CN110413783B (en) Attention mechanism-based judicial text classification method and system
CN109635083B (en) Document retrieval method for searching topic type query in TED (tele) lecture
CN108628935A (en) A kind of answering method based on end-to-end memory network
CN110096567A (en) Selection method, system are replied in more wheels dialogue based on QA Analysis of Knowledge Bases Reasoning
CN109145083B (en) Candidate answer selecting method based on deep learning
CN116127095A (en) Question-answering method combining sequence model and knowledge graph
Agarwal et al. EDUQA: Educational domain question answering system using conceptual network mapping
CN116166782A (en) Intelligent question-answering method based on deep learning
CN113221530A (en) Text similarity matching method and device based on circle loss, computer equipment and storage medium
CN111523328B (en) Intelligent customer service semantic processing method
Wang et al. On distinctive image captioning via comparing and reweighting
Satar et al. Semantic role aware correlation transformer for text to video retrieval
CN111581364A (en) Chinese intelligent question-answer short text similarity calculation method oriented to medical field
CN113157885B (en) Efficient intelligent question-answering system oriented to knowledge in artificial intelligence field
CN108959467B (en) Method for calculating correlation degree of question sentences and answer sentences based on reinforcement learning
CN112632250A (en) Question and answer method and system under multi-document scene
CN110826341A (en) Semantic similarity calculation method based on seq2seq model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination