CN110968674B - Method for constructing question and comment pairs based on word vector representation - Google Patents

Method for constructing question and comment pairs based on word vector representation

Info

Publication number
CN110968674B
CN110968674B (application CN201911229576.4A)
Authority
CN
China
Prior art keywords
question
comments
answer
sentence
sentences
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911229576.4A
Other languages
Chinese (zh)
Other versions
CN110968674A (en)
Inventor
钱宇
袁华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201911229576.4A priority Critical patent/CN110968674B/en
Publication of CN110968674A publication Critical patent/CN110968674A/en
Application granted granted Critical
Publication of CN110968674B publication Critical patent/CN110968674B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/3331 Query processing
    • G06F 16/334 Query execution
    • G06F 16/3344 Query execution using natural language analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/06 Buying, selling or leasing transactions
    • G06Q 30/0601 Electronic shopping [e-shopping]
    • G06Q 30/0623 Item investigation
    • G06Q 30/0625 Directed, with specific intent or strategy
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a method for constructing question-comment pairs based on word vector representation, comprising the following steps: acquiring a question-and-answer corpus and a comment corpus, and splicing each question with its answer; representing the spliced question-answer sentences and the comments with a word2vec tool and a word-vector-based bag-of-words model to obtain sentence vectors of the spliced question-answer sentences and of the comment sentences; calculating the similarity between the spliced question-answer sentences and the comment sentences from the obtained sentence vectors; for each question, ranking the comments by the obtained similarity and taking the n comments with the highest similarity as candidate comments for that question; and having an expert label the candidate comments of each question, marking comments that can answer the question and comments that cannot, to obtain the question-comment pairs. The invention can effectively reduce the information search cost of consumers when shopping online.

Description

Method for constructing question comment pairs based on word vector representation
Technical Field
The invention relates to the technical field of big data mining, and in particular to a method for constructing question-comment pairs based on word vector representation.
Background
At present, electronic commerce is developing rapidly, and more and more people choose to shop online. When making a purchase decision, a consumer needs to collect information related to the goods in order to judge whether they meet expectations. In traditional offline shopping, a consumer can inspect the goods directly to determine whether they are as expected. When shopping online, however, the consumer cannot touch the actual goods and therefore cannot obtain first-hand information about them. The information a merchant generally provides is attribute information such as product parameters and models, with little experience-related information; moreover, in order to sell more goods, merchants rarely provide negative information about them. Thus, if the merchant is the consumer's only source of information, the consumer bears a greater risk when shopping, and the goods purchased may deviate from what the merchant promotes. To reduce this risk, other information channels, such as reading user comments, become important.
Senecal et al. found in their research that reading comments is an effective means of obtaining information about goods and helps consumers make consumption decisions. For example, when a consumer urgently needs a new mobile phone and wants to know whether a particular phone ships quickly, the consumer can find the answer by reading the comments. However, for popular goods with a huge number of comments, finding the information the consumer needs among so many comments is time-consuming and laborious.
Therefore, many consumers do not obtain the desired information by reading comments, but instead by asking a question in the question-and-answer area and having it answered by consumers who have already bought the product. The study of the e-commerce question-and-answer area by Banerjee et al. shows that it effectively helps consumers make consumption decisions. However, a consumer's question is not necessarily answered, and even when it is, the answer is not necessarily timely: by the time the question is answered, much time may have passed. Suppose a consumer asks a question during a limited-time promotion in the hope of purchasing the item at a lower price; if the first answer arrives three days later, the consumer can only purchase the item without an answer in order not to miss the promotion. In this case, the answer is of little use to the consumer who asked. Although questions directed at the merchant may be answered quickly, the merchant will hardly provide negative information about its own goods. If the consumer gives up further information gathering because the cost of obtaining it is too high and purchases the product without sufficient information, there is a high probability that the product will not match expectations.
From the consumer's perspective, a large difference between the expected and actual attributes of the goods leads to a poor shopping experience and wastes the consumer's time and money. From the merchant's perspective, consumers applying for after-sale service consume a great deal of the merchant's manpower and material resources, while negative reviews damage the merchant's reputation among consumers and are harmful to long-term operation. From the platform's perspective, reconciling disputes between consumers and merchants also incurs significant costs.
Disclosure of Invention
In order to solve the problems in the prior art, the invention aims to provide a method for constructing question-comment pairs based on word vector representation.
In order to achieve this purpose, the invention adopts the following technical scheme: a method for constructing question-comment pairs based on word vector representation, comprising the following steps:
s10, acquiring a question and answer corpus and a comment corpus, and splicing the question and the answer of the question;
s20, representing the spliced question and answer sentences and comments by using a word2vec tool and a word bag model based on word vectors respectively to obtain sentence vectors of the spliced question and answer sentences and sentence vectors of the comment sentences;
s30, calculating the similarity between the spliced question-answer sentences and comment sentences by using the sentence vectors obtained in the step S20;
s40, for each question, ranking the comments according to the similarity obtained in the step S30, and taking the n comments with the highest similarity as candidate comments of the question;
and S50, the expert labels the candidate comments of each question, marking comments that can answer the question and comments that cannot, thereby obtaining the question-comment pairs.
As a preferred embodiment, in step S20, the word2vec tool includes a CBOW model and a Skip-Gram model, and the word vectors of the spliced question-answer sentences and of the comments are obtained through the CBOW model and the Skip-Gram model.
As another preferred embodiment, in step S20, let the sentence length be N and the word vector of the i-th word be $v_i$; the sentence vector s of the sentence is then obtained as

$$s = \frac{1}{N}\sum_{i=1}^{N} v_i$$
As another preferred embodiment, in step S30, cosine similarity is used to calculate the similarity between sentences. Let $V_{QA}$ be the sentence vector formed by concatenating the question sentence and the answer sentence, and $V_R$ the sentence vector of a comment sentence; the cosine similarity between the two is:

$$\cos(V_{QA}, V_R) = \frac{V_{QA} \cdot V_R}{\|V_{QA}\|\,\|V_R\|}$$
the beneficial effects of the invention are:
the E-commerce question-answering task researched by the invention can be regarded as a free document-based question-answering task, when a problem occurs, information required by a consumer is retrieved from existing user comments through an E-commerce comment retrieval model, and comments related to the question are pushed to the consumer, so that the requirement of the consumer for obtaining the information is met. Because the direct corresponding relation does not exist between the user question and the comment, the invention introduces the question answer to improve the matching effect when matching the existing user question and the comment to construct the training sample. By answering questions posed to the consumer with the comment information, the consumer can obtain information he or she needs at a lower cost. The more the consumer's expectation of the goods will approach the true attributes of the goods. Therefore, the consumer can not buy the unexpected commodities because of not knowing the commodities enough, and further waste time and money. Similarly, merchants will get better public praise because consumers do not have a large drop in their expectations for goods. Disputes between the consumer and the merchant that the platform needs to handle are also reduced accordingly.
Drawings
FIG. 1 is a block flow diagram of an embodiment of the present invention;
FIG. 2 is a model framework of CBOW in an embodiment of the present invention;
FIG. 3 is a model framework for a Skip-gram in an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Examples
In the process of labeling the matching relationship between questions and comments, because the number of questions and comments is too large for direct manual matching, the correspondence between questions and comments is labeled by combining an unsupervised matching model with manual screening.
As shown in FIG. 1, in this embodiment of the method for constructing question-comment pairs based on word vector representation, the correspondence between a question and the comments is obtained by calculating the similarity between the question and the comments, ranking the comments by that similarity, and then labeling the ranked comments. Because question sentences and answer sentences are usually short and contain less information than a complete declarative sentence, this embodiment splices each question with its answer, so that the spliced text contains information as close as possible to a complete declarative sentence.
For example, the question is "What material is the back cover of the mobile phone?"; the answer is "Glass."; and the comment is "The back cover of the mobile phone is glass." The question mentions only the back cover of the mobile phone and its material, without saying which material; the answer consists of just the single word "glass", and read on its own it is unclear what is being described as glass, i.e. the referent is ambiguous. The information in the question and in the comment, and likewise in the answer and in the comment, therefore do not overlap much, and matching the question or the answer directly against the comment works poorly. When the question and the answer are spliced together, however, the resulting sentence is much closer to the information contained in the comment, and the matching effect improves.
After the question and the answer are spliced, the spliced sentence and the comments each need to be represented as sentence vectors. Once the sentence vectors are obtained, the cosine similarity between the spliced question-answer sentence and each comment is calculated, and the comments are ranked to obtain the comments most similar to the question. Finally, question-comment pairs are obtained through manual screening.
Algorithm: constructing question-comment pairs based on word vector representation
Input: processed question-and-answer corpus and comment corpus
Output: question-comment pairs
1: splice each question with its answer
2: represent the spliced question-answer sentences with word2vec word vectors and the word-vector-based bag-of-words model to obtain their sentence vectors
3: represent the comments in the same way to obtain the comment sentence vectors
4: calculate the similarity between each spliced question-answer sentence and each comment sentence
5: for each question, rank the comments by similarity and take the n comments with the highest similarity as candidate comments
6: the expert labels each candidate comment as able or unable to answer the question, yielding the question-comment pairs
TABLE 1
The pseudocode of the method for constructing question-comment pairs based on word vector representation is shown in Table 1; the input data are the processed question-and-answer corpus and the comment corpus, and the output is the set of question-comment pairs. First step (line 1): splice each question with its answer. Second step (lines 2, 3): represent the spliced question-answer sentences and the comments with the word2vec tool and the word-vector-based bag-of-words model to obtain the sentence vectors of the spliced question-answer sentences and of the comment sentences. Third step (line 4): calculate the similarity between the spliced question-answer sentences and the comment sentences using the sentence vectors obtained in the second step. Fourth step (line 5): for each question, rank the comments by the similarity obtained in the third step and take the n comments with the highest similarity as candidate comments for the question. Fifth step (line 6): the expert labels the candidate comments of each question, marking comments that can answer the question and comments that cannot, and the resulting question-comment pairs are the output.
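For concreteness, the following is a minimal Python sketch of the Table 1 pipeline. The patent does not prescribe a specific implementation; gensim's Word2Vec, the tokenized inputs, and the function names used here are assumptions made for illustration only.

```python
# Illustrative sketch only; library choice (gensim 4.x), hyperparameters and names are assumptions.
from gensim.models import Word2Vec
import numpy as np

def sentence_vector(tokens, wv):
    """Bag-of-words sentence vector: arithmetic mean of the word vectors (formula (8))."""
    vecs = [wv[t] for t in tokens if t in wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(wv.vector_size)

def build_candidates(qa_pairs, comments, n=10):
    """qa_pairs: list of (question_tokens, answer_tokens); comments: list of token lists."""
    spliced = [q + a for q, a in qa_pairs]                       # line 1: splice question and answer
    model = Word2Vec(spliced + comments, vector_size=100,
                     window=5, min_count=1, workers=4)           # train word vectors on the corpus
    qa_vecs = [sentence_vector(s, model.wv) for s in spliced]    # line 2: QA sentence vectors
    c_vecs = [sentence_vector(c, model.wv) for c in comments]    # line 3: comment sentence vectors
    candidates = []
    for qv in qa_vecs:                                           # lines 4-5: cosine similarity, then top-n
        sims = [float(np.dot(qv, cv) / (np.linalg.norm(qv) * np.linalg.norm(cv) + 1e-12))
                for cv in c_vecs]
        top = sorted(range(len(comments)), key=lambda i: sims[i], reverse=True)[:n]
        candidates.append([comments[i] for i in top])
    return candidates                                            # line 6: hand these to the expert for labeling
```

In the actual method the final step is manual: the expert inspects the returned candidate comments and keeps only those that answer the question.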
The present embodiment is explained in detail below:
For word vector representation, this embodiment adopts the Word2vec tool. Word2vec is a word vector training tool built on a statistical language model.
The main idea of a statistical language model is to represent the probability of occurrence of a natural language sequence as the product of the probabilities of occurrence of each word in the sequence. For a natural language sequence s of length t whose i-th word is $w_i$, its probability is given by formulas (1), (2) and (3):

$$p(s) = p(w_1, w_2, w_3, \ldots, w_t) \qquad (1)$$

$$p(s) = p(w_1)\,p(w_2 \mid w_1)\,p(w_3 \mid w_1 w_2)\cdots p(w_t \mid w_1, w_2, \ldots, w_{t-1}) \qquad (2)$$

$$p(s) = \prod_{i=1}^{t} p(w_i \mid w_1, w_2, \ldots, w_{i-1}) \qquad (3)$$
Since the computational cost is too large when the natural language sequence s is too long, i.e. when t is too large, a Markov assumption is used to simplify the calculation: the probability of the i-th word occurring is taken to depend only on the n-1 words before it. When n = 2, the model simplifies as shown in formula (4):

$$p(s) = \prod_{i=1}^{t} p(w_i \mid w_{i-1}) \qquad (4)$$
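As a toy illustration (not taken from the patent), under this bigram assumption the probability of a three-word sequence factorizes as

$$p(w_1 w_2 w_3) = p(w_1)\,p(w_2 \mid w_1)\,p(w_3 \mid w_2),$$

where each conditional probability can be estimated from bigram counts, e.g. $p(w_i \mid w_{i-1}) = \mathrm{count}(w_{i-1} w_i)/\mathrm{count}(w_{i-1})$.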
when n is larger, the more the above information is retained, and the larger the calculation amount is.
Word2vec is generally divided into two models, CBOW (Continuous Bag of Words) as shown in FIG. 2 and Skip-Gram as shown in FIG. 3.
The basic idea of CBOW is to predict the current word from the context words within a certain window, and the order of the words in the context has no effect on the prediction. The structure of CBOW is a three-layer neural network: the input layer takes the one-hot encodings of the context words $w_{t-2}, w_{t-1}, w_{t+1}, w_{t+2}$ surrounding the predicted word $w_t$; the hidden layer is a fully connected layer with no activation function; the output layer is a softmax layer that outputs a probability distribution giving the probability of each word in the dictionary occurring given the context. After training, the parameter weight matrix of the hidden layer contains the word vectors of all words in the dictionary. The learning objective is to maximize the likelihood in formula (5):

$$L = \sum_{w \in C} \log p\bigl(w \mid \mathrm{Context}(w)\bigr) \qquad (5)$$
In contrast to the basic idea of CBOW, the basic idea of Skip-Gram is to predict the context from a given word. The structure of Skip-Gram is likewise a three-layer neural network: the input layer takes the one-hot encoding of the given word; the hidden layer has no activation function; the output layer is a softmax layer that outputs the probability of each word in the dictionary occurring around the given word. After training, the parameter weight matrix of the hidden layer contains the word vectors of all words in the dictionary. The learning objective is to maximize the likelihood in formula (6):

$$L = \sum_{w \in C} \log p\bigl(\mathrm{Context}(w) \mid w\bigr) \qquad (6)$$
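The patent does not name a concrete word2vec implementation. The sketch below assumes gensim 4.x, where the sg flag selects between the two models just described (sg=0 for CBOW, sg=1 for Skip-Gram); the tiny corpus reuses the tokenized example sentences of this embodiment and is far too small for meaningful vectors in practice.

```python
# Hypothetical training sketch; corpus size and hyperparameters are illustrative assumptions.
from gensim.models import Word2Vec

corpus = [
    ["mobile phone", "back cover", "what", "material", "glass"],   # spliced question-answer sentence
    ["mobile phone", "back cover", "is", "glass", "material"],     # comment sentence
    # a real corpus would contain many more tokenized sentences
]

cbow = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=0)       # CBOW model (FIG. 2)
skip_gram = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=1)  # Skip-Gram model (FIG. 3)

vec = cbow.wv["back cover"]   # the learned 100-dimensional word vector for "back cover"
```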
In the Word2vec model, many words exhibit an approximately linear relationship, as shown in formula (7):

$$vec(\mathrm{king}) - vec(\mathrm{queen}) \approx vec(\mathrm{man}) - vec(\mathrm{woman}) \qquad (7)$$
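A quick way to probe the linear relationship in formula (7) is the analogy query below. It assumes gensim's downloader and the publicly available "word2vec-google-news-300" vectors, which is an assumption rather than something specified by the patent, and the exact result depends on the vectors used.

```python
# Hypothetical analogy check; the pre-trained model name is an assumption and the download is large.
import gensim.downloader as api

wv = api.load("word2vec-google-news-300")
# vec(king) - vec(man) + vec(woman) should land near vec(queen)
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
```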
Because the model incorporates information from the surrounding words while training the word vectors, and Word2vec is currently a commonly used word vector tool with good performance, the Word2vec tool is selected in this embodiment.
Bag-of-words model based on word vectors: the texts mainly studied in this embodiment are texts in sentence form, so each sentence needs to be represented to obtain a sentence vector.
This embodiment improves the matching effect between questions and comments by splicing each question with its answer. Since the question and the answer are simply concatenated, the word order of the spliced sentence differs from that of a normal Chinese sentence, and its structure differs somewhat as well; therefore a sentence representation model that incorporates word-order information cannot be used. In addition, the sentence vector model of this embodiment needs to be unsupervised. In summary, the word vectors obtained above are used as input, and a bag-of-words model based on word vectors is used to represent each sentence and obtain its sentence vector.
Bag of words (BOW) is a commonly used text representation method in natural language processing. For a sentence or document, the bag-of-words model ignores its information about word order, syntactic structure, etc., and treats the sentence or document as a set of several words. In this set, each term appears independently of the other terms. One can judge the similarity of two sentences or documents to some extent by calculating the similarity between the two sets.
Accordingly, in this embodiment the sentence obtained above by splicing the question and the answer, "What material is the back cover of the mobile phone? Glass.", can be represented as {mobile phone, back cover, what, material, glass}, and the comment sentence "The back cover of the mobile phone is glass." can be represented as {mobile phone, back cover, is, glass, material}. Since the bag-of-words model is unordered, the difference between the two sets is merely the element "what". The bag-of-words model of this embodiment thus handles well the disordered word order caused by splicing the question and the answer.
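A minimal sketch of this set comparison, with token strings chosen to mirror the example above:

```python
# Bag-of-words representations of the spliced question-answer sentence and the comment.
qa_tokens = {"mobile phone", "back cover", "what", "material", "glass"}
comment_tokens = {"mobile phone", "back cover", "is", "glass", "material"}

print(qa_tokens - comment_tokens)   # {'what'}: the only element of the QA set missing from the comment
```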
If only a plain bag-of-words model is used for representation, the semantic information of the words themselves is lost: two sentences with similar semantics but no words in common will not be similar after being represented by the bag-of-words model. This embodiment therefore adopts a bag-of-words model based on word vectors. Its basic principle is to regard the words of a sentence as an unordered set, sum the word vectors of all words in the set, and take the arithmetic mean as the sentence vector of the sentence, as shown in formula (8); this model treats the semantics of a sentence as the arithmetic mean of the semantics of each word in the sentence. With the sentence length N, the word vector of the i-th word $v_i$, and the sentence vector s:

$$s = \frac{1}{N}\sum_{i=1}^{N} v_i \qquad (8)$$
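As a toy numerical illustration of formula (8) (the vectors below are made up): with N = 3 words whose 2-dimensional word vectors are

$$v_1 = (1, 0), \quad v_2 = (0, 1), \quad v_3 = (2, 1), \qquad
s = \frac{1}{3}\,(v_1 + v_2 + v_3) = \left(1, \tfrac{2}{3}\right).$$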
For retrieval and ranking, this embodiment performs text retrieval by ranking texts according to their similarity. After obtaining the sentence vectors of the spliced question-answer sentences and of the comment sentences, the next task is to find, among a large number of comments, those that can answer the question. Since the semantics of the spliced question-answer sentence are close to those of a declarative sentence, this embodiment computes the similarity of the two sentences in the vector space. The higher the similarity, the closer the information in the comment is to the information in the question plus its answer, and thus the more likely the comment can answer the question. Based on this, the comments are ranked by their similarity to each question, and the comments with the highest similarity are returned to the expert as candidate comments for the subsequent labeling work.
For calculating similarity, this embodiment uses cosine similarity to measure the similarity between sentences; cosine similarity is one of the most frequently used measures of text similarity and is calculated as shown in formula (9). Let $V_{QA}$ be the sentence vector formed by concatenating a question sentence and its answer sentence, and $V_R$ the sentence vector of a comment. Unlike the Euclidean distance, the cosine similarity depends only on the directions of the vectors, not on their magnitudes: when the angle between $V_{QA}$ and $V_R$ is 0 degrees, their cosine similarity is 1; when the two vectors point in exactly opposite directions, i.e. the angle is 180 degrees, it is -1; and when the two vectors are perpendicular, it is 0. In other words, the closer the semantics of the two sentences, the larger the cosine similarity between their sentence vectors.

$$\cos(V_{QA}, V_R) = \frac{V_{QA} \cdot V_R}{\|V_{QA}\|\,\|V_R\|} \qquad (9)$$
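A toy calculation (illustrative values) showing that the cosine similarity depends only on direction and not on magnitude:

$$\cos\bigl((1,2),(2,4)\bigr) = \frac{1\cdot 2 + 2\cdot 4}{\sqrt{5}\cdot\sqrt{20}} = \frac{10}{10} = 1,
\qquad
\cos\bigl((1,0),(0,1)\bigr) = 0 .$$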
In summary, this embodiment mainly describes the details of identifying and obtaining the seed matching relationships: first, the overall structure of the model for identifying and obtaining the seed matching relationships; then, the Word2vec tool used for word vector representation; next, the bag-of-words model used for sentence vector representation; and finally, the method of retrieving and ranking comments by the cosine similarity between the comments and the question sentences.
The above embodiments only describe specific implementations of the present invention; the description is specific and detailed, but it is not to be understood as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the scope of the present invention.

Claims (1)

1. A method for constructing question and comment pairs based on word vector representation is characterized by comprising the following steps:
s10, acquiring a question and answer corpus and a comment corpus, and splicing the questions and the answers of the questions;
s20, representing the spliced question and answer sentences and comments by using a word2vec tool and a word bag model based on word vectors respectively to obtain sentence vectors of the spliced question and answer sentences and sentence vectors of the comment sentences;
in step S20, the word2vec tool comprises a CBOW model and a Skip-Gram model, and the word vectors of the spliced question-answer sentences and of the comments are obtained through the CBOW model and the Skip-Gram model;
in step S20, let the sentence length be N and the word vector of the i-th word be $v_i$; the sentence vector s of the sentence is then obtained as

$$s = \frac{1}{N}\sum_{i=1}^{N} v_i ;$$
S30, calculating the similarity between the spliced question-answer sentences and comment sentences by using the sentence vectors obtained in the step S20;
in step S30, cosine similarity is used to calculate the similarity between sentences; let $V_{QA}$ be the sentence vector formed by concatenating the question sentence and the answer sentence and $V_R$ the sentence vector of a comment sentence, then the cosine similarity between the sentence vector formed by splicing the question sentence and the answer sentence and the comment sentence vector is:

$$\cos(V_{QA}, V_R) = \frac{V_{QA} \cdot V_R}{\|V_{QA}\|\,\|V_R\|} ;$$
s40, for each question, ranking the comments according to the similarity obtained in the step S30, and taking the n comments with the highest similarity as candidate comments of the question;
and S50, marking the candidate comments of each question by the expert, marking out the comments that can answer the question and the comments that cannot, thereby obtaining the question-comment pairs.
CN201911229576.4A 2019-12-04 2019-12-04 Method for constructing question and comment pairs based on word vector representation Active CN110968674B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911229576.4A CN110968674B (en) 2019-12-04 2019-12-04 Method for constructing question and comment pairs based on word vector representation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911229576.4A CN110968674B (en) 2019-12-04 2019-12-04 Method for constructing question and comment pairs based on word vector representation

Publications (2)

Publication Number Publication Date
CN110968674A CN110968674A (en) 2020-04-07
CN110968674B true CN110968674B (en) 2023-04-18

Family

ID=70033159

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911229576.4A Active CN110968674B (en) 2019-12-04 2019-12-04 Method for constructing question and comment pairs based on word vector representation

Country Status (1)

Country Link
CN (1) CN110968674B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9798818B2 (en) * 2015-09-22 2017-10-24 International Business Machines Corporation Analyzing concepts over time
US10592996B2 (en) * 2016-06-01 2020-03-17 Oath Inc. Ranking answers for on-line community questions
CN108197197A (en) * 2017-12-27 2018-06-22 北京百度网讯科技有限公司 Entity description type label method for digging, device and terminal device

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016027714A1 (en) * 2014-08-21 2016-02-25 国立研究開発法人情報通信研究機構 Question sentence generation device and computer program
CN106815311A (en) * 2016-12-21 2017-06-09 杭州朗和科技有限公司 A kind of problem matching process and device
CN106997376A (en) * 2017-02-28 2017-08-01 浙江大学 The problem of one kind is based on multi-stage characteristics and answer sentence similarity calculating method
CN107368547A (en) * 2017-06-28 2017-11-21 西安交通大学 A kind of intelligent medical automatic question-answering method based on deep learning
CN107239574A (en) * 2017-06-29 2017-10-10 北京神州泰岳软件股份有限公司 A kind of method and device of intelligent Answer System knowledge problem matching
CN107980130A (en) * 2017-11-02 2018-05-01 深圳前海达闼云端智能科技有限公司 It is automatic to answer method, apparatus, storage medium and electronic equipment
CN108776677A (en) * 2018-05-28 2018-11-09 深圳前海微众银行股份有限公司 Creation method, equipment and the computer readable storage medium of parallel statement library
CN109241251A (en) * 2018-07-27 2019-01-18 众安信息技术服务有限公司 A kind of session interaction method
CN109271505A (en) * 2018-11-12 2019-01-25 深圳智能思创科技有限公司 A kind of question answering system implementation method based on problem answers pair

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"院长信箱问答系统的研究与设计";谢晋;《中国优秀硕博毕业论文》;全文 *

Also Published As

Publication number Publication date
CN110968674A (en) 2020-04-07

Similar Documents

Publication Publication Date Title
Wang et al. Mapping customer needs to design parameters in the front end of product design by applying deep learning
CN107679234B (en) Customer service information providing method, customer service information providing device, electronic equipment and storage medium
CN108021616B (en) Community question-answer expert recommendation method based on recurrent neural network
CN111062775A (en) Recommendation system recall method based on attention mechanism
CN107944911B (en) Recommendation method of recommendation system based on text analysis
CN109584006B (en) Cross-platform commodity matching method based on deep matching model
CN111062220B (en) End-to-end intention recognition system and method based on memory forgetting device
Shen et al. A voice of the customer real-time strategy: An integrated quality function deployment approach
CN107918778A (en) A kind of information matching method and relevant apparatus
CN112182145A (en) Text similarity determination method, device, equipment and storage medium
CN110858226A (en) Conversation management method and device
CN111241397A (en) Content recommendation method and device and computing equipment
CN114266443A (en) Data evaluation method and device, electronic equipment and storage medium
CN114861050A (en) Feature fusion recommendation method and system based on neural network
CN114490961A (en) Customer service method, system, device and storage medium based on multiple rounds of conversations
Kim et al. Accurate and prompt answering framework based on customer reviews and question-answer pairs
CN113761910A (en) Comment text fine-grained emotion analysis method integrating emotional characteristics
CN113158670A (en) E-commerce comment suggestion extraction method based on entity emotion recognition
CN110968674B (en) Method for constructing question and comment pairs based on word vector representation
CN111523315B (en) Data processing method, text recognition device and computer equipment
CN114911940A (en) Text emotion recognition method and device, electronic equipment and storage medium
KR20220118703A (en) Machine Learning based Online Shopping Review Sentiment Prediction System and Method
CN114971767A (en) Information processing method, information processing device, electronic equipment and storage medium
CN114255096A (en) Data requirement matching method and device, electronic equipment and storage medium
Kumar A Machine Learning-based Automated Approach for Mining Customer Opinion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant