CN110069613A - A kind of reply acquisition methods and device - Google Patents

A kind of reply acquisition methods and device

Info

Publication number
CN110069613A
Authority
CN
China
Prior art keywords
context
corpus
target
contexts
reply
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910351022.5A
Other languages
Chinese (zh)
Inventor
马文涛
崔一鸣
陈致鹏
王士进
胡国平
刘挺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei Xunfei Institute Of Artificial Intelligence
iFlytek Co Ltd
Original Assignee
Hebei Xunfei Institute Of Artificial Intelligence
iFlytek Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei Xunfei Institute Of Artificial Intelligence, iFlytek Co Ltd filed Critical Hebei Xunfei Institute Of Artificial Intelligence
Priority to CN201910351022.5A priority Critical patent/CN110069613A/en
Publication of CN110069613A publication Critical patent/CN110069613A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/332 Query formulation
    • G06F16/3329 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G06F16/3344 Query execution using natural language analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Machine Translation (AREA)

Abstract

This application discloses a reply acquisition method and device. The method comprises: after a target context is acquired, first acquiring each group of corpus contexts semantically similar to the target context, wherein the target context comprises a target question asked by a questioner and the historical dialogue preceding the target question, and a corpus context comprises a question corpus and the historical dialogue preceding the question corpus; then, after the reply corpora corresponding to the question corpora in the groups of corpus contexts are acquired, selecting at least one reply corpus from them as at least one to-be-selected reply to the target question. It can be seen that the application acquires the to-be-selected replies to the target question on the basis of corpus contexts semantically similar to the target context, so that the acquired to-be-selected replies can address the key content of the target question, thereby meeting the dialogue requirement of the questioner and improving the reasonableness of the reply acquisition result.

Description

Reply acquisition method and device
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a reply acquisition method and device.
Background
With the development of artificial intelligence and natural language processing technology, machines have begun to possess a certain ability to understand human language, which makes it possible for people to communicate with machines through human language; therefore, various man-machine dialogue systems have appeared in recent years. Such dialogue systems can be divided into two categories according to whether they are task-oriented: one is the task type, which has a definite goal or task and is intended to complete that task in the shortest interaction time or fewest turns, such as intelligent customer service and cell phone smart assistants; the other is the natural interactive type, commonly known as the "chat robot", which has no specific goal and is intended to chat with humans, or even to receive their emotional complaints.
In the natural interactive human-computer dialog system, a reply related to the dialog content is retrieved or generated based on the context of the dialog, but the obtained reply may not be related to the key content of the question, so that the dialog requirement of the questioner cannot be met.
Disclosure of Invention
The present disclosure provides a reply obtaining method and device, which can obtain a reasonable reply related to a key content of a question to satisfy a dialog requirement of a questioner.
The embodiment of the application provides a reply acquisition method, which comprises the following steps:
acquiring a target context, wherein the target context comprises a target question asked by a questioner and a historical dialogue previous to the target question;
acquiring each group of corpus contexts semantically similar to the target context, wherein a corpus context comprises a question corpus and the historical dialogue preceding the question corpus;
acquiring reply corpora corresponding to the question corpora in each group of corpus context;
and selecting at least one reply corpus as at least one to-be-selected reply of the target question.
Optionally, the obtaining of each group of corpus contexts semantically similar to the target context includes:
searching each group of contexts related to the target context from a pre-constructed dialog corpus;
and screening out, from the searched groups of contexts, each group of contexts semantically similar to the target context, to be used as the groups of corpus contexts.
Optionally, the screening out, from the searched sets of contexts, sets of contexts that are semantically similar to the target context includes:
defining each searched group of contexts as search contexts;
generating context characteristics corresponding to the search context;
wherein the context features comprise co-occurrence features characterizing importance of co-occurrence words in the search context and the target context and/or semantic features characterizing semantic similarity of the search context and the target context;
and screening out, from the searched groups of contexts, each group of contexts semantically similar to the target context according to the context features corresponding to each searched group of contexts.
Optionally, the selecting at least one reply corpus includes:
and selecting at least one reply corpus by analyzing the correlation between the target context and each reply corpus.
Optionally, the selecting at least one reply corpus includes:
and selecting at least one reply corpus by utilizing a pre-constructed correlation model.
Optionally, the correlation model is obtained by training using model training data, where the model training data includes each sample context, and a true reply and a random reply of a sample question included in the sample context; wherein the sample context includes the sample question and a historical dialog context prior to the sample question.
An embodiment of the present application further provides a reply acquiring apparatus, including:
the target context acquiring unit is used for acquiring a target context, wherein the target context comprises a target question asked by a questioner and a historical dialogue previous to the target question;
a corpus context acquiring unit, configured to acquire each group of corpus contexts semantically similar to the target context, where a corpus context includes a question corpus and the historical dialogue preceding the question corpus;
the reply corpus acquiring unit is used for acquiring reply corpuses corresponding to the question corpuses in each group of corpus contexts;
and the reply corpus selecting unit is used for selecting at least one reply corpus as at least one to-be-selected reply of the target question.
Optionally, the corpus context acquiring unit includes:
the context searching subunit is used for searching each group of contexts related to the target context from a pre-constructed dialogue corpus;
and the context screening subunit is used for screening out, from the searched groups of contexts, each group of contexts semantically similar to the target context, to serve as the groups of corpus contexts.
Optionally, the context filtering subunit includes:
a search context defining subunit, configured to define each searched group of contexts as a search context;
a context feature generation subunit, configured to generate a context feature corresponding to the search context;
wherein the context features comprise co-occurrence features characterizing importance of co-occurrence words in the search context and the target context and/or semantic features characterizing semantic similarity of the search context and the target context;
and the per-group context screening subunit is used for screening out, from the searched groups of contexts, each group of contexts semantically similar to the target context according to the context features corresponding to each searched group of contexts.
Optionally, the reply corpus selecting unit is specifically configured to:
and selecting at least one reply corpus by analyzing the correlation between the target context and each reply corpus.
Optionally, the reply corpus selecting unit is specifically configured to:
and selecting at least one reply corpus by utilizing a pre-constructed correlation model.
Optionally, the correlation model is obtained by training using model training data, where the model training data includes each sample context, and a true reply and a random reply of a sample question included in the sample context; wherein the sample context includes the sample question and a historical dialog context prior to the sample question.
An embodiment of the present application further provides a reply acquiring apparatus, including: a processor, a memory, a system bus;
the processor and the memory are connected through the system bus;
the memory is configured to store one or more programs, the one or more programs including instructions which, when executed by the processor, cause the processor to perform any one implementation of the above reply acquisition method.
An embodiment of the present application further provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are run on a terminal device, the terminal device is enabled to execute any implementation manner of the reply acquisition method.
The embodiment of the present application further provides a computer program product, which, when running on a terminal device, enables the terminal device to execute any implementation manner of the above reply acquisition method.
According to the reply acquisition method and device provided by the embodiments of the application, after the target context is acquired, each group of corpus contexts semantically similar to the target context is acquired, where the target context includes a target question asked by a questioner and the historical dialogue preceding the target question, and each corpus context includes a question corpus and the historical dialogue preceding the question corpus. Then, after the reply corpora corresponding to the question corpora in each group of corpus contexts are acquired, at least one reply corpus can be selected from them as at least one to-be-selected reply to the target question. Because the to-be-selected replies to the target question are acquired on the basis of corpus contexts semantically similar to the target context, the acquired to-be-selected replies can address the key content of the target question, and the final reply to the target question can be further screened from them, thereby meeting the dialogue requirements of questioners and improving the reasonableness of the reply acquisition result.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a reply acquisition method according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a correlation model provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a reply acquisition method according to an embodiment of the present application;
fig. 4 is a schematic composition diagram of a reply acquisition apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
First embodiment
Referring to fig. 1, a schematic flow chart of a reply acquisition method provided in this embodiment is shown, where the method includes the following steps:
s101: and acquiring a target context, wherein the target context comprises a target question asked by a questioner and historical dialogue texts before the target question.
In this embodiment, a question that the questioner asks the machine and that the machine needs to answer is defined as the target question, and the target question together with the historical dialogue text preceding it is defined as the target context.
It should be noted that the target context may include the target question and all the historical dialogue text before the target question, or it may include only a part of the historical dialogue text, counted backward from the target question. Suppose the target context includes m sentences, defined in chronological order as u_1, u_2, …, u_m, where u_m is the text content corresponding to the target question.
It should be noted that this embodiment does not limit the way in which the questioner presents the target question: the questioner may present it to the machine by voice input or by text input, that is, the target question may be in voice form or in text form. This embodiment likewise does not limit the language of the target question, such as Chinese or English. In addition, when the questioner presents the target question by text input, this embodiment does not limit the type of input method used, such as the Sogou input method or the Baidu input method.
S102: and acquiring various groups of language material contexts which are similar to the target context in semanteme, wherein the language material contexts comprise question language materials and historical conversation upper texts before the question language materials.
In this embodiment, after the target context is obtained in step S101, in order to improve the reasonableness of the reply acquisition result and enable the obtained reply content to meet the dialogue requirement of the questioner, each group of corpus contexts semantically close to the target context is first acquired on the basis of the semantic information of the target context. Each acquired corpus context includes a question corpus and the historical dialogue text before that question corpus. The corpus contexts may be collected in advance: multiple rounds of human-machine dialogue may serve as a group of corpus contexts, multiple rounds of human-human dialogue may likewise serve as a group of corpus contexts, and the question corpus in a corpus context is a user question.
It should be noted that the corpus context may include the question corpus and all the historical dialogue text before the question corpus, or it may include only a part of the historical dialogue text, counted backward from the question corpus. Suppose a corpus context includes n sentences, defined in chronological order as v_1, v_2, …, v_n, where v_n is the text content corresponding to the question corpus.
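As an illustration of the notation above, a context can be held as a chronological list of sentences whose last element is the question; this list representation and the sample sentences are our assumptions, purely for illustration:

```python
# Hypothetical representation: a context is a chronological list of sentences,
# u_1 ... u_m, where the last element u_m is the (target) question.
target_context = ["Hi", "Hello, what can I do for you", "How is the weather today"]

u_m = target_context[-1]       # the target question u_m
history = target_context[:-1]  # the historical dialogue text before u_m
```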
Next, this embodiment describes the specific process of "acquiring each group of corpus contexts semantically close to the target context" in step S102 through the following steps A and B.
Step A: and searching each group of contexts relevant to the target context from a pre-constructed dialogue corpus.
In this embodiment, after the target context is obtained in step S101, a text search method may be used to search out each group of contexts related to the target context from the pre-constructed dialogue corpus; for example, a distributed search engine such as Elasticsearch or a full-text search server such as Solr may be used. Then, through the following step B, the groups of contexts semantically similar to the target context are further screened out from the searched groups of contexts.
The dialogue corpus may store multiple groups of contexts together with the reply corpus corresponding to the question corpus (i.e., the last question) in each group of contexts. The contexts and their corresponding reply corpora may be obtained by collecting people's dialogues in daily life and processing the sensitive information in them. Specifically, when constructing the dialogue corpus, a large amount of daily dialogue data may be collected, for example, real dialogue data from social network platforms (such as microblog or forum platforms); then some sensitive data in the dialogue text (such as phone numbers or identification numbers) may be deleted or replaced. After this sensitive-information processing, each group of dialogue data may be used directly as a group of contexts, or a group of contexts may be extracted from a partially continuous segment of a group of dialogue data, where the last sentence in each group of contexts is a user question. The groups of contexts and the reply corpora of the user questions in them may then be stored in the dialogue corpus.
In addition, when the dialogue corpus is constructed, besides the groups of contexts and corresponding reply corpora obtained by processing existing real dialogue data, other dialogue data may be simulated from the real dialogue data, and groups of contexts and their corresponding reply corpora may be obtained from the simulated dialogue data as well.
Similarly, each group of contexts may be limited to text of a first preset length, and the reply corpus corresponding to each group of contexts to text of a second preset length, where the first preset length is usually greater than the second preset length.
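As an illustrative sketch of the sensitive-information processing described above, phone numbers and ID numbers in collected dialogue turns might be masked before storage; the regex patterns and placeholder tokens below are our assumptions, not part of the disclosure:

```python
import re

# Assumed patterns: 11-digit mainland-China mobile numbers and 18-character ID numbers.
PHONE_RE = re.compile(r"\b1\d{10}\b")
ID_RE = re.compile(r"\b\d{17}[\dXx]\b")

def mask_sensitive(utterance: str) -> str:
    """Replace sensitive substrings with placeholder tokens before the
    dialogue data is stored in the corpus."""
    utterance = PHONE_RE.sub("<PHONE>", utterance)
    utterance = ID_RE.sub("<ID>", utterance)
    return utterance

# A group of dialogue turns becomes a group of contexts after masking.
dialog = ["My number is 13912345678", "OK, noted"]
masked = [mask_sensitive(u) for u in dialog]
```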
It should be noted that, when searching the dialogue corpus for the groups of contexts related to the target context, each context in the dialogue corpus may be defined as a sample context for convenience of description, and a correlation value between each sample context and the target context may be calculated, where the correlation value measures the correlation between the corresponding sample context and the target context: the larger the correlation value, the stronger the correlation between the two contexts. The correlation values may then be sorted from large to small, and the groups of sample contexts corresponding to the first preset number of values selected as the groups of contexts semantically related to the target context; equivalently, the correlation values may be sorted from small to large, and the groups of sample contexts corresponding to the last preset number of values selected.
In addition, in order to search out as many groups of contexts semantically related to the target context as possible, the preset number may be set to as large a value as the system's computation budget allows, for example 1000; that is, 1000 groups of contexts semantically related to the target context may be selected, so that the groups of contexts semantically similar to the target context can then be obtained from these 1000 groups through the subsequent step B.
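The selection in step A (score every sample context against the target context, sort from large to small, keep a preset number of the best) can be sketched as follows. The simple word-overlap scorer is only a stand-in for a real search-engine relevance score such as Elasticsearch's; its name and formula are our assumptions:

```python
def correlation_value(sample_context: list[str], target_context: list[str]) -> float:
    """Toy correlation value: fraction of target-context words that also
    appear in the sample context (stand-in for a search-engine score)."""
    sample_words = set(" ".join(sample_context).split())
    target_words = set(" ".join(target_context).split())
    if not target_words:
        return 0.0
    return len(sample_words & target_words) / len(target_words)

def search_related_contexts(corpus, target_context, preset_number=1000):
    """Sort sample contexts by correlation value, large to small, and keep
    the first preset_number of them, as described in step A."""
    scored = [(correlation_value(ctx, target_context), ctx) for ctx in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [ctx for score, ctx in scored[:preset_number]]

corpus = [["how are you today"], ["the weather is nice"], ["are you ok"]]
related = search_related_contexts(corpus, ["how are you"], preset_number=2)
```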
And B: and screening out each group of contexts similar to the target context in semanteme from the searched each group of contexts to be used as each group of corpus contexts.
In this embodiment, after each group of contexts related to the target context is searched from the pre-constructed dialog corpus through step a, the semantic similarity between each searched group of contexts and the target context may be calculated, and each group of contexts semantically similar to the target context is screened according to the calculation result, that is, each group of contexts semantically similar to the target context is screened as each group of corpus contexts.
In this way, after the groups of contexts related to the target context are searched out, the groups of contexts semantically similar to the target context can be screened out and used as the basis for acquiring the to-be-selected replies to the target question, so that the content of the acquired to-be-selected replies is semantically related to the content of the target context, further helping to meet the dialogue requirement of the questioner.
Next, the present embodiment will describe a specific process of "screening out groups of contexts semantically close to the target context from the searched groups of contexts" in step B through the following steps B1-B3.
Step B1: each set of searched contexts is defined as a search context.
In the present embodiment, for convenience of description, each set of contexts searched from the corpus of dialogues that are related to the target context is defined as a search context.
Step B2: generating the context features corresponding to the search context.
In this embodiment, for each group of search contexts, in order to determine whether the search context is semantically similar to the target context, first, a context feature corresponding to the search context may be generated.
The context features corresponding to the search context comprise co-occurrence features and/or semantic features, the co-occurrence features represent the importance of co-occurrence words in the search context and the target context, and the semantic features represent the semantic similarity of the search context and the target context.
It should be noted that this embodiment describes how to generate the context features using one group of search contexts as an example; the other groups of search contexts are processed in the same way and are not described again.
The co-occurrence characteristics are described below.
One way to generate the co-occurrence features is as follows: first, segment the search context using a word segmentation method to obtain the words it contains; then delete the stop words with no definite meaning, such as "of" and "in"; then calculate the weight of each remaining word in the search context, where a larger weight indicates that the corresponding word is more important in the search context.
Similarly, the word segmentation method may be utilized to segment the target context to obtain each word included in the target context, and then delete stop words having no definite meaning, such as "of" and "in" included therein, and then calculate the weight of the remaining words in the target context, where the larger the weight value is, the higher the importance of the corresponding word in the target context is.
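The disclosure does not specify how the word weights are computed; a simple normalized term-frequency weighting, shown below, is our assumption (TF-IDF would be another common choice), and the stop-word set is illustrative:

```python
from collections import Counter

STOP_WORDS = {"of", "in"}  # stop words with no definite meaning, per the text

def word_weights(words: list[str]) -> dict[str, float]:
    """Drop stop words, then weight each remaining word by its relative
    frequency so that the weights sum to 1."""
    kept = [w for w in words if w not in STOP_WORDS]
    counts = Counter(kept)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

weights = word_weights(["you", "now", "good", "of", "do"])
```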
In this embodiment, the word that both the search context and the target context have is defined as a co-occurrence word, which may be composed of at least one word.
Furthermore, the weights of the co-occurring words may be summed separately in the search context and in the target context, the harmonic mean of the two sums may be taken, and the result used as the co-occurrence feature corresponding to the search context, characterizing the importance of all the co-occurring words contained in the search context and the target context. The specific calculation formulas are as follows:

f_s = Σ_{i=1}^{N} w_i^s (1)

f_h = Σ_{i=1}^{N} w_i^h (2)

f_w = 2 · f_s · f_h / (f_s + f_h) (3)

wherein w_i^s represents the weight of the i-th co-occurrence word in the search context, and the larger the weight, the more important the i-th co-occurrence word is in the search context; N represents the total number of co-occurring words in the search context and the target context; f_s represents the total weight, in the search context, of all the co-occurring words, and the larger the value, the more important all the co-occurring words are in the search context; w_i^h represents the weight of the i-th co-occurrence word in the target context, and the larger the weight, the more important the i-th co-occurrence word is in the target context; f_h represents the total weight, in the target context, of all the co-occurring words, and the larger the value, the more important all the co-occurring words are in the target context; and f_w is the co-occurrence feature corresponding to the search context, obtained as the harmonic mean of f_s and f_h, where the larger the value of f_w, the more important all the co-occurring words are in the search context and the target context.
For example, suppose the words obtained after preprocessing the search context (word segmentation, stop-word removal, etc.) are "you", "now", "good" and "do", with weights in the search context calculated as 0.2, 0.3, 0.4 and 0.1, respectively; and suppose the words obtained after the same preprocessing of the target context are "you", "true", "good" and "how", where the weights of "you" and "good" in the target context are calculated as 0.2 and 0.3, respectively.
It can be seen that the co-occurring words in the search context and the target context are "you" and "good". Using formula (1), the total weight of the two co-occurring words in the search context is 0.6 (0.2 + 0.4 = 0.6); using formula (2), their total weight in the target context is 0.5 (0.2 + 0.3 = 0.5); and, using formula (3), the co-occurrence feature corresponding to the search context is f_w = 2 × 0.6 × 0.5 / (0.6 + 0.5) ≈ 0.545, which characterizes the importance of all the co-occurring words contained in the search context and the target context.
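The worked example above can be reproduced with a short sketch of formulas (1) to (3); the function and variable names are ours, not the patent's:

```python
def co_occurrence_feature(search_weights: dict, target_weights: dict) -> float:
    """Formulas (1)-(3): sum the weights of the co-occurring words in each
    context, then take the harmonic mean of the two sums."""
    co_words = search_weights.keys() & target_weights.keys()
    f_s = sum(search_weights[w] for w in co_words)  # formula (1)
    f_h = sum(target_weights[w] for w in co_words)  # formula (2)
    if f_s + f_h == 0:
        return 0.0
    return 2 * f_s * f_h / (f_s + f_h)              # formula (3)

# The example from the text: the co-occurring words are "you" and "good".
search_w = {"you": 0.2, "now": 0.3, "good": 0.4, "do": 0.1}
target_w = {"you": 0.2, "good": 0.3, "how": 0.2}
f_w = co_occurrence_feature(search_w, target_w)
```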
The co-occurrence features corresponding to the search context were introduced above; the semantic features corresponding to the search context are introduced below. Specifically, the generation of the semantic features may include the following steps (1)-(3):
step (1): and generating a semantic representation result of the target context.
In this embodiment, after segmenting the target context to obtain each word included in the target context, a vector generation method may be used to generate a word vector corresponding to each word in the target context, for example, a word vector corresponding to each word in the target context may be queried in a manner of querying a semantic dictionary, and then, a semantic representation result of the target context may be generated according to the word vector corresponding to each word and a weight of each word in the target context, where a specific calculation formula is as follows:
S = Σ_{i=1}^{m} w_i · E_i (4)

wherein S represents the semantic representation result of the target context; m represents the total number of words contained in the target context; E_i represents the word vector corresponding to the i-th word in the target context; and w_i represents the weight of the i-th word in the target context, where a larger weight indicates that the i-th word is more important in the target context.
Step (2): semantic representation results of the search context are generated.
In this embodiment, after segmenting words of a search context to obtain words included in the search context, a vector generation method may be used to generate word vectors corresponding to the words in the search context, for example, the word vectors corresponding to the words in the search context may be queried in a manner of querying a semantic dictionary, and then, semantic representation results of the search context may be generated according to the word vectors corresponding to the words and the weight of the words in the search context, where a specific calculation formula is as follows:
H = Σ_{j=1}^{m'} γ_j · E'_j (5)

wherein H represents the semantic representation result of the search context; m' represents the total number of words contained in the search context; E'_j represents the word vector corresponding to the j-th word in the search context; and γ_j represents the weight of the j-th word in the search context, where a larger weight indicates that the j-th word is more important in the search context.
It should be noted that, the execution order of steps (1) and (2) is not limited in the embodiments of the present application.
Step (3): generating the semantic features corresponding to the search context according to the generated semantic representation results.
In this embodiment, after the semantic representation result S of the target context is generated through the step (1), and the semantic representation result H of the search context is generated through the step (2), the cosine distance between the semantic representation result of the target context and the semantic representation result of the search context may be calculated to obtain the semantic similarity between the search context and the target context, which is used as the semantic feature corresponding to the search context, and the specific calculation formula is as follows:
f_m = cosine(S, H)  (6)
wherein f_m is the semantic feature corresponding to the search context, representing the semantic similarity between the search context and the target context; the value of f_m can also be used to measure the semantic distance between the search context and the target context; cosine represents the cosine distance calculation formula; S represents the semantic representation result of the target context; H represents the semantic representation result of the search context.
It is understood that the larger the value of f_m in the above formula (6), the smaller the semantic distance between the search context and the target context, that is, the higher the semantic similarity between the search context and the target context.
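Formula (6) can be sketched as a plain cosine-similarity computation. The vectors S and H below are toy placeholders, not real semantic representation results:

```python
import math

# Minimal sketch of f_m = cosine(S, H): the cosine similarity between the
# semantic representations of the target context and a search context.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

S = [0.7, 0.5]   # toy semantic representation of the target context
H = [0.6, 0.6]   # toy semantic representation of a search context
f_m = cosine(S, H)
print(round(f_m, 4))
```

A value near 1 means the two contexts point in almost the same semantic direction, matching the interpretation of f_m given above.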
Step B3: screening out each group of contexts semantically similar to the target context from the searched groups of contexts, according to the context features corresponding to each searched group of contexts.
After the context features corresponding to each group of search contexts are generated through the step B2, that is, after the co-occurrence features and/or semantic features corresponding to each group of search contexts are generated, the co-occurrence features and/or semantic features may be summed according to their respective weights, and the calculation result is used to represent the semantic similarity between the corresponding search context and the target context, where the specific calculation formula is as follows:
f = w_w · f_w + w_m · f_m  (7)
wherein f represents the semantic closeness value between the corresponding search context and the target context; w_w represents the weight of the co-occurrence feature f_w corresponding to the search context, where a larger w_w indicates a greater importance of f_w, and w_w can be adjusted according to experimental results; w_m represents the weight of the semantic feature f_m corresponding to the search context, where a larger w_m indicates a greater importance of f_m, and w_m can likewise be adjusted according to experimental results.
Specifically, after the semantic closeness value between each group of search contexts and the target context is calculated using formula (7), the values may be sorted in descending order, and the groups of search contexts corresponding to the top preset number of values (or the values within a preset numerical range) selected as the groups of contexts semantically similar to the target context; alternatively, the values may be sorted in ascending order and the groups corresponding to the last preset number of values (or the values within a preset numerical range) selected; or all groups of search contexts whose semantic closeness values exceed a preset threshold may be selected as the groups of contexts semantically similar to the target context.
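The weighted combination of formula (7) followed by the descending-sort screening step can be sketched as follows. The context names, feature values, and the weights 0.6/0.4 are invented toy data, not values from the patent:

```python
# Sketch of f = w_w * f_w + w_m * f_m, then keeping the top-k closest contexts.

def closeness(f_w, f_m, w_w=0.6, w_m=0.4):
    """Weighted combination of the co-occurrence and semantic features."""
    return w_w * f_w + w_m * f_m

# Hypothetical (f_w, f_m) pairs for three groups of search contexts.
search_contexts = {"ctx_a": (0.9, 0.8), "ctx_b": (0.2, 0.9), "ctx_c": (0.1, 0.1)}

# Sort in descending order of closeness and keep a preset number (here 2).
ranked = sorted(search_contexts,
                key=lambda name: closeness(*search_contexts[name]),
                reverse=True)
top_k = ranked[:2]
print(top_k)  # ['ctx_a', 'ctx_b']
```

The threshold-based variant described above would simply replace the slice with a filter on `closeness(...) > threshold`.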
Alternatively, each group of contexts semantically similar to the target context may be screened from the searched groups of contexts only according to the co-occurrence features corresponding to each group of search contexts or only according to the semantic features corresponding to each group of search contexts.
Specifically, an optional implementation is to calculate the co-occurrence feature value f_w corresponding to each group of search contexts using the above formula (3). The co-occurrence feature values may then be sorted in descending order, and the groups of search contexts corresponding to the top preset number of values (or the values within a preset numerical range) selected as the groups of contexts semantically similar to the target context; alternatively, the values may be sorted in ascending order and the groups corresponding to the last preset number of values selected; or all groups of search contexts whose co-occurrence feature values exceed a preset threshold may be selected as the groups of contexts semantically similar to the target context.
Another optional implementation is to calculate the semantic feature value f_m corresponding to each group of search contexts using the above formula (6). The semantic feature values may then be sorted in descending order, and the groups of search contexts corresponding to the top preset number of values (or the values within a preset numerical range) selected as the groups of contexts semantically similar to the target context; alternatively, the values may be sorted in ascending order and the groups corresponding to the last preset number of values selected; or all groups of search contexts whose semantic feature values exceed a preset threshold may be selected as the groups of contexts semantically similar to the target context.
It should be noted that, by using the co-occurrence words in the search context and the target context, it is possible to more accurately screen out each group of contexts that are semantically similar to the target context from each group of contexts that are related to the target context and searched out from the dialog corpus, and use the screened group of contexts as each group of corpus contexts.
S103: acquiring the reply corpus corresponding to the question corpus in each group of corpus contexts.
In this embodiment, after obtaining each group of corpus contexts semantically similar to the target context through step S102, the reply corpus corresponding to the question corpus in each group of corpus contexts may be obtained from the pre-constructed dialog corpus.
S104: selecting at least one reply corpus as at least one to-be-selected reply to the target question.
In this embodiment, after the reply corpora corresponding to the question corpora in each corpus context are obtained in step S103, the semantic correlation degree between each reply corpus and the target context may be calculated, and then at least one reply corpus may be selected from the calculation results to serve as at least one to-be-selected reply to the target question.
It should be noted that, a specific implementation manner of "selecting at least one reply corpus" in the step S104 will be described in the second embodiment.
In summary, in the reply acquisition method provided in this embodiment, after the target context is acquired, each group of corpus contexts semantically similar to the target context is acquired; then, after the reply corpora corresponding to the question corpora in those corpus contexts are acquired, at least one reply corpus can be selected from them to serve as at least one to-be-selected reply to the target question. Because the to-be-selected replies are obtained based on corpus contexts semantically similar to the target context, they can address the key content of the target question, and the final reply to the target question can then be screened from them, thereby meeting the questioner's dialogue requirements and improving the reasonability of the obtained replies.
Second embodiment
The present embodiment will describe a specific implementation process of "selecting at least one reply corpus" in step S104 in the first embodiment.
It will be appreciated that the answer to a question should be semantically highly relevant to the question, and even to the context of the question, to ensure that the answer is a reasonable reply that addresses the key content of the question, rather than a high-frequency reply such as "I don't know" that is barely relevant to the question semantically.
Based on this, in this embodiment, an optional implementation manner is that the specific implementation process of "selecting at least one reply corpus" in step S104 may include: and selecting at least one reply corpus by analyzing the correlation between the target context and each reply corpus.
In this implementation manner, the existing or future semantic relevance calculation method may be used to calculate the relevance between the target context and each reply corpus, and then at least one reply corpus is selected from the calculated results, for example, the relevance between the target context and each reply corpus may be determined by using a pre-established relevance model or directly using a relevance calculation method, and then at least one reply corpus is selected according to the relevance determination result.
It should be noted that, in the following, how to determine the correlation between the target context and a certain reply corpus by using a pre-constructed correlation model will be described with reference to a certain reply corpus of all the reply corpuses obtained in step S103, and the processing manners of other reply corpuses are similar to the above, and are not described in detail.
Specifically, the pre-constructed correlation model of the present embodiment may be formed by a multi-layer network, as shown in fig. 2, and the model structure includes an input layer, an embedding layer, a representation layer, and a matching layer.
The input layer is used for inputting the reply linguistic data and the target context. Specifically, as shown in fig. 2, the reply corpus may be input to the position of the "true reply" on the left side of the input layer in fig. 2, while the "target context" is input to the position of the "context" in the middle of the input layer in fig. 2.
Specifically, as shown in fig. 2, in the Embedding layer, word vectors corresponding to each word in the reply corpus and the target context may be queried by querying a pre-trained word vector dictionary (Embedding Matrix), that is, word sequences of the reply corpus and the target context are converted into word vector sequences.
The presentation layer is used for coding word vector sequences corresponding to the reply corpus and the target context respectively output by the embedding layer so as to obtain coding vectors corresponding to the reply corpus and the target context respectively. For example, in the presentation layer, a Bag-of-words model (BOW for short), a Convolutional Neural Network (CNN for short), a Recurrent Neural Network (RNN for short), or the like may be used to encode the word vector sequences corresponding to the reply corpus and the target context output by the embedding layer, so as to obtain a corresponding encoding vector.
The matching layer is used for performing a matching calculation on the coding vectors corresponding to the reply corpus and the target context output by the presentation layer, and determining the correlation between the target context and the reply corpus according to the calculation result. Specifically, the cosine distance between the two coding vectors may be calculated as the matching result, where a larger value indicates a higher correlation between the target context and the reply corpus. Alternatively, a pre-trained multi-layer fully-connected network (Multi-Layer Perceptron, MLP for short) may be used to perform the matching calculation on the two coding vectors to obtain the matching result.
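As a minimal illustration of the representation and matching layers described above, the sketch below encodes each word-vector sequence with a bag-of-words average and matches the two encodings by cosine similarity. The vectors are invented toy data and the function names are hypothetical; this is not the patent's trained model:

```python
import math

def bow_encode(word_vectors):
    """Representation-layer sketch: average the word vectors (BOW encoding)."""
    dim = len(word_vectors[0])
    return [sum(v[d] for v in word_vectors) / len(word_vectors)
            for d in range(dim)]

def match_score(context_vectors, reply_vectors):
    """Matching-layer sketch: cosine between the two encoding vectors."""
    c, r = bow_encode(context_vectors), bow_encode(reply_vectors)
    dot = sum(x * y for x, y in zip(c, r))
    norm = (math.sqrt(sum(x * x for x in c))
            * math.sqrt(sum(x * x for x in r)))
    return dot / norm

# Toy word-vector sequences for a target context and a candidate reply.
context = [[1.0, 0.0], [1.0, 1.0]]
reply = [[1.0, 0.5]]
print(match_score(context, reply))
```

Swapping `bow_encode` for a CNN or RNN encoder, or the cosine for an MLP, yields the other variants the text mentions without changing the layer structure.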
Furthermore, after the pre-constructed correlation model is used to determine the matching results between the target context and each reply corpus, the matching results may be sorted in descending order, and the reply corpora corresponding to the top preset number of values (or the values within a preset numerical range) selected as the to-be-selected replies to the target question; alternatively, the results may be sorted in ascending order and the reply corpora corresponding to the last preset number of values selected; or all reply corpora whose matching result values exceed a preset threshold may be selected as the to-be-selected replies to the target question.
In this embodiment, an optional implementation manner is that the correlation model is obtained by training using model training data, where the model training data includes each sample context, and true reply and random reply of a sample question included in the sample context; wherein the sample context includes the sample question and a historical dialog context prior to the sample question.
Next, the present embodiment will describe a process of constructing a correlation model. The method specifically comprises the following steps C1-C3:
step C1: model training data is formed.
In this embodiment, in order to construct the relevance model, a large number of human conversation contexts (which may constitute the dialogue corpus) first need to be collected in advance; for example, real conversation data from social network platforms such as microblogs and forums can be collected. Then, the last question in each human conversation context is taken as a sample question, and the historical dialogue context including and preceding the sample question is defined as a sample context; each sample question further corresponds to real reply content and random reply content. These collected data serve as the model training data.
Step C2: and constructing a correlation model.
An initial correlation model may be constructed and model parameters initialized.
It should be noted that the execution sequence of step C1 and step C2 is not limited in this embodiment.
Step C3: and training the correlation model by using the pre-collected model training data.
In this embodiment, after the model training data is collected in step C1, the correlation model constructed in step C2 may be trained using the model training data, and the correlation model may be obtained by training through multiple rounds of model training until the training end condition is satisfied.
Specifically, in the current round of training, a sample context is selected from the model training data. The target context of the first embodiment is replaced with this sample context, the reply corpus obtained through step S103 is replaced with the real reply content corresponding to the sample context, and the correlation between the sample context and the real reply content is determined in the manner described in that embodiment; the specific flow is shown in the left and middle columns of fig. 2.
Meanwhile, the reply corpus obtained through step S103 may be replaced with the random reply content corresponding to the sample context, and the correlation between the sample context and the random reply content determined in the same manner; the specific flow is shown in the middle and right columns of fig. 2.
Then, based on the correlation between the sample context and the real reply content and the correlation between the sample context and the random reply content, the model parameters are updated by comparing the difference between these two correlations, which completes the current round of training of the correlation model.
In the current round of training, an optional implementation is to train the correlation model with an objective function; for example, change loss or binary cross entropy (binary_crossentropy) may be used as the objective function to widen the gap between the correlation of the sample context with the true reply and its correlation with the random reply, so that the correlation model gains the ability to distinguish reasonable replies from unreasonable ones.
Moreover, when the correlation model is trained with an objective function such as change loss or binary_crossentropy, the model parameters may be updated continuously according to the change in the objective function's value, for example using the back-propagation algorithm, until the value of the objective function meets the requirement (e.g., approaches 0 or changes only slightly); updating then stops, and the training of the correlation model is complete.
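One common way to realize the gap-widening objective described above is a hinge-style margin loss; the sketch below is an illustrative assumption (the margin value, score values, and function name are invented, and the patent does not specify this exact form):

```python
# Margin-based objective sketch: the loss is zero once the true reply
# outscores the random reply by at least `margin`, and positive otherwise,
# which pushes the model to widen the gap between the two correlations.

def margin_loss(score_true, score_random, margin=0.5):
    """Hinge-style loss over the two correlation scores."""
    return max(0.0, margin - (score_true - score_random))

# True reply clearly outscores the random one: no gradient signal needed.
print(margin_loss(0.9, 0.1))   # 0.0
# Gap too small: a positive loss drives the parameters to widen it.
print(margin_loss(0.4, 0.3))
```

With binary cross entropy instead, the (context, true reply) pair would be labeled 1 and the (context, random reply) pair labeled 0, and the same gap-widening effect is obtained through classification.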
In summary, this embodiment uses a pre-constructed relevance model to determine the correlation between the target context and each reply corpus, and then selects at least one reply corpus as a to-be-selected reply to the target question according to the correlation results. This ensures that the obtained to-be-selected replies are semantically related to the target context in content and address the key content of the target question, rather than being meaningless or high-frequency replies only marginally related to the target context semantically, so the questioner's dialogue requirements can be met.
Third embodiment
For ease of understanding, this embodiment introduces the overall implementation process of the reply acquisition method provided in the embodiments of the present application with reference to the schematic structural diagram shown in fig. 3.
As shown in fig. 3, the structure includes a reply obtaining module, which is used to obtain at least one candidate reply of the target question.
Specifically, the overall implementation process of the embodiments of the present application is as follows. First, a target context is obtained, comprising a target question input by a questioner by voice or text and the historical dialogue before that question. Then, according to the acquired target context, the reply acquisition module screens out from a pre-constructed dialogue corpus each group of contexts semantically similar to the target context to serve as the corpus contexts, and acquires the reply corpora corresponding to the question corpora in those corpus contexts. Next, the reply acquisition module performs a correlation calculation between the acquired reply corpora and the target context (the target question plus the historical dialogue before it), and selects at least one reply corpus as a to-be-selected reply to the target question according to the correlation results. This ensures that the obtained to-be-selected replies are semantically related to the target context in content and can reasonably address the key content of the target question, rather than being meaningless high-frequency replies only marginally related to the target context, thereby meeting the questioner's dialogue requirements. Finally, the acquired to-be-selected replies can be output by voice and/or text, so that the final reply to the target question can be selected from them. It should be noted that, for the specific reply acquisition process, reference is made to the detailed descriptions of steps S101 to S104 in the first and second embodiments.
Fourth embodiment
In this embodiment, a reply acquiring apparatus will be described, and for related contents, please refer to the above method embodiment.
Referring to fig. 4, a schematic composition diagram of a reply obtaining apparatus provided in this embodiment is shown, where the apparatus 400 includes:
a target context acquiring unit 401, configured to acquire a target context, where the target context includes a target question posed by a questioner and a historical dialogue context before the target question;
a corpus context acquiring unit 402, configured to acquire each group of corpus contexts that are semantically similar to the target context, where the corpus contexts include question corpora and the historical dialogue contexts before the question corpora;
a reply corpus acquiring unit 403, configured to acquire a reply corpus corresponding to the question corpus in each group of corpus contexts;
a reply corpus selecting unit 404, configured to select at least one reply corpus as at least one candidate reply of the target question.
In an implementation manner of this embodiment, the corpus context acquiring unit 402 includes:
the context searching subunit is used for searching each group of contexts related to the target context from a pre-constructed dialogue corpus;
and the context screening subunit is used for screening out each group of contexts which are similar to the target context in semantics from the searched each group of contexts to serve as each group of corpus contexts.
In an implementation manner of this embodiment, the context filtering subunit includes:
a search context defining subunit, configured to define each searched group of contexts as a search context;
a context feature generation subunit, configured to generate a context feature corresponding to the search context;
wherein the context features comprise co-occurrence features characterizing importance of co-occurrence words in the search context and the target context and/or semantic features characterizing semantic similarity of the search context and the target context;
and each group of context screening subunit is used for screening each group of contexts similar to the target context in semantics from each searched group of contexts according to the context characteristics corresponding to each searched group of contexts.
In an implementation manner of this embodiment, the reply corpus selecting unit 404 is specifically configured to:
and selecting at least one reply corpus by analyzing the correlation between the target context and each reply corpus.
In an implementation manner of this embodiment, the reply corpus selecting unit 404 is specifically configured to:
and selecting at least one reply corpus by utilizing a pre-constructed correlation model.
In an implementation manner of this embodiment, the correlation model is obtained by training using model training data, where the model training data includes each sample context, and true reply and random reply of a sample question included in the sample context; wherein the sample context includes the sample question and a historical dialog context prior to the sample question.
Further, an embodiment of the present application further provides a reply acquiring apparatus, including: a processor, a memory, and a system bus;
the processor and the memory are connected through the system bus;
the memory is used for storing one or more programs, and the one or more programs comprise instructions which, when executed by the processor, cause the processor to execute any one of the implementation methods of the reply retrieval method.
Further, an embodiment of the present application further provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are run on a terminal device, the terminal device is caused to execute any implementation method of the above reply acquisition method.
Further, an embodiment of the present application further provides a computer program product, which when running on a terminal device, causes the terminal device to execute any implementation method of the above reply acquisition method.
As can be seen from the above description of the embodiments, those skilled in the art can clearly understand that all or part of the steps in the above embodiment methods can be implemented by software plus a necessary general hardware platform. Based on such understanding, the technical solution of the present application may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network communication device such as a media gateway, etc.) to execute the method according to the embodiments or some parts of the embodiments of the present application.
It should be noted that, in the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (13)

1. A reply acquisition method, comprising:
acquiring a target context, wherein the target context comprises a target question asked by a questioner and a historical dialogue previous to the target question;
acquiring each group of corpus contexts semantically similar to the target context, wherein the corpus contexts comprise a question corpus and a historical dialogue context before the question corpus;
acquiring reply corpora corresponding to the question corpora in each group of corpus context;
and selecting at least one reply corpus as at least one to-be-selected reply of the target question.
2. The method according to claim 1, wherein said obtaining sets of corpus contexts semantically similar to the target context comprises:
searching each group of contexts related to the target context from a pre-constructed dialog corpus;
and screening out each group of contexts similar to the target context in semanteme from the searched each group of contexts to be used as each group of corpus contexts.
3. The method of claim 2, wherein the filtering out sets of contexts from the searched sets of contexts that are semantically similar to the target context comprises:
defining each searched group of contexts as search contexts;
generating context characteristics corresponding to the search context;
wherein the context features comprise co-occurrence features characterizing importance of co-occurrence words in the search context and the target context and/or semantic features characterizing semantic similarity of the search context and the target context;
and screening out each group of contexts similar to the target context in semanteme from each searched group of contexts according to the context characteristics corresponding to each searched group of contexts.
4. The method according to any one of claims 1 to 3, wherein the selecting at least one reply corpus comprises:
and selecting at least one reply corpus by analyzing the correlation between the target context and each reply corpus.
5. The method according to claim 4, wherein said selecting at least one reply corpus comprises:
and selecting at least one reply corpus by utilizing a pre-constructed correlation model.
6. The method of claim 5, wherein the correlation model is trained using model training data, the model training data comprising sample contexts, true replies and random replies to sample questions comprised by the sample contexts; wherein the sample context includes the sample question and a historical dialog context prior to the sample question.
7. A reply retrieval apparatus, comprising:
the target context acquiring unit is used for acquiring a target context, wherein the target context comprises a target question asked by a questioner and a historical dialogue previous to the target question;
a corpus context acquiring unit, configured to acquire each group of corpus contexts that are semantically similar to the target context, where the corpus contexts include question corpora and the historical dialogue contexts before the question corpora;
the reply corpus acquiring unit is used for acquiring reply corpuses corresponding to the question corpuses in each group of corpus contexts;
and the reply corpus selecting unit is used for selecting at least one reply corpus as at least one to-be-selected reply of the target question.
8. The apparatus according to claim 7, wherein said corpus context acquiring unit comprises:
the context searching subunit is used for searching each group of contexts related to the target context from a pre-constructed dialogue corpus;
and the context screening subunit is used for screening out each group of contexts which are similar to the target context in semantics from the searched each group of contexts to serve as each group of corpus contexts.
9. The apparatus of claim 8, wherein the context screening subunit comprises:
a search context defining subunit, configured to define each searched group of contexts as a search context;
a context feature generation subunit, configured to generate context features corresponding to the search context;
wherein the context features comprise co-occurrence features characterizing the importance of words co-occurring in the search context and the target context, and/or semantic features characterizing the semantic similarity between the search context and the target context; and
a group screening subunit, configured to screen out, from the searched groups of contexts, the groups of contexts that are semantically similar to the target context according to the context features corresponding to each searched group of contexts.
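Claim 9 names two feature families without fixing their form. One plausible reading is sketched below: the co-occurrence feature as a summed IDF weight over shared words (a word-importance proxy), and the semantic feature as bag-of-words cosine similarity. Both choices, and the `doc_freq`/`num_docs` corpus statistics, are assumptions; a production system would likely use learned embeddings for the semantic feature.

```python
import math
from collections import Counter

def context_features(search_ctx, target_ctx, doc_freq, num_docs):
    """Sketch of the two feature families named in claim 9:
    - co-occurrence feature: summed IDF weight of words shared by the
      search context and the target context,
    - semantic feature: cosine similarity of their bag-of-words vectors.

    doc_freq maps word -> document frequency; num_docs is the corpus size.
    """
    shared = set(search_ctx) & set(target_ctx)
    cooccurrence = sum(math.log(num_docs / (1 + doc_freq.get(w, 0)))
                       for w in shared)
    a, b = Counter(search_ctx), Counter(target_ctx)
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    semantic = dot / norm if norm else 0.0
    return {"cooccurrence": cooccurrence, "semantic": semantic}
```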
10. The apparatus according to any one of claims 7 to 9, wherein the reply corpus selecting unit is specifically configured to:
select the at least one reply corpus by analyzing the correlation between the target context and each reply corpus.
11. A reply retrieval device, comprising: a processor, a memory, and a system bus;
wherein the processor and the memory are connected through the system bus; and
the memory is configured to store one or more programs, the one or more programs comprising instructions which, when executed by the processor, cause the processor to perform the method of any one of claims 1 to 6.
12. A computer-readable storage medium having stored therein instructions that, when executed on a terminal device, cause the terminal device to perform the method of any one of claims 1-6.
13. A computer program product, characterized in that the computer program product, when run on a terminal device, causes the terminal device to perform the method of any one of claims 1 to 6.
CN201910351022.5A 2019-04-28 2019-04-28 A kind of reply acquisition methods and device Pending CN110069613A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910351022.5A CN110069613A (en) 2019-04-28 2019-04-28 A kind of reply acquisition methods and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910351022.5A CN110069613A (en) 2019-04-28 2019-04-28 A kind of reply acquisition methods and device

Publications (1)

Publication Number Publication Date
CN110069613A true CN110069613A (en) 2019-07-30

Family

ID=67369389

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910351022.5A Pending CN110069613A (en) 2019-04-28 2019-04-28 A kind of reply acquisition methods and device

Country Status (1)

Country Link
CN (1) CN110069613A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110674276A (en) * 2019-09-23 2020-01-10 深圳前海微众银行股份有限公司 Robot self-learning method, robot terminal, device and readable storage medium
CN111914565A (en) * 2020-07-15 2020-11-10 海信视像科技股份有限公司 Electronic equipment and user statement processing method
CN112445906A (en) * 2019-08-28 2021-03-05 北京搜狗科技发展有限公司 Method and device for generating reply message

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101566998A (en) * 2009-05-26 2009-10-28 华中师范大学 Chinese question-answering system based on neural network
US20160306852A1 (en) * 2015-03-11 2016-10-20 International Business Machines Corporation Answering natural language table queries through semantic table representation
CN107305578A (en) * 2016-04-25 2017-10-31 北京京东尚科信息技术有限公司 Human-machine intelligence's answering method and device
CN108170749A (en) * 2017-12-21 2018-06-15 北京百度网讯科技有限公司 Dialogue method, device and computer-readable medium based on artificial intelligence
CN109033318A (en) * 2018-07-18 2018-12-18 北京市农林科学院 Intelligent answer method and device


Similar Documents

Publication Publication Date Title
CN108829822B (en) Media content recommendation method and device, storage medium and electronic device
CN109284357B (en) Man-machine conversation method, device, electronic equipment and computer readable medium
Bala et al. Chat-bot for college management system using AI
CN110175227B (en) Dialogue auxiliary system based on team learning and hierarchical reasoning
KR102288249B1 (en) Information processing method, terminal, and computer storage medium
CN110427461B (en) Intelligent question and answer information processing method, electronic equipment and computer readable storage medium
CN111046132A (en) Customer service question and answer processing method and system for retrieving multiple rounds of conversations
CN110008327B (en) Legal answer generation method and device
CN111291549B (en) Text processing method and device, storage medium and electronic equipment
CN110069612B (en) Reply generation method and device
CN112214593A (en) Question and answer processing method and device, electronic equipment and storage medium
CN108875074A (en) Based on answer selection method, device and the electronic equipment for intersecting attention neural network
CN112417127B (en) Dialogue model training and dialogue generation methods, devices, equipment and media
CN110597968A (en) Reply selection method and device
CN111309887B (en) Method and system for training text key content extraction model
CN111259130B (en) Method and apparatus for providing reply sentence in dialog
CN111694941B (en) Reply information determining method and device, storage medium and electronic equipment
CN110069613A (en) A kind of reply acquisition methods and device
CN111898369A (en) Article title generation method, model training method and device and electronic equipment
CN113342958A (en) Question-answer matching method, text matching model training method and related equipment
CN109472030A (en) A kind of system replys the evaluation method and device of quality
CN113553412A (en) Question and answer processing method and device, electronic equipment and storage medium
CN112182145A (en) Text similarity determination method, device, equipment and storage medium
CN111858854A (en) Question-answer matching method based on historical dialogue information and related device
CN107506426A (en) A kind of implementation method of intelligent television automated intelligent response robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190730