CN110895558A - Dialog reply method and related device - Google Patents

Dialog reply method and related device

Info

Publication number
CN110895558A
Authority
CN
China
Prior art keywords
expression
target
text
keyword
reply
Prior art date
Legal status
Granted
Application number
CN201810968436.8A
Other languages
Chinese (zh)
Other versions
CN110895558B (en)
Inventor
贺宇
王福强
王东宇
周泽南
姚嘉
马龙
苏雪峰
郑砚琼
黄晓烽
Current Assignee
Beijing Sogou Technology Development Co Ltd
Original Assignee
Beijing Sogou Technology Development Co Ltd
Sogou Hangzhou Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sogou Technology Development Co Ltd, Sogou Hangzhou Intelligent Technology Co Ltd filed Critical Beijing Sogou Technology Development Co Ltd
Priority to CN201810968436.8A priority Critical patent/CN110895558B/en
Publication of CN110895558A publication Critical patent/CN110895558A/en
Application granted granted Critical
Publication of CN110895558B publication Critical patent/CN110895558B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The application discloses a dialog reply method and a related device. In the method, after a target text is formed based on a user's input and a reply text is obtained based on the target text, a keyword of the reply text is determined with expressions taken as at least one reference factor, a target expression that conveys the meaning of the keyword is obtained by searching with the keyword, and the target text is replied to with the target expression. When the user converses, the target expression corresponding to the reply text of the target text is obtained, and this new way of replying with a target expression makes interactive chat more intelligent and interesting and makes emotional expression more vivid; it avoids the expressive limitations of a plain reply text, is more readily accepted emotionally by the user, and draws the user into multiple rounds of interactive chat, thereby further improving the user's chat experience.

Description

Dialog reply method and related device
Technical Field
The present application relates to the field of computer communications technologies, and in particular, to a method for dialog reply and a related device.
Background
With the rapid development of artificial intelligence technology, chat robots that simulate human conversation or chat are becoming increasingly popular, and can be applied to website platforms, instant messaging applications, chat rooms, social robots, and the like.
Most traditional chat robots converse with users purely through text: the chat robot first forms a target text based on the user's input, then obtains a reply text based on the target text by retrieval or generation, and replies to the user's input with that text.
However, "chat expressions" have become an indispensable element of present-day conversational chat. Various chat expressions not only make a conversation more interesting, but can also convey meaning that plain text cannot express. In the prior art, therefore, replying with a text-form reply alone is comparatively single and fixed in form, dull in content, and limited in expressive power, which results in a poor chat effect.
Disclosure of Invention
The technical problem to be solved by the present application is to provide a dialog reply method and a related device, so that interactive chat becomes more intelligent and interesting and emotional expression becomes more vivid; the expressive limitations of a plain reply text are avoided, the reply is more readily accepted emotionally by the user, and the user is drawn into multiple rounds of interactive chat, which improves the user's chat experience.
In a first aspect, an embodiment of the present application provides a method for dialog reply, where the method includes:
responding to the input of a user to form a target text, and obtaining a corresponding reply text according to the target text;
determining keywords of the reply text at least by taking the expression as a reference factor;
searching a target expression expressing the meaning of the keyword according to the keyword;
and displaying the target expression to the user.
Optionally, the determining the keyword of the reply text by using at least the expression as a reference factor includes:
determining an expression weight of each word in the reply text by multiplying the word's frequency in an expression search log by its inverse document frequency in a conventional search log;
determining words of which the expression weights are larger than a preset value in the reply text;
and determining a word as a keyword from the words with the expression weights larger than a preset value according to a preset rule.
Optionally, determining a word from the words with expression weights larger than a preset value according to a preset rule as a keyword specifically includes:
determining the word with the largest expression weight, among the words whose expression weights are larger than a preset value, as the keyword; or, alternatively,
and randomly determining a word as a keyword from the words with the expression weight larger than a preset value.
Optionally, the determining the keyword of the reply text by using at least the expression as a reference factor includes:
determining words expressing the key content of the reply text from the reply text by using a keyword extraction algorithm;
determining expression scores of the words expressing the reply text key contents according to the characteristics of the words expressing the reply text key contents;
and determining the word with the highest expression score of the words expressing the key content of the reply text as the keyword.
Optionally, the searching for the target expression expressing the meaning of the keyword according to the keyword includes:
searching for an expression expressing the meaning of the keyword;
and selecting one expression from the expressions expressing the meanings of the keywords as a target expression according to a preset rule.
Optionally, the selecting one expression from the expressions expressing the keyword meanings as a target expression according to a preset rule includes:
selecting the first N expressions according to the preset sequence of the expressions expressing the keyword meanings, wherein N is a positive integer and is smaller than the number of the expressions expressing the keyword meanings;
and selecting one expression from the first N expressions as a target expression.
Optionally, selecting one expression from the expressions expressing the keyword meanings as a target expression according to a preset rule, specifically:
and randomly selecting one expression from the expressions expressing the meanings of the keywords as a target expression.
In a second aspect, an embodiment of the present application provides an apparatus for dialog reply, where the apparatus includes:
the obtaining unit is used for responding to the input of a user to form a target text and obtaining a corresponding reply text according to the target text;
the determining unit is used for determining the keywords of the reply text at least by taking the expression as a reference factor;
the search unit is used for searching a target expression expressing the meaning of the keyword according to the keyword;
and the display unit is used for displaying the target expression to the user.
Optionally, the determining unit includes:
the first determining subunit is used for determining an expression weight of each word in the reply text by multiplying the word's frequency in an expression search log by its inverse document frequency in a conventional search log;
the second determining subunit is used for determining the words of which the expression weights are greater than a preset value in the reply text;
and the third determining subunit is used for determining a word as a keyword from the words with the expression weights larger than the preset value according to a preset rule.
Optionally, the third determining subunit is specifically configured to:
determining the word with the largest expression weight, among the words whose expression weights are larger than a preset value, as the keyword; or, alternatively,
and randomly determining a word as a keyword from the words with the expression weight larger than a preset value.
Optionally, the determining unit includes:
a fourth determining subunit, configured to determine, from the reply text, a word expressing a key content of the reply text by using a keyword extraction algorithm;
a fifth determining subunit, configured to determine, according to characteristics of the words expressing the reply text key content, expression scores of the words expressing the reply text key content;
and a sixth determining subunit, configured to determine, as the keyword, the word with the highest expression score of the words expressing the key content of the reply text.
Optionally, the search unit includes:
a search subunit, configured to search for an expression expressing the meaning of the keyword;
and the selecting subunit is used for selecting one expression from the expressions expressing the keyword meanings as a target expression according to a preset rule.
Optionally, the selecting subunit includes:
the first selection module is used for selecting the first N expressions according to the preset sequence of the expressions expressing the keyword meanings, wherein N is a positive integer and is smaller than the number of the expressions expressing the keyword meanings;
and the second selection module is used for selecting one expression from the first N expressions as a target expression.
Optionally, the selecting subunit is specifically configured to:
and randomly selecting one expression from the expressions expressing the meanings of the keywords as a target expression.
In a third aspect, an embodiment of the present application provides an apparatus for dialog reply, the apparatus comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and are configured to be executed by one or more processors, the one or more programs including instructions for:
responding to the input of a user to form a target text, and obtaining a corresponding reply text according to the target text;
determining keywords of the reply text at least by taking the expression as a reference factor;
searching a target expression expressing the meaning of the keyword according to the keyword;
and displaying the target expression to the user.
In a fourth aspect, embodiments of the present application provide a machine-readable medium having stored thereon instructions, which, when executed by one or more processors, cause an apparatus to perform a method of dialog reply as described in one or more of the above first aspects.
Compared with the prior art, the method has the advantages that:
By adopting the technical solution of the embodiments of the present application, after the target text is formed based on the user's input and the reply text is obtained based on the target text, the keyword of the reply text is determined with expressions taken as at least one reference factor, the target expression expressing the meaning of the keyword is obtained by searching with the keyword, and the target expression is displayed to the user as the reply. When the user converses, the target expression corresponding to the reply text of the target text is obtained, and this new way of replying with a target expression makes interactive chat more intelligent and interesting and makes emotional expression more vivid; it avoids the expressive limitations of a plain reply text, is more readily accepted emotionally by the user, and draws the user into multiple rounds of interactive chat, thereby further improving the user's chat experience.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments described in the present application, and that other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a schematic diagram of a prior art dialog interface provided by an embodiment of the present application;
fig. 2 is a flowchart illustrating a method for replying to a dialog according to an embodiment of the present disclosure;
fig. 3 is a flowchart illustrating another dialog reply method according to an embodiment of the present application;
FIG. 4 is a schematic illustration of a dialog interface provided by an embodiment of the present application;
fig. 5 is a schematic structural diagram of a dialog reply device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an apparatus for dialog reply according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without creative effort shall fall within the protection scope of the present application.
At present, a chat robot generally conducts a conversational chat with a user using text-form content. Specifically, in response to an input from the user, such as a voice input, a text input, or an image input, the chat robot first forms a target text based on the user's input, then obtains a reply text based on the target text by corpus retrieval and/or automatic text generation, and finally displays the reply text to the user to carry out the conversational chat. For example, as shown in fig. 1, which is a schematic diagram of a prior-art conversation interface, in response to a voice input from the user, the chat robot forms a target text using speech recognition technology; assuming the target text is "Do I look beautiful?", it performs a corpus search to obtain the reply text "Very beautiful!" and displays that reply to the user.
However, chat expressions are often used during chatting to make conversational chat more interesting and to convey meaning that plain text cannot express. A chat robot that replies only with a reply text in the above manner is not only single and fixed in form and dull in content, but also limited in textual expressiveness, so the chat effect is poor.
To solve the above problem, in the embodiments of the present application, a target text is formed from the user's input and a reply text is obtained from the target text; then a keyword of the reply text is determined with expressions taken as at least one reference factor, and a chat expression expressing the meaning of the keyword is obtained by searching with the keyword. Finally, this target expression is displayed to the user as the reply. The emotional expression of the target expression is more vivid and can convey meaning that the reply text cannot, so interactive chat becomes more intelligent and interesting. The user is then more willing to engage in multiple rounds of interactive chat, and the chat experience is good.
For example, the scenario of the embodiments of the present application may be one in which a user converses with a chat robot, in which case the actions described in the embodiments are executed by the chat robot. The scenario may also be one in which, while user A converses with user B, user B has set an automatic reply and a background processor automatically converses with user A; in that case the actions described in the embodiments are executed by the background processor. The present application does not limit the executing entity, as long as the actions disclosed in the embodiments of the present application are executed.
It can be understood that the above two scenarios are only scenario examples provided in the embodiments of the present application, and the embodiments of the present application are not limited to the above two scenarios.
The following describes in detail a specific implementation manner of the dialog reply method and the related apparatus in the embodiment of the present application by using embodiments with reference to the drawings.
Exemplary method
Referring to fig. 2, a flow diagram of a method for dialog reply in an embodiment of the present application is shown. In this embodiment, the method may include, for example, the following steps:
step 201: and responding to the input of the user to form a target text, and obtaining a corresponding reply text according to the target text.
The user's input can take various forms: voice input, text input, or image input. The way the target text is formed varies with the type of input. For voice input, what is actually entered is voice data, which needs to be converted into a target text in text form using speech recognition technology; for text input, what is entered is already text data, so the target text can be formed directly; for image input, what is entered is image data, which needs to be converted into a target text in text form using Optical Character Recognition (OCR) technology and/or image recognition technology.
As an example, when the user makes the voice input "Do I look beautiful?", the chat robot recognizes the speech using speech recognition technology and converts it into the target text "Do I look beautiful?"; as another example, when the user types "Are you hungry?", the chat robot can directly form the target text "Are you hungry?"; as another example, when the user inputs an image carrying the characters "good morning", the chat robot recognizes the image using OCR technology and converts its characters into the target text "good morning".
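The dispatch over input types can be illustrated with a minimal Python sketch; recognize_speech and ocr_image are hypothetical stand-ins for whatever speech-recognition and OCR back ends an implementation would actually use, not components specified by this application.

```python
# Minimal sketch of forming the target text from different input modalities.
# recognize_speech() and ocr_image() are hypothetical placeholders for real
# ASR and OCR services; they are not defined by this disclosure.

def recognize_speech(voice_data: bytes) -> str:
    raise NotImplementedError  # assume an external speech-recognition service

def ocr_image(image_data: bytes) -> str:
    raise NotImplementedError  # assume an external OCR / image-recognition service

def form_target_text(user_input, input_type: str) -> str:
    if input_type == "voice":        # voice data -> speech recognition
        return recognize_speech(user_input)
    if input_type == "image":        # image data -> OCR / image recognition
        return ocr_image(user_input)
    return str(user_input)           # text input is used as the target text directly
```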
In the embodiments of the present application, the corresponding reply text can be obtained from the target text in several ways. In one optional implementation, the target text is used as retrieval data, a corpus text with sufficiently high similarity is retrieved from the corpus, and the reply corpus text paired with it is used as the reply text. In another optional implementation, the reply text corresponding to the target text is generated automatically using deep learning techniques. In yet another optional implementation, the two approaches are combined to obtain the reply text corresponding to the target text; for example, if no corpus text with sufficiently high similarity can be retrieved when the target text is used as retrieval data, the reply text corresponding to the target text can instead be generated automatically using deep learning techniques.
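One possible realization of the retrieval-first, generation-as-fallback strategy is sketched below; the small in-memory corpus, the similarity measure, and generate_reply are illustrative assumptions rather than components named by the application.

```python
# Sketch of obtaining a reply text: try corpus retrieval first and fall back to
# automatic generation when no sufficiently similar corpus entry is found.
# CORPUS, similarity() and generate_reply() are illustrative assumptions.

from difflib import SequenceMatcher

CORPUS = [
    ("I won the prize!", "Haha, you are too good! So impressive!"),
    ("Do I look beautiful?", "Very beautiful!"),
]

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def generate_reply(target_text: str) -> str:
    # Placeholder for a deep-learning (e.g. sequence-to-sequence) generator.
    return "I see. Tell me more about: " + target_text

def get_reply_text(target_text: str, threshold: float = 0.6) -> str:
    best_score, best_reply = max(
        (similarity(target_text, question), reply) for question, reply in CORPUS
    )
    return best_reply if best_score >= threshold else generate_reply(target_text)
```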
Step 202: and determining the keywords of the reply text at least by taking the expression as a reference factor.
It should be noted that, in the embodiment of the present application, after the reply text is obtained in step 201, with the expression as a reference factor, the keyword of the reply text may be determined in at least the following two ways:
in the first optional step 202, words with larger expression weights in the reply text are determined, and then a word is determined as a keyword from the words with larger expression weights. Specifically, for each word in the reply text, their expression weights may be calculated, for example, by multiplying the frequency of each word in the expression search log by the inverse file frequency of the regular search log. Specifically, the more times a certain word appears in the expression search log, the greater the frequency thereof, the less times it appears in the conventional search log, the greater the inverse file frequency thereof, and finally, the greater the expression weight of a certain word indicates the greater the probability that the word is used for expression. The words in the reply text with higher probability of being used for expression can be determined, that is, the words in the reply text with expression weights larger than a preset value can be directly determined based on the expression weights of the words in the reply text, the probabilities of the words being used for expression are all higher, and any one of the words can be determined as a keyword. Thus, in some implementations of this embodiment, the step 202 may include, for example, the steps of:
step A: and multiplying the frequency of each word in the reply text in the expression search log and the inverse file frequency of the conventional search log to determine the expression weight of each word in the reply text.
And B: and determining the words with the expression weights larger than a preset value in the reply text.
And C: and determining a word as a keyword from the words with the expression weights larger than a preset value according to a preset rule.
And C, determining a word from the words with expression weights larger than a preset value as a keyword according to a preset rule, wherein the following two modes can be adopted:
in the first mode, although the probabilities that the words with the expression weights larger than the preset value are used for the expressions are all larger, the words with the largest expression weights are considered to have the largest probability of being used for the expressions, and the expressions are taken as reference factors, so that the reply text can be represented most accurately, and then the words can be determined as keywords. Therefore, in some embodiments of this embodiment, the step C may specifically be, for example: and determining the words with the maximum expression weight in the words with the expression weight larger than the preset value as the keywords.
In the second mode, because every word whose expression weight exceeds the preset value is relatively likely to be used for an expression, any of them, with expressions taken as the reference factor, can represent the reply text fairly accurately; in consideration of diversity and randomness, one word can be determined at random as the keyword. Therefore, in some implementations of this embodiment, step C may specifically be, for example: randomly determining one word, from the words whose expression weights are larger than the preset value, as the keyword.
As an example, assume the reply text is "Haha, you are too good! So impressive!". First, for each word in the reply text, the expression weight is obtained by multiplying the word's frequency in the expression search log by its inverse document frequency in the conventional search log; the words in the reply text whose expression weights are larger than the preset value are then determined to be the word "too good" and the word "impressive". Finally, since the expression weight of "too good" is larger than that of "impressive", the word "too good" with the largest expression weight can be determined as the keyword; alternatively, one of the words "too good" and "impressive" can be selected at random as the keyword.
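The expression-weight computation of steps A to C can be sketched in Python as follows. The two log-statistics tables, the total document count, and the threshold value are illustrative assumptions standing in for statistics that would be mined from a real expression search log and a real conventional search log.

```python
import math
import random

# Illustrative log statistics (assumptions): how often each word occurs in the
# expression search log, and in how many conventional-search records it occurs,
# out of TOTAL_REGULAR_DOCS conventional-search records in total.
EXPRESSION_LOG_FREQ = {"haha": 40, "too good": 95, "impressive": 60}
REGULAR_LOG_DOC_FREQ = {"haha": 900, "too good": 50, "impressive": 120, "you": 9000}
TOTAL_REGULAR_DOCS = 10_000

def expression_weight(word: str) -> float:
    # Step A: frequency in the expression search log multiplied by the inverse
    # document frequency in the conventional search log.
    tf = EXPRESSION_LOG_FREQ.get(word, 0)
    idf = math.log(TOTAL_REGULAR_DOCS / (1 + REGULAR_LOG_DOC_FREQ.get(word, 0)))
    return tf * idf

def pick_keyword(words, threshold: float = 100.0, randomize: bool = False) -> str:
    # Step B: keep the words whose expression weight exceeds the preset value.
    candidates = [w for w in words if expression_weight(w) > threshold]
    if not candidates:
        return ""
    # Step C: either the word with the largest weight, or a random candidate.
    return random.choice(candidates) if randomize else max(candidates, key=expression_weight)

# e.g. pick_keyword(["haha", "you", "too good", "impressive"]) -> "too good"
```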
In the second optional implementation of step 202, words expressing the key content of the reply text are determined, then the expression scores of those words are determined based on their respective characteristics, and the word with the highest expression score is determined as the keyword. Specifically, a common keyword extraction algorithm can be applied directly to determine the words in the reply text that express its key content; since the final goal is to determine the keyword of the reply text with expressions as a reference factor, an expression score can then be determined from the characteristics of each such word, such as its part of speech, and the word with the highest expression score is selected and determined as the keyword. Thus, in some implementations of this embodiment, step 202 may include, for example, the following steps:
step D: determining words expressing the key content of the reply text from the reply text by using a keyword extraction algorithm.
It should be noted that common keyword extraction algorithms, such as the term frequency-inverse document frequency algorithm and the TextRank algorithm, can extract keywords from a text. The term frequency (TF)-inverse document frequency (IDF) algorithm evaluates how important a term is to a document within a document set or corpus and uses that evaluation to generate keywords; the TextRank algorithm, which is based on the PageRank algorithm, generates keywords for a text. Accordingly, in the above implementation of the second optional step 202, the keyword extraction algorithm includes a term frequency-inverse document frequency algorithm and/or a TextRank algorithm.
Step E: and determining the expression scores of the words expressing the reply text key contents according to the characteristics of the words expressing the reply text key contents.
Step F: and determining the word with the highest expression score of the words expressing the key content of the reply text as the keyword.
As an example, assume the reply text is "No way, I'm already too fat!". First, based on the TextRank algorithm, the words in the reply text that express its key content are determined to be the word "no way" and the word "too fat". Then, based on the part of speech of "no way" and the part of speech of "too fat", their expression scores are determined, with "too fat" scoring higher than "no way". Finally, the word "too fat", which has the highest expression score, can be determined as the keyword.
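A minimal sketch of this second approach is shown below. The TextRank/TF-IDF extraction is reduced to a trivial frequency ranking, and the part-of-speech table and its expression scores are illustrative assumptions, not values specified by the application.

```python
from collections import Counter

# Illustrative part-of-speech tags and per-tag expression scores (assumptions):
# adjectives and interjections tend to map well onto chat expressions.
POS_TAGS = {"no way": "interjection", "too fat": "adjective", "already": "adverb"}
POS_EXPRESSION_SCORE = {"interjection": 0.6, "adjective": 0.9, "adverb": 0.2}

def extract_key_words(tokens, top_k: int = 3):
    # Stand-in for a TF-IDF or TextRank extractor: keep the most frequent tokens.
    return [word for word, _ in Counter(tokens).most_common(top_k)]

def expression_score(word: str) -> float:
    # Step E: score each candidate by features of the word, here only its POS.
    return POS_EXPRESSION_SCORE.get(POS_TAGS.get(word, ""), 0.0)

def pick_keyword_by_score(tokens) -> str:
    candidates = extract_key_words(tokens)          # step D
    return max(candidates, key=expression_score)    # step F

# e.g. pick_keyword_by_score(["no way", "already", "too fat"]) -> "too fat"
```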
Step 203: and searching a target expression expressing the meaning of the keyword according to the keyword.
It can be understood that after the keyword is determined in step 202, a target expression capable of expressing the meaning of the keyword needs to be found to replace the prior-art reply text in the dialog reply; this avoids the expressive limitations of the reply text, makes interactive chat more intelligent and interesting, and makes emotional expression more vivid. Of course, the target expression expressing the meaning of the keyword can also be generated automatically from the keyword. The target expression may be static, such as an expression picture, or dynamic, such as an expression motion picture or an expression video.
It should be noted that, when the keyword is used as search data, a plurality of expressions can be generally obtained through searching, and since the expressions can all express the meaning of the keyword, any one of the expressions can be arbitrarily selected as a target expression, so as to perform reply display according to the input of the user. Thus, in some embodiments of this embodiment, the step 203 may include the following steps:
step G: and searching for an expression expressing the meaning of the keyword.
It should be noted that, in the embodiments of the present application, expressions expressing the meaning of the keyword may be searched for in several ways. In one optional implementation, a number of expressions are stored in a local database, and the keyword is used as search data to find, among them, expressions capable of expressing the meaning of the keyword. In another optional implementation, a large number of expressions are stored in a network database, and an expression search engine is used, with the keyword as search data, to find expressions capable of expressing the meaning of the keyword among them. In yet another optional implementation, the two approaches are combined; for example, if searching the local database with the keyword as search data does not find an expression capable of expressing the meaning of the keyword, the expression search engine can be used instead to search the network database with the keyword until such an expression is found. Therefore, in some implementations of this embodiment, step G may specifically be, for example: searching a network database and/or a local database for expressions expressing the meaning of the keyword.
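The local-database-first search with a network fallback described above might look like the following sketch, in which LOCAL_EXPRESSIONS and search_expression_engine are hypothetical placeholders.

```python
# Sketch of step G: look for expressions in a local store first, and only query
# a (hypothetical) network expression search engine if nothing matches locally.

LOCAL_EXPRESSIONS = {
    "too good": ["thumbs_up.gif", "applause.png"],
    "too fat":  ["chubby_cat.gif"],
}

def search_expression_engine(keyword: str) -> list:
    # Placeholder for a network expression search engine; assumed here, not
    # specified by the application.
    return []

def search_expressions(keyword: str) -> list:
    hits = LOCAL_EXPRESSIONS.get(keyword, [])
    return hits if hits else search_expression_engine(keyword)
```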
Step H: and selecting one expression from the expressions expressing the meanings of the keywords as a target expression according to a preset rule.
It should be noted that, in practical applications, the expressions found in step G come pre-ordered according to a preset rule, for example ordered from most to least by the number of times each expression has been sent to the screen; the expressions nearer the front of the order express the keyword meaning more accurately and appropriately. Considering the accuracy and appropriateness of the selected target expression, the top N expressions can be selected according to this preset order, and any one of these top N expressions, which are the ones sent most often and expressing the keyword meaning most accurately, can then be chosen as the target expression. Therefore, in some implementations of this embodiment, step H may include, for example:
step H1: and selecting the first N expressions according to the preset sequence of the expressions expressing the keyword meanings, wherein N is a positive integer and is less than the number of the expressions expressing the keyword meanings.
Step H2: and selecting one expression from the first N expressions as a target expression.
In this embodiment of the application, the following two ways may be used for selecting one expression from the top N expressions as the target expression in step H2:
in the first mode, although the meanings of the expression keywords of the first N expressions are all more accurate, the meaning of the expression keywords of the first ranked expression can be more accurate than that of other expressions, that is, the meaning of the expression keywords of the first ranked expression is the highest in accuracy and conformity, and the expression of the first ranked expression can be selected from the first N expressions as the target expression.
In the second mode, because the top N expressions do not differ greatly in how accurately and appropriately they express the keyword meaning, one expression can be selected at random from the top N expressions as the target expression, in consideration of diversity and randomness.
As an example, suppose step G finds 100 expressions expressing the meaning of the keyword "too good", and the 100 expressions have a preset ordering. First, the top 10 expressions are selected; then, either the first-ranked expression among the top 10 can be selected as the target expression, or one expression can be selected at random from the top 10 as the target expression.
It should be further noted that, in practical applications, the expressions searched out in step G are expressions expressing the meaning of the keyword, and considering the diversity and randomness of the target expressions determined each time for the same keyword, one of the expressions may be randomly selected as the target expression. Therefore, in some embodiments of this embodiment, the step H may specifically be, for example: and randomly selecting one expression from the expressions expressing the meanings of the keywords as a target expression.
As an example, the step G searches for 100 expressions expressing the meaning of the keyword "too good", and randomly selects one expression from the 100 expressions as the target expression.
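Both selection strategies of step H (first-ranked from the top N, or random) can be condensed into one helper, sketched below under the assumption that the search back end has already ranked the candidate expressions, for example by the number of times each has been sent to the screen.

```python
import random

def select_target_expression(ranked_expressions, top_n: int = 10,
                             strategy: str = "first"):
    # ranked_expressions is assumed to be pre-sorted, e.g. from most to least
    # often sent to the screen for this keyword.
    if not ranked_expressions:
        return None
    if strategy == "random_all":                 # random over all hits
        return random.choice(ranked_expressions)
    shortlist = ranked_expressions[:top_n]       # step H1: keep the top N
    if strategy == "random_top_n":               # step H2, random variant
        return random.choice(shortlist)
    return shortlist[0]                          # step H2, first-ranked variant
```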
Step 204: and displaying the target expression to the user.
It will be appreciated that after the target expression expressing the meaning of the keyword is found in step 203, it needs to be displayed to the user in place of a reply with the reply text. When the target expression is static, such as an expression picture, the expression picture is displayed to the user; when the target expression is dynamic, such as an expression motion picture or an expression video, the whole dynamic sequence of the expression motion picture or expression video is displayed to the user.
Through the various implementations provided by this embodiment, after the target text is formed based on the user's input and the reply text is obtained based on the target text, the keyword of the reply text is determined with expressions taken as at least one reference factor, the target expression expressing the meaning of the keyword is obtained by searching with the keyword, and the target expression is displayed to the user as the reply. When the user converses, the target expression corresponding to the reply text of the target text is obtained, and this new way of replying with a target expression makes interactive chat more intelligent and interesting and makes emotional expression more vivid; it avoids the expressive limitations of a plain reply text, is more readily accepted emotionally by the user, and draws the user into multiple rounds of interactive chat, thereby further improving the user's chat experience.
Taking a conversational chat in which the user's input is the voice input "I won the prize!" as an example, with the actual input being voice data, and considering the accuracy of the dialog reply, a specific implementation of another dialog reply method in the embodiment of the present application is described in detail below through another embodiment with reference to fig. 3.
Referring to fig. 3, a flow diagram of another method for dialog reply in an embodiment of the present application is shown. In this embodiment, the method may include, for example, the steps of:
step 301: in response to a user's voice input "I won a prize! ", the target text" I won a prize! ".
Step 302: according to the target text "I won the prize! ", get the corresponding reply text" haha, you too good! Extreme severity! ".
Step 303: according to the reply text "Ha, you too baseball! Extreme severity! "the frequency of each word in the expression search log is multiplied by the inverse file frequency of the conventional search log to determine the reply text" haha, you too excellent! Extreme severity! "expression weights of the respective words in the phrase.
Wherein the expression weights of the word "too good" and the word "too bad" are greater than a preset value.
Step 304: determine the reply text "haha, you too baseball! Extreme severity! The words with the emotional weight being greater than the preset value are the word "too good" and the word "severe".
Wherein the expression score of the word "too good" is higher than the expression score of the word "severe".
Step 305: the word "too excellent" and the word "too excellent" having the largest emotional weight in the word "severe" are determined as keywords.
Step 306: the network database and/or the local database are searched for expressions that express the meaning of the keyword "too good".
Step 307: the top 10 expressions are selected according to a preset sequence of expressions expressing the meaning of the keyword 'too good'.
Wherein the number of expressions found in step 306 that express the meaning of the keyword "too good" is greater than 10, for example 100.
Step 308: select the first-ranked expression from the top 10 expressions and determine it as the target expression.
Step 309: displaying the target expression to the user.
As an example, as shown in the schematic diagram of the dialog interface in fig. 4, when the user's input is a voice input, the dialog interface displays a voice input dialog box, and the actual input data is voice data, assumed here to be "I won the prize!". Using the above-described embodiment, the chat robot forms the target text "I won the prize!" and, based on the target text "I won the prize!", obtains the reply text "Haha, you are too good! So impressive!". Then, with expressions taken as a reference factor, the keyword "too good" of the reply text is determined, the first-ranked expression expressing the meaning of the keyword is obtained as the target expression by searching with the keyword "too good", and that target expression for "too good" shown in the figure is displayed to the user on the conversation interface.
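Chaining the sketches above reproduces the flow of steps 301 to 309 for this example. Every helper called here (form_target_text, get_reply_text, pick_keyword, search_expressions, select_target_expression) is one of the hypothetical functions introduced in the earlier sketches, not an interface defined by the application.

```python
import re

# End-to-end sketch of the Fig. 3 flow, reusing the hypothetical helpers above.

def reply_with_expression(user_input, input_type: str):
    target_text = form_target_text(user_input, input_type)           # step 301
    reply_text = get_reply_text(target_text)                         # step 302
    tokens = re.findall(r"[a-z]+", reply_text.lower())               # crude tokenizer
    bigrams = [" ".join(pair) for pair in zip(tokens, tokens[1:])]
    keyword = pick_keyword(tokens + bigrams)                         # steps 303-305
    hits = search_expressions(keyword)                               # step 306
    target_expression = select_target_expression(hits, top_n=10)     # steps 307-308
    return target_expression or reply_text                           # step 309

# e.g. reply_with_expression("I won the prize!", "text")
# -> "thumbs_up.gif" with the illustrative data used in the sketches above
```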
Through the various implementations provided by this embodiment, after the target text is formed based on the user's input and the reply text is obtained based on the target text, the keyword of the reply text is determined with expressions taken as at least one reference factor, the target expression expressing the meaning of the keyword is obtained by searching with the keyword, and the target expression is displayed to the user as the reply. When the user converses, the target expression corresponding to the reply text of the target text is obtained, and this new way of replying with a target expression makes interactive chat more intelligent and interesting and makes emotional expression more vivid; it avoids the expressive limitations of a plain reply text, is more readily accepted emotionally by the user, and draws the user into multiple rounds of interactive chat, thereby further improving the user's chat experience.
Exemplary devices
Referring to fig. 5, a schematic structural diagram of a dialog reply device in the embodiment of the present application is shown. In this embodiment, the apparatus may specifically include:
an obtaining unit 501, configured to form a target text in response to an input of a user, and obtain a corresponding reply text according to the target text;
a determining unit 502, configured to determine a keyword of the reply text by using at least an expression as a reference factor;
a searching unit 503, configured to search, according to the keyword, a target expression expressing the meaning of the keyword;
a display unit 504, configured to display the target expression to the user.
Optionally, the determining unit 502 includes:
the first determining subunit is used for determining an expression weight of each word in the reply text by multiplying the word's frequency in an expression search log by its inverse document frequency in a conventional search log;
the second determining subunit is used for determining the words of which the expression weights are greater than a preset value in the reply text;
and the third determining subunit is used for determining a word as a keyword from the words with the expression weights larger than the preset value according to a preset rule.
Optionally, the third determining subunit is specifically configured to:
and determining the words with the maximum expression weight in the words with the expression weight larger than the preset value as the keywords.
Optionally, the third determining subunit is specifically configured to:
and randomly determining a word as a keyword from the words with the expression weight larger than a preset value.
Optionally, the determining unit 502 includes:
a fourth determining subunit, configured to determine, from the reply text, a word expressing a key content of the reply text by using a keyword extraction algorithm;
a fifth determining subunit, configured to determine, according to characteristics of the words expressing the reply text key content, expression scores of the words expressing the reply text key content;
and a sixth determining subunit, configured to determine, as the keyword, the word with the highest expression score of the words expressing the key content of the reply text.
Optionally, the keyword extraction algorithm includes a term frequency-inverse document frequency algorithm and/or a TextRank algorithm.
Optionally, the searching unit 503 includes:
a search subunit, configured to search for an expression expressing the meaning of the keyword;
and the selecting subunit is used for selecting one expression from the expressions expressing the keyword meanings as a target expression according to a preset rule.
Optionally, the search subunit is specifically configured to:
and searching a network database and/or a local database for expressions expressing the meanings of the keywords.
Optionally, the selecting subunit includes:
the first selection module is used for selecting the first N expressions according to the preset sequence of the expressions expressing the keyword meanings, wherein N is a positive integer and is smaller than the number of the expressions expressing the keyword meanings;
and the second selection module is used for selecting one expression from the first N expressions as a target expression.
Optionally, the selecting subunit is specifically configured to:
and randomly selecting one expression from the expressions expressing the meanings of the keywords as a target expression.
Through the various implementations provided by this embodiment, after the target text is formed based on the user's input and the reply text is obtained based on the target text, the keyword of the reply text is determined with expressions taken as at least one reference factor, the target expression expressing the meaning of the keyword is obtained by searching with the keyword, and the target expression is displayed to the user as the reply. When the user converses, the target expression corresponding to the reply text of the target text is obtained, and this new way of replying with a target expression makes interactive chat more intelligent and interesting and makes emotional expression more vivid; it avoids the expressive limitations of a plain reply text, is more readily accepted emotionally by the user, and draws the user into multiple rounds of interactive chat, thereby further improving the user's chat experience.
Fig. 6 is a block diagram illustrating an apparatus 600 for dialog reply in accordance with an example embodiment. For example, the apparatus 600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 6, apparatus 600 may include one or more of the following components: processing component 602, memory 604, power component 606, multimedia component 608, audio component 610, input/output (I/O) interface 612, sensor component 614, and communication component 616.
The processing component 602 generally controls overall operation of the device 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 can include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operation at the device 600. Examples of such data include instructions for any application or method operating on device 600, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 604 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power supply component 606 provides power to the various components of device 600. The power components 606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 600.
The multimedia component 608 includes a screen that provides an output interface between the device 600 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 608 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 600 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 610 is configured to output and/or input audio signals. For example, audio component 610 includes a Microphone (MIC) configured to receive external audio signals when apparatus 600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 604 or transmitted via the communication component 616. In some embodiments, audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 614 includes one or more sensors for providing status assessments of various aspects of the apparatus 600. For example, the sensor component 614 may detect an open/closed state of the apparatus 600 and the relative positioning of components, such as the display and keypad of the apparatus 600; the sensor component 614 may also detect a change in position of the apparatus 600 or of a component of the apparatus 600, the presence or absence of user contact with the apparatus 600, the orientation or acceleration/deceleration of the apparatus 600, and a change in temperature of the apparatus 600. The sensor component 614 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate communications between the apparatus 600 and other devices in a wired or wireless manner. The apparatus 600 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 604 comprising instructions, executable by the processor 620 of the apparatus 600 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium having instructions therein which, when executed by a processor of a mobile terminal, enable the mobile terminal to perform a method of dialog reply, the method comprising:
responding to the input of a user to form a target text, and obtaining a corresponding reply text according to the target text;
determining keywords of the reply text at least by taking the expression as a reference factor;
searching a target expression expressing the meaning of the keyword according to the keyword;
and displaying the target expression to the user.
Fig. 7 is a schematic structural diagram of a server in the embodiment of the present application. The server 700 may vary significantly depending on configuration or performance, and may include one or more Central Processing Units (CPUs) 722 (e.g., one or more processors) and memory 732, one or more storage media 730 (e.g., one or more mass storage devices) storing applications 742 or data 744. Memory 732 and storage medium 730 may be, among other things, transient storage or persistent storage. The program stored in the storage medium 730 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Further, the central processor 722 may be configured to communicate with the storage medium 730, and execute a series of instruction operations in the storage medium 730 on the server 700.
The server 700 may also include one or more power supplies 726, one or more wired or wireless network interfaces 750, one or more input/output interfaces 758, one or more keyboards 756, and/or one or more operating systems 741, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely a preferred embodiment of the present application and is not intended to limit the present application in any way. Although the present application has been disclosed above with reference to preferred embodiments, they are not intended to limit it. Those skilled in the art can make numerous possible variations and modifications to the disclosed embodiments, or modify them into equivalent embodiments, using the methods and technical content disclosed above, without departing from the scope of the claimed technical solution. Therefore, any simple modification, equivalent change, or modification made to the above embodiments according to the technical essence of the present application, without departing from the content of the technical solution of the present application, still falls within the protection scope of the technical solution of the present application.

Claims (10)

1. A method of dialog reply, comprising:
responding to the input of a user to form a target text, and obtaining a corresponding reply text according to the target text;
determining keywords of the reply text at least by taking the expression as a reference factor;
searching a target expression expressing the meaning of the keyword according to the keyword;
and displaying the target expression to the user.
2. The method of claim 1, wherein the determining the keywords of the reply text at least by taking the expression as a reference factor comprises:
determining an expression weight of each word in the reply text by multiplying the word's frequency in an expression search log by its inverse document frequency in a conventional search log;
determining words of which the expression weights are larger than a preset value in the reply text;
and determining a word as a keyword from the words with the expression weights larger than a preset value according to a preset rule.
3. The method according to claim 2, wherein the determining a word as the keyword from the words with the expression weights larger than a preset value according to a preset rule specifically comprises:
determining, as the keyword, the word with the largest expression weight among the words whose expression weights are larger than the preset value; or,
randomly determining a word as the keyword from the words whose expression weights are larger than the preset value.
4. The method of claim 1, wherein the determining the keywords of the reply text at least by taking the expression as a reference factor comprises:
determining words expressing the key content of the reply text from the reply text by using a keyword extraction algorithm;
determining an expression score for each of the words expressing the key content of the reply text according to the characteristics of that word;
and determining, as the keyword, the word with the highest expression score among the words expressing the key content of the reply text.
5. The method of claim 1, wherein searching for a target expression expressing the meaning of the keyword according to the keyword comprises:
searching for an expression expressing the meaning of the keyword;
and selecting one expression from the expressions expressing the meanings of the keywords as a target expression according to a preset rule.
6. The method of claim 5, wherein the selecting one expression from the expressions expressing the keyword meanings as a target expression according to a preset rule comprises:
selecting the first N expressions according to the preset sequence of the expressions expressing the keyword meanings, wherein N is a positive integer and is smaller than the number of the expressions expressing the keyword meanings;
and selecting one expression from the first N expressions as a target expression.
7. The method according to claim 5, wherein the selecting one expression from the expressions expressing the keyword meanings as a target expression according to a preset rule is specifically:
randomly selecting one expression from the expressions expressing the meanings of the keywords as the target expression.
8. An apparatus for dialog reply, comprising:
the obtaining unit is used for responding to the input of a user to form a target text and obtaining a corresponding reply text according to the target text;
the determining unit is used for determining the keywords of the reply text at least by taking the expression as a reference factor;
the search unit is used for searching a target expression expressing the meaning of the keyword according to the keyword;
and the display unit is used for displaying the target expression to the user.
9. An apparatus for dialog reply, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for:
responding to the input of a user to form a target text, and obtaining a corresponding reply text according to the target text;
determining keywords of the reply text at least by taking the expression as a reference factor;
searching a target expression expressing the meaning of the keyword according to the keyword;
and displaying the target expression to the user.
10. A machine-readable medium having stored thereon instructions, which when executed by one or more processors, cause an apparatus to perform a method of dialog reply as recited in one or more of claims 1-7 above.
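For readers who want to map the claims onto an implementation, the following Python sketch illustrates the expression-weight keyword selection of claims 2 and 3. It is only an illustrative reading of the claims, not the disclosed implementation: the log statistics, the threshold value, and all identifiers (expression_weights, pick_keyword, the toy counts) are hypothetical. The weight of a word is its frequency in an expression search log multiplied by its inverse document frequency in a conventional search log; words above a preset value are kept, and the keyword is either the highest-weighted word or a randomly chosen one.

import math
import random
from typing import Dict, List, Optional

def expression_weights(reply_words: List[str],
                       expr_log_freq: Dict[str, int],
                       conv_doc_freq: Dict[str, int],
                       total_conv_docs: int) -> Dict[str, float]:
    # Weight = frequency in the expression search log (TF)
    #          x inverse document frequency in the conventional search log (IDF).
    weights = {}
    for word in reply_words:
        tf = expr_log_freq.get(word, 0)
        idf = math.log((1 + total_conv_docs) / (1 + conv_doc_freq.get(word, 0))) + 1  # smoothed IDF
        weights[word] = tf * idf
    return weights

def pick_keyword(weights: Dict[str, float],
                 threshold: float,
                 rule: str = "max") -> Optional[str]:
    # Keep words whose expression weight exceeds the preset value,
    # then apply one of the two preset rules of claim 3.
    candidates = {w: s for w, s in weights.items() if s > threshold}
    if not candidates:
        return None
    if rule == "max":
        return max(candidates, key=candidates.get)   # highest expression weight
    return random.choice(list(candidates))           # random choice among candidates

# Toy example; the counts are illustrative only.
reply = ["today", "is", "really", "happy"]
w = expression_weights(reply,
                       expr_log_freq={"happy": 120, "today": 3, "really": 8, "is": 1},
                       conv_doc_freq={"happy": 5000, "today": 80000, "really": 40000, "is": 900000},
                       total_conv_docs=1000000)
print(pick_keyword(w, threshold=10.0))  # "happy" dominates with these toy counts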
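Claim 4 describes an alternative route in which a keyword extraction algorithm proposes candidate words and each candidate is given an expression score based on its characteristics. The claim names neither the extraction algorithm nor the characteristics, so the sketch below substitutes a trivial frequency-based extractor and an invented scoring function (an emotion-lexicon hit plus a small length bonus) purely as placeholders.

from collections import Counter
from typing import List, Optional

# Hypothetical emotion lexicon; in practice such a list could be derived
# from the metadata of the available expressions.
EMOTION_LEXICON = {"happy", "sad", "angry", "love", "tired", "surprised"}

def extract_candidates(reply_words: List[str], top_k: int = 5) -> List[str]:
    # Stand-in for "a keyword extraction algorithm": the most frequent
    # non-trivial words of the reply text.
    counts = Counter(w for w in reply_words if len(w) > 2)
    return [w for w, _ in counts.most_common(top_k)]

def expression_score(word: str) -> float:
    # Invented characteristics: emotional words map well to expressions,
    # and longer words tend to carry more content.
    score = 1.0 if word in EMOTION_LEXICON else 0.0
    score += min(len(word), 8) / 8.0 * 0.2
    return score

def keyword_via_scores(reply_words: List[str]) -> Optional[str]:
    candidates = extract_candidates(reply_words)
    return max(candidates, key=expression_score) if candidates else None

print(keyword_via_scores(["today", "is", "really", "happy"]))  # -> "happy"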
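Finally, a minimal sketch of the expression search and target-expression selection of claims 5 to 7, assuming a hypothetical index that maps a meaning to the expressions conveying it, stored in a preset order. Taking the first N entries and then picking one of them at random mirrors the preset rules recited in claims 6 and 7; in an end-to-end flow matching claim 1, the keyword obtained above would be fed into search_expressions and the chosen target expression would then be displayed to the user in reply to the target text.

import random
from typing import Dict, List, Optional

# Hypothetical index: meaning -> expressions (e.g., sticker or emoji ids),
# already sorted in a preset order (for example, by popularity).
EXPRESSION_INDEX: Dict[str, List[str]] = {
    "happy": ["grin.png", "party.gif", "thumbs_up.png", "sunshine.png"],
    "sad": ["crying.gif", "rain_cloud.png"],
}

def search_expressions(keyword: str) -> List[str]:
    # Expressions that express the keyword's meaning, in their preset order.
    return EXPRESSION_INDEX.get(keyword, [])

def select_target_expression(expressions: List[str], n: int = 3) -> Optional[str]:
    # Claim 6: keep the first N expressions of the preset order (N smaller than
    # the number of matching expressions); claim 7 style: pick one at random.
    if not expressions:
        return None
    top_n = expressions[:min(n, len(expressions))]
    return random.choice(top_n)

print(select_target_expression(search_expressions("happy")))  # one of the first three entries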
CN201810968436.8A 2018-08-23 2018-08-23 Dialogue reply method and related device Active CN110895558B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810968436.8A CN110895558B (en) 2018-08-23 2018-08-23 Dialogue reply method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810968436.8A CN110895558B (en) 2018-08-23 2018-08-23 Dialogue reply method and related device

Publications (2)

Publication Number Publication Date
CN110895558A (en) 2020-03-20
CN110895558B CN110895558B (en) 2024-01-30

Family

ID=69785073

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810968436.8A Active CN110895558B (en) 2018-08-23 2018-08-23 Dialogue reply method and related device

Country Status (1)

Country Link
CN (1) CN110895558B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104933113A (en) * 2014-06-06 2015-09-23 北京搜狗科技发展有限公司 Expression input method and device based on semantic understanding
WO2016197767A2 (en) * 2016-02-16 2016-12-15 中兴通讯股份有限公司 Method and device for inputting expression, terminal, and computer readable storage medium
US20180061400A1 (en) * 2016-08-30 2018-03-01 Google Inc. Using textual input and user state information to generate reply content to present in response to the textual input
CN107800866A (en) * 2016-08-30 2018-03-13 三星电子株式会社 Offer method is provided and supports to reply the electronic installation of offer method
CN107038214A (en) * 2017-03-06 2017-08-11 北京小米移动软件有限公司 Expression information processing method and processing device
CN107329990A (en) * 2017-06-06 2017-11-07 北京光年无限科技有限公司 A kind of mood output intent and dialogue interactive system for virtual robot
CN107577661A (en) * 2017-08-07 2018-01-12 北京光年无限科技有限公司 A kind of interaction output intent and system for virtual robot
CN107729320A (en) * 2017-10-19 2018-02-23 西北大学 A kind of emoticon based on Time-Series analysis user conversation emotion trend recommends method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BORIS GALITSKY: "Matching parse thickets for open domain question answering", DATA & KNOWLEDGE ENGINEERING, pages 24-50 *
SUN LIRU; YU HUAYUN: "A Survey of Generative Chatbot Algorithms Based on Deep Learning", Computer Knowledge and Technology, no. 23, pages 233-234 *
GAO FEI: "Research and Implementation of a Non-Domain-Specific Chinese Intelligent Question Answering System Based on Deep Neural Network Models", CNKI Outstanding Master's Theses Full-text Database, pages 138-1904 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111506717A (en) * 2020-04-15 2020-08-07 网易(杭州)网络有限公司 Question answering method, device, equipment and storage medium
CN111506717B (en) * 2020-04-15 2024-02-09 网易(杭州)网络有限公司 Question answering method, device, equipment and storage medium
CN113094478A (en) * 2021-06-10 2021-07-09 平安科技(深圳)有限公司 Expression reply method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN110895558B (en) 2024-01-30

Similar Documents

Publication Publication Date Title
CN107621886B (en) Input recommendation method and device and electronic equipment
CN109582768B (en) Text input method and device
CN110391966B (en) Message processing method and device and message processing device
CN110019885B (en) Expression data recommendation method and device
CN110895558B (en) Dialogue reply method and related device
CN112445906A (en) Method and device for generating reply message
CN112307281A (en) Entity recommendation method and device
CN113411246B (en) Reply processing method and device and reply processing device
CN112000766A (en) Data processing method, device and medium
CN112631435A (en) Input method, device, equipment and storage medium
CN111240497A (en) Method and device for inputting through input method and electronic equipment
CN111831132A (en) Information recommendation method and device and electronic equipment
CN109901726B (en) Candidate word generation method and device and candidate word generation device
CN113420553A (en) Text generation method and device, storage medium and electronic equipment
CN110929122B (en) Data processing method and device for data processing
CN110020153B (en) Searching method and device
CN112214114A (en) Input method and device and electronic equipment
CN111339263A (en) Information recommendation method and device and electronic equipment
CN112462992B (en) Information processing method and device, electronic equipment and medium
CN110765338A (en) Data processing method and device and data processing device
CN112765346B (en) Information processing method and device
CN111666436B (en) Data processing method and device and electronic equipment
CN110413133B (en) Input method and device
CN111273786B (en) Intelligent input method and device
CN111381685B (en) Sentence association method and sentence association device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20220725

Address after: Room 01, floor 9, Sohu Internet building, building 9, No. 1 yard, Zhongguancun East Road, Haidian District, Beijing 100190

Applicant after: BEIJING SOGOU TECHNOLOGY DEVELOPMENT Co.,Ltd.

Address before: 100084. Room 9, floor 01, cyber building, building 9, building 1, Zhongguancun East Road, Haidian District, Beijing

Applicant before: BEIJING SOGOU TECHNOLOGY DEVELOPMENT Co.,Ltd.

Applicant before: SOGOU (HANGZHOU) INTELLIGENT TECHNOLOGY Co.,Ltd.

GR01 Patent grant
GR01 Patent grant