CN110347817B - Intelligent response method and device, storage medium and electronic equipment - Google Patents

Intelligent response method and device, storage medium and electronic equipment

Info

Publication number
CN110347817B
CN110347817B (application CN201910637337.6A)
Authority
CN
China
Prior art keywords
information
behavior
user
preset time
attribute
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910637337.6A
Other languages
Chinese (zh)
Other versions
CN110347817A (en)
Inventor
张荣升
阎静思
王怡
毛晓曦
范长杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN201910637337.6A
Publication of CN110347817A
Application granted
Publication of CN110347817B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/332 Query formulation
    • G06F 16/3329 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/335 Filtering based on additional data, e.g. user or group profiles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Machine Translation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An embodiment of the invention relates to an intelligent response method and device, a storage medium, and an electronic device, belonging to the technical field of artificial intelligence. The method comprises the following steps: receiving dialogue interaction information input by a first user, and acquiring historical conversation information corresponding to the first user; acquiring, from a preset time behavior table, behavior information corresponding to the current time information at which the dialogue interaction information is generated, and attribute information corresponding to the behavior information; and inputting the dialogue interaction information, the historical conversation information, the behavior information corresponding to the current time information, and the attribute information into a behavior perception dialogue model to generate response information. Embodiments of the invention increase the number of response messages that can be generated and improve the accuracy of the response information.

Description

Intelligent response method and device, storage medium and electronic equipment
Technical Field
Embodiments of the invention relate to the technical field of artificial intelligence, and in particular to an intelligent response method, an intelligent response device, a computer-readable storage medium, and an electronic device.
Background
Since the seq2seq framework was proposed, it has been widely used for dialogue generation. Seq2seq encodes the user's input and passes the encoded intermediate representation to the decoder to generate the output. However, the seq2seq framework is prone to generating generic and inconsistent replies. Here, a generic reply is one with no informational content, such as "haha" or "me too"; inconsistency means that the robot may give contradictory answers to the same question, for example, when asked "how old are you", replies at different times may output both "I am 16" and "I am 18". Therefore, how to improve the diversity and consistency of generated replies has become a problem to be solved.
In the course of researching diversity and consistency, people have increasingly hoped to personalize the conversation robot: the robot's personality, age, hobbies, occupation, and so on can be set, and the generated replies should conform to that persona. Most existing approaches to modeling personalized information are template-based. For example, given the robot's persona, some question-and-answer templates are written manually; each user input is then matched against the templates, and the pre-written template answer is returned when the match score reaches a certain threshold.
However, the above method has the following disadvantages: on the one hand, because question answering is carried out through templates, the number of possible replies is small, and human error in template writing lowers the matching accuracy, so the conversation accuracy is low; on the other hand, as the degree of personalization increases, the method consumes a large amount of human effort and time.
Therefore, it is desirable to provide a new intelligent response method and apparatus.
It is to be noted that the information disclosed in the above background section is only for enhancing the understanding of the background of the present invention, and therefore may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present invention is directed to an intelligent response method, an intelligent response device, a computer-readable storage medium, and an electronic device, which overcome, at least to some extent, the problem of low conversation accuracy caused by the limitations and disadvantages of the related art.
According to an aspect of the present disclosure, there is provided an intelligent answering method, including:
receiving dialogue interaction information input by a first user, and acquiring historical conversation information corresponding to the first user;
acquiring behavior information corresponding to current time information generated by the dialogue interaction information and attribute information corresponding to the behavior information from a preset time behavior table;
and inputting the dialogue interaction information, the historical conversation information, the behavior information corresponding to the current time information and the attribute information into a behavior perception dialogue model to generate response information.
In an exemplary embodiment of the present disclosure, before acquiring behavior information corresponding to current time information generated by the dialog interaction information and attribute information corresponding to the behavior information from a preset time behavior table, the intelligent response method further includes:
classifying the daily behavior data of the first user, and adding target behavior data comprising state attributes to the daily behavior data of each category;
and constructing the preset time behavior table according to a first preset time period to which the target behavior data belongs.
In an exemplary embodiment of the present disclosure, the state attribute includes a key and an attribute value corresponding to the key.
In an exemplary embodiment of the present disclosure, constructing the preset time behavior table according to the first preset time period to which the target behavior data belongs includes:
dividing a second preset time period according to a preset rule to obtain a plurality of first preset time periods;
and filling the target behavior data into a preset time period according to a first preset time period to which the target behavior data belongs.
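The two construction steps above (dividing a second preset time period into first preset time periods, then filling each target behavior into the period it belongs to) can be sketched as follows. The function names, the hour-based slot boundaries, and the sample behavior are illustrative assumptions, not part of the patent.

```python
from typing import Dict, List, Tuple

# Illustrative sketch only: names, slot boundaries, and the sample behavior
# below are assumptions, not part of the patent text.

def divide_time_periods(boundaries: List[int]) -> List[Tuple[int, int]]:
    """Split a second preset time period (a day, in hours) into first
    preset time periods, e.g. [0, 7, 12] -> [(0, 7), (7, 12)]."""
    return list(zip(boundaries[:-1], boundaries[1:]))

def build_behavior_table(boundaries: List[int],
                         behaviors: List[Dict]) -> Dict[Tuple[int, int], List[Dict]]:
    """Fill each target behavior into the first preset time period it belongs to."""
    table: Dict[Tuple[int, int], List[Dict]] = {
        slot: [] for slot in divide_time_periods(boundaries)
    }
    for behavior in behaviors:
        for (start, end) in table:
            if start <= behavior["hour"] < end:
                table[(start, end)].append(behavior)
    return table

table = build_behavior_table(
    [0, 7, 12, 15, 18, 24],
    [{"name": "eating", "hour": 14, "attributes": {"place": "canteen"}}],
)
```

A behavior stamped with hour 14 ends up in the 12-15 slot, matching the lookup example given later in the description.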
In an exemplary embodiment of the present disclosure, after acquiring behavior information corresponding to current time information generated by the dialog interaction information and attribute information corresponding to the behavior information from a preset time behavior table, the intelligent response method further includes:
sampling the state attribute of the target behavior data of the first user to obtain a first sampling result, and determining a keyword of the first user and an attribute value corresponding to the keyword according to the first sampling result;
and determining the personalized behavior information set of the first user according to the keywords of the first user and the attribute values corresponding to the keywords.
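The sampling step can be illustrated with a minimal sketch: one attribute value is drawn per keyword from that behavior's candidate-value sets to form the first user's personalized behavior information set. The candidate keys and values below are invented for demonstration.

```python
import random

# Illustrative sketch: candidate keys and values are invented for demonstration.

def sample_personalized_info(state_attributes, rng=None):
    """Draw one attribute value per keyword to form a user's personalized
    behavior information set (the first sampling result)."""
    rng = rng or random.Random(0)  # fixed seed so the sketch is reproducible
    return {key: rng.choice(values) for key, values in state_attributes.items()}

candidates = {
    "place": ["canteen", "restaurant"],
    "taste": ["spicy", "average", "sweet"],
}
profile = sample_personalized_info(candidates)
```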
In an exemplary embodiment of the present disclosure, inputting the dialog interaction information, the historical session information, the behavior information corresponding to the current time information, and the attribute information into a behavior-aware dialog model, and generating the response information includes:
and inputting the dialogue interaction information, the historical conversation information, the behavior information and the attribute information corresponding to the current time information and the personalized behavior information set of the first user into a behavior perception dialogue model to generate response information.
In an exemplary embodiment of the present disclosure, the intelligent response method further includes:
resampling, one or more times, within the first preset time period to which the sampled target behavior data belongs, to obtain one or more second sampling results;
determining a keyword of a second user corresponding to the first user and an attribute value corresponding to the keyword according to the second sampling result;
and determining the personalized behavior information set of the second user according to the keywords of the second user and the attribute values corresponding to the keywords.
In an exemplary embodiment of the present disclosure, inputting the dialog interaction information, the historical session information, the behavior information corresponding to the current time information, and the attribute information into a behavior-aware dialog model, and generating the response information includes:
inputting the dialogue interaction information, the historical conversation information, the behavior information corresponding to the current time information, the attribute information, the personalized behavior information of the first user and the personalized behavior information of the second user into a behavior perception dialogue model, and generating response information between the first user and the second user.
According to an aspect of the present disclosure, there is provided an intelligent response device, including:
the information receiving module is used for receiving dialogue interaction information input by a first user and acquiring historical conversation information corresponding to the first user;
the information acquisition module is used for acquiring behavior information corresponding to the current time information generated by the dialogue interaction information and attribute information corresponding to the behavior information from a preset time behavior table;
and the information generation module is used for inputting the dialogue interaction information, the historical conversation information, the behavior information corresponding to the current time information and the attribute information into a behavior perception dialogue model to generate response information.
According to an aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the intelligent response method according to any one of the above-mentioned embodiments.
According to an aspect of the present disclosure, there is provided an electronic device including:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to execute the intelligent response method of any of the above embodiments via execution of the executable instructions.
On the one hand, by receiving the dialogue interaction information input by the first user, acquiring the historical conversation information corresponding to the first user, acquiring from the preset time behavior table the behavior information corresponding to the current time information generated by the dialogue interaction information together with the corresponding attribute information, and finally inputting all of these into the behavior perception dialogue model to generate the response information, the method avoids the prior-art need to write question-and-answer templates manually, which consumes a large amount of human resources and time. On the other hand, because the response information is generated by the behavior perception dialogue model rather than by template matching, the prior-art problems that the number of possible replies is small and that human error lowers the matching accuracy, and hence the conversation accuracy, are solved; the number of response messages that can be generated is increased, and their accuracy is improved. Finally, because the response information is conditioned on behavior information tied to the current time, the personalization of the responses is improved, and with it the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 schematically shows a flow chart of an intelligent response method according to an exemplary embodiment of the present invention.
Fig. 2 schematically shows a flowchart of a method for constructing the preset time behavior table according to a first preset time period to which the target behavior data belongs, according to an exemplary embodiment of the present invention.
Fig. 3 schematically shows a flow chart of another intelligent response method according to an exemplary embodiment of the present invention.
Fig. 4 schematically shows a flow chart of another intelligent response method according to an exemplary embodiment of the present invention.
Fig. 5 schematically shows a block diagram of an intelligent response device according to an exemplary embodiment of the present invention.
Fig. 6 schematically illustrates an electronic device for implementing the above-described intelligent response method according to an exemplary embodiment of the present invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the invention.
Furthermore, the drawings are merely schematic illustrations of the invention and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
In a typical reply scheme based on the seq2seq framework, the generated content often carries no information, such as "haha" or "me too"; inconsistency means that, for the same question, the robot may give contradictory answers, for example, when asked "how old are you", replies at different times may output both "I am 16" and "I am 18". Because the seq2seq framework readily produces such generic and inconsistent replies, more and more researchers have begun studying how to improve the diversity and consistency of reply generation.
In the course of studying diversity and consistency, users increasingly want the conversation robot to be personalized. That is, the robot's personality, age, hobbies, occupation, and so on can be set, and the generated replies should conform to that persona. Personalized generation has developed from simple embedded encodings of personas, to replies that conform to a structured persona table (name, age, location, and the like), to replies that conform to free-text descriptions of personalized information (likes drinking tea, rides a bicycle, goes to school, and the like).
However, in everyday chat people pay more attention to the other party's behavior at the current moment, and each person's behavior varies across different times, so building a chat robot that is more consistent with daily behavioral activities is both meaningful and necessary.
Modeling of personalized information mainly takes the following forms. One is the template-based approach: given the robot's persona, some question-and-answer templates are written manually, each user input is matched against the templates, and the pre-written template answer is returned when the match score reaches a certain threshold. Another is to embed different personas and add them implicitly to dialogue generation: different embedded vectors represent different users, the user vector is concatenated with the word vectors at the decoding end and fed to the decoding module for training, in the hope that the user embedding influences and changes the generated dialogue. Alternatively, given the robot's structured persona (such as its name, age, and location), the robot answers questions related to that setting; existing techniques try to judge, at generation time, when the robot's personalized information is needed and use it to guide generation. Yet another line of work gives the robot a free-text persona description and requires it to generate answers consistent with that description, as in the dataset of Facebook's ConvAI2 competition, where two players each hold long-term, fixed personalized information and chat on that basis. With such data, the model is guided to learn how to use personalized information during generation; the stronger models encode the personalized information and the dialogue history separately and then combine them at the decoding end.
However, the above methods have the following drawbacks. First, the template-based method requires manually written corpora, and however well the reply templates are written, the reply space remains small; moreover, once the personalized information becomes more complex, the template-based method is labor-intensive and matching errors rise. Second, current personalization research targets long-term personalized information, but in actual chat the two parties frequently talk about their current behavior states, which change over time; since identity settings are rarely chatted about, it is necessary to study dynamically changing behavior states as the robot's personalized information. Third, a robot's personalized information may be structured or unstructured, and the two are processed differently, so how to unify structured and unstructured personalized information in one model also needs to be studied.
This example embodiment first provides an intelligent response method, which can run on an intelligent robot terminal; of course, those skilled in the art may also run the method of the present invention on other platforms as needed, and this exemplary embodiment is not particularly limited in this respect. Referring to fig. 1, the intelligent response method may include the following steps:
Step S110: receiving dialogue interaction information input by a first user, and acquiring historical conversation information corresponding to the first user.
Step S120: acquiring behavior information corresponding to the current time information generated by the dialogue interaction information, and attribute information corresponding to the behavior information, from a preset time behavior table.
Step S130: inputting the dialogue interaction information, the historical conversation information, the behavior information corresponding to the current time information, and the attribute information into a behavior perception dialogue model to generate response information.
In the intelligent response method above, on the one hand, by receiving the dialogue interaction information input by the first user, acquiring the historical conversation information corresponding to the first user, acquiring from the preset time behavior table the behavior information corresponding to the current time information generated by the dialogue interaction information together with the corresponding attribute information, and finally inputting all of these into the behavior perception dialogue model to generate the response information, the method avoids the prior-art need to write question-and-answer templates manually, which consumes a large amount of human resources and time. On the other hand, because the response information is generated by the behavior perception dialogue model rather than by template matching, the prior-art problems that the number of possible replies is small and that human error lowers the matching accuracy, and hence the conversation accuracy, are solved; the number of response messages that can be generated is increased, and their accuracy is improved. Finally, because the response information is conditioned on behavior information tied to the current time, the personalization of the responses is improved, and with it the user experience.
Hereinafter, each step in the above-described intelligent response method in the present exemplary embodiment will be explained and explained in detail with reference to the drawings.
In step S110, dialog interaction information input by a first user is received, and historical session information corresponding to the first user is acquired.
In this example embodiment, a first user (taken here to be P1, for example) may interact with the intelligent robot by voice. After receiving the dialogue interaction information (voice information) input by the first user, the intelligent robot may start an internal NLP (Natural Language Processing) module to parse the dialogue interaction information and obtain the historical session information corresponding to the first user according to the parsing result. The parsing result may include the conversation intention of the first user (for example, a question that the conversation robot needs to answer), and may also include the first user's voice information. Specifically, the voice information may be matched against existing historical voice information, and when the match score reaches a preset threshold, the session information corresponding to that historical voice information may be used as the historical session information of the first user.
In step S120, behavior information corresponding to the current time information generated by the dialog interaction information and attribute information corresponding to the behavior information are acquired from a preset time behavior table.
In this exemplary embodiment, in order to facilitate obtaining the behavior information and the attribute information from the preset time behavior table, the preset time behavior table needs to be constructed first. Specifically, firstly, classifying the daily behavior data of the first user, and adding target behavior data including state attributes to the daily behavior data of each category; and secondly, constructing the preset time behavior table according to a first preset time period to which the target behavior data belongs. Wherein the state attribute comprises a keyword and an attribute value corresponding to the keyword.
For example, first, daily behavior categories are defined. Specifically, daily behaviors may be divided into, for example, the following eight categories: food, shopping, sports, touring, performances, entertainment and games, daily activities, and learning activities, with each specific behavior classified into one of the eight categories. Then, a set of behaviors in each category is designed: for each of the eight categories, common behavioral activities are listed, such as eating hot pot, eating dessert, and having a group dinner under the food category, and window shopping, supermarket shopping, mall shopping, and visiting a flower shop under the shopping category. Further, the state attributes of each behavior need to be designed. For example, for the above behaviors, the applicable times (morning-A, noon-B, evening-C) and the attributes and value sets of the specific state information of each behavior may be designed. Specifically, see table 1 below.
TABLE 1
(Table 1 appears as an image in the original publication and is not reproduced here.)
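The category / behavior / state-attribute design just described can be sketched as a data structure. The category names follow the text; the per-behavior attribute values are examples, not taken from Table 1.

```python
# Illustrative organization of the eight daily-behavior categories and the
# per-behavior state attributes; attribute values are examples, not Table 1.

CATEGORIES = [
    "food", "shopping", "sports", "touring",
    "performances", "entertainment and games",
    "daily activities", "learning activities",
]

BEHAVIOR_SETS = {
    "food": {
        "eating hot pot": {
            "times": ["B", "C"],  # noon-B, evening-C
            "place": ["Little Sheep", "canteen"],
            "taste": ["slightly spicy", "average"],
        },
    },
    "shopping": {
        "supermarket shopping": {
            "times": ["A", "B", "C"],
            "companion": ["friends", "family"],
        },
    },
}
```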
Further, after the preset time behavior table is obtained, the behavior information corresponding to the current time information generated by the dialogue interaction information, and the attribute information corresponding to the behavior information, may be acquired from the table. For example, when the current time information generated by the dialogue information is 14:00, the behavior information corresponding to that time may be acquired as: eating; with the corresponding attribute information: place: canteen; food: home-style dishes; taste: average; companion: friends; and so on.
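A minimal sketch of this time-based lookup, assuming an hour-keyed table holding the example entry from the text (the structure and names are assumptions):

```python
from typing import Dict, Optional, Tuple

# Sketch of the step-S120 lookup; the slot and entry below mirror the 14:00
# example in the text, but the exact structure is an assumption.

PRESET_TIME_BEHAVIOR_TABLE: Dict[Tuple[int, int], Dict] = {
    (12, 15): {
        "behavior": "eating",
        "attributes": {"place": "canteen", "food": "home-style dishes",
                       "taste": "average", "companion": "friends"},
    },
}

def lookup(hour: int) -> Optional[Dict]:
    """Return behavior and attribute information for the slot containing `hour`."""
    for (start, end), entry in PRESET_TIME_BEHAVIOR_TABLE.items():
        if start <= hour < end:
            return entry
    return None

entry = lookup(14)
```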
In step S130, the dialog interaction information, the historical session information, the behavior information corresponding to the current time information, and the attribute information are input into a behavior-aware dialog model, and response information is generated.
In the present exemplary embodiment, suppose the conversation intention obtained from the dialogue interaction information is: "What did you eat at noon?" and the historical session information is: "Went to the dining hall at noon and had rice." The generated response information may be, for example: "Went to the dining hall at noon, had home-style dishes, with rice as the staple food"; or "Went to the dining hall at noon for fast food"; or "Went to the XX restaurant for lunch"; and so on, and this example is not particularly limited. It should be noted here that the behavior perception dialogue model may generate one or more response messages according to the dialogue interaction information, the historical session information, the behavior information corresponding to the current time information, and the attribute information. If multiple response messages are generated, the user satisfaction of each response message can be evaluated according to the historical session information, and the response messages sorted by evaluation value for the user to select from.
Further, in this exemplary embodiment, the intelligent response method may further include training the behavior perception dialogue model, whose inputs are the personalized behavior information set of the first user and/or the second user, together with the dialogue history of the first user and/or the second user. Specifically, the training process may include the following two steps. First, a GPT (Generative Pre-trained Transformer) language model is pre-trained on a large-scale Chinese corpus. The model has 12 layers, an embedding dimension of 768, and a dropout rate of 0.1; Chinese characters are used as the vocabulary units during training, and the pre-trained model parameters are obtained by training on the large-scale corpus for 1-2 weeks. Second, the personalized behavior perception dialogue model is fine-tuned: on the basis of the pre-trained parameters, the behavior state information, the conversation history, and the reply to be generated in each training example are concatenated with separators into one complete sequence and fed into the pre-trained language model for fine-tuning. Training continues until the loss converges, and the resulting model serves as the final behavior perception dialogue model.
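The fine-tuning input construction described above (behavior state information, conversation history, and the target reply joined by separators into one sequence) might look like the following sketch. The patent does not specify the separator tokens, so [BOS], [SEP], and [EOS] are assumptions.

```python
# Sketch of the fine-tuning input: the patent says the behavior state
# information, conversation history, and reply are joined with separators;
# the actual separator tokens are not specified, so these are assumptions.

BOS, SEP, EOS = "[BOS]", "[SEP]", "[EOS]"

def build_training_sequence(behavior_state: str, history: list, reply: str) -> str:
    """Concatenate behavior state, dialogue history, and target reply into one
    sequence for language-model fine-tuning."""
    return " ".join([BOS, behavior_state, SEP, SEP.join(history), SEP, reply, EOS])

seq = build_training_sequence(
    "eating | place: canteen | taste: average",
    ["What did you eat at noon?"],
    "I went to the dining hall and had home-style dishes.",
)
```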
Fig. 2 schematically shows a flowchart of a method for constructing the preset time behavior table according to a first preset time period to which the target behavior data belongs, according to an exemplary embodiment of the present invention. Referring to fig. 2, constructing the preset time behavior table according to the first preset time period to which the target behavior data belongs may include step S210 and step S220, which will be described in detail below.
In step S210, a second preset time period is divided according to a preset rule to obtain a plurality of first preset time periods.
In step S220, the target behavior data is filled into a preset time period according to a first preset time period to which the target behavior data belongs.
Hereinafter, step S210 and step S220 will be explained and illustrated. First, the second preset time period may be, for example, one day, one week, or one year; this example is not limited thereto. The preset rule may be, for example, a daily or weekly work-and-rest schedule; the present example is not limited thereto either. For concreteness, this example takes the second preset time period to be one day and the preset rule to be the daily work-and-rest schedule.
For example, the time of day is first divided into intervals, say 0-7, 7-12, 12-15, 15-18, and 18-24, to obtain a plurality of first preset time periods; then a behavior is selected for each first preset time period, for example by selecting an appropriate behavior from the behavior set for filling. For instance, "eating hot pot" may be filled into the 12-15 period and "shopping at the mall" into the 15-18 period, and attribute values are then selected for each behavior: for the behavior of eating hot pot, the place may be Little Sheep (a hot pot restaurant), the taste somewhat spicy, the companions family members, and the service average. A specific example is shown in Table 2 below.
TABLE 2
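One possible in-memory representation of such a preset time behavior table, with illustrative intervals, behaviors, and attribute values standing in for the specifics of Table 2 (none of which are prescribed by the patent), is:

```python
# Assumed sketch: the second preset time period (one day) divided into first
# preset time periods, each filled with a behavior and its attribute values.

PRESET_TIME_BEHAVIOR_TABLE = {
    (0, 7): {"behavior": "sleeping", "attributes": {}},
    (7, 12): {"behavior": "working", "attributes": {"place": "office"}},
    (12, 15): {"behavior": "eating hot pot",
               "attributes": {"place": "Little Sheep",
                              "taste": "somewhat spicy",
                              "companion": "family",
                              "service": "average"}},
    (15, 18): {"behavior": "shopping", "attributes": {"place": "mall"}},
    (18, 24): {"behavior": "resting", "attributes": {"place": "home"}},
}

def lookup_behavior(hour):
    """Return the entry of the first preset time period containing `hour`."""
    for (start, end), entry in PRESET_TIME_BEHAVIOR_TABLE.items():
        if start <= hour < end:
            return entry
    raise ValueError(f"hour {hour} is outside the second preset time period")
```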
Fig. 3 schematically illustrates a flow chart of another intelligent response method according to an example embodiment of the present disclosure. Referring to fig. 3, the intelligent response method may further include step S310 and step S320, which will be described in detail below.
In step S310, the state attribute of the target behavior data of the first user is sampled to obtain a first sampling result, and a keyword of the first user and an attribute value corresponding to the keyword are determined according to the first sampling result.
In step S320, a personalized behavior information set of the first user is determined according to the keyword of the first user and the attribute value corresponding to the keyword.
Hereinafter, step S310 and step S320 will be explained and illustrated. For example, the state attribute of each behavior in the behavior set of the first user's target behavior data is sampled to obtain a first sampling result, and keywords and attribute values are determined from that result to form the first user's personalized behavior information set. For example, the keywords may be place, taste, and companion, with the attribute values Haidilao (a hot pot restaurant), very delicious, and family members, respectively; the personalized behavior information set of the first user for the "eating hot pot" category is then: place: Haidilao; taste: very delicious; companion: family members.
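The sampling of state attributes into a personalized behavior information set can be sketched as follows; the candidate attribute values below are illustrative assumptions:

```python
# Sketch of sampling one attribute value per keyword to form the first
# user's personalized behavior information set. Candidate values are
# illustrative assumptions, not taken from the patent.

import random

STATE_ATTRIBUTES = {
    "place": ["Haidilao", "Little Sheep", "the canteen"],
    "taste": ["very delicious", "somewhat spicy", "average"],
    "companion": ["family", "friends", "colleagues"],
}

def sample_personalized_info(state_attributes, rng=random):
    """Sample an attribute value for each keyword (the first sampling result)."""
    return {keyword: rng.choice(values)
            for keyword, values in state_attributes.items()}
```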
Further, after the personalized behavior information set is obtained, step S130 may further include: inputting the dialog interaction information, the historical session information, the behavior information and attribute information corresponding to the current time information, and the personalized behavior information set of the first user into the behavior-aware dialog model to generate the response information. In this way, the degree of personalization of the response information can be further improved, its accuracy increased, and the user's experience and satisfaction enhanced.
Fig. 4 schematically illustrates a flow chart of another intelligent response method according to an example embodiment of the present disclosure. Referring to fig. 4, the intelligent response method may further include steps S410 to S430, which will be described in detail below.
In step S410, sampling is performed again and/or multiple times in a first preset time period to which the sampled target behavior data belongs, so as to obtain one or more second sampling results.
In step S420, a keyword of a second user corresponding to the first user and an attribute value corresponding to the keyword are determined according to the second sampling result.
In step S430, a personalized behavior information set of the second user is determined according to the keyword of the second user and the attribute value corresponding to the keyword.
Further, after obtaining the personalized behavior information set of the second user, the step S130 may further include: inputting the dialogue interaction information, the historical conversation information, the behavior information corresponding to the current time information, the attribute information, the personalized behavior information of the first user and the personalized behavior information of the second user into a behavior perception dialogue model, and generating response information between the first user and the second user. Specifically, the generated response information between the first user and the second user may be, for example, as shown in table 3 below:
TABLE 3
Hereinafter, the intelligent response method of the present exemplary embodiment is further explained and illustrated.
First, a set of the user's daily behaviors is designed. This module defines a set of common daily activities and designs a detailed state background for each behavior; these behaviors and their corresponding detailed state information serve as the robot's personalized information when chatting.
Second, the schedule is designed. For each dialogue robot, a personalized behavior schedule can be designed, in which the behavior for each time section is selected from the daily behavior set. With this design, each robot's daily schedule is closer to the real world.
Then, dialogue corpora based on behavior state backgrounds are collected. Two behavior states are sampled from the daily behavior state set to form a group; two annotators each assume one of the behavior state backgrounds and carry out everyday chat around those states, yielding multi-turn dialogue corpora consistent with that group of behavior states.
Further, the behavior-aware dialog model is trained. By training on the collected multi-turn dialogue corpora annotated with behavior states, the model learns to produce replies consistent with the currently set behavior state background.
Finally, the chat robot with time-behavior perception is applied. At each moment, the robot's personalized behavior background state at the current time can be obtained from the designed time behavior table; for each input, the input and the personalized behavior state are sent to the trained model, which outputs a reply consistent with that behavior state.
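Putting the application step together, an end-to-end sketch might look as follows; `model.generate`, the table layout, and the prompt format are assumptions rather than interfaces defined by the patent:

```python
# End-to-end sketch of the application step: look up the robot's behavior
# state for the current hour in the time behavior table, splice it with the
# dialogue history and the new input, and ask the trained model for a reply.
# `model.generate` and the prompt format are assumptions.

import datetime

def respond(model, behavior_table, history, user_input):
    """Generate a reply conditioned on the behavior state of the current time."""
    hour = datetime.datetime.now().hour
    # Find the first preset time period that contains the current hour.
    state = next(entry for (start, end), entry in behavior_table.items()
                 if start <= hour < end)
    # Splice behavior state, history, and the new input, as during training.
    prompt = "[SEP]".join([str(state), *history, user_input])
    return model.generate(prompt)
```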
With the above intelligent response method, detailed behavior states for different time periods can be designed for the conversation robot, and the robot generates corresponding replies according to the behavior state information of the current time. First, unlike existing fixed personalized information, the robot's personalized information is time-aware, which makes its replies dynamic. Second, the personalized information is designed from the daily behaviors and activities of human life, so it is closer to real life and the robot's replies are more anthropomorphic and realistic. In summary, the invention improves the anthropomorphic experience of the chat robot, and the generated replies are closer to real situations.
The present disclosure also provides an intelligent answering device. Referring to fig. 5, the intelligent answering device may include an information receiving module 510, an information obtaining module 520, and an information generating module 530. Wherein:
the information receiving module 510 may be configured to receive dialog interaction information input by a first user and obtain historical session information corresponding to the first user.
The information obtaining module 520 may be configured to obtain behavior information corresponding to current time information generated by the dialog interaction information and attribute information corresponding to the behavior information from a preset time behavior table.
The information generating module 530 may be configured to input the dialog interaction information, the historical session information, the behavior information corresponding to the current time information, and the attribute information into a behavior-aware dialog model, and generate response information.
In an example embodiment of the present disclosure, before acquiring behavior information corresponding to current time information generated by the dialog interaction information and attribute information corresponding to the behavior information from a preset time behavior table, the intelligent response method further includes:
classifying the daily behavior data of the first user, and adding target behavior data comprising state attributes to the daily behavior data of each category;
and constructing the preset time behavior table according to a first preset time period to which the target behavior data belongs.
In an example embodiment of the present disclosure, the state attribute includes a key and an attribute value corresponding to the key.
In an example embodiment of the present disclosure, constructing the preset time behavior table according to a first preset time period to which the target behavior data belongs includes:
dividing a second preset time period according to a preset rule to obtain a plurality of first preset time periods;
and filling the target behavior data into a preset time period according to a first preset time period to which the target behavior data belongs.
In an example embodiment of the present disclosure, the smart answering device further includes:
the first sampling module may be configured to sample a state attribute of target behavior data of the first user to obtain a first sampling result, and determine a keyword of the first user and an attribute value corresponding to the keyword according to the first sampling result;
the first information determining module may be configured to determine the personalized behavior information set of the first user according to the keyword of the first user and the attribute value corresponding to the keyword.
In an example embodiment of the present disclosure, inputting the dialog interaction information, the historical session information, the behavior information corresponding to the current time information, and the attribute information into a behavior-aware dialog model, and generating the response information includes:
and inputting the dialogue interaction information, the historical conversation information, the behavior information and the attribute information corresponding to the current time information and the personalized behavior information set of the first user into a behavior perception dialogue model to generate response information.
In an example embodiment of the present disclosure, the smart answering device further includes:
the second sampling module can be used for sampling again and/or for multiple times in a first preset time period to which the sampled target behavior data belongs to obtain one or more second sampling results;
an attribute value determination module, configured to determine, according to the second sampling result, a keyword of a second user corresponding to the first user and an attribute value corresponding to the keyword;
and the second information determining module is used for determining the personalized behavior information set of the second user according to the keywords of the second user and the attribute values corresponding to the keywords.
In an example embodiment of the present disclosure, inputting the dialog interaction information, the historical session information, the behavior information corresponding to the current time information, and the attribute information into a behavior-aware dialog model, and generating the response information includes:
inputting the dialogue interaction information, the historical conversation information, the behavior information corresponding to the current time information, the attribute information, the personalized behavior information of the first user and the personalized behavior information of the second user into a behavior perception dialogue model, and generating response information between the first user and the second user.
The specific details of each module in the intelligent response device have been described in detail in the corresponding intelligent response method, and therefore are not described herein again.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the invention, the features and functionality of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided so as to be embodied by a plurality of modules or units.
Moreover, although the steps of the methods of the present invention are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiment of the present invention can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to make a computing device (which can be a personal computer, a server, a mobile terminal, or a network device, etc.) execute the method according to the embodiment of the present invention.
In an exemplary embodiment of the present invention, there is also provided an electronic device capable of implementing the above method.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system."
An electronic device 600 according to this embodiment of the invention is described below with reference to fig. 6. The electronic device 600 shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 6, the electronic device 600 is embodied in the form of a general purpose computing device. The components of the electronic device 600 may include, but are not limited to: the at least one processing unit 610, the at least one memory unit 620, and a bus 630 that couples the various system components including the memory unit 620 and the processing unit 610.
Wherein the storage unit stores program code that is executable by the processing unit 610 to cause the processing unit 610 to perform steps according to various exemplary embodiments of the present invention as described in the above section "exemplary methods" of the present specification. For example, the processing unit 610 may perform step S110 as shown in fig. 1: receiving dialogue interaction information input by a first user, and acquiring historical conversation information corresponding to the first user; step S120: acquiring behavior information corresponding to current time information generated by the dialogue interaction information and attribute information corresponding to the behavior information from a preset time behavior table; step S130: and inputting the dialogue interaction information, the historical conversation information, the behavior information corresponding to the current time information and the attribute information into a behavior perception dialogue model to generate response information.
The storage unit 620 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 6201 and/or a cache memory unit 6202, and may further include a read-only memory unit (ROM) 6203.
The memory unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 630 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 600 may also communicate with one or more external devices 700 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 600 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 650. Also, the electronic device 600 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 660. As shown, the network adapter 660 communicates with the other modules of the electronic device 600 over the bus 630. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiment of the present invention can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to make a computing device (which can be a personal computer, a server, a terminal device, or a network device, etc.) execute the method according to the embodiment of the present invention.
In an exemplary embodiment of the present invention, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above section "exemplary methods" of the present description, when said program product is run on the terminal device.
As the program product for implementing the above method, a portable compact disc read-only memory (CD-ROM) containing the program code may be employed, which can run on a terminal device such as a personal computer. However, the program product of the present invention is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Furthermore, the above-described figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Claims (6)

1. An intelligent answering method, comprising:
receiving dialogue interaction information input by a first user, and acquiring historical conversation information corresponding to the first user;
classifying the daily behavior data of the first user, and adding target behavior data comprising state attributes to the daily behavior data of each category; according to a first preset time period to which the target behavior data belong, constructing a preset time behavior table, comprising: dividing a second preset time period according to a preset rule to obtain a plurality of first preset time periods; filling the target behavior data into a preset time period according to a first preset time period to which the target behavior data belongs;
acquiring behavior information corresponding to current time information generated by the dialogue interaction information and attribute information corresponding to the behavior information from a preset time behavior table;
sampling state attributes of the target behavior data of the first user to obtain a first sampling result, wherein the state attributes comprise keywords and attribute values corresponding to the keywords, and determining the keywords of the first user and the attribute values corresponding to the keywords according to the first sampling result; determining a personalized behavior information set of the first user according to the keywords of the first user and the attribute values corresponding to the keywords;
inputting the dialogue interaction information, the historical conversation information, the behavior information corresponding to the current time information and the attribute information into a behavior perception dialogue model to generate response information; the method comprises the following steps: and inputting the dialogue interaction information, the historical conversation information, the behavior information and the attribute information corresponding to the current time information and the personalized behavior information set of the first user into a behavior perception dialogue model to generate response information.
2. The intelligent answering method according to claim 1, wherein the intelligent answering method further comprises:
sampling again and/or for multiple times in a first preset time period to which the sampled target behavior data belongs to obtain one or more second sampling results;
determining a keyword of a second user corresponding to the first user and an attribute value corresponding to the keyword according to the second sampling result;
and determining the personalized behavior information set of the second user according to the keywords of the second user and the attribute values corresponding to the keywords.
3. The intelligent response method according to claim 2, wherein the dialog interaction information, the historical session information, the behavior information corresponding to the current time information, and the attribute information are input into a behavior-aware dialog model, and generating the response information comprises:
inputting the dialogue interaction information, the historical conversation information, the behavior information corresponding to the current time information, the attribute information, the personalized behavior information of the first user and the personalized behavior information of the second user into a behavior perception dialogue model, and generating response information between the first user and the second user.
4. An intelligent answering device, comprising:
the information receiving module is used for receiving dialogue interaction information input by a first user and acquiring historical conversation information corresponding to the first user;
the information acquisition module is used for acquiring behavior information corresponding to the current time information generated by the dialogue interaction information and attribute information corresponding to the behavior information from a preset time behavior table; before acquiring behavior information corresponding to current time information generated by the dialog interaction information and attribute information corresponding to the behavior information from a preset time behavior table, the method further comprises the following steps:
classifying the daily behavior data of the first user, and adding target behavior data comprising state attributes to the daily behavior data of each category;
according to a first preset time period to which the target behavior data belong, constructing the preset time behavior table, including: dividing a second preset time period according to a preset rule to obtain a plurality of first preset time periods; filling the target behavior data into a preset time period according to a first preset time period to which the target behavior data belongs;
the first sampling module is used for sampling state attributes of the target behavior data of the first user to obtain a first sampling result, wherein the state attributes comprise keywords and attribute values corresponding to the keywords, and the keywords of the first user and the attribute values corresponding to the keywords are determined according to the first sampling result;
the first information determining module is used for determining a personalized behavior information set of the first user according to the keywords of the first user and the attribute values corresponding to the keywords;
the information generation module is used for inputting the dialogue interaction information, the historical conversation information, the behavior information corresponding to the current time information and the attribute information into a behavior perception dialogue model to generate response information; the method comprises the following steps: and inputting the dialogue interaction information, the historical conversation information, the behavior information and the attribute information corresponding to the current time information and the personalized behavior information set of the first user into a behavior perception dialogue model to generate response information.
5. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the intelligent response method according to any one of claims 1 to 3.
6. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the smart answering method of any one of claims 1-3 via execution of the executable instructions.
CN201910637337.6A 2019-07-15 2019-07-15 Intelligent response method and device, storage medium and electronic equipment Active CN110347817B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910637337.6A CN110347817B (en) 2019-07-15 2019-07-15 Intelligent response method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN110347817A CN110347817A (en) 2019-10-18
CN110347817B true CN110347817B (en) 2022-03-18

Family

ID=68175358

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910637337.6A Active CN110347817B (en) 2019-07-15 2019-07-15 Intelligent response method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN110347817B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111552779A (en) * 2020-04-28 2020-08-18 深圳壹账通智能科技有限公司 Man-machine conversation method, device, medium and electronic equipment
CN112131367A (en) * 2020-09-24 2020-12-25 民生科技有限责任公司 Self-auditing man-machine conversation method, system and readable storage medium
CN112989013B (en) * 2021-04-30 2021-08-24 武汉龙津科技有限公司 Conversation processing method and device, electronic equipment and storage medium
CN116627261A (en) * 2023-07-25 2023-08-22 安徽淘云科技股份有限公司 Interaction method, device, storage medium and electronic equipment
CN116881429B (en) * 2023-09-07 2023-12-01 四川蜀天信息技术有限公司 Multi-tenant-based dialogue model interaction method, device and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105094315A (en) * 2015-06-25 2015-11-25 百度在线网络技术(北京)有限公司 Method and apparatus for smart man-machine chat based on artificial intelligence
CN105654950A (en) * 2016-01-28 2016-06-08 百度在线网络技术(北京)有限公司 Self-adaptive voice feedback method and device
CN106297789A (en) * 2016-08-19 2017-01-04 北京光年无限科技有限公司 Personalized interaction method and interactive system for an intelligent robot
CN107153685A (en) * 2017-04-25 2017-09-12 竹间智能科技(上海)有限公司 Timeline-based memory cognition method and device in an interactive system
CN108124008A (en) * 2017-12-20 2018-06-05 山东大学 Elderly companionship and care system and method in an intelligent space environment
WO2018175291A1 (en) * 2017-03-20 2018-09-27 Ebay Inc. Detection of mission change in conversation
CN109176535A (en) * 2018-07-16 2019-01-11 北京光年无限科技有限公司 Interaction method and system based on an intelligent robot

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9214156B2 (en) * 2013-08-06 2015-12-15 Nuance Communications, Inc. Method and apparatus for a multi I/O modality language independent user-interaction platform

Similar Documents

Publication Publication Date Title
CN110347817B (en) Intelligent response method and device, storage medium and electronic equipment
CN109658928B (en) Cloud multi-mode conversation method, device and system for home service robot
CN111930940B (en) Text emotion classification method and device, electronic equipment and storage medium
CN110347792B (en) Dialog generation method and device, storage medium and electronic equipment
US11842164B2 (en) Method and apparatus for training dialog generation model, dialog generation method and apparatus, and medium
CN106875940B (en) Machine self-learning knowledge graph construction and training method based on neural network
US20180272240A1 (en) Modular interaction device for toys and other devices
CN109218390A (en) User screening method and device
CN116009748B (en) Picture information interaction method and device in children interaction story
CN109948151A (en) Method for constructing a voice assistant
CN115309877A (en) Dialog generation method, dialog model training method and device
US20230154453A1 (en) Method of Generating Response Using Utterance and Apparatus Therefor
CN108306813B (en) Session message processing method, server and client
KR20180105501A (en) Method for processing language information and electronic device thereof
CN114220423A (en) Voice wake-up method, wake-up model customization method, electronic device, and storage medium
CN112307166B (en) Intelligent question-answering method and device, storage medium and computer equipment
CN111353290B (en) Method and system for automatically responding to user inquiry
CN117252963A (en) Digital person generation method and device
CN109002498B (en) Man-machine conversation method, device, equipment and storage medium
CN110290057B (en) Information processing method and information processing device
CN113569585A (en) Translation method and device, storage medium and electronic equipment
CN113553413A (en) Dialog state generation method and device, electronic equipment and storage medium
CN115481221A (en) Method, device and equipment for enhancing dialogue data and computer storage medium
CN116226411B (en) Interactive information processing method and device for interactive project based on animation
CN108763488A (en) Voice prompt method and device for user help requests, and terminal device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant