CN110837543A - Conversation interaction method, device and equipment

Info

Publication number
CN110837543A
Authority
CN
China
Prior art keywords
category
user
input information
category feature
current input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910973935.0A
Other languages
Chinese (zh)
Inventor
马秦宇
周阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Original Assignee
Shenzhen H & T Home Online Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen H & T Home Online Network Technology Co ltd filed Critical Shenzhen H & T Home Online Network Technology Co ltd
Priority to CN201910973935.0A priority Critical patent/CN110837543A/en
Publication of CN110837543A publication Critical patent/CN110837543A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval of unstructured textual data
    • G06F16/33 Querying
    • G06F16/332 Query formulation
    • G06F16/3329 Natural language query formulation or dialogue systems
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G06F16/3343 Query execution using phonetics
    • G06F16/3344 Query execution using natural language analysis
    • G06F16/35 Clustering; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application discloses a dialogue interaction method, which comprises the following steps: acquiring current input information of a first user, and performing semantic recognition on the current input information to obtain a first semantic of the current input information; acquiring at least one category feature content included in the current input information of the first user and the category features to which the at least one category feature content respectively belongs; determining a first scene category to which the current input information of the first user belongs; and outputting at least one dialogue question to the first user according to the first scene category and the category features for which no category feature content has been acquired in the first scene category. By implementing the method and device, dialogue questions are output according to the category features, among the plurality of category features corresponding to the scene category, for which no category feature content has been obtained, achieving the effect of responding to the user quickly.

Description

Conversation interaction method, device and equipment
Technical Field
The application relates to the technical field of intelligent voice, in particular to a conversation interaction method, device and equipment.
Background
With the rapid development of artificial intelligence, robots that understand human language, converse with people and give corresponding information feedback have become something most people expect. In recent years, chat robots have been used more and more widely in industry: they interact intelligently with a user according to the user's language input, solve the user's practical problems, and greatly reduce the investment of human labor.
In the prior art, a task-oriented chat robot segments the natural language input by a user into words and determines the reply from reply information set in advance for each word vector obtained after segmentation, which leads to the problem of inaccurate reply information.
Disclosure of Invention
In order to solve the above problems, the present application provides a dialogue interaction method, device and equipment, which determine the scene category to which a user's current input information belongs, output dialogue questions for the current input information within that scene category, and combine the input information with the scene, thereby improving the accuracy of the dialogue questions.
In a first aspect, an embodiment of the present application provides a dialog interaction method, where the method includes:
acquiring current input information of a first user, and performing semantic recognition on the current input information to obtain first semantics of the current input information;
transmitting the current input information of the first user to a category content identification model corresponding to the first semantic, and acquiring at least one category feature content included in the current input information of the first user and the category features to which the at least one category feature content respectively belongs;
determining a first scene category to which the current input information of the first user belongs according to category features to which at least one category feature content included in the current input information of the first user belongs, wherein the first scene category corresponds to a plurality of category features;
and outputting at least one dialogue question to the first user according to the first scene category and the category features for which no category feature content has been acquired in the first scene category.
In a possible embodiment, the method further comprises:
acquiring first reply information of the first user to the dialogue question;
transmitting the first reply information to the content category identification model corresponding to the first semantic, and acquiring at least one category feature content included in the first reply information and the category features to which the at least one category feature content respectively belongs;
and outputting recommendation information for the current input information of the first user according to the acquired category feature content.
In a possible implementation manner, one dialog question output to the first user uniquely corresponds to a first category feature, and the first category feature is a category feature of which category feature content is not acquired;
the transmitting the first reply information to the content category identification model corresponding to the first semantic, and the obtaining at least one category feature content included in the first reply information and the category features to which the at least one category feature content respectively belongs include:
transmitting the first reply information to a content category identification model corresponding to the first semantic, acquiring at least one category feature content included in the first reply information, judging whether a category feature to which the at least one category feature content included in the first reply information belongs includes the first category feature, and if so, acquiring the category feature content of the first category feature from the first reply information;
otherwise, outputting the dialogue questions corresponding to the first category of features to the first user again.
Optionally, the outputting of at least one dialogue question to the first user according to the first scene category and the category features for which no category feature content has been acquired in the first scene category includes:
outputting a first dialogue question to the first user according to the priority ranking of the plurality of category features corresponding to the first scene category, wherein the first dialogue question uniquely corresponds to the category feature with the highest priority among the category features for which no category feature content has been acquired.
In a possible embodiment, before outputting the recommendation information for the current input information of the first user according to the acquired category feature content, the method includes:
acquiring the preset necessary category features among the plurality of category features corresponding to the first scene category;
and determining that the category feature content of the necessary category features has been acquired.
Optionally, the outputting of at least one dialogue question according to the first scene category and the category features for which no category feature content has been acquired in the first scene category includes:
acquiring historical input information of the first user within a first preset time according to the identity of the first user, wherein the historical input information of the first user comprises at least one category feature content and the category features to which the at least one category feature content respectively belongs;
and obtaining the category feature content of at least one category feature corresponding to the first scene category from the at least one category feature content included in the historical input information of the first user and the category features to which the at least one category feature content respectively belongs.
In a possible implementation manner, the performing semantic recognition on the current input information to obtain a first semantic meaning of the current input information includes:
acquiring historical input information of the first user within second preset time according to the identity of the first user;
and performing semantic recognition on the current input information of the first user and the historical input information of the first user to obtain semantics, wherein the semantics are used as first semantics of the current input information.
In a second aspect, an embodiment of the present application provides a dialog interaction device, where the dialog interaction device includes:
the acquisition module is used for acquiring the current input information of the first user;
the semantic recognition module is used for performing semantic recognition on the current input information acquired by the acquisition module to obtain a first semantic meaning of the current input information;
the acquisition module is further configured to transmit the current input information of the first user to the category content identification model corresponding to the first semantic, and to acquire at least one category feature content included in the current input information of the first user and the category features to which the at least one category feature content respectively belongs;
a determining module, configured to determine a first scene category to which current input information of the first user belongs according to category features to which at least one category feature content included in the current input information of the first user belongs, where the first scene category corresponds to multiple category features;
and the output module is used for outputting at least one dialogue question to the first user according to the first scene category and the category features for which no category feature content has been acquired in the first scene category.
In a third aspect, an embodiment of the present application provides a dialogue interaction apparatus comprising an input-output interface, a processor and a memory, wherein the processor is configured to execute a computer program stored in the memory to implement the method described in the above aspects and any possible embodiment thereof.
In a fourth aspect, an embodiment of the present application provides a readable storage medium having instructions stored therein which, when executed on a computer, cause the computer to perform the method described in the above aspects and any possible embodiment thereof.
The application provides a dialogue interaction method, device and equipment. When the user switches the dialogue scene, a dialogue question for the user's current input information under the scene category is output according to the category features, among the plurality of category features corresponding to the scene category, for which no category feature content has been obtained; the current input information is interpreted within the scene to which it belongs, which improves the accuracy of the output dialogue questions.
Drawings
Fig. 1 is a schematic flowchart of a dialog interaction method according to an embodiment of the present application;
fig. 2 is a block diagram of a structure of a scene category, a category feature, and a category feature content according to an embodiment of the present application;
fig. 3 is a block diagram of another structure of scene categories, category features, and category feature contents according to an embodiment of the present application;
fig. 4 is a schematic diagram of human-computer interaction of a dialog interaction method according to an embodiment of the present application;
fig. 5 is a schematic diagram of human-computer interaction of another dialog interaction method provided in the embodiment of the present application;
fig. 6 is a schematic flowchart of another dialog interaction method provided in the embodiment of the present application;
fig. 7 is a schematic diagram of human-computer interaction of another dialog interaction method provided in the embodiment of the present application;
fig. 8 is a schematic diagram of human-computer interaction of another dialog interaction method provided in the embodiment of the present application;
fig. 9 is a block diagram illustrating a structure of a dialog interaction device according to an embodiment of the present application;
fig. 10 is a block diagram of a dialog interaction device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The following describes embodiments of the present application in further detail with reference to the accompanying drawings.
A dialog interaction method provided by the present application is first described in detail with reference to the accompanying drawings, which are shown in fig. 1 to 3.
Referring to fig. 1, fig. 1 is a schematic flowchart of a dialog interaction method according to an embodiment of the present application. As shown in fig. 1, a dialog interaction method specifically includes the following steps:
s100, current input information of a first user is obtained, semantic recognition is carried out on the current input information, and first semantics of the current input information are obtained.
Specifically, the dialog interaction device obtains current input information of a first user, where the current input information may be a voice input or a text input, and if the current input information is a voice input, the dialog interaction device converts the current input information into text information and performs semantic recognition on the text information.
For example, the dialogue interaction device obtains the current input information of the first user "I want to buy water to wipe my face", performs semantic recognition on it, and obtains "purchase" as the first semantic of the current input information; for another example, the device obtains the current input information "I want to go to Beijing", performs semantic recognition on it, and obtains "trip" as its first semantic.
In a possible implementation manner, the dialogue interaction device performs semantic recognition by transmitting the current input information to a pre-trained semantic recognition model; optionally, the pre-trained semantic recognition model is a Bidirectional Encoder Representations from Transformers (BERT) model. The BERT model performs semantic recognition of the input information based on character vectors, unlike models trained to perform semantic recognition on the word vectors obtained after word segmentation: the current input information of the first user is input into the BERT model, which can output the first semantic of the current input information according to the character vectors in it. Because the BERT pre-trained model does not need to segment the text into words, it avoids errors caused by improper segmentation and can effectively improve the correctness of semantic recognition.
In a possible embodiment, the dialogue interaction device obtains historical input information of the first user within a second preset time according to the identity of the first user, performs semantic recognition on the current input information of the first user together with the historical input information, and takes the resulting semantic as the first semantic of the current input information. Specifically, because a user's input information is related to his or her historical input information, the dialogue interaction device obtains the current input information of the first user and then obtains the historical input information according to the identity of the first user. Illustratively, the dialogue interaction device identifies the first user before acquiring the input information. In a possible implementation manner, the device performs face recognition on the first user and uses the recognized face information as the identity of the first user; it searches its storage area for face information matching the identity of the first user and, according to the matched face information, acquires the historical input information bound to that identity. It can be understood that the face information of a user in the storage area has a binding relationship with that user's input information, so the dialogue interaction device can acquire the historical input information of the user matching the recognized face information. The historical input information contains at least one category feature content included in the first user's historical input and the category features to which the at least one category feature content respectively belongs. Further, the dialogue interaction device may store, in the storage area, the current input information of the first user, the at least one category feature content included in it, and the category features to which the at least one category feature content respectively belongs.
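For illustration only (not part of the patent disclosure), the per-user storage described above can be pictured as a small keyed store: the recognized identity keys the user's past inputs together with the category feature contents extracted from them. The names below (HistoryEntry, UserHistoryStore) are assumptions; a real device would also filter by the preset time window and use persistent storage.

```python
# Minimal sketch of a per-user history store keyed by the user's identity.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class HistoryEntry:
    text: str                 # raw input information
    features: Dict[str, str]  # category feature -> category feature content

@dataclass
class UserHistoryStore:
    _store: Dict[str, List[HistoryEntry]] = field(default_factory=dict)

    def record(self, user_id: str, entry: HistoryEntry) -> None:
        self._store.setdefault(user_id, []).append(entry)

    def recent(self, user_id: str, limit: int = 10) -> List[HistoryEntry]:
        # In the described method this would also be limited to a preset time window.
        return self._store.get(user_id, [])[-limit:]
```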
After the historical input information of the first user is acquired, semantic recognition is performed by combining the historical input information with the current input information. Illustratively, the dialogue interaction device transmits the historical and current input information of the first user to a pre-trained semantic recognition model, such as the BERT model, which outputs a semantic according to the character vectors in the historical and current input information, thereby obtaining the semantic of the current input information. Performing semantic recognition in combination with the historical input information makes it easier to recognize the semantic of the current input and to accurately judge the intention the user wants to realize with it, which improves the accuracy of the subsequent steps of the embodiments of the application.
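As a rough sketch of how such a BERT-based recognizer could be wired up with an off-the-shelf library (here the Hugging Face transformers pipeline; the checkpoint name and label set are assumptions and not part of the patent), concatenating the history with the current input as described above:

```python
# Sketch only: assumes a BERT text classifier fine-tuned on the device's
# semantic labels (e.g. "purchase", "trip"); the model name is hypothetical.
from transformers import pipeline

semantic_recognizer = pipeline(
    "text-classification",
    model="example-org/bert-dialog-semantics",  # hypothetical checkpoint
)

def first_semantic(current_input: str, history: str = "") -> str:
    """Recognize the first semantic of the current input, optionally
    prepending the user's historical input as context."""
    text = (history + " " + current_input).strip()
    return semantic_recognizer(text)[0]["label"]

# first_semantic("I want to buy water to wipe my face")  # -> "purchase" (assumed label)
```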
S101, transmitting the current input information of the first user to a category content identification model corresponding to the first semantic meaning, and acquiring at least one category characteristic content included in the current input information of the first user and category characteristics to which the at least one category characteristic content belongs respectively.
Specifically, step S100 obtains the first semantic of the current input information of the first user; obtaining the category feature content of the first user under this first semantic can be understood as extracting the key information in the current input information. For example, the current input information of the first user is "I want to buy water to wipe my face" and the first semantic obtained in step S100 is "purchase"; the dialogue interaction device transmits the current input information to the category content identification model corresponding to the first semantic "purchase" and acquires "face" and "water" as the category feature contents of the first user in the semantic environment of "purchase". For another example, the current input information of the first user is "I want to go to Beijing" and the first semantic obtained in step S100 is "trip"; the device transmits the current input information to the category content identification model corresponding to "trip" and acquires "I" and "go to Beijing" as the category feature contents in the semantic environment of "trip". The category content identification model can be understood as embodying the key information the dialogue interaction device needs under a given semantic: under "purchase" it needs to know what product is being purchased and what function and/or way of use is wanted, while under "trip" it needs to know who is travelling, from where, to where and/or when. Therefore, in the category content identification model corresponding to "purchase", "I" is not extracted or recognized, while in the model corresponding to "trip" it is, because "trip" needs to know who is travelling, whereas "purchase" does not need to know who buys the product, only what product needs to be purchased. Under different semantics the category feature contents to be acquired are therefore different, and it can be understood that the semantics and the category content identification models have a corresponding relationship.
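Purely for illustration (the names and stubbed models below are assumptions, not the patent's implementation), the correspondence between semantics and category content identification models can be pictured as a simple registry that dispatches the input to the model selected by the first semantic:

```python
# Sketch: each first semantic selects its own category content
# identification model; only the dispatch logic is shown here.
from typing import Callable, Dict

# A model maps input text to {category feature: category feature content}.
CategoryContentModel = Callable[[str], Dict[str, str]]

MODEL_REGISTRY: Dict[str, CategoryContentModel] = {
    "purchase": lambda text: {},  # stub: would extract e.g. {"category": "water", "part": "face"}
    "trip":     lambda text: {},  # stub: would extract e.g. {"destination": "go to Beijing"}
}

def extract_category_contents(first_semantic: str, text: str) -> Dict[str, str]:
    model = MODEL_REGISTRY[first_semantic]
    return model(text)
```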
The way the category content recognition model obtains the specific category feature contents is described in detail below. In a possible implementation manner, the category content recognition model may be a Hidden Markov Model (HMM). An HMM describes a Markov process with hidden unknown parameters, which can be understood simply as follows: a class-A event occurs a number of times (each occurrence may be different) and is observed, forming a sequence called the observation sequence. Behind the class-A events, a class-B event also occurs a number of times; the class-B events are considered to govern the behaviour of the class-A events, and the sequence of class-B events is called the hidden state sequence (state sequence for short). The state sequence is the labelling we want to attach to the input information. Before an HMM is put into use, it needs to be trained. The training process can be understood simply as follows, taking the obtained input information "I want to buy water to wipe my face" as the class-A event. The current input information is labelled in the SBIEO format (a scheme for adding a label to each character of a text), which can be understood as: when a category feature content consists of a single character, it is labelled S; when it consists of more than one character, the first character is labelled B, the middle characters other than the first and last are labelled I, and the last character is labelled E; characters unrelated to any category feature content are labelled O. A sequence randomly assigned by the HMM to the input "I want to buy water to wipe my face", such as "S-B-I-E-O-S", is an observation sequence. The class-B event can be regarded as the state sequence supplied by the developer when training the HMM: for example, if the state sequence to be extracted from "I want to buy water to wipe my face" labels the characters for "face" and "water" as two single-character category feature contents and the remaining characters as unrelated, i.e. "O-O-O-O-S-S", the HMM will also receive this state sequence. HMM training can be understood as a process of correcting wrong answers with the right answers: various input information is trained continuously and the parameters are adjusted until the observation sequence is infinitely close to or identical with the state sequence, so that the HMM can correctly label sequences of input information, yielding an HMM that meets the requirements of the application. The sequence labelling process of the trained HMM is introduced below.
Illustratively, the trained HMM labels the current input information "I want to buy water to wipe my face" with the sequence "O-O-O-O-S-S". From this sequence it is known that the characters for "I", "want", "buy" and "wipe" are all unrelated to any category feature content, while "face" and "water" are both labelled "S", indicating that each is a category feature content formed by a single character; the category feature contents "face" and "water" are therefore obtained from the sequence labelling of the first user's current input information. As another example, the HMM labels the current input information "I want to go to Beijing" with the sequence "S-O-B-I-E": "I" is a category feature content formed by a single character, "want" is a character unrelated to any category feature content, "go" is the first character of a category feature content, "Bei" is a middle character other than the first and last, and "jing" is the last character, so "go", "Bei" and "jing" together form the category feature content "go to Beijing". According to the sequence labelling "S-O-B-I-E" of the first user's current input information "I want to go to Beijing", the HMM obtains the category feature contents "I" and "go to Beijing".
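For illustration only (not the patent's own code), the SBIEO tags produced by the trained model can be turned into category feature contents with a small decoding pass. The sketch below assumes one tag per character (per token in this English rendering):

```python
# Sketch: collapse an SBIEO tag sequence into category feature contents.
from typing import List

def decode_sbieo(tokens: List[str], tags: List[str]) -> List[str]:
    contents: List[str] = []
    buffer: List[str] = []
    for token, tag in zip(tokens, tags):
        if tag == "S":            # content made of a single character
            contents.append(token)
        elif tag == "B":          # first character of a multi-character content
            buffer = [token]
        elif tag == "I":          # middle character
            buffer.append(token)
        elif tag == "E":          # last character: emit the buffered content
            buffer.append(token)
            contents.append("".join(buffer))
            buffer = []
        # tag "O": character unrelated to any category feature content
    return contents

# decode_sbieo(["I", "want", "buy", "wipe", "face", "water"],
#              ["O", "O", "O", "O", "S", "S"])  -> ["face", "water"]
```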
In a possible implementation manner, the dialogue interaction device queries the preset category features according to the category feature contents "water" and "face" included in the current input information of the first user, and obtains the category features "category" and "part" that are successfully matched with them, that is, the category features to which the category feature contents included in the current input information "I want to buy water to wipe my face" belong. In another possible implementation manner, the device queries the preset category features according to the category feature contents "I" and "go to Beijing" included in the current input information of the first user, and obtains the category features "number" and "destination" that are successfully matched with them, that is, the category features to which the category feature contents included in the current input information "I want to go to Beijing" belong.
For a better understanding of the relationship between category feature contents and category features, refer to fig. 2, which is a structural block diagram of a scene category, category features and category feature contents provided in an embodiment of the present application. As shown in fig. 2, the category features are preset, and each preset category feature includes a plurality of preset category feature contents; for example, category feature 1 "category" includes the preset category feature contents "water", "milk" and "cream", and category feature 4 "part" includes the preset category feature contents "face", "lip", "hand", "foot", and so on.
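Read this way, the preset structure of fig. 2 is a table from category features to their preset contents, and matching a recognized content back to its feature is a lookup. The sketch below mirrors the examples in the text only and is purely illustrative (it is not an exhaustive or authoritative table):

```python
# Sketch of the Fig. 2 preset table for the "skin care" scene (partial).
from typing import Dict, List, Optional

SKIN_CARE_FEATURES: Dict[str, List[str]] = {
    "category":       ["water", "milk", "cream"],
    "function":       ["moisturizing", "whitening", "hydrating", "oil control"],
    "user skin type": ["dry skin", "oily skin", "neutral skin", "allergic skin"],
    "part":           ["face", "lip", "hand", "foot"],
}

def feature_of(content: str, table: Dict[str, List[str]]) -> Optional[str]:
    """Return the category feature whose preset contents include `content`."""
    for feature, contents in table.items():
        if content in contents:
            return feature
    return None

# feature_of("water", SKIN_CARE_FEATURES) -> "category"
# feature_of("face", SKIN_CARE_FEATURES)  -> "part"
```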
Further, the category features corresponding to different scenes are different; refer to fig. 3, which is a structural block diagram of another scene category, its category features and category feature contents provided in an embodiment of the present application. As shown in fig. 3, category feature 8 "departure place" includes the preset category feature contents "Guangzhou", "Shanghai", "Shenzhen", and so on. The relationship between scene categories, category features and category feature contents is described below in conjunction with the human-computer interaction diagrams.
S102, determining a first scene category to which the current input information of the first user belongs according to category characteristics to which at least one category characteristic content included in the current input information of the first user belongs, wherein the first scene category corresponds to a plurality of category characteristics.
Specifically, the first scene category corresponds to a plurality of category features, and the category features corresponding to different scene categories are different. For example, as shown in fig. 2, the first scene category is scene category 1 "skin care", and the corresponding category features include: category feature 1 "category", category feature 2 "function", category feature 3 "user skin type", category feature 4 "part", category feature 5 "brand", category feature 6 "user gender" and category feature 7 "user age". For another example, as shown in fig. 3, the first scene category may be scene category 2 "ticket booking", and the corresponding category features include: category feature 8 "departure place", category feature 9 "destination", category feature 10 "number", category feature 11 "travel date", category feature 12 "travel time", category feature 13 "travel mode", category feature 14 "price interval", and the like.
After the category features to which the at least one category feature content included in the current input information of the first user respectively belongs are obtained in step S101, in a possible implementation manner these category features may be string-matched against the category features preset by the dialogue interaction device, and the scene category corresponding to the successfully matched category features is taken as the first scene category. It can be understood that step S101 obtains at least one category feature content of the current input information of the first user and the category features to which each belongs; in order to determine the dialogue questions for the current input information quickly, the dialogue interaction device places the current input information of the first user within a first scene category so as to execute step S103.
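A minimal way to realize this matching, shown only as an illustration under assumed tables and a simple "most matches wins" tie-break (neither is prescribed by the patent), is to compare the extracted category features with each scene category's preset feature list:

```python
# Sketch of step S102: choose the scene category whose preset category
# features match the category features extracted from the current input.
from typing import Dict, List, Optional

SCENE_FEATURES: Dict[str, List[str]] = {
    "skin care":      ["category", "function", "user skin type", "part",
                       "brand", "user gender", "user age"],
    "ticket booking": ["departure place", "destination", "number", "travel date",
                       "travel time", "travel mode", "price interval"],
}

def first_scene_category(extracted_features: List[str]) -> Optional[str]:
    best, best_hits = None, 0
    for scene, features in SCENE_FEATURES.items():
        hits = sum(1 for f in extracted_features if f in features)
        if hits > best_hits:
            best, best_hits = scene, hits
    return best

# first_scene_category(["category", "part"])      -> "skin care"
# first_scene_category(["destination", "number"]) -> "ticket booking"
```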
S103, outputting at least one dialogue question to the first user according to the first scene category and the category features for which no category feature content has been acquired in the first scene category.
Specifically, the first scene category corresponds to a plurality of category features; step S102 determines the first scene category to which the current input information of the first user belongs, and step S101 obtains the at least one category feature content included in the current input information and the category features to which each belongs. The dialogue interaction device then obtains the category features in the first scene category for which no category feature content has been acquired, and outputs at least one dialogue question to the first user according to those category features.
Illustratively, a first dialogue question is output to the first user according to the priority ranking of the plurality of category features corresponding to the first scene category, and the first dialogue question uniquely corresponds to the category feature with the highest priority among the category features for which no category feature content has been acquired.
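The priority-ordered selection can be sketched as follows; this is an illustration only, and the question templates are placeholders rather than the patent's wording:

```python
# Sketch: pick the highest-priority category feature that still has no
# content and return its dialogue question.
from typing import Dict, List, Optional, Tuple

def next_dialog_question(priority_features: List[str],
                         filled: Dict[str, str],
                         templates: Dict[str, str]) -> Optional[Tuple[str, str]]:
    for feature in priority_features:          # assumed sorted by priority
        if feature not in filled:
            question = templates.get(feature, f"Could you tell me the {feature}?")
            return feature, question
    return None                                # all category features are filled

# next_dialog_question(
#     ["category", "function", "user skin type", "part"],
#     {"category": "water", "part": "face"},
#     {"function": "What kind of function do you need? "
#                  "Moisturizing, whitening, hydrating or oil control?"})
# -> ("function", "What kind of function do you need? ...")
```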
In a possible embodiment, the category feature content of the category feature corresponding to the first scene category may also be obtained according to the historical input information of the first user.
In a possible implementation manner, the dialogue interaction device obtains, according to the identity of the first user, historical input information of the first user within a first preset time, where the historical input information includes at least one category feature content and the category features to which the at least one category feature content respectively belongs; and it obtains the category feature content of at least one category feature corresponding to the first scene category from the at least one category feature content included in the historical input information of the first user and the category features to which each such content belongs.
Specifically, the dialogue interaction device queries, according to the identity of the first user, the category feature content of at least one category feature of the first user's historical input information under the first scene category, and supplements it to the corresponding category features of the first user's current input information under the first scene category. The category feature contents included in the historical input information under the first scene category are combined with those of the current input information and treated as the category feature contents acquired from the current input information of the first user. By implementing this embodiment, the historical input information of the user is remembered and combined with the current input, which can reduce the number of times dialogue questions have to be put to the user.
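Supplementing the current input with the remembered history amounts to a merge in which the current input takes precedence. A minimal sketch, with hypothetical names and for illustration only:

```python
# Sketch: combine category feature contents from historical input (within
# the first preset time) with those of the current input; the current
# input overrides history where both provide a value.
from typing import Dict, Iterable

def merge_with_history(current: Dict[str, str],
                       history: Iterable[Dict[str, str]]) -> Dict[str, str]:
    merged: Dict[str, str] = {}
    for past in history:        # older entries first ...
        merged.update(past)
    merged.update(current)      # ... so the current input wins on conflict
    return merged

# merge_with_history({"category": "water"}, [{"part": "face"}])
# -> {"part": "face", "category": "water"}
```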
For better understanding of the present embodiment, an exemplary human-computer interaction description is provided below with reference to the accompanying drawings, and reference is made to fig. 2 to 5.
Referring to fig. 4, fig. 4 is a schematic diagram of human-computer interaction of a dialogue interaction method according to an embodiment of the present application. As shown in fig. 4, the first user inputs "I want to buy water to wipe my face". The dialogue interaction device receives this as the current input information of the first user and performs semantic recognition on it, obtaining "purchase" as the first semantic of the current input information. It transmits the current input information to the category content identification model corresponding to the first semantic "purchase" and acquires the category feature contents "water" and "face" included in it, with "water" belonging to the category feature "category" and "face" to the category feature "part". According to the category features "category" and "part" to which the category feature contents "water" and "face" respectively belong, the device determines that the first scene category to which the current input information "I want to buy water to wipe my face" belongs is "skin care".
It should be noted that the first scene category "skin care" corresponds to a plurality of category features. As shown in fig. 2, scene category 1 is "skin care", and the corresponding category features, in order of priority from high to low, are: category feature 1 "category", category feature 2 "function", category feature 3 "user skin type", category feature 4 "part", category feature 5 "brand", category feature 6 "user gender" and category feature 7 "user age". For example, the category feature to which "water", "milk" and/or "cream" in category feature content 1 belong is category feature 1 "category"; it can be understood that when the user's input information includes "water", "milk" and/or "cream", the category content identification model of the first semantic "purchase" can determine from these category feature contents that they belong to category feature 1 "category". Similarly, the category feature to which "moisturizing", "whitening", "hydrating" and/or "oil control" in category feature content 2 belong is category feature 2 "function"; the category feature to which "dry skin", "oily skin", "neutral skin" and/or "allergic skin" in category feature content 3 belong is category feature 3 "user skin type"; the category feature to which "face", "lip", "hand" and/or "foot" in category feature content 4 belong is category feature 4 "part"; the category feature to which "suitable herbal", "butcher's antelope", "natural hall" or "herborist" in category feature content 5 belong is category feature 5 "brand"; the category feature to which "male" or "female" in category feature content 6 belongs is category feature 6 "user gender"; the category feature to which "under 20 years", "20 to 25 years", "25 to 30 years" or "over 30 years" in category feature content 7 belongs is category feature 7 "user age"; and so on. There may be other category features and category feature contents in scene category 1 "skin care"; these are examples and are not listed one by one. It can be understood that a category feature content may be expressed as an interval range: for example, "20 years old or younger", "19 years old", "18 years old" or "baby" all fall within the interval range of the category feature content "under 20 years".
Based on the category features for which no category feature content has been acquired among the plurality of category features corresponding to the first scene category "skin care" to which the current input information "I want to buy water to wipe my face" of the first user belongs, such as category feature 2 "function", category feature 3 "user skin type", category feature 5 "brand", category feature 6 "user gender" and category feature 7 "user age", at least one dialogue question is output to the first user. Since the highest-priority category feature among those without acquired content is category feature 2 "function", a dialogue question is output according to category feature 2 "function": "What kind of function do you need? Moisturizing, whitening, hydrating or oil control?"
When the user switches scenes, because different scene categories correspond to different category features, accurate and rapid matching between scenes and category features can be realized according to the category features to which the at least one category feature content included in the input information belongs, and dialogue questions for the user's current input information are output. The switching of scene categories is described in detail below with reference to the drawings.
Referring to fig. 5, fig. 5 is a schematic diagram of human-computer interaction of another dialogue interaction method provided in an embodiment of the present application. As shown in fig. 5, the first user enters "I want a ticket to Beijing"; it is understood that this first user may be the same user as the first user in the embodiment described above with reference to fig. 4, or a different user. The dialogue interaction device receives "I want a ticket to Beijing" as the current input information of the first user, performs semantic recognition on it, and obtains "trip" as the first semantic of the current input information. It transmits the current input information to the category content identification model corresponding to the first semantic "trip" and acquires the category feature contents "to Beijing" and "one" included in it, with "to Beijing" belonging to the category feature "destination" and "one" to the category feature "number". According to the category features "destination" and "number" to which the category feature contents respectively belong, the device determines that the first scene category to which the current input information "I want a ticket to Beijing" belongs is "ticket booking".
It should be noted that the first scene category "ticket booking" corresponds to a plurality of category features. Referring to fig. 3 described above, scene category 2 is "ticket booking", and the corresponding category features, in order of priority from high to low, are: category feature 8 "departure place", category feature 9 "destination", category feature 10 "number", category feature 11 "travel date", category feature 12 "travel time", category feature 13 "travel mode" and category feature 14 "price interval". Exemplarily, the category feature to which "Guangzhou", "Shanghai" or "Shenzhen" in category feature content 8 belongs is category feature 8 "departure place"; the category feature to which "Beijing", "Guangzhou", "Shanghai" or "Hangzhou" in category feature content 9 belongs is category feature 9 "destination". It is understood that both the "departure place" and the "destination" are places; they may be distinguished according to words with specific orientation, for example the place name immediately following "go to" is determined as the "destination" and the place name immediately following "from" as the "departure place"; if this cannot be determined, the category feature of that category feature content is left undetermined and the user is explicitly asked. The category feature to which "one", "two", "three" or "four" in category feature content 10 belongs is category feature 10 "number"; optionally, if the user's input includes definite names of people, for example "me and Zhang San", which indicates the category feature content "two", it may be determined that "me and Zhang San" corresponds to the category feature content "two" of category feature 10 "number". The category feature to which "today", "October 1st", "October 6th" or "Mid-Autumn Festival" in category feature content 11 belongs is category feature 11 "travel date"; the category feature to which "eight in the morning", "09:00", "afternoon" or "19:00" in category feature content 12 belongs is category feature 12 "travel time"; the category feature to which "plane" or "train" in category feature content 13 belongs is category feature 13 "travel mode"; the category feature to which "under three hundred", "three hundred to five hundred", "five hundred to one thousand" or "over one thousand" in category feature content 14 belongs is category feature 14 "price interval"; and so on. There may be other category features and category feature contents in scene category 2 "ticket booking"; these are examples and are not listed one by one. It is understood that a category feature content may be expressed as an interval range or in other ways; for example, the category feature contents "October 1st" and "National Day" may be determined to be the same day, and both belong to the travel date.
Based on the category features for which no category feature content has been acquired among the plurality of category features corresponding to the first scene category "ticket booking" to which the current input information "I want a ticket to Beijing" of the first user belongs, such as category feature 8 "departure place", category feature 10 "number", category feature 11 "travel date", category feature 12 "travel time", category feature 13 "travel mode" and category feature 14 "price interval", at least one dialogue question is output to the first user. Since the highest-priority category feature among those without acquired content is category feature 8 "departure place", a dialogue question is output according to category feature 8 "departure place": "May I ask where you will be departing from?"
By implementing this embodiment, the scene category to which the current input information of the first user belongs is determined by acquiring at least one category feature content included in the current input information and the category feature corresponding to each such content, so that dialogue questions are output to the first user quickly according to the category features corresponding to the scene category, achieving accurate and rapid responses to the user when the scene field is switched.
The embodiments described above in connection with fig. 1 to 5 realize a single round of dialogue; multiple rounds of dialogue are described in detail below with reference to the specific figures. See fig. 6 to 8.
Referring to fig. 6, fig. 6 is a schematic flowchart of another dialog interaction method provided in the embodiment of the present application. As shown in fig. 6, another dialog interaction method specifically executes the following steps:
s600, current input information of a first user is obtained, semantic recognition is carried out on the current input information, and first semantics of the current input information are obtained.
S601, transmitting the current input information of the first user to the category content identification model corresponding to the first semantic, and acquiring at least one category feature content included in the current input information of the first user and category features to which the at least one category feature content belongs respectively.
S602, determining a first scene category to which the current input information of the first user belongs according to category features to which at least one category feature content included in the current input information of the first user belongs, wherein the first scene category corresponds to a plurality of category features.
S603, outputting at least one dialogue question to the first user based on the category features for which no category feature content has been obtained among the plurality of category features corresponding to the first scene category to which the current input information of the first user belongs.
It is understood that steps S600 to S603 may refer to the embodiment described above with reference to fig. 1, and are not described herein again.
S604, acquiring the reply information of the first user to the dialogue question.
Specifically, the dialogue interaction device obtains the reply information of the first user to the dialogue question, where the reply may be a voice reply, a text reply, or information obtained by enabling a camera to capture the content of a category feature that the device has not yet acquired in the first scene. Illustratively, when the dialogue interaction device needs to acquire the content of the category feature "age", it may obtain it by performing age recognition on the first user through the camera.
S605, transmitting the first reply information to a content category identification model corresponding to the first semantic, and acquiring at least one category feature content included in the first reply information and category features to which the at least one category feature content belongs respectively.
Specifically, the dialog interaction device identifies the category feature content included in the first reply information, and the implementation manner of the dialog interaction device may refer to step S101 in the embodiment described above with reference to fig. 1, where the input of step S101 is the current input information, and the input of step S605 is the first reply information.
In a possible implementation manner, a dialogue question output by the dialogue interaction device to the first user uniquely corresponds to a first category feature, the first category feature being a category feature for which no category feature content has been acquired. The first reply information is transmitted to the content category identification model corresponding to the first semantic, at least one category feature content included in the first reply information is acquired, and it is judged whether the category features to which that content belongs include the first category feature; if so, the category feature content of the first category feature is acquired from the first reply information; otherwise, the dialogue question corresponding to the first category feature is output to the first user again. It can be understood that step S603 outputs the dialogue question to the first user, and if the first user replies according to the question, the category feature content of the first category feature uniquely corresponding to the question can be acquired; but if the first user does not reply according to the question and the acquired category feature content is not that of the first category feature, the user needs to be asked again and the dialogue question corresponding to the first category feature output repeatedly. Optionally, the number of times the dialogue question corresponding to the first category feature is output may be set flexibly: the question may be output until it is answered; or, to avoid annoying the user, it may be output at most three times in succession; or, if the category feature content of the first category feature is still not acquired, the first category feature may be skipped and the content of the next category feature acquired.
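The re-asking behaviour can be sketched as a bounded loop; the three-attempt cap follows the optional example above, and the helper names (ask, recognize) and the question wording are assumptions for illustration only:

```python
# Sketch of S604/S605: ask about the first category feature, check the
# reply, and re-ask up to max_attempts times before skipping the feature.
from typing import Callable, Dict, Optional

def collect_first_feature(ask: Callable[[str], str],
                          recognize: Callable[[str], Dict[str, str]],
                          first_feature: str,
                          max_attempts: int = 3) -> Optional[str]:
    question = f"Could you tell me the {first_feature}?"   # placeholder wording
    for _ in range(max_attempts):
        reply = ask(question)                  # voice/text reply from the user
        extracted = recognize(reply)           # content category identification model
        if first_feature in extracted:
            return extracted[first_feature]
    return None   # skip this feature and move on to the next category feature
```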
And S606, outputting recommendation information aiming at the current input information of the first user according to the acquired category characteristic content.
Specifically, through the preceding steps S601 to S605 the dialogue interaction device conducts several rounds of dialogue with the first user and obtains a plurality of category feature contents. Exemplarily, the device obtains the preset necessary category features among the plurality of category features corresponding to the first scene category and determines whether the category feature content of each necessary category feature has been acquired. When the dialogue interaction device has acquired the content of all the preset necessary category features, it outputs recommendation information for the current input information of the first user.
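The gate on the necessary category features can be written as a simple completeness check; the required-feature list below is taken from the example in the walk-through that follows ("category" and "function" for skin care) and is otherwise an assumption, shown only as an illustration:

```python
# Sketch of S606's precondition: recommendation information is output
# only once every preset necessary category feature has content.
from typing import Dict, List

NECESSARY_FEATURES: Dict[str, List[str]] = {
    "skin care": ["category", "function"],     # example used later in the text
}

def ready_to_recommend(scene: str, filled: Dict[str, str]) -> bool:
    return all(f in filled for f in NECESSARY_FEATURES.get(scene, []))

# ready_to_recommend("skin care", {"category": "water", "part": "face"})          -> False
# ready_to_recommend("skin care", {"category": "water", "function": "whitening"}) -> True
```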
For better understanding of the present embodiment, an exemplary human-computer interaction description is provided below with reference to the accompanying drawings, and reference is made to fig. 7 to 8.
Referring to fig. 7, fig. 7 is a schematic diagram of human-computer interaction of another dialogue interaction method provided in an embodiment of the present application. As shown in fig. 7, the first user inputs "I want to buy water to wipe my face". The dialogue interaction device receives this as the current input information of the first user and performs semantic recognition on it, obtaining "purchase" as the first semantic of the current input information. It transmits the current input information to the category content identification model corresponding to the first semantic "purchase" and acquires the category feature contents "water" and "face" included in it, with "water" belonging to the category feature "category" and "face" to the category feature "part", and determines, according to these category features, that the first scene category to which the current input information belongs is "skin care".
From the embodiment described above in conjunction with fig. 2, it can be seen that scene category 1 is "skin care", and the corresponding category features, in order of priority from high to low, are: category feature 1 "category", category feature 2 "function", category feature 3 "user skin type", category feature 4 "part", category feature 5 "brand", category feature 6 "user gender", and category feature 7 "user age".
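To make this ordering concrete, the sketch below encodes the "skin care" feature list as an ordered structure and selects, as the feature to ask about next, the highest-priority feature for which no content has been acquired; the data layout is an assumption for illustration only.

# Category features of scene category 1 "skin care", from highest to lowest priority.
SKIN_CARE_FEATURES = ["category", "function", "user skin type", "part",
                      "brand", "user gender", "user age"]

def next_feature_to_ask(priority_list, collected):
    # Highest-priority category feature for which no content has been acquired yet.
    for feature in priority_list:
        if feature not in collected:
            return feature
    return None  # content already acquired for every feature

collected = {"category": "water", "part": "face"}
print(next_feature_to_ask(SKIN_CARE_FEATURES, collected))  # -> "function"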
Among the plurality of category features corresponding to the first scene category "skin care" to which the current input information "I want to buy water to wipe my face" of the first user belongs, the category features for which no category feature content has been acquired are category feature 2 "function", category feature 3 "user skin type", category feature 5 "brand", category feature 6 "user gender" and category feature 7 "user age", so at least one dialog question is output to the first user. Optionally, because the highest-priority feature among those without acquired content is category feature 2 "function", the dialog question output is "What kind of function do you need? Hydrating, whitening, moisturizing or oil control?". The first reply information of the first user is "whitening"; combining the dialog question with this reply, the first reply information can be completed as "(I want to buy) whitening (face-wiping water)". The category feature content included in the first reply information is therefore "whitening", and the category feature to which it belongs is "function". The category features of the "skin care" scene category whose content has now been determined are "category", "part" and "function". If the preset necessary category features are "category" and "function", the dialog interaction device can output, according to "category", "part" and "function", the recommendation information "The Rhodiola young-white essence water suits your needs; this essence water can reduce dullness, whiten the skin and keep it young-looking." Further, if the first user has additional needs, for example, after the recommendation information of the previous sentence is output the first user inputs the second reply information "I want to use Nature Hall", which can be completed as "I want to use Nature Hall (whitening face-wiping water)", the category feature content acquired is "Nature Hall", the category feature to which it belongs is "brand", and one more category feature is thus determined. According to the category features "category", "part", "function" and "brand" whose content has been determined in the "skin care" scene category, the recommendation information "Nature Hall snow-moistening whitening ice-muscle water can satisfy your needs; this ice-muscle water uses natural glacier water, can brighten and whiten the skin, improve its resistance to oxidation, lighten melanin, and so on" is output. Further turns proceed by analogy, and the dialog memory of the first user can be obtained according to the identity of the first user.
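The follow-up turn in which the user adds the brand "Nature Hall" can be viewed as adding one more key to the collected features and re-running the recommendation lookup. The toy catalogue and matching rule below are purely illustrative assumptions, not the disclosed recommendation logic.

# A toy catalogue; product names and attribute values are illustrative assumptions only.
CATALOGUE = [
    {"name": "Rhodiola young-white essence water", "category": "water",
     "part": "face", "function": "whitening", "brand": "Rhodiola"},
    {"name": "Nature Hall snow-moistening whitening ice-muscle water", "category": "water",
     "part": "face", "function": "whitening", "brand": "Nature Hall"},
]

def recommend(collected):
    # Catalogue items whose attributes match every category feature content collected so far.
    return [p["name"] for p in CATALOGUE
            if all(p.get(feature) == content for feature, content in collected.items())]

collected = {"category": "water", "part": "face", "function": "whitening"}
print(recommend(collected))            # both whitening waters are still candidates
collected["brand"] = "Nature Hall"     # the user adds "I want to use Nature Hall"
print(recommend(collected))            # narrowed to the Nature Hall item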
Similarly, when the user switches scenes, since different scene categories correspond to different category features, the scene and the category features can be matched rapidly according to the at least one category feature to which the at least one category feature content included in the input information belongs, and recommendation information for the current input information of the user is output. The switching of scene categories is described in detail below with reference to the drawings.
Referring to fig. 8, fig. 8 is a schematic diagram of human-computer interaction of another dialog interaction method provided in an embodiment of the present application. As shown in fig. 8, the first user inputs "I want a ticket to Beijing"; it can be understood that this first user may be the same user as the first user in the embodiment described above with reference to fig. 7, or may be a different user. The dialog interaction device receives "I want a ticket to Beijing" as the current input information of the first user and performs semantic recognition on it, obtaining "travel" as the first semantic of the current input information. The current input information is transmitted to the category content identification model corresponding to the first semantic "travel", and the category feature contents included in it are acquired as "Beijing" and "one", where "Beijing" belongs to the category feature "destination" and "one" belongs to the category feature "number". According to the category features "destination" and "number" to which the category feature contents included in the current input information of the first user respectively belong, the first scene category to which the current input information of the first user belongs is determined to be "booking tickets".
From the foregoing embodiment described in conjunction with fig. 3, it can be seen that scene category 2 is "booking tickets", and the corresponding category features, in order of priority from high to low, are: category feature 8 "departure place", category feature 9 "destination", category feature 10 "number", category feature 11 "travel date", category feature 12 "travel time", category feature 13 "travel mode", and category feature 14 "price interval".
Among the plurality of category features corresponding to the first scene category "booking tickets" to which the current input information "I want a ticket to Beijing" of the first user belongs, the category features for which no category feature content has been acquired are category feature 8 "departure place", category feature 11 "travel date", category feature 12 "travel time", category feature 13 "travel mode" and category feature 14 "price interval", so at least one dialog question is output to the first user. Since the highest-priority feature among those without acquired content is category feature 8 "departure place", the dialog question "Where do you need to depart from?" is output according to category feature 8 "departure place". The first reply information of the first user is "Guangzhou"; combining the dialog question with this reply, the first reply information can be completed as "(I want to depart from) Guangzhou (for Beijing)". The category feature content included in the first reply information is "Guangzhou", and the category feature to which it belongs is "departure place"; the category features of the "booking tickets" scene category whose content has been determined are now "departure place", "destination" and "number". If the preset necessary category features are "departure place", "destination", "travel date" and "travel time", the necessary features whose content has not yet been determined are "travel date" and "travel time". Because the priority of category feature 11 "travel date" is higher than that of category feature 12 "travel time", the dialog interaction device outputs the dialog question "When do you need to depart?" to the first user according to "travel date". The second reply information of the first user is "six o'clock this evening"; combining the dialog question with this reply, the second reply information can be completed as "(I want one ticket departing) today at six in the evening (from Guangzhou to Beijing)". The second reply information includes the category feature contents "today" and "six in the evening", where "today" belongs to the category feature "travel date" and "six in the evening" belongs to the category feature "travel time". With the contents of "departure place", "destination", "travel date", "travel time" and "number" all determined, the dialog interaction device outputs recommendation information for the current input information of the first user, for example "Train Z98 departs from Guangzhou East for Beijing at 6:05 this evening; thirty hard seats and one hard sleeper remain, with no soft sleepers; the journey takes 21 hours and 24 minutes", and so on. Likewise, the dialog memory of the first user can be obtained according to the identity of the first user.
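The completion of short replies such as "Guangzhou" or "six o'clock this evening" can be pictured as attributing each extracted content either to the feature reported by the recognition model or, for a bare value, to the feature the last dialog question asked about; the mapping below is an assumed illustration, not the disclosed category content identification model.

def merge_reply(collected, asked_feature, extracted):
    # Fold the contents extracted from a reply into the features collected so far.
    # `extracted` maps category feature -> content; a content keyed by None is
    # attributed to the feature the last dialog question asked about.
    for feature, content in extracted.items():
        collected[feature if feature is not None else asked_feature] = content
    return collected

collected = {"destination": "Beijing", "number": "one"}
merge_reply(collected, "departure place", {None: "Guangzhou"})
merge_reply(collected, "travel date",
            {"travel date": "today", "travel time": "six in the evening"})
print(collected)  # the five features needed for the recommendation are now filled in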
By implementing this embodiment, multiple rounds of dialog can be performed with the user. After the scene category of the user's current input information is determined, multiple category feature contents and the corresponding category features are obtained through those rounds of dialog; within the determined scene, the category feature contents and category features can be matched rapidly, so that recommendation information is output to the user accurately, a large amount of query time is saved, and the user's further requirements are met quickly.
Referring to fig. 9, fig. 9 is a block diagram of a dialog interaction device according to an embodiment of the present application. As shown in fig. 9, the dialog interaction device 90 includes:
an obtaining module 900, configured to obtain current input information of a first user;
a semantic recognition module 901, configured to perform semantic recognition on the current input information acquired by the acquisition module 900 to obtain a first semantic of the current input information;
the obtaining module 900 is further configured to transmit the current input information of the first user to the category content identification model corresponding to the first semantic, and obtain at least one category feature content included in the current input information of the first user and category features to which the at least one category feature content belongs respectively;
a determining module 902, configured to determine, according to category features to which at least one category feature content included in the current input information of the first user respectively belongs, a first scene category to which the current input information of the first user belongs, where the first scene category corresponds to multiple category features;
an output module 903, configured to output at least one dialog question to the first user according to the first scene category and a category feature of a category feature content that is not acquired in the first scene category.
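Read together, the modules of fig. 9 form a single pipeline from raw input to the first dialog question. The sketch below wires assumed stand-ins for each module to show only the order of the calls; none of the callables represents the actual models of the embodiment.

def handle_turn(text, recognize_semantic, content_models, resolve_scene, question_for):
    # One pass through the fig. 9 pipeline; every callable is an assumed stand-in.
    semantic = recognize_semantic(text)              # semantic recognition module 901
    contents = content_models[semantic](text)        # category content identification model
    scene, scene_features = resolve_scene(contents)  # determining module 902
    missing = [f for f in scene_features if f not in contents]
    # Output module 903: questions for the features still lacking content
    # (in practice asked one per turn, in priority order).
    return [question_for(scene, f) for f in missing]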
In a possible embodiment, the obtaining module 900 is further configured to obtain a reply message of the first user to the dialog question;
the obtaining module 900 is further configured to transmit the first reply information to a content category identification model corresponding to the first semantic, and obtain at least one category feature content included in the first reply information and category features to which the at least one category feature content belongs respectively;
the output module 903 is configured to output recommendation information for the current input information of the first user according to the category feature content acquired by the acquisition module 900.
In a possible implementation manner, one dialog question output to the first user uniquely corresponds to a first category feature, and the first category feature is a category feature of which category feature content is not acquired;
the obtaining module 900 is further configured to transmit the first reply information to a content category identification model corresponding to the first semantic, and obtain at least one category feature content included in the first reply information;
the determining module 902 is configured to judge whether the category features to which the at least one category feature content included in the first reply information belongs include the first category feature, and if they include the first category feature, the obtaining module 900 obtains the category feature content of the first category feature from the first reply information;
otherwise, the output module 903 outputs the dialog question corresponding to the first category feature to the first user again.
Optionally, the output module 903 is further configured to output a first dialog question to the first user according to the priority ranking of the plurality of category features corresponding to the first scene category, where the first dialog question uniquely corresponds to the category feature with the highest priority ranking among the category features for which no category feature content has been acquired.
In a possible embodiment, the obtaining module 900 is further configured to obtain a preset necessary category feature in a plurality of category features corresponding to the first scene category;
the determining module 902 is further configured to determine the category feature content of the acquired necessary category feature.
In a possible implementation manner, the obtaining module 900 is further configured to obtain, according to the identity of the first user, historical input information of the first user within a first preset time, where the historical input information of the first user includes at least one category feature content and category features to which the at least one category feature content belongs respectively;
the obtaining module 900 is further configured to obtain category feature content of at least one category feature corresponding to the first scene category from at least one category feature content included in the historical input information of the first user and category features to which the at least one category feature content belongs respectively.
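The reuse of historical input can be pictured as a per-user store of previously acquired category feature contents, filtered by the first preset time and by the category features of the current scene; the store layout below is an assumption for illustration only.

import time

# user identity -> list of (timestamp, {category feature: category feature content}); assumed layout
DIALOG_MEMORY = {}

def remembered_contents(user_id, scene_features, first_preset_time):
    # Contents recorded for this user within the first preset time that the current scene can reuse.
    now = time.time()
    reused = {}
    for ts, contents in DIALOG_MEMORY.get(user_id, []):
        if now - ts <= first_preset_time:
            reused.update({f: c for f, c in contents.items() if f in scene_features})
    return reused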
Further, the obtaining module 900 is further configured to obtain, according to the identity of the first user, historical input information of the first user within a second preset time;
the determining module 902 is further configured to perform semantic recognition on the current input information of the first user and the historical input information of the first user to obtain a semantic meaning, which is used as a first semantic meaning of the current input information.
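A hypothetical sketch of combining the current input with the historical input of the second preset time before semantic recognition; recognize_semantic is an assumed stand-in, not the disclosed model.

def first_semantic(current_text, history, recognize_semantic, second_preset_time, now):
    # Run semantic recognition on the current input together with the user's recent history.
    # `history` is a list of (timestamp, text) pairs for the first user.
    recent = [text for ts, text in history if now - ts <= second_preset_time]
    return recognize_semantic(" ".join(recent + [current_text]))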
Referring to fig. 10, fig. 10 is a block diagram of a dialog interaction device according to an embodiment of the present application. As shown in fig. 10, the dialog interaction device 100 includes an input/output interface 1000, a processor 1001, and a memory 1002, wherein:
the input/output interface 1000 is used for receiving input information of the user and sending output information for the user, for example, receiving current input information of the user, such as text input, or voice input, or information input by a camera. The input output interface 1000 also transmits dialog questions and/or recommendation information for the user's current input information. For example, the processor 1000 may be a Central Processing Unit (CPU), and the processor may be other general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 1002 stores instructions, and it is understood that the memory 1002 stores the preset necessary category features, the category features and corresponding category feature contents, and the identity tag and the dialogue memory of the user. Illustratively, the memory 1002 may include both read-only memory and random-access memory, and provides instructions and data to the processor 1001 and the input-output interface 1000. A portion of the memory 1002 may also include non-volatile random access memory. For example, the memory 1002 may also store device type information.
The processor 1001 is configured to execute the computer program stored in the memory to implement any one of the possible embodiments described above.
In a specific implementation, the dialog interaction device 100 may execute, through each of its built-in functional modules, the implementations provided in the steps of fig. 1 to fig. 8; for details, reference may be made to the implementations provided in those steps, which are not described herein again.
The present application provides a computer-readable storage medium having stored therein instructions, which when executed on a computer, cause the computer to perform any one of the possible embodiments described above.
It should be noted that the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
According to the method and the device of the present application, the scene category to which the current input information of the first user belongs is determined by obtaining at least one category feature content included in that input information and the category features corresponding to it, so that a dialog question is output to the first user quickly according to the category features corresponding to the scene category, achieving a rapid response to the user when the scene domain is switched. Multiple rounds of dialog with the user can also be carried out, multiple category feature contents and the corresponding category features are obtained through those rounds, and within the determined scene the category feature contents and category features can be matched quickly, saving a large amount of query time and meeting the user's further requirements rapidly.
In the embodiments provided in the present application, it should be understood that the disclosed method, apparatus, and system may be implemented in other ways. The above-described embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated unit can be realized in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A conversational interaction method, the method comprising:
acquiring current input information of a first user, and performing semantic recognition on the current input information to obtain first semantics of the current input information;
transmitting the current input information of the first user to a category content identification model corresponding to the first semantic meaning, and acquiring at least one category characteristic content included in the current input information of the first user and category characteristics to which the at least one category characteristic content belongs respectively;
determining a first scene category to which the current input information of the first user belongs according to category features to which at least one category feature content included in the current input information of the first user belongs, wherein the first scene category corresponds to a plurality of category features;
and outputting at least one dialogue question to the first user according to the first scene category and the category feature of the category feature content which is not acquired in the first scene category.
2. The method of claim 1, further comprising:
acquiring reply information of the first user to the dialogue question;
transmitting the first reply information to a content category identification model corresponding to the first semantic meaning, and acquiring at least one category feature content included in the first reply information and category features to which the at least one category feature content belongs respectively;
and outputting recommendation information aiming at the current input information of the first user according to the acquired category characteristic content.
3. The method according to claim 2, wherein one of the dialog questions output to the first user uniquely corresponds to a first category feature, and the first category feature is a category feature for which category feature content is not acquired;
the transmitting the first reply information to the content category identification model corresponding to the first semantic, and the obtaining at least one category feature content included in the first reply information and the category features to which the at least one category feature content respectively belongs include:
transmitting the first reply information to a content category identification model corresponding to the first semantic, acquiring at least one category feature content included in the first reply information, judging whether a category feature to which the at least one category feature content included in the first reply information belongs includes the first category feature, and if so, acquiring the category feature content of the first category feature from the first reply information;
otherwise, outputting the dialogue questions corresponding to the first category of features to the first user again.
4. The method according to claim 2, wherein before outputting the recommendation information for the current input information of the first user according to the acquired category feature content, the method comprises:
acquiring necessary preset category characteristics in a plurality of category characteristics corresponding to the first scene category;
and determining the category feature content of the acquired necessary category features.
5. The method according to claim 1, wherein the outputting at least one dialog question to the first user according to the first scene category and a category feature of the first scene category for which category feature content is not obtained comprises:
and outputting a first dialogue question to the first user according to the priority sequence of the plurality of category features corresponding to the first scene category, wherein the first dialogue question only corresponds to the category feature with the highest priority sequence in the category features of the content of which the category features are not acquired.
6. The method according to claim 1, wherein the obtaining of the first scene category and the category feature of the first scene category for which the category feature content is not obtained comprises:
acquiring historical input information of the first user within a first preset time according to the identity of the first user, wherein the historical input information of the first user comprises at least one category characteristic content and category characteristics to which the at least one category characteristic content belongs respectively;
and obtaining category feature content of at least one category feature corresponding to the first scene category from at least one category feature content included in the historical input information of the first user and category features to which the at least one category feature content respectively belongs.
7. The method of claim 1, wherein the performing semantic recognition on the current input information to obtain a first semantic meaning of the current input information comprises:
acquiring historical input information of the first user within second preset time according to the identity of the first user;
and performing semantic recognition on the current input information of the first user and the historical input information of the first user to obtain semantics, wherein the semantics are used as first semantics of the current input information.
8. A conversational interaction apparatus, the apparatus comprising:
the acquisition module is used for acquiring the current input information of the first user;
the semantic recognition module is used for performing semantic recognition on the current input information acquired by the acquisition module to obtain a first semantic meaning of the current input information;
the obtaining module is further configured to transmit the current input information of the first user to the category content identification model corresponding to the first semantic, and obtain at least one category feature content included in the current input information of the first user and category features to which the at least one category feature content belongs respectively;
a determining module, configured to determine a first scene category to which current input information of the first user belongs according to category features to which at least one category feature content included in the current input information of the first user belongs, where the first scene category corresponds to multiple category features;
and the output module is used for outputting at least one dialogue problem to the first user according to the first scene category and the category characteristics of the category characteristic content which is not acquired in the first scene category.
9. A dialog interaction device comprising an input-output interface, a processor and a memory, wherein the processor is adapted to execute a computer program stored in the memory to implement the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon instructions which, when executed on a computer, cause the computer to perform the steps of the method according to any one of claims 1 to 7.
CN201910973935.0A 2019-10-14 2019-10-14 Conversation interaction method, device and equipment Pending CN110837543A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910973935.0A CN110837543A (en) 2019-10-14 2019-10-14 Conversation interaction method, device and equipment

Publications (1)

Publication Number Publication Date
CN110837543A true CN110837543A (en) 2020-02-25

Family

ID=69575328

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910973935.0A Pending CN110837543A (en) 2019-10-14 2019-10-14 Conversation interaction method, device and equipment

Country Status (1)

Country Link
CN (1) CN110837543A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104464733A (en) * 2014-10-28 2015-03-25 百度在线网络技术(北京)有限公司 Multi-scene managing method and device of voice conversation
WO2016110356A1 (en) * 2015-01-08 2016-07-14 Siemens Aktiengesellschaft Method for integration of semantic data processing
CN107357787A (en) * 2017-07-26 2017-11-17 微鲸科技有限公司 Semantic interaction method, apparatus and electronic equipment
CN107832286A (en) * 2017-09-11 2018-03-23 远光软件股份有限公司 Intelligent interactive method, equipment and storage medium
CN108959482A (en) * 2018-06-21 2018-12-07 北京慧闻科技发展有限公司 Single-wheel dialogue data classification method, device and electronic equipment based on deep learning
CN108897723A (en) * 2018-06-29 2018-11-27 北京百度网讯科技有限公司 The recognition methods of scene dialog text, device and terminal

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111428512A (en) * 2020-03-27 2020-07-17 大众问问(北京)信息科技有限公司 Semantic recognition method, device and equipment
CN111428512B (en) * 2020-03-27 2023-12-12 大众问问(北京)信息科技有限公司 Semantic recognition method, device and equipment
CN111881270A (en) * 2020-07-01 2020-11-03 北京嘀嘀无限科技发展有限公司 Intelligent dialogue method and system


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20200415

Address after: 1706, Fangda building, No. 011, Keji South 12th Road, high tech Zone, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province

Applicant after: Shenzhen shuliantianxia Intelligent Technology Co.,Ltd.

Address before: 518000, building 10, building ten, building D, Shenzhen Institute of Aerospace Science and technology, 6 hi tech Southern District, Nanshan District, Shenzhen, Guangdong 1003, China

Applicant before: SHENZHEN H & T HOME ONLINE NETWORK TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20200225