CN113420140A - User emotion prediction method and device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN113420140A
CN113420140A · CN202110974118A · CN113420140B
Authority
CN
China
Prior art keywords
target
sample
emotion
entity
session information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110974118.4A
Other languages
Chinese (zh)
Other versions
CN113420140B (en)
Inventor
向宇 (Xiang Yu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Mininglamp Software System Co., Ltd.
Original Assignee
Beijing Mininglamp Software System Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Mininglamp Software System Co., Ltd.
Priority to CN202110974118.4A
Publication of CN113420140A
Application granted
Publication of CN113420140B
Legal status: Active
Anticipated expiration (date not listed)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/334Query execution
    • G06F16/3344Query execution using natural language analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367Ontology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/194Calculation of difference between files
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/01Customer relationship services

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Human Computer Interaction (AREA)
  • Animal Behavior & Ethology (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a user emotion prediction method and apparatus, an electronic device, and a readable storage medium, belonging to the technical field of knowledge reasoning. The method comprises the following steps: determining a plurality of candidate scripts corresponding to the current session information; identifying a target entity and a target event through an entity recognition scheme; selecting from a database a target knowledge graph that at least partially contains the target entity and the target event; and taking the target emotion category corresponding to the target knowledge graph as the predicted emotion category corresponding to the target script. The application improves the accuracy of emotion category prediction.

Description

User emotion prediction method and device, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of knowledge inference technology, and in particular, to a method and an apparatus for predicting user emotion, an electronic device, and a readable storage medium.
Background
With the widespread adoption of the mobile internet, it has become the norm for customer service staff to communicate with users through instant messaging tools. Through the session information in an instant messaging tool, customer service staff can understand a user's needs and provide service accordingly. A user may experience certain emotions during the conversation, and customer service staff often need to guide the user's emotion in a desired direction through the conversation; for example, guiding the user from anxiety toward calm, or from calm toward anxiousness.
Inexperienced customer service staff, however, often do not know which script will guide the user's emotion in the desired direction; that is, they cannot predict how the user's emotion will develop. This hinders their work and reduces their efficiency.
Disclosure of Invention
An object of the embodiments of the present application is to provide a method and an apparatus for predicting user emotion, an electronic device, and a readable storage medium, so as to solve the problem of low work efficiency of customer service staff. The specific technical solution is as follows:
in a first aspect, a method for predicting a user emotion is provided, the method comprising:
determining a plurality of candidate scripts corresponding to current session information, wherein the current session information is first session information sent by a target user, and the candidate scripts can provide emotional value for the target user;
identifying a target entity and a target event through an entity recognition scheme, wherein the target entity and the target event are obtained from at least a target script, the target script being any one of the candidate scripts;
selecting from a database a target knowledge graph at least partially containing the target entity and the target event, wherein the database comprises a plurality of sample knowledge graphs, the sample knowledge graphs are obtained based on sample session information, and each sample knowledge graph corresponds to one emotion category;
and taking a target emotion category corresponding to the target knowledge graph as a predicted emotion category corresponding to the target script, wherein the predicted emotion category indicates the emotion the target user will produce upon seeing the target script.
Optionally, before selecting the target knowledge-graph at least partially containing the target entity and the target event from the database, the method further comprises:
inputting sample session information into an emotion recognition model to obtain a sample emotion category of a sample user recognized by the emotion recognition model, wherein the sample session information comprises second session information sent by the sample user and session information corresponding to the second session information;
using the sample emotion classification as an emotion label of the sample session information;
extracting a sample entity and a sample event in the sample session information through a preset extraction scheme, wherein the sample entity is an entity associated with the sample user, and the sample event is an event associated with the sample user;
and generating a sample knowledge graph carrying emotion labels based on the sample entities and the sample events.
Optionally, the selecting a target knowledge-graph from a database, the target knowledge-graph at least partially containing the target entity and the target event, comprises:
determining a sample entity and a sample event in the sample knowledge graph, wherein the sample entity and the sample event are used for indicating a source of emotion generated by a sample user corresponding to the sample session information;
determining a first similarity of the target entity and the sample entity, and determining a second similarity of the target event and the sample event;
taking the sum of the first similarity and the second similarity as an emotional similarity;
and taking the sample knowledge graph as the target knowledge graph under the condition that the emotion similarity is larger than a preset threshold value.
Optionally, the taking the target emotion category corresponding to the target knowledge graph as the predicted emotion category corresponding to the target script includes:
determining the sample knowledge graph as a candidate knowledge graph when the emotion similarity is greater than the preset threshold, wherein the candidate knowledge graph corresponds to a sample emotion category;
when there are at least two candidate knowledge graphs, determining the emotion probability of the sample emotion category corresponding to each candidate knowledge graph, wherein the sample emotion category is assigned different emotion probabilities according to different emotion similarities;
and selecting the candidate knowledge graph with the highest emotion probability as the target knowledge graph.
Optionally, before or after determining the plurality of candidate scripts corresponding to the current session information, the method further includes:
inputting the current session information into an emotion recognition model to obtain the emotion category of the target user recognized by the emotion recognition model;
determining a negative probability of the negative emotion category in case that the emotion category of the target user is determined to be a negative emotion category;
and acquiring a matched preset script from a database according to the current session information and the negative probability, wherein the preset script at least comprises a soothing script and a solution script, and the emotion-soothing capability of the preset script is proportional to the negative probability.
Optionally, before or after the matched preset script is obtained from the database, the method further comprises:
searching similar session information of the current session information from sample session information according to a target entity and a target event in the current session information;
searching, from the sample session information, for negative feedback scripts for the similar session information, wherein a negative feedback script is a script that brings negative emotion to the sample user;
setting a negative label for the negative feedback script, wherein the negative label is used to provide negative feedback to the script provider.
Optionally, identifying the target entity and the target event through the entity recognition scheme includes:
identifying, through an entity recognition scheme, a first sub-entity and a first sub-event included in the current session information and a second sub-entity and a second sub-event included in the target script;
taking the first sub-entity and the second sub-entity as the target entity, and the first sub-event and the second sub-event as the target event; or alternatively,
and taking the second sub-entity as the target entity and the second sub-event as the target event.
In a second aspect, an apparatus for predicting a user's emotion is provided, the apparatus comprising:
a first determining module, configured to determine a plurality of candidate scripts corresponding to current session information, wherein the current session information is first session information sent by a target user, and the candidate scripts can provide emotional value for the target user;
an identification module, configured to identify a target entity and a target event through an entity recognition scheme, wherein the target entity and the target event are obtained from at least a target script, the target script being any one of the candidate scripts;
a selection module, configured to select from a database a target knowledge graph at least partially containing the target entity and the target event, wherein the database comprises a plurality of sample knowledge graphs, the sample knowledge graphs are obtained based on sample session information, and each sample knowledge graph corresponds to one emotion category;
and a prediction module, configured to take a target emotion category corresponding to the target knowledge graph as a predicted emotion category corresponding to the target script, wherein the predicted emotion category indicates the emotion the target user will produce upon seeing the target script.
In a third aspect, an electronic device is provided, which includes a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
and a processor, configured to implement any of the above methods for predicting user emotion when executing the program stored in the memory.
In a fourth aspect, a computer-readable storage medium is provided, having stored therein a computer program which, when executed by a processor, implements any of the above methods for predicting user emotion.
The embodiment of the application has the following beneficial effects:
the application is applied to knowledge reasoning in the technical field of knowledge graphs, and the embodiment of the application provides a prediction method of user emotion.
In the application, the terminal selects, from the plurality of candidate scripts, a target script that is close to a target knowledge graph according to the target entity and target event obtained from at least that script, and then takes the target emotion category of the target knowledge graph as the predicted emotion category corresponding to the target script. The terminal can thus display the emotion category corresponding to each candidate script, so that customer service staff can push the script corresponding to a desired emotion to the target user, causing the target user to produce the corresponding emotion and achieving the goal of guiding the target user's emotion. Because customer service staff can directly see the emotion category corresponding to each candidate script, they no longer need to spend time and effort devising scripts and feeding them back to the target user, which improves their work efficiency.
Of course, not all of the above advantages need be achieved in the practice of any one product or method of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; obviously, other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
Fig. 1 is a hardware environment diagram of a method for predicting a user emotion according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a method for predicting a user emotion according to an embodiment of the present application;
FIG. 3 is a schematic diagram showing current session information and a plurality of candidate scripts;
FIG. 4 is a flow chart of a method for generating a sample knowledge-graph as provided by an embodiment of the present application;
fig. 5 is a schematic diagram of a terminal displaying preset scripts according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a device for predicting a user emotion according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for convenience of description and have no specific meaning in themselves. Thus, "module" and "component" may be used interchangeably.
To solve the problems mentioned in the background, according to an aspect of embodiments of the present application, an embodiment of a prediction method of a user emotion is provided.
Optionally, in this embodiment, the method for predicting user emotion may be applied to a hardware environment formed by a terminal 101 and a server 103, as shown in fig. 1. The server 103 is connected to the terminal 101 through a network and may provide services for the terminal or for a client installed on the terminal. A database 105 may be provided on the server or separately from it, to provide data storage services for the server 103. The network includes but is not limited to a wide area network, a metropolitan area network, or a local area network, and the terminal 101 includes but is not limited to a PC, a mobile phone, a tablet computer, and the like.
The prediction method of the user emotion in the embodiment of the present application may be executed by the server 103, or may be executed by the terminal 101, or may be executed by both the server 103 and the terminal 101.
The embodiment of the application provides a method for predicting user emotion, described below as applied to a terminal.
The following describes in detail a method for predicting a user emotion provided in an embodiment of the present application with reference to a specific embodiment, as shown in fig. 2, the specific steps are as follows:
step 201: and determining a plurality of candidate dialogues corresponding to the current session information.
The current session information is first session information sent by the target user, and the candidate session can provide emotional value for the target user.
In the embodiment of the application, customer service personnel communicate with a target user through a terminal, the terminal acquires current conversation information sent by the target user and prohibits the current conversation information from being leaked under the condition that the target user authorizes, and the terminal executes the following operations after determining that the current conversation information is legal: the terminal determines a plurality of candidate dialogs according to the current session information, wherein the candidate dialogs can provide emotional value for the target user, different candidate dialogs can provide the same emotional value and can also provide different emotional values, and the emotional value is an emotional category. The current session information only refers to the session information currently sent by the target user.
Step 202: and identifying a target entity and a target event through an entity identification scheme.
Wherein the target entity and the target event are obtained by at least a target dialogues, and the target dialogues are any one of the candidate dialogues.
The user selects a target dialect from the dialects to be selected, and the terminal identifies a target entity and a target event at least contained by the target dialect through an entity identification scheme after receiving a selection instruction of the target dialect. Specifically, the terminal may identify a target entity and a target event included in the target session and the current session information, or may only identify the target entity and the target event included in the target session.
Illustratively, the target entity may be an entity related to the target user, including the target user's location, age, use of the product, and the like. The target event is an event related to the target user, including the current situation of the target user.
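The application does not specify a concrete entity recognition scheme. A minimal illustrative sketch of step 202's interface, with a hypothetical keyword lexicon standing in for a trained NER/event-extraction model (all names and lexicon entries are assumptions), might look like this:

```python
# Toy stand-in for the entity recognition scheme of step 202.
# A production system would use a trained NER/event-extraction model;
# the keyword lexicons here only illustrate the interface.
ENTITY_LEXICON = {"face", "seaside", "weekend", "toner", "cream"}
EVENT_LEXICON = {"sunburn", "redness", "repair"}

def identify_entities_and_events(text: str):
    """Return (entities, events) found in a script or session text."""
    tokens = set(text.lower().replace(",", " ").replace(".", " ").split())
    entities = sorted(tokens & ENTITY_LEXICON)
    events = sorted(tokens & EVENT_LEXICON)
    return entities, events
```

The same function can be applied to the target script alone, or to both the target script and the current session information, matching the two variants described above.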
Step 203: a target knowledge-graph is selected from the database that at least partially contains the target entity and the target event.
The database comprises a plurality of sample knowledge maps, the sample knowledge maps are obtained based on sample session information, and each sample knowledge map corresponds to one emotion category.
The database includes a plurality of sample knowledge graphs obtained based on sample session information. The sample session information includes second session information sent by a sample user and the session information corresponding to that second session information; it is session information that occurred before the current session information, and the sample user may differ from the current user. The terminal generates a plurality of sample knowledge graphs according to the sample session information of sample users, where different knowledge graphs correspond to different user emotion categories; each such emotion is produced by the user based on the second session information sent and the reply received. The emotion categories include but are not limited to excitement, joy, calm, worry, anxiety, irritability, and the like.
After acquiring the sample knowledge graphs from the database, the terminal selects from them a target knowledge graph that at least partially contains the target entity and the target event. The target knowledge graph may contain at least part of the target entity, at least part of the target event, or at least part of both.
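A minimal sketch of this selection step, under the assumption that each sample knowledge graph is represented as a dict with a "nodes" set and an "emotion" category (the data structure is illustrative, not specified by the application):

```python
# Illustrative selection of sample knowledge graphs that at least
# partially contain the target entity/event (step 203).
def at_least_partially_contains(graph, target_entities, target_events):
    # A graph qualifies when it shares at least one target entity or event.
    return bool(set(graph["nodes"]) & (set(target_entities) | set(target_events)))

def select_candidate_graphs(sample_graphs, target_entities, target_events):
    return [g for g in sample_graphs
            if at_least_partially_contains(g, target_entities, target_events)]
```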
Step 204: and taking the target emotion category corresponding to the target knowledge graph as a predicted emotion category corresponding to the target dialect.
Wherein the predicted emotion category is used to indicate an emotion generated when the target user sees the target conversation.
The target knowledge graph corresponds to the target emotion category, and the target knowledge graph at least partially comprises a target entity and a target event, and the target entity and the target event are obtained at least according to the target dialect, so that the target knowledge graph is closer to the target dialect, the target emotion category can be used as the predicted emotion category corresponding to the target dialect, and the target user can generate the emotion of the predicted emotion category after seeing the target dialect. And the terminal obtains the predicted emotion category corresponding to each dialect to be selected according to the method.
In the application, the terminal selects, from the plurality of candidate scripts, a target script that is close to a target knowledge graph according to the target entity and target event obtained from at least that script, and then takes the target emotion category of the target knowledge graph as the predicted emotion category corresponding to the target script. The terminal can thus display the emotion category corresponding to each candidate script, so that customer service staff can push the script corresponding to a desired emotion to the target user, causing the target user to produce the corresponding emotion and achieving the goal of guiding the target user's emotion. Because customer service staff can directly see the emotion category corresponding to each candidate script, they no longer need to spend time and effort devising scripts and feeding them back to the target user, which improves their work efficiency.
Fig. 3 shows current session information and a plurality of candidate scripts. The current session information is the message sent by the client: "I generally use facial cleanser and toner, and occasionally cream." The candidate scripts are the three scripts shown on the right side of the instant messaging interface: "What efficacy does the product you use daily have?", "I suggest you use an after-sun repair product", and "xx repair lotion is good; you can buy it right away." The predicted emotion categories corresponding to the three candidate scripts are, in order: happy, happy, and calm. Customer service staff can select one of the three candidate scripts to reply to the target user according to how they need to guide the client's emotion.
As an alternative embodiment, before the target knowledge graph at least partially containing the target entity and the target event is selected from the database, the method further includes the following steps, as shown in fig. 4:
step 401: and inputting the sample conversation information into the emotion recognition model to obtain the sample emotion category of the sample user recognized by the emotion recognition model.
The sample session information comprises second session information sent by a sample user and session information corresponding to the second session information.
In this embodiment, the terminal takes the second session information sent by the sample user and the session information corresponding to it as the sample session information; that is, the sample session information includes both the sample user's second session information and the session information fed back by customer service staff for it. The terminal inputs the sample session information into the emotion recognition model to obtain the sample emotion category of the sample user recognized by the model.
Step 402: and taking the sample emotion category as an emotion label of the sample session information.
And after the terminal obtains the sample emotion types corresponding to the sample session information, taking the sample emotion types as emotion labels of the sample session information.
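The emotion recognition model itself is not specified in the application; a trivial rule-based stand-in (all keywords hypothetical) can illustrate how steps 401 and 402 fit together:

```python
# Hypothetical stand-in for the emotion recognition model of step 401.
# A real system would use a trained text classifier.
NEGATIVE_WORDS = {"sunburn", "painful", "worried", "anxious"}

def recognize_emotion(sample_session_text: str) -> str:
    words = set(sample_session_text.lower().split())
    return "anxiety" if words & NEGATIVE_WORDS else "calm"

def label_sample_session(sample_session_text: str) -> dict:
    # Step 402: the recognized category becomes the emotion label.
    return {"text": sample_session_text,
            "emotion_label": recognize_emotion(sample_session_text)}
```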
Step 403: and extracting the sample entity and the sample event in the sample session information through a preset extraction scheme.
Wherein the sample entity is an entity associated with the sample user and the sample event is an event associated with the sample user.
The terminal extracts the sample entity and sample event from the sample session information through a preset extraction scheme, which includes an entity recognition scheme and an event recognition scheme; the sample entity is an entity associated with the sample user, and the sample event is an event associated with the sample user.
Illustratively, fig. 3 includes three pieces of session information in total; relative to the last piece, the first two pieces are sample session information, from which a sample entity and a sample event can be obtained. The sample entities include: seaside, weekend, face. The sample events include: sunburn, the face being very red and painful, calming the skin, relieving redness and pain, and asking what the skin care procedure is. In the present application, the sample session information is not limited to the session information in fig. 3; it may be session information of different users at different times that occurred before the current session information.
Step 404: and generating a sample knowledge graph carrying emotion labels based on the sample entities and the sample events.
After obtaining the sample entities and sample events, the terminal generates a sample knowledge graph carrying the emotion label based on the sample entities, the sample events, the relations between sample entities, and the relations between sample events; the sample entities and events represent the cause of the sample emotion category. For example, if the first two pieces of session information in fig. 3 are used as sample session information, the sample emotion category is anxiety, and the cause of the anxiety is: the face is sunburned, very red, and painful.
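Step 404 can be sketched minimally as follows; the dict-based graph structure and triple-style relations are assumptions for illustration, since a real system might use a graph database or an RDF store:

```python
# Minimal sketch of step 404: assembling a sample knowledge graph that
# carries its emotion label.
def build_sample_graph(entities, events, relations, emotion_label):
    return {
        "nodes": set(entities) | set(events),
        "relations": list(relations),   # e.g. (head, relation, tail) triples
        "emotion": emotion_label,
    }
```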
As an alternative embodiment, selecting from the database a target knowledge graph at least partially containing the target entity and the target event includes: determining a sample entity and a sample event in the sample knowledge graph, where the sample entity and sample event indicate the source of the emotion produced by the sample user corresponding to the sample session information; determining a first similarity between the target entity and the sample entity, and a second similarity between the target event and the sample event; taking the sum of the first and second similarities as the emotion similarity; and taking the sample knowledge graph as the target knowledge graph when the emotion similarity is greater than a preset threshold.
In the embodiment of the application, the terminal determines the sample entity and the sample event in the sample knowledge graph. After at least the target entity and the target event are obtained from the target script, the terminal determines a first similarity between the target entity and the sample entity, determines a second similarity between the target event and the sample event, and then takes the sum of the first similarity and the second similarity as the emotion similarity. When the terminal determines that the emotion similarity is greater than the preset threshold, the content of the target script is close to the content of the sample knowledge graph, and the sample knowledge graph can be taken as the target knowledge graph, so that the target emotion category of the target knowledge graph can be taken as the predicted emotion category of the target script, which improves the accuracy of the predicted emotion category.
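A minimal sketch of this similarity test, assuming set-based entities/events and using Jaccard overlap as a stand-in for the unspecified first and second similarity measures; only the sum-then-threshold structure comes from the method itself.

```python
def jaccard(a, b):
    """Set-overlap similarity in [0, 1]; a stand-in measure, not the patent's."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def emotion_similarity(target_entities, target_events,
                       sample_entities, sample_events):
    first = jaccard(target_entities, sample_entities)   # first similarity (entities)
    second = jaccard(target_events, sample_events)      # second similarity (events)
    return first + second                               # sum, per the method

THRESHOLD = 1.0  # illustrative preset threshold

target = ({"face", "weekend"}, {"sunburn", "redness and pain"})
sample = ({"face", "weekend", "seaside"}, {"sunburn", "redness and pain"})
sim = emotion_similarity(*target, *sample)
is_target_graph = sim > THRESHOLD  # sample graph qualifies as target graph
```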
As an optional implementation manner, taking the sample knowledge graph as the target knowledge graph in the case that the emotion similarity is greater than the preset threshold includes: determining the sample knowledge graph as a to-be-selected knowledge graph in the case that the emotion similarity is greater than the preset threshold, wherein the to-be-selected knowledge graph corresponds to a sample emotion category; in the case that there are at least two to-be-selected knowledge graphs, determining, for each to-be-selected knowledge graph, the emotion probability corresponding to its sample emotion category, wherein a sample emotion category is assigned different emotion probabilities according to different emotion similarities; and selecting the to-be-selected knowledge graph with the highest emotion probability as the target knowledge graph.
In the embodiment of the application, different candidate scripts can correspond to the same emotion category, but to different emotion probabilities of that category. Specifically, after the terminal determines the emotion similarity between the target script and a sample knowledge graph, if the emotion similarity is judged to be not greater than the preset threshold, the emotion similarity between the target script and the sample knowledge graph is low, and the sample knowledge graph cannot be used as the target knowledge graph. If the emotion similarity is judged to be greater than the preset threshold, the emotion similarity between the target script and the sample knowledge graph is high, and the terminal determines the sample knowledge graph as a to-be-selected knowledge graph, where the to-be-selected knowledge graph corresponds to a sample emotion category.
If there is one to-be-selected knowledge graph, it is taken as the target knowledge graph. If there are at least two to-be-selected knowledge graphs, the terminal determines the emotion probability of each to-be-selected knowledge graph for its corresponding sample emotion category and selects the one with the highest emotion probability as the target knowledge graph, so that the selected target knowledge graph is closer to the target script, and the target script guides the emotion of the target user more quickly and effectively.
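The selection rule above can be sketched as follows; the candidate tuple layout and the example probabilities are assumptions, while the "single candidate wins outright, otherwise highest emotion probability wins" logic follows the text.

```python
def pick_target_graph(candidates):
    """candidates: list of (graph_id, sample_emotion, emotion_probability)
    tuples for the to-be-selected knowledge graphs; returns the target graph."""
    if not candidates:
        return None                     # no graph cleared the threshold
    if len(candidates) == 1:
        return candidates[0]            # a single candidate is used directly
    return max(candidates, key=lambda c: c[2])  # highest emotion probability wins

candidates = [("kg-1", "anxiety", 0.73), ("kg-2", "anxiety", 0.58)]
target = pick_target_graph(candidates)  # kg-1 wins on probability
```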
As an optional implementation manner, before or after determining the multiple candidate scripts corresponding to the current session information, the method further includes: inputting the current session information into an emotion recognition model to obtain the emotion category of the target user recognized by the emotion recognition model; determining a negative probability of the negative emotion category in the case that the emotion category of the target user is determined to be a negative emotion category; and acquiring a matching preset script from the database according to the current session information and the negative probability, wherein the preset script at least comprises a soothing script and a solution script, and the emotion-soothing capability of the preset script is in direct proportion to the negative probability.
In the embodiment of the application, the terminal inputs the current session information into the emotion recognition model to obtain the emotion category of the target user recognized by the emotion recognition model. If the emotion category of the target user is determined to be a negative emotion category, the customer needs to be soothed and provided with a solution. Negative emotions may correspond to different negative probabilities, which represent different degrees of negative emotion; generally, the greater the negative probability, the stronger the user's negative emotion.
The terminal determines the negative probability of the negative emotion category, then determines multiple candidate scripts based on the current session information, and determines a preset script among the candidate scripts based on the negative probability, where the higher the negative probability, the stronger the emotion-soothing ability of the preset script, and the preset script at least comprises a soothing script and a solution script. In this way, the application can sense the negative emotion of the target user in time and give a preset script for soothing the target user, which improves the user experience. In addition, after determining that the target user has produced a negative emotion, the application can automatically send a message to a target terminal, so that managers learn of the target user's emotion and make preparations in advance.
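One way to realise "soothing ability proportional to the negative probability" is to tier the preset scripts by soothing strength and pick the first tier that covers the measured probability. The tiers, field layout, and script texts below are invented for illustration; the patent only fixes the proportionality requirement.

```python
def match_preset_script(negative_probability, candidate_scripts):
    """candidate_scripts: list of (soothing_strength, soothing_text, solution_text).
    Picks the weakest script whose soothing strength still covers the measured
    negative probability, so stronger negativity gets stronger soothing."""
    ranked = sorted(candidate_scripts, key=lambda s: s[0])
    for strength, soothing, solution in ranked:
        if strength >= negative_probability:
            return {"soothing": soothing, "solution": solution}
    # nothing covers it: fall back to the strongest available script
    strength, soothing, solution = ranked[-1]
    return {"soothing": soothing, "solution": solution}

scripts = [
    (0.3, "Sorry for the trouble.", "Please try again."),
    (0.8, "We sincerely apologize for the distress.",
          "We will refund you and follow up personally."),
]
reply = match_preset_script(0.73, scripts)  # 73% negative -> stronger script
```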
Fig. 5 is a schematic diagram of a terminal showing a preset script. It can be seen that the current negative emotion of the user is anxiety with a negative probability of 73%, and the recommended scripts comprise a soothing script and two solution scripts, which the customer service staff can feed back to the user to reduce the user's anxiety.
As an optional implementation, before or after the matching preset script is obtained from the database, the method further includes: searching for similar session information of the current session information from the sample session information according to the target entity and the target event in the current session information; searching the sample session information for negative-feedback scripts aimed at the similar session information, wherein a negative-feedback script is a script that brought a negative emotion to a sample user; and setting a negative label for the negative-feedback script, wherein the negative label is used to provide negative feedback for the script provider.
After receiving the current session information, the terminal searches for similar session information of the current session information from the sample session information according to the target entity and the target event in the current session information, where the range of the similar session information may be larger than that of the target knowledge graph. For example, if the target knowledge graph is selected from a plurality of to-be-selected knowledge graphs, the range of the similar session information is the same as the total range of the plurality of to-be-selected knowledge graphs.
The sample session information comprises negative-feedback scripts aimed at the similar session information. The terminal sets a negative label for each negative-feedback script, where the negative label is used to indicate that the script brought a negative emotion to a sample user. Negative key information in the negative-feedback scripts is displayed on the terminal as negative feedback for the customer service staff, prompting them not to feed the negative key information back to the target user, which improves the target user's experience. Preferably, the negative key information can be displayed before the matching preset script is obtained, so that the customer service staff is prevented from sending a script containing the negative key information to the target user before the preset script has been received.
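A hedged sketch of the negative-label step: sample sessions similar to the current one are scanned, and any script that was followed by a negative user emotion is tagged. All record fields and the overlap-based similarity test are assumptions; the patent does not define the session record schema.

```python
def tag_negative_feedback(samples, target_entities, target_events):
    """Mark scripts in similar sample sessions that produced a negative
    emotion, and return the tagged script texts for display to the agent."""
    tagged = []
    for s in samples:
        # toy similarity: any shared entity or event with the current session
        similar = bool(set(s["entities"]) & target_entities
                       or set(s["events"]) & target_events)
        if similar and s["user_emotion_after_script"] == "negative":
            s["negative_label"] = True          # feedback for the script provider
            tagged.append(s["script"])
    return tagged

samples = [
    {"entities": {"pickup code"}, "events": {"code missing"},
     "script": "The machine has failed.",
     "user_emotion_after_script": "negative"},
    {"entities": {"pickup code"}, "events": {"code missing"},
     "script": "We will resend your code right away.",
     "user_emotion_after_script": "positive"},
]
bad = tag_negative_feedback(samples, {"pickup code"}, set())
```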
Fig. 5 also shows negative feedback for the missing pickup code scenario, including: no meal, refund, and machine failure. This negative feedback prompts the customer service staff not to feed these terms back to the user.
As an alternative embodiment, identifying the target entity and the target event through the entity identification scheme includes: identifying, through the entity identification scheme, a first sub-entity and a first sub-event included in the current session information and a second sub-entity and a second sub-event included in the target script; taking the first sub-entity and the second sub-entity as the target entity, and taking the first sub-event and the second sub-event as the target event; or, taking the second sub-entity as the target entity and the second sub-event as the target event.
In the embodiment of the present application, the target entity and the target event may come from both the current session information and the target script, or from the target script alone. Specifically, if the target entity and the target event come from the current session information and the target script, the terminal identifies, through the entity identification scheme, the first sub-entity and the first sub-event included in the current session information and the second sub-entity and the second sub-event included in the target script, takes the first sub-entity and the second sub-entity as the target entity, and takes the first sub-event and the second sub-event as the target event. If the target entity and the target event come from the target script alone, the second sub-entity is taken as the target entity and the second sub-event as the target event.
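The two source modes can be sketched as below. The vocabulary-lookup recogniser is a toy stand-in for a real entity/event recognition model; only the union-versus-script-only split mirrors the embodiment.

```python
def recognize(text, vocabulary):
    """Toy recogniser: return the vocabulary terms that occur in the text."""
    return {term for term in vocabulary if term in text}

ENTITY_VOCAB = {"face", "seaside", "weekend"}   # assumed entity lexicon
EVENT_VOCAB = {"sunburn", "skin care"}          # assumed event lexicon

def extract(current_session, target_script, include_session=True):
    first_entities = recognize(current_session, ENTITY_VOCAB)   # first sub-entities
    first_events = recognize(current_session, EVENT_VOCAB)      # first sub-events
    second_entities = recognize(target_script, ENTITY_VOCAB)    # second sub-entities
    second_events = recognize(target_script, EVENT_VOCAB)       # second sub-events
    if include_session:
        # mode 1: target entity/event from both session and script
        return first_entities | second_entities, first_events | second_events
    # mode 2: target entity/event from the target script alone
    return second_entities, second_events

entities, events = extract("my face got a sunburn at the seaside",
                           "for sunburn, start a gentle skin care routine")
```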
Optionally, an embodiment of the present application further provides a processing flow chart for predicting a user emotion, and the specific steps are as follows.
Step 1: and inputting the sample conversation information into the emotion recognition model to obtain the sample emotion category of the sample user recognized by the emotion recognition model.
Step 2: and extracting the sample entities and the sample events in the sample session information through a preset extraction scheme, and generating a sample knowledge graph carrying emotion labels based on the sample entities and the sample events.
Step 3: determining a plurality of candidate scripts corresponding to the current session information.
Step 4: identifying, through the entity identification scheme, the target entity and the target event obtained from at least the target script.
Step 5: selecting a target knowledge graph at least partially containing the target entity and the target event from the database.
Step 6: taking the target emotion category corresponding to the target knowledge graph as the predicted emotion category corresponding to the target script.
Step 7: in the case that the current session information is determined to be of a negative emotion category, acquiring the matching preset script from the database and determining the negative-feedback scripts for the current session information.
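Steps 3 to 6 above can be strung together as in the following sketch, with every sub-step stubbed minimally (word overlap standing in for entity identification and graph matching); all data and helper behaviour are illustrative assumptions, not the patent's models.

```python
def predict_script_emotions(current_session, candidate_scripts, sample_graphs,
                            threshold=0.5):
    """Return {script: predicted emotion category} for each candidate script."""
    predictions = {}
    for script in candidate_scripts:                 # step 3: candidate scripts
        # step 4: crude word-set "extraction" from session + script
        words = set((current_session + " " + script).split())
        best, best_sim = None, threshold
        for graph in sample_graphs:                  # step 5: pick the target graph
            sim = len(words & graph["terms"]) / max(len(graph["terms"]), 1)
            if sim > best_sim:
                best, best_sim = graph, sim
        # step 6: the target graph's emotion becomes the predicted emotion
        predictions[script] = best["emotion"] if best else "unknown"
    return predictions

graphs = [
    {"terms": {"sunburn", "face", "painful"}, "emotion": "anxiety"},
    {"terms": {"refund", "machine"}, "emotion": "anger"},
]
preds = predict_script_emotions(
    "my face sunburn is painful",
    ["apply a cold compress to the sunburn", "request a refund"],
    graphs,
)
```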
Based on the same technical concept, an embodiment of the present application further provides a device for predicting a user emotion, as shown in fig. 6, the device includes:
the determining module 601 is configured to determine multiple candidate scripts corresponding to current session information, where the current session information is first session information sent by a target user, and the candidate scripts can provide emotional value for the target user;
an identifying module 602, configured to identify a target entity and a target event through an entity identification scheme, where the target entity and the target event are obtained from at least a target script, and the target script is any one of the candidate scripts;
a selecting module 603, configured to select, from a database, a target knowledge graph at least partially containing the target entity and the target event, where the database contains a plurality of sample knowledge graphs, the sample knowledge graphs are obtained based on sample session information, and each sample knowledge graph corresponds to one emotion category;
and a module 604, configured to take a target emotion category corresponding to the target knowledge graph as a predicted emotion category corresponding to the target script, where the predicted emotion category is used to indicate the emotion produced when the target user sees the target script.
Optionally, the apparatus is further configured to:
inputting the sample session information into an emotion recognition model to obtain a sample emotion category of the sample user recognized by the emotion recognition model, wherein the sample session information comprises second session information sent by the sample user and session information corresponding to the second session information;
taking the sample emotion category as an emotion label of the sample session information;
extracting a sample entity and a sample event in the sample session information through a preset extraction scheme, wherein the sample entity is an entity associated with a sample user, and the sample event is an event associated with the sample user;
and generating a sample knowledge graph carrying emotion labels based on the sample entities and the sample events.
Optionally, the selecting module 603 is configured to:
determining a sample entity and a sample event in the sample knowledge graph, wherein the sample entity and the sample event are used for indicating a source of emotion generated by a sample user corresponding to the sample session information;
determining a first similarity of a target entity and a sample entity, and determining a second similarity of a target event and a sample event;
taking the sum of the first similarity and the second similarity as the emotion similarity;
and taking the sample knowledge graph as a target knowledge graph under the condition that the emotion similarity is greater than a preset threshold value.
Optionally, the selecting module 603 is further configured to:
determining the sample knowledge graph as a to-be-selected knowledge graph under the condition that the emotion similarity is larger than a preset threshold value, wherein the to-be-selected knowledge graph corresponds to the sample emotion category;
in the case that there are at least two to-be-selected knowledge graphs, determining, for each to-be-selected knowledge graph, the emotion probability corresponding to its sample emotion category, wherein a sample emotion category is assigned different emotion probabilities according to different emotion similarities;
and selecting the knowledge graph to be selected with the highest emotion probability as a target knowledge graph.
Optionally, the apparatus is further configured to:
inputting the current session information into an emotion recognition model to obtain the emotion category of the target user recognized by the emotion recognition model;
determining a negative probability of the negative emotion category in the case that the emotion category of the target user is determined to be the negative emotion category;
and acquiring a matching preset script from the database according to the current session information and the negative probability, wherein the preset script at least comprises a soothing script and a solution script, and the emotion-soothing capability of the preset script is in direct proportion to the negative probability.
Optionally, the apparatus is further configured to:
searching similar session information of the current session information from the sample session information according to a target entity and a target event in the current session information;
searching the sample session information for negative-feedback scripts aimed at the similar session information, wherein a negative-feedback script is a script that brought a negative emotion to a sample user;
setting a negative label for the negative-feedback script, wherein the negative label is used to provide negative feedback for the script provider.
Optionally, the identifying module 602 is configured to:
identifying, through an entity identification scheme, a first sub-entity and a first sub-event included in the current session information and a second sub-entity and a second sub-event included in the target script;
taking the first sub-entity and the second sub-entity as target entities, and taking the first sub-event and the second sub-event as target events; or, alternatively,
taking the second sub-entity as the target entity and the second sub-event as the target event.
According to another aspect of the embodiments of the present application, an electronic device is provided, as shown in fig. 7, and includes a memory 703, a processor 701, a communication interface 702, and a communication bus 704, where a computer program operable on the processor 701 is stored in the memory 703, the memory 703 and the processor 701 communicate with each other through the communication interface 702 and the communication bus 704, and the steps of the method are implemented when the processor 701 executes the computer program.
The memory and the processor in the electronic equipment are communicated with the communication interface through a communication bus. The communication bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc.
The Memory may include a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
There is also provided, in accordance with yet another aspect of an embodiment of the present application, a computer-readable medium having non-volatile program code executable by a processor.
Optionally, in an embodiment of the present application, a computer readable medium is configured to store program code for the processor to execute the above method.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
When the embodiments of the present application are specifically implemented, reference may be made to the above embodiments, and corresponding technical effects are achieved.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the Processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented by means of units performing the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented, or the part contributing to the prior art may be implemented, in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is merely exemplary of the present application and is presented to enable those skilled in the art to understand and practice the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for predicting a user's mood, the method comprising:
determining a plurality of candidate scripts corresponding to current session information, wherein the current session information is first session information sent by a target user, and the candidate scripts can provide emotional value for the target user;
identifying a target entity and a target event through an entity identification scheme, wherein the target entity and the target event are obtained from at least a target script, and the target script is any one of the candidate scripts;
selecting a target knowledge graph at least partially comprising the target entity and the target event from a database, wherein the database comprises a plurality of sample knowledge graphs, the sample knowledge graphs are obtained based on sample session information, and each sample knowledge graph corresponds to one emotion category;
and taking a target emotion category corresponding to the target knowledge graph as a predicted emotion category corresponding to the target script, wherein the predicted emotion category is used for indicating the emotion produced when the target user sees the target script.
2. The method of claim 1, wherein prior to selecting the target knowledge-graph from the database that at least partially contains the target entity and the target event, the method further comprises:
inputting sample session information into an emotion recognition model to obtain a sample emotion category of a sample user recognized by the emotion recognition model, wherein the sample session information comprises second session information sent by the sample user and session information corresponding to the second session information;
using the sample emotion classification as an emotion label of the sample session information;
extracting a sample entity and a sample event in the sample session information through a preset extraction scheme, wherein the sample entity is an entity associated with the sample user, and the sample event is an event associated with the sample user;
and generating a sample knowledge graph carrying emotion labels based on the sample entities and the sample events.
3. The method of claim 1, wherein selecting a target knowledge-graph from a database that at least partially contains the target entity and the target event comprises:
determining a sample entity and a sample event in the sample knowledge graph, wherein the sample entity and the sample event are used for indicating a source of emotion generated by a sample user corresponding to the sample session information;
determining a first similarity of the target entity and the sample entity, and determining a second similarity of the target event and the sample event;
taking the sum of the first similarity and the second similarity as an emotional similarity;
and taking the sample knowledge graph as the target knowledge graph under the condition that the emotion similarity is larger than a preset threshold value.
4. The method of claim 3, wherein taking the sample knowledge-graph as the target knowledge-graph in the case that the emotional similarity is greater than a preset threshold comprises:
determining the sample knowledge graph as a to-be-selected knowledge graph under the condition that the emotion similarity is larger than the preset threshold, wherein the to-be-selected knowledge graph corresponds to the sample emotion category;
in the case that there are at least two to-be-selected knowledge graphs, determining, for each to-be-selected knowledge graph, the emotion probability corresponding to its sample emotion category, wherein a sample emotion category is assigned different emotion probabilities according to different emotion similarities;
and selecting the knowledge graph to be selected with the highest emotion probability as the target knowledge graph.
5. The method of claim 1, wherein before or after determining a plurality of candidate scripts corresponding to current session information, the method further comprises:
inputting the current session information into an emotion recognition model to obtain the emotion category of the target user recognized by the emotion recognition model;
determining a negative probability of the negative emotion category in case that the emotion category of the target user is determined to be a negative emotion category;
and acquiring a matching preset script from a database according to the current session information and the negative probability, wherein the preset script at least comprises a soothing script and a solution script, and the emotion-soothing capability of the preset script is in direct proportion to the negative probability.
6. The method of claim 5, wherein before or after the matching preset script is obtained from the database, the method further comprises:
searching similar session information of the current session information from sample session information according to a target entity and a target event in the current session information;
searching the sample session information for negative-feedback scripts aimed at the similar session information, wherein a negative-feedback script is a script that brought a negative emotion to the sample user;
setting a negative label for the negative-feedback script, wherein the negative label is used to provide negative feedback for the script provider.
7. The method of claim 1, wherein identifying the target entity and the target event via an entity identification scheme comprises:
identifying, through an entity identification scheme, a first sub-entity and a first sub-event included in the current session information and a second sub-entity and a second sub-event included in the target script;
taking the first sub-entity and the second sub-entity as the target entities and the first sub-event and the second sub-event as the target events; or, alternatively,
taking the second sub-entity as the target entity and the second sub-event as the target event.
8. An apparatus for predicting a user's emotion, the apparatus comprising:
a determining module, configured to determine a plurality of candidate scripts corresponding to current session information, wherein the current session information is first session information sent by a target user, and the candidate scripts can provide emotional value for the target user;
an identification module, configured to identify a target entity and a target event through an entity identification scheme, wherein the target entity and the target event are obtained from at least a target script, and the target script is any one of the candidate scripts;
a selecting module, configured to select, from a database, a target knowledge graph at least partially containing the target entity and the target event, wherein the database contains a plurality of sample knowledge graphs, the sample knowledge graphs are obtained based on sample session information, and each sample knowledge graph corresponds to one emotion category;
and a module, configured to take a target emotion category corresponding to the target knowledge graph as a predicted emotion category corresponding to the target script, wherein the predicted emotion category is used for indicating the emotion produced when the target user sees the target script.
9. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
a processor for implementing the method of any one of claims 1 to 7 when executing the program stored in the memory.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of any one of claims 1 to 7.
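The prediction flow of claims 1 and 8 — determining candidate dialogues, extracting entities and events, matching a sample knowledge graph, and returning that graph's emotion category — can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: `SampleKnowledgeGraph`, `predict_emotion`, and the token-overlap matcher are hypothetical stand-ins for the entity identification scheme and knowledge-graph selection the claims describe.

```python
# Illustrative sketch of the claimed prediction flow (claims 1 and 8).
# All names, data structures, and the matching heuristic are hypothetical;
# the patent does not disclose a concrete implementation.
from __future__ import annotations

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SampleKnowledgeGraph:
    """A sample knowledge graph built from sample session information;
    each graph corresponds to exactly one emotion category (claim 1)."""
    emotion_category: str
    entities: set = field(default_factory=set)
    events: set = field(default_factory=set)

def extract_entities_and_events(dialogue: str) -> tuple[set, set]:
    """Stand-in for the 'entity identification scheme'; a real system
    would run NER and event extraction, not token splitting."""
    tokens = set(dialogue.lower().split())
    return tokens, tokens  # treat tokens as both entities and events

def predict_emotion(target_dialogue: str,
                    database: list[SampleKnowledgeGraph]) -> Optional[str]:
    """Select a sample graph at least partially containing the target
    entity/event, and return its emotion category as the prediction."""
    entities, events = extract_entities_and_events(target_dialogue)
    for graph in database:
        if entities & graph.entities or events & graph.events:
            return graph.emotion_category
    return None  # no graph matched; no prediction

db = [
    SampleKnowledgeGraph("positive", {"discount", "gift"}, {"purchase"}),
    SampleKnowledgeGraph("negative", {"fee", "penalty"}, {"cancel"}),
]
print(predict_emotion("We can offer you a discount today", db))  # -> positive
```

In this sketch the candidate dialogue that matches a "negative" graph would be filtered out before being sent, which is the use the claims imply for the predicted emotion category.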
CN202110974118.4A 2021-08-24 2021-08-24 User emotion prediction method and device, electronic equipment and readable storage medium Active CN113420140B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110974118.4A CN113420140B (en) 2021-08-24 2021-08-24 User emotion prediction method and device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN113420140A true CN113420140A (en) 2021-09-21
CN113420140B CN113420140B (en) 2021-12-28

Family

ID=77719344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110974118.4A Active CN113420140B (en) 2021-08-24 2021-08-24 User emotion prediction method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113420140B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109447819A (en) * 2018-09-03 2019-03-08 中国平安人寿保险股份有限公司 Intelligent script reminding method, system and terminal device
US20190278822A1 (en) * 2017-07-26 2019-09-12 Ping An Technology (Shenzhen) Co., Ltd. Cross-Platform Data Matching Method and Apparatus, Computer Device and Storage Medium
CN110298682A (en) * 2019-05-22 2019-10-01 深圳壹账通智能科技有限公司 Intelligent Decision-making Method, device, equipment and medium based on user information analysis
CN110347863A (en) * 2019-06-28 2019-10-18 腾讯科技(深圳)有限公司 Script recommendation method and device, and storage medium
CN111028827A (en) * 2019-12-10 2020-04-17 深圳追一科技有限公司 Interaction processing method, device, equipment and storage medium based on emotion recognition

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114357197A (en) * 2022-03-08 2022-04-15 支付宝(杭州)信息技术有限公司 Event reasoning method and device
CN114357197B (en) * 2022-03-08 2022-07-26 支付宝(杭州)信息技术有限公司 Event reasoning method and device

Also Published As

Publication number Publication date
CN113420140B (en) 2021-12-28

Similar Documents

Publication Publication Date Title
WO2020253362A1 (en) Service processing method, apparatus and device based on emotion analysis, and storage medium
CN111540353B (en) Semantic understanding method, device, equipment and storage medium
WO2021136458A1 (en) Processing method and apparatus for police emergency dispatching
CN109447789A (en) Method for processing business, device, electronic equipment and storage medium
CN111651571A (en) Man-machine cooperation based session realization method, device, equipment and storage medium
CN110570208B (en) Complaint preprocessing method and device
CN111179935A (en) Voice quality inspection method and device
CN113420140B (en) User emotion prediction method and device, electronic equipment and readable storage medium
CN112669842A (en) Man-machine conversation control method, device, computer equipment and storage medium
CN114357126A (en) Intelligent question-answering system
CN112632248A (en) Question answering method, device, computer equipment and storage medium
CN109902146A (en) Credit information acquisition methods, device, terminal and storage medium
CN113591463B (en) Intention recognition method, device, electronic equipment and storage medium
CN113707157B (en) Voiceprint recognition-based identity verification method and device, electronic equipment and medium
CN116501858B (en) Text processing and data query method
CN117278675A (en) Outbound method, device, equipment and medium based on intention classification
CN110931002B (en) Man-machine interaction method, device, computer equipment and storage medium
CN112581297A (en) Information pushing method and device based on artificial intelligence and computer equipment
CN109388695B (en) User intention recognition method, apparatus and computer-readable storage medium
US10296510B2 (en) Search query based form populator
CN115602160A (en) Service handling method and device based on voice recognition and electronic equipment
CN115964384A (en) Data query method and device, electronic equipment and computer readable medium
CN113870478A (en) Rapid number-taking method and device, electronic equipment and storage medium
CN114202363A (en) Artificial intelligence based call method, device, computer equipment and medium
CN112148939A (en) Data processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant