CN112364971A - Session control method and device and electronic equipment - Google Patents


Info

Publication number
CN112364971A
CN112364971A (application CN202011230074.6A)
Authority
CN
China
Prior art keywords
user
session
virtual
image
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011230074.6A
Other languages
Chinese (zh)
Inventor
符博
于晨晨
胡长建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202011230074.6A priority Critical patent/CN112364971A/en
Publication of CN112364971A publication Critical patent/CN112364971A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/004 — Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 — Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 — Feature extraction; Face representation
    • G06V40/174 — Facial expression recognition
    • G06V40/20 — Movements or behaviour, e.g. gesture recognition
    • G10L — SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 — Speech or voice analysis techniques specially adapted for particular use
    • G10L25/51 — Speech or voice analysis techniques specially adapted for comparison or discrimination
    • G10L25/63 — Speech or voice analysis techniques for estimating an emotional state

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Child & Adolescent Psychology (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Social Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a session control method, a session control apparatus, and an electronic device. The method comprises the following steps: obtaining user session information to be analyzed during a session, wherein the user session information comprises at least the session content input by a user; determining a first emotional characteristic of the user based on the user session information; constructing a first virtual user image having the first emotional characteristic; determining session response information and a second emotional characteristic of the session response information according to the user session information; constructing a virtual service personnel image having the second emotional characteristic; and controlling a user terminal of the intelligent dialog system to display, in a session interactive interface, the first virtual user image associated with the session content and the virtual service personnel image associated with the session response information. The scheme of the application can improve the flexibility of intelligent dialogue.

Description

Session control method and device and electronic equipment
Technical Field
The present application relates to the field of intelligent dialogue technologies, and in particular to a session control method, apparatus, and electronic device.
Background
Through an intelligent dialogue system (also called an intelligent conversation system), personnel such as customer-service staff can be simulated to exchange information with real users in the form of voice or text.
In an intelligent dialogue scenario, a user can input session content in voice or text form into the intelligent dialogue system through a user-side terminal (such as a dedicated intelligent dialogue device or a user terminal), and the intelligent dialogue system can output reply information in text form to the user-side terminal. For example, in an intelligent customer-service system, a user may send a question to be consulted to the server of the intelligent customer-service system through a user terminal installed with an intelligent customer-service application, and the server may return answer information related to the question to the user terminal. For another example, in an intelligent chat system, the chat robot may output corresponding reply information according to the chat message input by the user, so that the user sees the reply.
However, the intelligent dialogue system currently outputs only its reply text to the user-side terminal, so the user can judge whether the system has accurately recognized the user's intention only from that reply text, which makes the intelligent dialogue inflexible.
Disclosure of Invention
The application provides a session control method, a session control device and electronic equipment.
The session control method comprises the following steps:
obtaining user session information to be analyzed in a session process, wherein the user session information at least comprises session content input by a user;
determining a first emotional characteristic of the user based on the user session information;
constructing a first virtual user avatar having the first emotional characteristic;
determining session response information and a second emotional characteristic of the session response information according to the user session information;
constructing a virtual service person image having the second emotional characteristic;
and controlling a user terminal of the intelligent dialog system to display the first virtual user image associated with the session content and the virtual service personnel image associated with the session response information in a session interactive interface.
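The six claimed steps can be sketched end to end as follows. This is an illustrative Python sketch only: every function, dictionary key, and emotion label is a placeholder invented for illustration, since the patent does not specify any concrete implementation.

```python
# Hypothetical sketch of the claimed six-step flow; all names are
# illustrative placeholders, not part of the patent.

def handle_session(session_content: str) -> dict:
    """Run one turn of the claimed session-control flow."""
    # Step 1: obtain the user session information (here, just the input text).
    user_info = {"content": session_content}
    # Step 2: determine the user's first emotional characteristic.
    first_emotion = recognize_emotion(user_info["content"])
    # Step 3: construct a virtual user image exhibiting that emotion.
    user_avatar = {"role": "user", "emotion": first_emotion}
    # Step 4: determine the response and its second emotional characteristic.
    response, second_emotion = generate_response(user_info["content"], first_emotion)
    # Step 5: construct a virtual service-personnel image exhibiting the second emotion.
    agent_avatar = {"role": "agent", "emotion": second_emotion}
    # Step 6: return a payload the user terminal can render in the interface.
    return {
        "user": {"avatar": user_avatar, "message": session_content},
        "agent": {"avatar": agent_avatar, "message": response},
    }

def recognize_emotion(text: str) -> str:
    # Toy keyword lookup standing in for a real emotion recognizer.
    if "angry" in text:
        return "angry"
    return "neutral"

def generate_response(text: str, first_emotion: str) -> tuple:
    # Toy response generation; a real system would use intent recognition.
    if first_emotion == "angry":
        return ("Sorry about that, we will follow up right away.", "apologetic")
    return ("How can I help you?", "friendly")
```

A real system would replace the two toy helpers with trained emotion-recognition and response-generation models; the surrounding control flow is the part the claims describe.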
In one possible case, the constructing a first virtual user image having the first emotional characteristic comprises:
determining an appearance feature for expressing the first emotional characteristic;
constructing a first virtual user image having the appearance feature.
In yet another possible case, the determining an appearance feature for expressing the first emotional characteristic comprises:
determining a facial expression and a limb movement of a human body for expressing the first emotional characteristic;
and the constructing a first virtual user image having the appearance feature comprises:
obtaining a virtual human body model;
and adding the facial expression to the human body model and adjusting the human body model to have the limb movement, to obtain the constructed first virtual user image.
In yet another possible case, the controlling a user terminal of the intelligent dialog system to display the first virtual user image associated with the session content and the virtual service personnel image associated with the session response information in a session interactive interface comprises:
if the first virtual user image has been constructed but the virtual service personnel image has not, controlling the user terminal of the intelligent dialog system to display the first virtual user image associated with the session content in the session interactive interface;
and after the virtual service personnel image has been constructed, controlling the user terminal to display both the first virtual user image associated with the session content and the virtual service personnel image associated with the session response information in the session interactive interface.
In yet another possible scenario, the first avatar is associated with a first message bar;
the virtual service personnel image is associated with a second message bar;
the user terminal of the control intelligent dialog system displays the first virtual user image associated with the session content and the virtual service personnel image associated with the session response information in a session interactive interface, and the method comprises the following steps:
configuring the conversation content as content displayed within the first message bar;
configuring the conversation response information into content displayed in the second message bar;
and controlling a user terminal of the intelligent dialog system to display the first virtual user image and the virtual service personnel image in a session interactive interface, displaying the session content in a first message bar associated with the first virtual user image, and displaying the session response information in a second message bar associated with the virtual service personnel image.
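The message-bar association described above can be pictured as a small render payload sent to the user terminal. The key names below are invented for illustration; the patent does not define a wire format.

```python
# Hypothetical render payload for the two message bars; all key names
# are illustrative, not defined by the patent.
def build_interface_payload(session_content, response_info,
                            user_avatar_id, agent_avatar_id):
    return [
        # First message bar: associated with the first virtual user image,
        # configured to display the session content the user typed.
        {"avatar": user_avatar_id, "bar": "first", "text": session_content},
        # Second message bar: associated with the virtual service-personnel
        # image, configured to display the session response information.
        {"avatar": agent_avatar_id, "bar": "second", "text": response_info},
    ]
```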
In another possible case, the determining, according to the user session information, session response information and a second emotional characteristic of the session response information includes:
determining the session response information according to the user session information, and determining a second emotional characteristic represented by the session response information;
alternatively,
determining the session response information according to the user session information;
and according to the first emotional characteristic represented by the user session information, determining a second emotional characteristic corresponding to the first emotional characteristic, and determining the second emotional characteristic as the emotional characteristic required to be presented by the session response information.
In yet another possible scenario, the method further comprises:
in a case where a user-feedback condition is met, constructing a second virtual user image representing a set emotional characteristic, and associating a prompt with the second virtual user image, wherein the set emotional characteristic represents the user's satisfaction with the session, and the prompt is used to prompt the user to feed back an evaluation opinion in the case that the user is satisfied with the session;
and controlling the user terminal to display a second virtual user image associated with the prompt.
In another aspect, the present application further provides a session control apparatus, including:
the system comprises an information obtaining unit, a processing unit and a processing unit, wherein the information obtaining unit is used for obtaining user session information to be analyzed in a session process, and the user session information at least comprises session content input by a user;
a first emotion determining unit, configured to determine a first emotional characteristic of the user based on the user session information;
a first avatar construction unit for constructing a first virtual user avatar having the first emotional characteristic;
the second emotion determining unit is used for determining session response information and second emotion characteristics of the session response information according to the user session information;
a second emotion constructing unit for constructing a virtual service person image having the second emotion characteristic;
and the display control unit is used for controlling the user terminal of the intelligent dialog system to display the first virtual user image associated with the session content and the virtual service personnel image associated with the session response information in a session interactive interface.
In yet another aspect, the present application provides an electronic device comprising: a processor and a memory;
the processor is configured to execute the session control method as described in any one of the above;
the memory is used for storing programs needed by the processor to perform the above operations.
According to the above scheme, the intelligent dialogue system can determine the first emotional characteristic of the user according to the user session information obtained from the user side, and construct a virtual user image having that first emotional characteristic; meanwhile, the system can determine, according to the user session information, the session response information and a second emotional characteristic suitable for feedback to the user, and construct a virtual service personnel image having that second emotional characteristic. On this basis, the system can control the user terminal to output the virtual user image associated with the session content input by the user and the virtual service personnel image associated with the determined session response information, so that in the session interactive interface the user can intuitively perceive both the emotion the system has recognized and the emotion it feeds back, which enhances the interactivity of the intelligent dialogue. Compared with simply displaying session information as text, this improves the flexibility of the intelligent dialogue.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic diagram of a composition architecture of an intelligent dialog scenario applicable to the embodiment of the present application;
fig. 2 is a schematic flowchart of an embodiment of a session control method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a session control method according to another embodiment of the present application;
fig. 4 is a schematic flowchart of interaction of a session control method according to another embodiment of the present application;
FIG. 5 is a diagram illustrating content displayed in a conversational interaction window in an embodiment of the application;
FIG. 6 is a schematic diagram of content displayed in a conversation interaction window in an embodiment of the present application;
FIG. 7 is a diagram illustrating another example of content displayed in a conversation interaction window in an embodiment of the present application;
fig. 8 is a schematic structural diagram of an embodiment of a session control apparatus in the present application;
fig. 9 is a schematic diagram of a composition architecture of an electronic device according to an embodiment of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be practiced otherwise than as specifically illustrated.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without inventive step, are within the scope of the present disclosure.
The scheme of the present application is suitable for an intelligent dialogue system, and improves the flexibility and interactivity of intelligent dialogue within such a system.
In the present application, the intelligent dialogue system may be an intelligent customer-service system providing intelligent customer service, or an intelligent chat system that simulates a person to provide chat services, and so on.
For ease of understanding, a scenario to which the session control method of the present application is applied will be described first.
As shown in fig. 1, the scenario includes: an intelligent dialog system 10 and at least one client 20.
The intelligent dialogue system comprises at least one intelligent dialogue server 101.
The client 20 can establish a communication connection with the intelligent dialog server 101 in the intelligent dialog system via a network. For example, the client may be a user terminal installed with an intelligent dialogue application, and the client may establish a communication connection with the intelligent dialogue server through the intelligent dialogue application.
Wherein after the client 20 establishes a session connection with the intelligent dialog server 101, the user of the client can send a session message to the intelligent dialog server through the client.
Correspondingly, the intelligent dialog server 101 determines a reply message according to the session message sent by the client and in combination with the corresponding control policy, and returns the reply message to the client.
For example, taking the case where the intelligent dialogue server is an intelligent customer-service server that simulates a human to provide customer service, the server can receive inquiry information about product use, problem resolution, and information resources sent by a client, and give relevant answers according to the inquiry information.
For example, after the client establishes a session connection with the intelligent dialogue server, the client may display a session interactive interface (also referred to as a dialogue interactive interface or chat interface, etc.), through which the user inputs session content for information consultation in voice or text form. The client sends the session content to the server; after determining the reply information for the session content, the server feeds the reply information back to the client, so that the client displays both the session content input by the user and the reply information fed back by the server on the session interactive interface.
Of course, during the session interaction between the intelligent dialog server and the client, the intelligent dialog server may also return some evaluation options to the client to prompt the user to evaluate the dialog service condition provided by the intelligent dialog server. For example, still taking the intelligent customer service scenario as an example, after confirming that the current customer service is completed, the intelligent customer service server may return an evaluation option or an evaluation interface to the client to prompt the user of the client to evaluate whether the current customer service is satisfied or not, or whether a problem proposed by the user is solved or not.
It is to be understood that fig. 1 shows only one possible application scenario for the session control method of the present application. In practical applications, the intelligent dialog system may instead consist of a user terminal, which may be a dedicated intelligent session terminal or a terminal such as a mobile phone installed with an intelligent dialogue application. In this case, the user terminal in the intelligent dialog system can directly receive the session content, in voice or text form, input by the user, along with other user session information related to the user; meanwhile, the user terminal can itself analyze the user session information and output the reply content, so that the user sees both the input information and the analyzed results in the terminal's display interface.
The following describes a session control method provided by the present application with reference to a flowchart.
As shown in fig. 2, which shows a flowchart of an embodiment of a session control method according to the present application, the method of the present embodiment may be applied to an intelligent dialog system, for example, a server of the intelligent dialog system, or a user terminal in the intelligent dialog system.
The method of the embodiment may include:
s201, user session information to be analyzed in a session process is obtained.
The user session information at least comprises session contents input by a user. The session content input by the user may be content that requires conversational interaction with the intelligent dialog system. The conversation content may be voice content or text content input by the user.
Such as a text message input by the user in the session interaction interface of the user terminal, or session content input by voice.
In a possible implementation manner, in order to better analyze the emotion of the user later, the user session information may further include a user image such as a face image or a limb movement image of the user.
S202, determining a first emotional characteristic of the user based on the user session information.
The first emotion characteristic of the user is a user emotion characteristic represented by session information of the user, and therefore the first emotion characteristic reflects the current emotion type of the user.
For example, the emotional characteristics of the user may be categorized as happy, sad, angry, depressed, and irritable.
In this embodiment, for convenience of distinction, the user emotion feature determined based on the user session information is referred to as a first emotion feature.
It will be appreciated that the first emotional characteristic may be derived by emotional recognition of user session information.
For example, the user session information is the session content in the form of text, and emotion recognition can be performed on the text of the session content. The manner of emotion recognition can be various, and the application is not limited. For example, emotional words in the conversation content are extracted, and the emotional features of the user represented by the extracted emotional words are determined.
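The emotion-word-extraction approach just mentioned can be sketched with a lexicon lookup. The lexicon entries and labels below are invented for illustration; a production system would use a much larger dictionary or a trained classifier.

```python
# Minimal, illustrative emotion-word extraction; the lexicon and
# labels are assumptions, not taken from the patent.
EMOTION_LEXICON = {
    "happy": "happy", "great": "happy",
    "sad": "sad", "upset": "sad",
    "angry": "angry", "furious": "angry",
}

def extract_emotion(text: str) -> str:
    # Scan the session content for known emotion words and return the
    # label of the first match; default to neutral when none is found.
    for word in text.lower().split():
        word = word.strip(".,!?")
        if word in EMOTION_LEXICON:
            return EMOTION_LEXICON[word]
    return "neutral"
```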
For another example, if the user session information is session content in the form of voice, the emotional characteristics of the user can be comprehensively determined by combining the emotion represented by the text converted from the session content while performing emotion recognition on the voice of the user.
In one possible implementation, the user session information may include session content input by the user and a user image of the user, in which case the first emotional characteristic of the user may be determined synthetically in connection with the user image and the session content.
For example, the emotion recognition may be performed on the user image and the session content, respectively, and then the first emotional characteristic of the user may be determined comprehensively by combining the emotional characteristics recognized from the user image and the emotional characteristics recognized from the session content.
For another example, the emotion recognition can be performed on the user image and the conversation content in a unified manner, and the recognized first emotional characteristic is obtained. For example, the user image and the conversation content are input into a trained emotion recognition model, and a first emotion feature output by the emotion recognition model is obtained.
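One way to combine the image-based and text-based results, as a hedged sketch: fuse per-modality emotion scores with a weighted sum and take the top label. The score dictionaries and the weighting scheme are illustrative assumptions; the patent only says the two sources are combined.

```python
# Illustrative late-fusion of per-modality emotion scores; the weights
# and score dictionaries are assumptions, not specified by the patent.
def fuse_emotions(text_scores: dict, image_scores: dict,
                  text_weight: float = 0.5) -> str:
    combined = {}
    for label in set(text_scores) | set(image_scores):
        # Weighted sum of the text-derived and image-derived scores.
        combined[label] = (text_weight * text_scores.get(label, 0.0)
                           + (1 - text_weight) * image_scores.get(label, 0.0))
    # The fused first emotional characteristic is the highest-scoring label.
    return max(combined, key=combined.get)
```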
S203, constructing a first virtual user image with a first emotional characteristic.
The first virtual user image is an image that represents the user in the session, and that image exhibits the first emotional characteristic.
The first virtual user image is used to represent the user in the session interactive interface and to reflect the user visually, and it may take various specific forms.
For example, the first avatar may be a virtual character, such as a preset virtual character, or a character of a virtual character pre-selected by the user.
As another example, the first avatar may also be a cartoon avatar or an animal avatar, or the like.
It can be understood that the emotional features may be expressed by facial expressions, body movements, postures, and the like, and thus, the first virtual user image constructed in the present application may exhibit the first emotional feature by one or more of the image features of facial expressions, body movements, postures, and the like.
It is to be understood that the present application may construct and generate the first virtual user image having the first emotional characteristic based on the first emotional characteristic and the desired image features of the virtual user image.
The image features refer to the feature forms of each component of the virtual user image. For example, if the virtual user image is a human body, the image features may be the features or feature data of the head and facial organs, and the virtual user image can be generated based on those features or feature data; the first virtual user image may then be constructed or synthesized by combining the first emotional characteristic with the generated image.
To speed up construction of the first virtual user image, a model of the virtual user image may also be stored in advance; on that basis, the model can be adjusted according to the first emotional characteristic to obtain a first virtual user image having the first emotional characteristic.
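The pre-stored-model adjustment can be sketched as copying a base model and overlaying emotion-specific parameters. The field names and presets below are hypothetical; a real implementation would drive expression and pose parameters of a 3D model.

```python
# Illustrative adjustment of a pre-stored avatar model; all field names
# and presets are invented for illustration.
BASE_AVATAR = {"face": "neutral", "pose": "standing"}

# Assumed mapping from a first emotional characteristic to
# expression/pose parameters.
EMOTION_PRESETS = {
    "happy": {"face": "smile", "pose": "open"},
    "angry": {"face": "frown", "pose": "arms_crossed"},
}

def build_avatar(first_emotion: str) -> dict:
    avatar = dict(BASE_AVATAR)  # copy the pre-stored model
    # Overlay the emotion-specific parameters, if any are defined.
    avatar.update(EMOTION_PRESETS.get(first_emotion, {}))
    return avatar
```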
S204, determining the session response information and the second emotional characteristic of the session response information according to the user session information.
The session response information is response information which is required to be output according to the session content input by the user in the session.
For example, in an intelligent customer-service scenario, the intelligent customer-service system needs to generate a consultation answer for the consultation information input by the user. For another example, in an intelligent chat scenario, the intelligent chat system may obtain the chat content input by the user and generate a chat reply for that content.
The specific manner of determining the session response information based on the user session information may vary. For example, the user intention may be determined by performing intent recognition on the user session information, and the session response information matching that intention may then be generated. Of course, other implementations are possible, and the application is not limited in this respect.
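The intent-recognition route can be sketched as a two-stage lookup. The intent labels and response templates are invented for illustration; real systems use trained intent classifiers and response generators.

```python
# Toy intent-to-response sketch; intent labels and templates are
# illustrative assumptions, not part of the patent.
INTENT_RESPONSES = {
    "refund": "I will start the refund process for you.",
    "greeting": "Hello! How can I help you today?",
}

def recognize_intent(text: str) -> str:
    # Stand-in for a trained intent classifier.
    if "refund" in text.lower():
        return "refund"
    return "greeting"

def respond(text: str) -> str:
    # Generate session response information matching the recognized intent.
    return INTENT_RESPONSES[recognize_intent(text)]
```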
Wherein the second emotional characteristic is a characteristic of an emotion required to be fed back by the intelligent dialog system for the first emotional characteristic of the user.
It will be appreciated that, because the user session information may characterize the emotional characteristics of the user, the emotional characteristics that the session response information is adapted to exhibit may be determined in conjunction with the user session information. Several cases are exemplified below:
in a possible implementation manner, the session response information may be determined according to the user session information; then, a second emotional characteristic characterized by the session response information is determined. The second emotional characteristic may be determined, for example, by emotion recognition of the session response information.
For example, taking the intelligent customer-service system as an example, if the user session information is "this problem still isn't solved, I'm so angry", the session response information determined by the system may be "Sorry, I will contact the after-sales service as soon as possible", and the second emotional characteristic represented by that response is a regretful or apologetic emotion.
In another possible implementation manner, while the session response information is determined according to the user session information, a second emotional characteristic corresponding to the first emotional characteristic may be determined according to the first emotional characteristic represented by the user session information, so that the second emotional characteristic is an emotional characteristic presented by the session response information.
Wherein, if the first emotional characteristic characterized by the user session information has already been obtained, the second emotional characteristic can be determined directly from it, without determining the first emotional characteristic again.
In practical applications, the second emotional characteristic may be configured so that, when presented to the user by the intelligent dialog system, it helps guide the user from the first emotional characteristic toward a more pleasant one. If the first emotional characteristic is happiness, the intelligent dialog system may determine that the second emotional characteristic is also happiness, showing that the system can empathize with the user. Similarly, if the first emotional characteristic is sadness, the second emotional characteristic may also be sadness; and if the first emotional characteristic is anger, the second emotional characteristic may be a regretful, apologetic, or sympathetic emotion.
For example, the correspondence between different first emotional characteristics and second emotional characteristics may be constructed in advance. On that basis, the second emotional characteristic suited to the first emotional characteristic represented by the user session information is determined according to the correspondence.
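Such a pre-built correspondence can be sketched as a simple lookup table. The emotion labels and the fallback default below are illustrative assumptions; the patent does not prescribe a specific label set.

```python
# Hypothetical pre-built correspondence from the user's first emotional
# characteristic to the second emotional characteristic the system presents.
EMOTION_RESPONSE_MAP = {
    "happy": "happy",        # mirror positive emotions to show empathy
    "sad": "sad",
    "angry": "apologetic",   # soothe negative emotions
    "confused": "reassuring",
}

def second_emotion_for(first_emotion: str, default: str = "neutral") -> str:
    """Look up the second emotional characteristic for a recognized first one,
    falling back to a neutral default for unmapped labels."""
    return EMOTION_RESPONSE_MAP.get(first_emotion, default)
```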
And S205, constructing a virtual service personnel image with a second emotional characteristic.
The virtual service person image is the image of a service person simulated by the intelligent dialog system for conversing with the user. Like the virtual user image described above, the virtual service person image may be an image of a human, a cartoon character, an animal, or the like.
Wherein the second emotional characteristic may be expressed through the facial expression, limb movement, or posture of the virtual service person image.
It should be noted that the sequence of steps S202 to S203 and S204 to S205 is not limited to that shown in fig. 2, and in practical applications, steps S202 to S203 and steps S204 to S205 may also be executed synchronously.
S206, controlling the user terminal of the intelligent dialogue system to display the first virtual user image associated with the dialogue content and the virtual service personnel image associated with the dialogue response information in the dialogue interaction interface.
The session interaction interface is an interface displayed by the user terminal for presenting the session content input by the user and the session reply content fed back by the intelligent dialog system. For example, the session interaction interface may be a session window.
It can be understood that, by displaying the first virtual user image and the virtual service person image in the session interaction interface, the user can intuitively see a virtual user image that expresses his or her own emotion, and can intuitively perceive, through the virtual service person image, the emotional reaction of the virtual person simulated by the intelligent dialog system to the user's emotion, thereby giving the user the feeling of conversing with a real person.
In a possible case, where the present embodiment is applied to an intelligent dialog server of an intelligent dialog system, the intelligent dialog server may transmit to the user terminal the data of the first virtual user image associated with the session content and of the virtual service person image associated with the session response information. On this basis, the user terminal may present the first virtual user image and the virtual service person image in the session interaction interface.
In yet another possible case, where the present embodiment is applied to a user terminal of an intelligent dialog system, the user terminal may directly output, in the session interaction interface, the first virtual user image associated with the session content and the virtual service person image associated with the session response information.
It will be appreciated that a relatively long time may elapse between the user inputting session content and the intelligent dialog system outputting the session response information, because generating the virtual service person image requires parsing the session content and determining the session response information, which is generally time-consuming. Therefore, to improve the timeliness of the session interaction and let the user see the virtual user image corresponding to the input session content promptly, the first virtual user image may be displayed in the session interaction interface of the terminal as soon as it is constructed.
Specifically, if the first virtual user image is constructed and the virtual service person image is not constructed, the user terminal of the intelligent dialog system may be controlled to display the first virtual user image associated with the session content in the session interaction interface.
Correspondingly, after the virtual service personnel image is constructed, the user terminal of the intelligent dialogue system can be controlled to display the first virtual user image associated with the dialogue content and the virtual service personnel image associated with the dialogue response information in the dialogue interaction interface.
Certainly, in practical applications the intelligent dialog system need not care about the order in which the virtual service person image and the first virtual user image are generated: as soon as the first virtual user image is generated, it can control the user terminal to display it, and likewise, as soon as the virtual service person image is generated, it can control the user terminal to display it. Alternatively, the intelligent dialog system may control the user terminal to display the virtual service person image and the first virtual user image simultaneously.
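The display-as-soon-as-generated behavior can be sketched with concurrent avatar construction, handing each image to the display callback in completion order. Function and parameter names are illustrative assumptions; the patent does not specify this mechanism.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def render_session_turn(session_content, build_user_image,
                        build_service_image, show):
    """Build both avatars concurrently and pass each to `show` the moment it
    is ready, so the (usually faster) user avatar is not held back waiting
    for the session response and service avatar. Names are illustrative."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = {
            pool.submit(build_user_image, session_content): "user",
            pool.submit(build_service_image, session_content): "service",
        }
        for fut in as_completed(futures):   # yields futures in completion order
            show(futures[fut], fut.result())
```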
In the present application, the intelligent dialog system can determine the first emotional characteristic of the user from the user session information obtained on the user side and construct a virtual user image having that first emotional characteristic; meanwhile, it can determine, from the user session information, the session response information and the second emotional characteristic suited to be fed back to the user, and construct a virtual service person image having that second emotional characteristic. On this basis, the intelligent dialog system can control the user terminal to output the virtual user image associated with the session content input by the user and the virtual service person image associated with the determined session response information, so that in the session interaction interface the user can intuitively perceive both the emotion recognized by the intelligent dialog system and the emotional characteristic it feeds back, enhancing the interactivity and flexibility of the intelligent dialog.
Meanwhile, compared with simply displaying session information in text form, the scheme of the present application not only increases the flexibility of the intelligent dialog but also, by outputting the virtual user image and the virtual service person image, conveys the intelligent dialog system's empathy with the user, which helps improve the user's experience of the intelligent dialog.
In the above embodiments of the present application, in order to generate the first virtual user image more reasonably, an extrinsic morphological feature for expressing the first emotional characteristic may first be determined, where the extrinsic morphological feature is the external appearance that the virtual user image needs to present.
For example, in an object image such as a human body, a cartoon character, or an animal, the emotional characteristics of the object image may be reflected by external morphological characteristics such as facial expressions, body movements, and the posture of the object image.
For example, taking happiness as the first emotional characteristic, the facial expression may include some or all of squinted eyes, an open mouth, raised mouth corners, and the like; the limb movement may include movement characteristics such as jumping or holding one's belly while laughing.
Accordingly, a first virtual user image having the extrinsic morphological feature may be constructed. If the extrinsic morphological feature is a facial expression and a limb action expressing the first emotional characteristic, a first virtual user image having that facial expression and limb action can be constructed.
In order to improve the user's sense of a real conversation with the intelligent dialog system, the virtual user image can be set as a human body image. The following description takes as an example a virtual user image that is a virtual human body, constructed based on a virtual human body model.
Fig. 3 is a schematic flow chart illustrating a session control method according to another embodiment of the present application. The method of the embodiment is applied to an intelligent dialog system, and the method of the embodiment may include:
S301, user session information to be analyzed in the session process is obtained.
The user session information at least comprises the session content input by the user. Of course, the user session information may also include a user image of the user.
S302, determining a first emotion characteristic of the user based on the user session information.
S303, determining the facial expression and the limb movement of the human body for expressing the first emotional characteristic.
For example, the information of facial expressions and body movements corresponding to different emotional features may be stored in advance, and on this basis, the facial expressions and body movements corresponding to the first emotional feature may be queried.
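The pre-stored correspondence between emotions and morphological features described above can be sketched as a lookup table. The emotion labels and feature descriptions are illustrative assumptions drawn from the happiness example earlier in the text.

```python
# Hypothetical pre-stored table of extrinsic morphological features per
# emotion label (all entries are illustrative, not from the patent).
MORPHOLOGY_TABLE = {
    "happy": {"face": ["squinted eyes", "open mouth", "raised mouth corners"],
              "body": ["jumping"]},
    "sad":   {"face": ["lowered mouth corners"],
              "body": ["drooped shoulders"]},
}

def morphology_for(emotion: str):
    """Query the facial expression and limb action stored for an emotion."""
    entry = MORPHOLOGY_TABLE.get(emotion)
    if entry is None:
        raise KeyError(f"no morphology stored for emotion {emotion!r}")
    return entry["face"], entry["body"]
```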
Of course, there may be other possibilities, which are not limiting.
S304, obtaining a virtual human body model.
The virtual human body model is a human body model constructed and stored in advance. A virtual image representing the user may be generated from this human body model.
S305, adding the facial expression to the human body model, and adjusting the human body model to have the limb action to obtain a constructed first virtual user image.
It is understood that, in the case of a human body model having facial organs, adding the corresponding facial expression to the human body model can be realized by adjusting the shape or layout of the facial organs.
If the human body model is only a body outline, corresponding facial organs can be constructed on the face of the model so that they present an expression conforming to the facial expression.
Adjusting the human body model to have the limb action can be achieved by changing the relative positional relationship of some or all parts of the model, such as the head, trunk, and limbs, so as to embody the limb action.
Of course, the limb movement may also be a sequence of limb actions, so as to present the user's emotion more intuitively. In this case, body movements conforming to the action sequence can be constructed successively in the human body model according to the sequence, so that a dynamic virtual user image can subsequently be presented.
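Step S305 above can be sketched as follows: attach the facial expression to a pre-built model, then apply the action sequence pose by pose so a dynamic avatar can be rendered frame by frame. The `BodyModel` class and its fields are illustrative stand-ins for a real 3D human body model.

```python
from dataclasses import dataclass, field

@dataclass
class BodyModel:
    """Minimal stand-in for a pre-built virtual human body model."""
    facial_expression: str = "neutral"
    poses: list = field(default_factory=list)

def build_avatar(model: BodyModel, facial_expression: str,
                 action_sequence: list) -> BodyModel:
    """Add the facial expression to the model, then apply the limb-action
    sequence in order, yielding a model that can drive a dynamic avatar."""
    model.facial_expression = facial_expression
    for pose in action_sequence:   # successive poses -> animated avatar
        model.poses.append(pose)
    return model
```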
S306, according to the user session information, the session response information and the second emotional characteristic of the session response information are determined.
This step can be seen in the description relating to the previous embodiment.
S307, a second appearance feature for expressing the second emotion feature is determined.
For the sake of convenience of distinction, the present embodiment refers to the extrinsic morphological feature used to express the first emotional feature as a first extrinsic morphological feature; and the extrinsic morphological feature used to express the second emotional feature is referred to as a second extrinsic morphological feature.
The second appearance feature may also be one or more of facial expressions, body movements, and the like for expressing the second emotion feature.
It is understood that the virtual service person image may also be a virtual human body, or may be a robot or another image. For different images, the extrinsic morphological features characterizing the same emotional characteristic may differ, but the specific process of determining the extrinsic morphological features for expressing the emotional characteristic is similar and will not be described again here.
And S308, constructing a virtual service personnel image with the second appearance characteristic.
It will be appreciated that, similar to constructing the first virtual user image, one way is to: a virtual service person model can be constructed in advance, and on the basis, a virtual service person image with the second appearance characteristic can be constructed on the virtual service person model.
The other mode is as follows: the virtual service person image can be generated according to the characteristics of the virtual service person image and the second appearance characteristics.
S309, controlling the user terminal of the intelligent dialogue system to display the first virtual user image associated with the dialogue content and the virtual service personnel image associated with the dialogue response information in the dialogue interaction interface.
It is to be understood that, in any of the above embodiments of the present application, in order to conveniently distinguish the session content input by the user from the session response information fed back by the intelligent dialog system, the first virtual user image and the virtual service person image may each be associated with its own message bar, and the user terminal is controlled to display the corresponding session information in the respective message bar. For ease of distinction, the message bar associated with the first virtual user image is referred to as the first message bar, and the message bar associated with the virtual service person image is referred to as the second message bar.
Accordingly, the session content may be configured as content displayed within the first message bar; configuring the conversation response information as the content displayed in the second message bar. On the basis, the user terminal of the intelligent dialogue system can be controlled to display the first virtual user image and the virtual service personnel image in the dialogue interaction interface, display the dialogue content in the first message bar associated with the first virtual user image, and display the dialogue response information in the second message bar associated with the virtual service personnel image.
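The message-bar association above can be sketched as a display payload that binds each avatar to its own message bar and text. Field names are illustrative assumptions, not a format defined by the patent.

```python
def build_turn_payload(user_image, session_content,
                       service_image, response_info):
    """Associate each avatar with its own message bar: session content goes
    in the first message bar, session response information in the second.
    All field names here are illustrative."""
    return {
        "first_message_bar": {"avatar": user_image, "text": session_content},
        "second_message_bar": {"avatar": service_image, "text": response_info},
    }
```

A user terminal receiving such a payload would render each bar next to its associated avatar in the session interaction interface.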
In order to facilitate understanding of the present application, the following description takes an example in which an intelligent dialog server of an intelligent dialog system controls a user terminal (such as a mobile terminal, an intelligent dialog device, or an intelligent speaker with a display screen) of the intelligent dialog system to display a conversation message.
As shown in fig. 4, which shows a schematic flow interaction diagram of an embodiment of a session control method according to the present application, the method of this embodiment may include:
S401, in the session process, the user terminal of the intelligent dialog system obtains the user image of the user and the session content input by the user, and sends a user session message including the user image and the session content to the intelligent dialog server.
The present embodiment takes as an example the user terminal collecting a user image while collecting the user's session content, but the embodiment is also applicable if the user terminal sends only the user-side session content to the intelligent dialog server.
S402, the intelligent dialogue server determines a first emotion characteristic of the user according to the conversation content and the user image of the user.
S403, the intelligent dialogue server constructs a first virtual user image with the first emotion characteristics according to the first emotion characteristics.
S404, the intelligent dialogue server analyzes the user intention of the conversation content of the user and determines conversation response information based on the user intention.
For convenience of understanding, the present embodiment takes the example of determining the session response information by performing intent recognition on the session content of the user and combining the recognized user intent, but the present embodiment is also applicable to determining the session response information by other methods.
S405, the intelligent dialogue server determines a second emotion characteristic required by the conversation response information according to the first emotion characteristic.
S406, the intelligent dialogue server constructs the virtual service personnel image with the second emotion characteristics.
S407, the intelligent dialogue server configures the conversation content into the content in the first message bar associated with the first virtual user image, and configures the conversation response information into the content in the second message bar associated with the virtual service personnel image.
S408, the intelligent dialogue server sends a conversation response message to the user terminal.
The session response message carries data of a first virtual user image and a virtual service person image, wherein a first message bar of the first virtual user image is associated with session content, and a second message bar of the virtual service person image is associated with session response information.
S409, the user terminal displays the first virtual user image and the virtual service personnel image in the session interactive interface, displays the session content in the first message bar of the first virtual user image, and displays the session response information in the second message bar of the virtual service personnel.
It should be noted that this embodiment takes as an example the intelligent dialog server sending the data related to the first virtual user image and the virtual service person image together in one session response message. In practical applications, however, the intelligent dialog server may also send the data related to the first virtual user image and to the virtual service person image separately; correspondingly, the user terminal may display the first virtual user image, the virtual service person image, and their related content as each arrives, rather than necessarily presenting them all simultaneously in the session interaction interface.
The following describes the present embodiment by taking an intelligent customer service scenario as an example, and fig. 5 and fig. 6 respectively show two schematic diagrams of a session interaction window at the user terminal side in the intelligent customer service scenario.
In the session interaction windows of fig. 5 and 6, the session content input by the user and the virtual user image constructed for the user by the intelligent customer service server are presented on the right side, while the session response information fed back by the intelligent customer service server and the virtual customer service image simulating a customer service person are presented on the left side.
The user inputs the session content "My phone won't turn on" through the user terminal; the user terminal then obtains a user image and sends the user image and the session content to the intelligent customer service server.
The intelligent customer service server recognizes the user's emotion as one of complaint and dissatisfaction, and can then generate a virtual user image with a puzzled, dissatisfied expression. Meanwhile, the intelligent customer service server controls the user terminal to display the virtual user image 501 with that expression in the session interaction window 500, and to display the session content "My phone won't turn on" in the message bar 502 associated with the virtual user image 501, as shown in fig. 5.
Meanwhile, based on the session content "My phone won't turn on", the intelligent customer service server determines that the session response message to reply is "Please don't worry, I'll look into the specific problem on my side; please first provide your phone model". Combining this with the user's dissatisfied emotion, it determines that the emotion of the virtual customer service should be apologetic or regretful, so that the user can subsequently feel understood by the virtual customer service. On this basis, the intelligent customer service server may generate a virtual customer service image with a slightly apologetic expression, control the user terminal to display the virtual customer service image 503 in the session interaction window 500, and display the session response information "Please don't worry, I'll look into the specific problem on my side; please first provide your phone model" in the message bar 504 associated with the virtual customer service image.
Further, as shown in fig. 6, suppose the user is satisfied with the session response information and changes from a complaining emotion to a smile. After the user inputs "Thank you, the phone model is model ×", the scheme of the present application can display the virtual user image 601 in the session interaction window, with the message bar of the virtual user image displaying "Thank you, the phone model is model ×". On this basis, the intelligent customer service server can generate a virtual customer service image with a happy expression, control the user terminal to display that virtual customer service image 602 in the session interaction window, and display "I'll help you find the possible causes as soon as possible" in the message bar of the virtual customer service image 602.
As can be seen from fig. 5 and 6, through the scheme of the present application the user can intuitively perceive the intelligent customer service system's understanding of the user's emotion and its emotional reaction, giving the user the immersive feeling of chatting with a real human customer service agent, which improves the user experience and the flexibility of intelligent customer service interaction.
It can be understood that, in order to guide the user to feed back service opinions in time, the present application may also add guiding characteristics, such as guiding actions, to the virtual user image representing the user when the condition for soliciting user feedback is met, which helps increase the user's enthusiasm for evaluating the service.
Specifically, when the condition for soliciting user feedback is met, a second virtual user image representing a set emotional characteristic is constructed, and a prompt is associated with the second virtual user image. The set emotional characteristic characterizes the user's satisfaction with the session, and the prompt prompts the user to feed back an evaluation opinion if satisfied with the session.
For ease of distinction, the virtual user image actively generated by the intelligent dialog system when the condition for soliciting user feedback is met is called the second virtual user image.
It is understood that the specific manner of constructing the second avatar with the set emotional characteristics may be similar to the manner of constructing the first avatar, and specifically refer to the related description above, and will not be described herein again.
The condition for soliciting user feedback may be a condition, such as the session ending, under which the user should be asked to feed back a service evaluation. For example, if no session content is received from the user for a long time, it can be inferred that the user no longer needs to keep the session connection alive, and the condition for soliciting user feedback is determined to be met. In another example, when an instruction to close the session interaction interface is detected, the condition for soliciting user feedback is determined to be met.
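The two trigger conditions just described (long idle time, or an interface-close instruction) can be sketched as a single predicate. The timeout value and parameter names are illustrative assumptions.

```python
import time

def feedback_condition_met(last_user_input_ts: float, close_requested: bool,
                           idle_timeout_s: float = 300.0,
                           now: float = None) -> bool:
    """Return True when the system should solicit a service evaluation:
    either the session interface is being closed, or the user has been
    idle longer than the timeout. The 300 s default is illustrative."""
    now = time.monotonic() if now is None else now
    return close_requested or (now - last_user_input_ts) >= idle_timeout_s
```

When this predicate fires, the system would construct the second virtual user image with the set emotional characteristic and its associated prompt.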
Accordingly, the user terminal may be controlled to display the second avatar associated with the hint.
Still taking the intelligent customer service scenario as an example, as shown in fig. 7, after the intelligent customer service server outputs "A dedicated person has been arranged to serve you" and the virtual customer service person image 701 to the session interaction interface of the user terminal, the consultation service may be considered ended. On this basis, in order to guide the user to input a service evaluation, the intelligent customer service server outputs to the session display interface of the user terminal a second virtual user image 702 with a satisfied expression, associated with the prompt "That solved it perfectly; I'd like to give you a good review". The sense of identification the second virtual user image gives the user may prompt the user to evaluate the service.
The application also provides a session control device corresponding to the session control method. As shown in fig. 8, which shows a schematic structural diagram of an embodiment of a session control apparatus according to the present application, the apparatus of the present embodiment may include:
an information obtaining unit 801, configured to obtain user session information to be analyzed in a session process, where the user session information at least includes session content input by a user;
a first emotion determining unit 802, configured to determine a first emotional characteristic of the user based on the user session information;
a first avatar construction unit 803 for constructing a first virtual user avatar having the first emotional characteristic;
a second emotion determining unit 804, configured to determine, according to the user session information, session response information and a second emotion feature of the session response information;
a second avatar construction unit 805 for constructing a virtual service person avatar having the second emotional characteristic;
the display control unit 806 is configured to control the user terminal of the intelligent dialog system to display the first avatar associated with the session content and the avatar associated with the session response information in the session interaction interface.
In one possible implementation, the first character construction unit includes:
a morphology determining subunit for determining an extrinsic morphology feature for expressing the first emotional feature;
a first avatar construction subunit for constructing a first virtual user avatar having the extrinsic morphological features.
As an alternative, the morphology determination subunit includes:
the expression action determining subunit is used for determining the facial expression and the limb action of the human body for expressing the first emotion characteristic;
a first character construction subunit comprising:
a model obtaining subunit, configured to obtain a virtual human body model;
and the model adjusting subunit is used for adding the facial expression to the human body model and adjusting the human body model to have the limb action so as to obtain the constructed first virtual user image.
In yet another possible implementation manner, the display control unit may include:
the first display control unit is used for controlling the user terminal of the intelligent dialogue system to display the first virtual user image associated with the dialogue content in the dialogue interaction interface if the first virtual user image is constructed and the virtual service personnel image is not constructed;
and the second display control unit is used for controlling the user terminal of the intelligent dialogue system to display the first virtual user image associated with the dialogue content and the virtual service personnel image associated with the dialogue response information in a dialogue interaction interface after the virtual service personnel image is constructed.
In an optional mode, a first virtual user image constructed by a first image construction unit of the application is associated with a first message bar;
the virtual service personnel image constructed by the second image construction unit is associated with a second message bar;
correspondingly, the display control unit comprises:
a first configuration subunit, configured to configure the session content as the content displayed in the first message bar;
a second configuration subunit, configured to configure the session response information as the content displayed in the second message bar;
and the display control subunit is used for controlling the user terminal of the intelligent dialog system to display the first virtual user image and the virtual service personnel image in the session interactive interface, displaying the session content in a first message bar associated with the first virtual user image, and displaying the session response information in a second message bar associated with the virtual service personnel image.
In yet another possible implementation, the second emotion determining unit includes:
the layer-by-layer analysis subunit is used for determining the session response information according to the user session information and determining a second emotion characteristic represented by the session response information;
or, alternatively,
a response determining subunit, configured to determine the session response information according to the user session information;
and the emotion determining subunit is used for determining a second emotional characteristic corresponding to the first emotional characteristic according to the first emotional characteristic represented by the user session information, and determining the second emotional characteristic as the emotional characteristic required to be presented by the session response information.
In yet another possible implementation manner, the apparatus further includes:
an evaluation image generating unit, configured to construct, in a case where a condition for soliciting user feedback is met, a second virtual user image representing a set emotional characteristic, and to associate a prompt with the second virtual user image, wherein the set emotional characteristic represents the user's satisfaction with the session, and the prompt is used to prompt the user to give evaluation feedback when the user is satisfied with the session;
and an evaluation image display unit, configured to control the user terminal to display the second virtual user image associated with the prompt.
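The feedback-avatar flow above can be sketched as follows; this is an illustrative Python sketch in which the `session_resolved` flag and the prompt text are hypothetical stand-ins for however an implementation decides that the feedback condition is met:

```python
from typing import Optional

def build_feedback_avatar(session_resolved: bool,
                          prompt: str = "If you are satisfied, please leave a review!"
                          ) -> Optional[dict]:
    """When the feedback condition is met, construct a second virtual user
    image set to a 'satisfied' emotional characteristic and associate the
    review prompt with it; otherwise construct nothing."""
    if not session_resolved:
        return None
    return {"emotion": "satisfied", "prompt": prompt}
```

The user terminal would display the returned avatar together with its associated prompt, and skip the step entirely when `None` is returned.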
In another aspect, the present application further provides an electronic device, which may be a server in the intelligent dialog system or a user terminal in the intelligent dialog system. Fig. 9 shows a schematic structural diagram of an electronic device according to the present application. The electronic device of this embodiment includes at least: a processor 901 and a memory 902.
The processor is configured to execute the session control method according to any one of the above embodiments.
The memory is configured to store the programs required by the processor to perform the above operations.
The memory may also store other programs, such as an operating system.
It will be appreciated that the electronic device may also include other components. As shown in fig. 9, these may include a display 903, an input device 904, and a communication bus 905; the processor, the memory, the display, and the input device may be connected via the communication bus.
Of course, the electronic device may also include more or fewer components than those shown in fig. 9, which is not limited herein.
In still another aspect, the present application further provides a storage medium storing a program which, when executed, implements the session control method described in any one of the above embodiments.
It should be noted that the embodiments in this specification are described in a progressive manner: each embodiment focuses on its differences from the other embodiments, and the same or similar parts among the embodiments may be referred to one another. Meanwhile, the features described in the embodiments of this specification may be replaced with or combined with one another, so that those skilled in the art can implement or use the present application. Since the device embodiments are substantially similar to the method embodiments, their description is brief; for relevant points, reference may be made to the corresponding description of the method embodiments.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A session control method, comprising:
obtaining user session information to be analyzed in a session process, wherein the user session information at least comprises session content input by a user;
determining a first emotional characteristic of the user based on the user session information;
constructing a first virtual user image having the first emotional characteristic;
determining session response information and a second emotional characteristic of the session response information according to the user session information;
constructing a virtual service person image having the second emotional characteristic;
and controlling a user terminal of the intelligent dialog system to display the first virtual user image associated with the session content and the virtual service personnel image associated with the session response information in a session interactive interface.
2. The method of claim 1, wherein the constructing a first virtual user image having the first emotional characteristic comprises:
determining an extrinsic morphological feature for expressing the first emotional characteristic;
and constructing a first virtual user image having the extrinsic morphological feature.
3. The method of claim 2, wherein the determining an extrinsic morphological feature for expressing the first emotional characteristic comprises:
determining a facial expression and a limb movement of a human body for expressing the first emotional characteristic;
and the constructing a first virtual user image having the extrinsic morphological feature comprises:
obtaining a virtual human body model;
and adding the facial expression to the human body model and adjusting the human body model to have the limb movement, to obtain the constructed first virtual user image.
4. The method of claim 1, wherein the controlling a user terminal of the intelligent dialog system to display the first virtual user image associated with the session content and the virtual service personnel image associated with the session response information in a session interactive interface comprises:
if the first virtual user image has been constructed and the virtual service personnel image has not yet been constructed, controlling the user terminal of the intelligent dialog system to display the first virtual user image associated with the session content in the session interactive interface;
and after the virtual service personnel image is constructed, controlling the user terminal of the intelligent dialog system to display the first virtual user image associated with the session content and the virtual service personnel image associated with the session response information in the session interactive interface.
5. The method of claim 1 or 4, wherein the first virtual user image is associated with a first message bar;
the virtual service personnel image is associated with a second message bar;
and the controlling a user terminal of the intelligent dialog system to display the first virtual user image associated with the session content and the virtual service personnel image associated with the session response information in a session interactive interface comprises:
configuring the session content as content displayed within the first message bar;
configuring the session response information as content displayed within the second message bar;
and controlling a user terminal of the intelligent dialog system to display the first virtual user image and the virtual service personnel image in a session interactive interface, displaying the session content in a first message bar associated with the first virtual user image, and displaying the session response information in a second message bar associated with the virtual service personnel image.
6. The method of claim 1, wherein the determining the session response information and the second emotional characteristic of the session response information according to the user session information comprises:
determining the session response information according to the user session information, and determining a second emotional characteristic represented by the session response information;
or,
determining the session response information according to the user session information;
and according to the first emotional characteristic represented by the user session information, determining a second emotional characteristic corresponding to the first emotional characteristic, and determining the second emotional characteristic as the emotional characteristic required to be presented by the session response information.
7. The method of claim 1, further comprising:
in a case where a condition for soliciting user feedback is met, constructing a second virtual user image representing a set emotional characteristic, and associating a prompt with the second virtual user image, wherein the set emotional characteristic represents the user's satisfaction with the session, and the prompt is used to prompt the user to give evaluation feedback when the user is satisfied with the session;
and controlling the user terminal to display the second virtual user image associated with the prompt.
8. A session control apparatus comprising:
the system comprises an information obtaining unit, a processing unit and a processing unit, wherein the information obtaining unit is used for obtaining user session information to be analyzed in a session process, and the user session information at least comprises session content input by a user;
a first emotion determining unit, configured to determine a first emotional characteristic of the user based on the user session information;
a first avatar construction unit for constructing a first virtual user avatar having the first emotional characteristic;
the second emotion determining unit is used for determining session response information and second emotion characteristics of the session response information according to the user session information;
a second avatar construction unit, configured to construct a virtual service personnel image having the second emotional characteristic;
and the display control unit is used for controlling the user terminal of the intelligent dialog system to display the first virtual user image associated with the session content and the virtual service personnel image associated with the session response information in a session interactive interface.
9. The apparatus of claim 8, wherein the first avatar construction unit comprises:
a morphology determining subunit, configured to determine an extrinsic morphological feature for expressing the first emotional characteristic;
and a first avatar construction subunit, configured to construct a first virtual user image having the extrinsic morphological feature.
10. An electronic device, comprising: a processor and a memory;
the processor is configured to execute the session control method according to any one of claims 1 to 7;
the memory is used for storing programs needed by the processor to perform the above operations.
CN202011230074.6A 2020-11-06 2020-11-06 Session control method and device and electronic equipment Pending CN112364971A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011230074.6A CN112364971A (en) 2020-11-06 2020-11-06 Session control method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN112364971A true CN112364971A (en) 2021-02-12

Family

ID=74508802

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011230074.6A Pending CN112364971A (en) 2020-11-06 2020-11-06 Session control method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112364971A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130082693A (en) * 2011-12-14 2013-07-22 건국대학교 산학협력단 Apparatus and method for video chatting using avatar
KR101719742B1 (en) * 2015-10-14 2017-03-24 주식회사 아크스튜디오 Method and apparatus for mobile messenger service by using avatar
CN107329990A (en) * 2017-06-06 2017-11-07 北京光年无限科技有限公司 A kind of mood output intent and dialogue interactive system for virtual robot
CN110085229A (en) * 2019-04-29 2019-08-02 珠海景秀光电科技有限公司 Intelligent virtual foreign teacher information interacting method and device
CN110418095A (en) * 2019-06-28 2019-11-05 广东虚拟现实科技有限公司 Processing method, device, electronic equipment and the storage medium of virtual scene


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113014471A (en) * 2021-01-18 2021-06-22 腾讯科技(深圳)有限公司 Session processing method, device, terminal and storage medium
CN113014471B (en) * 2021-01-18 2022-08-19 腾讯科技(深圳)有限公司 Session processing method, device, terminal and storage medium
CN113516183A (en) * 2021-07-05 2021-10-19 深圳小湃科技有限公司 Fault response method, system, device and storage medium
CN113516183B (en) * 2021-07-05 2024-04-16 深圳小湃科技有限公司 Fault response method, system, equipment and storage medium
CN113569031A (en) * 2021-07-30 2021-10-29 北京达佳互联信息技术有限公司 Information interaction method and device, electronic equipment and storage medium
CN113747249A (en) * 2021-07-30 2021-12-03 北京达佳互联信息技术有限公司 Live broadcast problem processing method and device and electronic equipment
WO2023082737A1 (en) * 2021-11-12 2023-05-19 腾讯科技(深圳)有限公司 Data processing method and apparatus, and device and readable storage medium
WO2024007655A1 (en) * 2022-07-06 2024-01-11 腾讯科技(深圳)有限公司 Social processing method and related device

Similar Documents

Publication Publication Date Title
CN112364971A (en) Session control method and device and electronic equipment
CN110286756A (en) Method for processing video frequency, device, system, terminal device and storage medium
CN110609620B (en) Human-computer interaction method and device based on virtual image and electronic equipment
CN110400251A (en) Method for processing video frequency, device, terminal device and storage medium
CN110647636A (en) Interaction method, interaction device, terminal equipment and storage medium
CN110413841A (en) Polymorphic exchange method, device, system, electronic equipment and storage medium
CN107315742A (en) The Interpreter's method and system that personalize with good in interactive function
CN113067953A (en) Customer service method, system, device, server and storage medium
US11455510B2 (en) Virtual-life-based human-machine interaction methods, apparatuses, and electronic devices
CN110808038A (en) Mandarin assessment method, device, equipment and storage medium
CN110794964A (en) Interaction method and device for virtual robot, electronic equipment and storage medium
CN114495927A (en) Multi-modal interactive virtual digital person generation method and device, storage medium and terminal
CN114995636A (en) Multi-modal interaction method and device
JP2004185437A (en) Program, server, client and method for body information reflecting chatting
JP6796762B1 (en) Virtual person dialogue system, video generation method, video generation program
CN112634886A (en) Interaction method of intelligent equipment, server, computing equipment and storage medium
Rincón-Nigro et al. A text-driven conversational avatar interface for instant messaging on mobile devices
CN116009692A (en) Virtual character interaction strategy determination method and device
EP4006903A1 (en) System with post-conversation representation, electronic device, and related methods
JP7253269B2 (en) Face image processing system, face image generation information providing device, face image generation information providing method, and face image generation information providing program
CN112632262A (en) Conversation method, conversation device, computer equipment and storage medium
CN113205811A (en) Conversation processing method and device and electronic equipment
CN110633361A (en) Input control method and device and intelligent session server
CN109559760A (en) A kind of sentiment analysis method and system based on voice messaging
JP2001357414A (en) Animation communicating method and system, and terminal equipment to be used for it

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination