CN112883181A - Session message processing method and device, electronic equipment and storage medium

Info

Publication number
CN112883181A
Authority
CN
China
Prior art keywords
message
target
emotion
conversation
voice
Prior art date
Legal status
Pending
Application number
CN202110218984.0A
Other languages
Chinese (zh)
Inventor
邱静
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202110218984.0A
Publication of CN112883181A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30: Information retrieval of unstructured textual data
    • G06F 16/33: Querying
    • G06F 16/332: Query formulation
    • G06F 16/3329: Natural language query formulation or dialogue systems
    • G06F 16/338: Presentation of query results

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a method and a device for processing a session message, an electronic device, and a storage medium. The method includes: presenting a session interface for conducting a message session; receiving a conversation message to be displayed, where the message content of the conversation message corresponds to a target emotion category; and displaying the conversation message in the conversation interface in a target message style corresponding to the target emotion category, where the target message style has an expression pattern corresponding to the target emotion category. With the method and the device, the user can conveniently and quickly learn the emotion of the conversation object, while the interest and enjoyment of conversation interaction are increased.

Description

Session message processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of internet and human-computer interaction technologies, and in particular, to a method and an apparatus for processing a session message, an electronic device, and a storage medium.
Background
With the rapid development of internet technology, instant messaging applications have been widely adopted in daily life and have become a primary way for people to communicate and interact. In the related art, when a user chats with a conversation object, the messages presented on the message page are all displayed in plain-colored message bubbles, so the message display is uniform and uninteresting; moreover, a voice message shows only its duration, and the user cannot learn anything else about its content before playing it.
Disclosure of Invention
The embodiments of the present application provide a method and an apparatus for processing a session message, an electronic device, and a storage medium, which make it convenient for a user to quickly learn the emotion of a conversation object, increase the interest and enjoyment of conversation interaction, and improve the user experience.
The technical solutions of the embodiments of the present application are implemented as follows:
an embodiment of the present application provides a method for processing a session message, including:
presenting a session interface for conducting a message session;
receiving a conversation message to be displayed, wherein the message content of the conversation message corresponds to a target emotion category;
displaying the conversation message in the conversation interface by adopting a target message style corresponding to the target emotion category;
wherein the target message style has an expression pattern corresponding to the target emotion category.
An embodiment of the present application further provides a device for processing a session message, including:
the presentation module is used for presenting a session interface for carrying out message session;
the receiving module is used for receiving a conversation message to be displayed, wherein the message content of the conversation message corresponds to the target emotion category;
the display module is used for displaying the conversation message in the conversation interface by adopting a target message style corresponding to the target emotion category;
wherein the target message style has an expression pattern corresponding to the target emotion category.
In the above scheme, the receiving module is further configured to present a message sending function item and a message editing box in the session interface;
presenting the edited text conversation message in the message edit box, and presenting at least one candidate expression in an associated area of the message edit box, wherein each candidate expression corresponds to an expression pattern;
responding to the selection operation of a target expression in the at least one candidate expression, and taking an expression pattern corresponding to the target expression as an expression pattern of the target message style;
receiving the session message in response to a trigger operation for the messaging function item.
In the above scheme, the apparatus further comprises:
the emotion recognition module is used for carrying out emotion recognition on the message content of the conversation message in real time in the process of receiving the conversation message;
when the emotion category corresponding to the message content is identified as the target emotion category, presenting the emotional expression corresponding to the target emotion category;
and taking the expression pattern corresponding to the emotional expression as the expression pattern of the target message style.
In the above scheme, the apparatus further comprises:
the emotion recognition mode setting module is used for presenting a mode setting interface for setting an emotion recognition mode of the conversation message;
receiving a setting operation for an emotion automatic recognition mode triggered based on the mode setting interface;
setting an emotion recognition mode of the conversation message to an emotion automatic recognition mode in response to the setting operation;
correspondingly, the emotion recognition module is further configured to acquire an emotion recognition mode of the session message;
and when the emotion recognition mode of the conversation message is determined to be the emotion automatic recognition mode, performing emotion recognition on the message content of the conversation message in real time.
In the above scheme, the receiving module is further configured to present a voice function entry in the session interface;
responding to the trigger operation aiming at the voice function inlet, presenting a voice input interface, and presenting a voice input function item in the voice input interface;
and responding to a voice input operation triggered based on the voice input function item, receiving the voice conversation message, and presenting an expression pattern corresponding to the target emotion category in the voice input interface.
In the above scheme, the presenting module is further configured to present at least two emotion categories for selection;
in response to an emotion category selection operation triggered based on the at least two emotion categories, treating the selected emotion category as the target emotion category.
In the above scheme, the display module is further configured to present an emotion recognition switch in the session interface;
when the switch state of the emotion recognition switch is an on state, displaying the conversation message by adopting a target message style corresponding to the target emotion type;
the switch state comprises an opening state and a closing state.
In the above scheme, the display module is further configured to receive a state switching instruction for the emotion recognition switch;
responding to the state switching instruction, and controlling the emotion recognition switch to be switched from the on state to the off state;
and when receiving a new session message to be displayed, adopting a default message style to display the new session message.
In the above scheme, the apparatus further comprises:
the expression pattern selection module is used for presenting an expression pattern selection interface and presenting at least two types of candidate expression patterns for selection in the expression pattern selection interface;
the number of each type of candidate expression patterns is at least two, and each candidate expression pattern corresponds to one emotion category;
responding to an expression pattern selection operation triggered based on the at least two types of candidate expression patterns, and taking the selected type of candidate expression patterns as the expression patterns of the target message style;
correspondingly, the display module is further configured to obtain a candidate expression pattern corresponding to the target emotion category in the selected type of candidate expression patterns;
and displaying the conversation message by adopting the target message style with the acquired candidate expression pattern.
In the above scheme, the display module is further configured to display the conversation message in a target message style with an expression pattern having a target color;
or, the conversation message is displayed by adopting a target message style with an expression pattern with at least two colors, wherein the at least two colors are cyclically changed according to the corresponding presentation sequence.
In the above scheme, the display module is further configured to display the conversation message in a target message style corresponding to the target emotion category and having a target size when the conversation message is a voice conversation message;
the target size is matched with voice parameters of the voice conversation message, and the voice parameters comprise at least one of voice duration and voice volume.
In the above scheme, the display module is further configured to display the conversation message in a target message style corresponding to the target emotion category and having a target display style;
wherein the target display style corresponds to a type of the message content.
In the above scheme, the apparatus further comprises:
a message playing module, configured to, when the session message is a voice session message, play the voice session message in response to a playing instruction for the voice session message, and
play the animation special effect associated with the expression pattern during playback of the voice session message.
In the above scheme, the presentation module is further configured to receive a text conversion instruction for the voice session message when the session message is a voice session message;
presenting a text display area having a shape of the emoticon in response to the text conversion instruction;
and presenting the message text corresponding to the voice conversation message in the text display area.
In the above solution, the emotion recognition module is further configured to, when the conversation message is a voice conversation message, extract a message feature of the message content, where the message feature includes at least one of a semantic feature and a sound feature;
and determining a target emotion category corresponding to the conversation message based on the message characteristics.
In the above scheme, the emotion recognition module is further configured to extract message content included in the session message;
acquiring a first emotion keyword of the message content and a second emotion keyword corresponding to each preset emotion category;
matching the first emotion keywords with the second emotion keywords respectively to obtain the matching degree of the message content and each emotion category;
and determining the emotion category corresponding to the highest matching degree, wherein the emotion category is the target emotion category corresponding to the message content of the session message.
An embodiment of the present application further provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the processing method of the session message provided by the embodiment of the application when the executable instruction stored in the memory is executed.
The embodiment of the present application further provides a computer-readable storage medium, which stores executable instructions, and when the executable instructions are executed by a processor, the method for processing the session message provided in the embodiment of the present application is implemented.
The embodiment of the application has the following beneficial effects:
in the embodiment of the application, after a conversation message to be displayed is received, the conversation message is displayed in a target message style corresponding to a target emotion category, where the target emotion category corresponds to the message content of the conversation message and the target message style has an expression pattern corresponding to the target emotion category; displaying the conversation message in this target message style thus makes it convenient for the user to quickly learn the emotion of the conversation object and increases the interest and enjoyment of the conversation interaction.
Drawings
Fig. 1 is a schematic architecture diagram of a system 100 for processing session messages according to an embodiment of the present application;
Fig. 2 is a schematic structural diagram of an electronic device 500 implementing a method for processing a conversation message according to an embodiment of the present application;
Fig. 3 is a flowchart of a method for processing a session message according to an embodiment of the present application;
Fig. 4 is a first schematic diagram illustrating selection of candidate expressions according to an embodiment of the present application;
Fig. 5 is a second schematic diagram illustrating selection of candidate expressions according to an embodiment of the present application;
Fig. 6 is a schematic diagram of the receiving flow of a voice conversation message according to an embodiment of the present application;
Fig. 7 is a schematic diagram of setting an emotion recognition mode according to an embodiment of the present application;
Fig. 8 is a schematic diagram of selecting from at least two emotion categories according to an embodiment of the present application;
Fig. 9 is a first schematic presentation diagram of a session message according to an embodiment of the present application;
Fig. 10 is a schematic presentation diagram of an emotion recognition switch according to an embodiment of the present application;
Fig. 11 is a schematic diagram of expression pattern selection according to an embodiment of the present application;
Fig. 12 is a second schematic presentation diagram of a session message according to an embodiment of the present application;
Fig. 13 is a third schematic presentation diagram of a session message according to an embodiment of the present application;
Fig. 14 is a schematic presentation diagram of the message text of a voice conversation message according to an embodiment of the present application;
Fig. 15 is a schematic presentation diagram of a conversation message in the related art;
Fig. 16 is a schematic diagram of an expression pattern with a bee image according to an embodiment of the present application;
Fig. 17 is a schematic diagram of bee expression patterns corresponding to different voice conversation messages according to an embodiment of the present application;
Fig. 18 is a flowchart of a method for processing a session message according to an embodiment of the present application;
Fig. 19 is a schematic structural diagram of a processing device 555 for session messages according to an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application is described below in further detail with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first", "second", and "third" are used only to distinguish similar objects and do not denote a particular order; it is understood that, where permitted, "first", "second", and "third" may be interchanged in a specific order or sequence, so that the embodiments of the application described herein can be implemented in an order other than that shown or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) Client: an application program running in the terminal for providing various services, such as an instant messaging client or a video playing client.
2) In response to: indicates the condition or event on which a performed operation depends. When the condition or event on which it depends is satisfied, one or more of the performed operations may be executed in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which the operations are performed.
Based on the above explanations of the terms involved in the embodiments of the present application, the following describes the system for processing session messages provided by the embodiments of the present application. Referring to fig. 1, fig. 1 is a schematic architecture diagram of a processing system 100 for session messages provided in an embodiment of the present application. To support an exemplary application, a terminal (terminal 400-1 is shown as an example) is connected to a server 200 through a network 300, where the network 300 may be a wide area network, a local area network, or a combination of the two, and data transmission is implemented over wireless or wired links.
A terminal (e.g., terminal 400-1) for presenting a conversation interface for conducting a message conversation on a graphical interface (graphical interface 410-1 is shown as an example); receiving a conversation message to be displayed, and sending the conversation message to be displayed to the server 200;
the server 200 is configured to receive the conversation message to be displayed, perform emotion recognition on the message content of the conversation message, and, when the emotion category corresponding to the message content is recognized as the target emotion category, return to the terminal a notification message indicating that the emotion category corresponding to the message content is the target emotion category;
and a terminal (e.g., terminal 400-1) for receiving the notification message, and displaying the conversation message in a conversation interface by using a target message style corresponding to the target emotion category, wherein the target message style has the expression pattern corresponding to the target emotion category.
In practical application, the server 200 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a CDN, a big data and artificial intelligence platform, and the like. The terminal (e.g., terminal 400-1) may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart television, a smart watch, and the like. The terminal (e.g., terminal 400-1) and the server 200 may be directly or indirectly connected through wired or wireless communication, and the application is not limited thereto.
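The terminal-server exchange described above can be sketched as follows. This is a minimal illustration in Python under assumed message shapes; recognize_emotion, server_handle, and terminal_display are hypothetical names, not the patent's actual interfaces.

```python
# Hypothetical sketch of the terminal/server exchange; the message fields
# and helper names are illustrative stand-ins, not the patent's protocol.
def recognize_emotion(content: str) -> str:
    """Stand-in for the server-side emotion recognition detailed later."""
    return "happy" if "great" in content.lower() else "calm"

def server_handle(session_message: dict) -> dict:
    """Server 200: recognize the emotion and return a notification message."""
    target = recognize_emotion(session_message["content"])
    return {"message_id": session_message["id"], "target_emotion": target}

def terminal_display(session_message: dict, notification: dict) -> str:
    """Terminal 400-1: render the message with the style matching the category."""
    style = f"bubble-with-{notification['target_emotion']}-expression"
    return f"display {session_message['content']!r} using {style}"

msg = {"id": 1, "content": "This is great news!"}
note = server_handle(msg)           # server returns the target emotion category
print(terminal_display(msg, note))  # terminal shows the styled message bubble
```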
Referring to fig. 2, fig. 2 is a schematic structural diagram of an electronic device 500 of a method for processing a conversation message according to an embodiment of the present application. In practical applications, the electronic device 500 may be a server or a terminal shown in fig. 1, and an electronic device that implements the method for processing a session message according to an embodiment of the present application is described by taking the electronic device 500 as the terminal shown in fig. 1 as an example, where the electronic device 500 provided in the embodiment of the present application includes: at least one processor 510, memory 550, at least one network interface 520, and a user interface 530. The various components in the electronic device 500 are coupled together by a bus system 540. It is understood that the bus system 540 is used to enable communications among the components. The bus system 540 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 540 in fig. 2.
The processor 510 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 530 includes one or more output devices 531 enabling presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 530 also includes one or more input devices 532, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 550 optionally includes one or more storage devices physically located remote from processor 510.
The memory 550 may comprise volatile memory or nonvolatile memory, and may also comprise both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The memory 550 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 550 can store data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 552 for reaching other computing devices via one or more (wired or wireless) network interfaces 520, exemplary network interfaces 520 including: Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), etc.;
a presentation module 553 for enabling presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
an input processing module 554 to detect one or more user inputs or interactions from one of the one or more input devices 532 and to translate the detected inputs or interactions.
In some embodiments, the processing apparatus for session messages provided by the embodiments of the present application may be implemented in software. Fig. 2 illustrates a processing apparatus 555 for session messages stored in the memory 550, which may be software in the form of programs and plug-ins and includes the following software modules: a presentation module 5551, a receiving module 5552, and a display module 5553. These modules are logical, and thus may be arbitrarily combined or further split according to the functions they implement. The function of each module is explained below.
In other embodiments, the processing device of the session message provided in this embodiment may be implemented by a combination of hardware and software. By way of example, the processing device of the session message provided in this embodiment may be a processor in the form of a hardware decoding processor, which is programmed to execute the processing method of the session message provided in this embodiment; for example, the processor in the form of a hardware decoding processor may employ one or more Application-Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), or other electronic components.
Based on the above description of the system and the electronic device for processing a conversation message provided in the embodiments of the present application, a method for processing a conversation message provided in the embodiments of the present application is described below. In some embodiments, the method for processing the session message provided by the embodiment of the present application may be implemented by a server or a terminal alone, or implemented by a server and a terminal in a cooperation manner, and the method for processing the session message provided by the embodiment of the present application is described below with an embodiment of a terminal as an example. Referring to fig. 3, fig. 3 is a schematic flowchart of a processing method of a session message provided in the embodiment of the present application, where the processing method of the session message provided in the embodiment of the present application includes:
step 101: the terminal presents a session interface for conducting a message session.
Here, in practical applications, the terminal is provided with a client for conducting a message session, such as an instant messaging client, and a user can run the client on the terminal to implement a message session with other users. When the terminal runs the client, a session interface for a user to perform message session is presented, and the session interface can be a session interface of a group session or a session interface of a single chat session.
Step 102: a conversation message to be displayed is received.
Wherein the message content of the session message corresponds to the target emotion classification.
Here, the terminal receives the conversation message to be displayed in one of two ways: receiving a conversation message to be displayed that is sent by the conversation object corresponding to the conversation interface; or receiving a conversation message to be displayed that the user edits based on the conversation interface, that is, a conversation message to be sent by the current user. In practical applications, the message content of each conversation message carries a corresponding emotion and therefore corresponds to an emotion category, such as happiness, sadness, anger, or calm. Here, the conversation message to be displayed corresponds to a target emotion category; specifically, emotion recognition may be performed on the conversation message to determine its emotion category. In the embodiment of the application, the conversation message is displayed in a target message style corresponding to the target emotion category; specifically, the target message style may include a message bubble, a message card, a message display frame, and the like. In addition, the target message style has an expression pattern corresponding to the target emotion category, so that the user can quickly learn the emotion of the conversation object while the interest and enjoyment of the conversation interaction are increased.
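The association between a target emotion category and a target message style with its expression pattern can be pictured as a simple lookup. The sketch below is illustrative only; the MessageStyle fields and the pattern names are assumptions.

```python
from dataclasses import dataclass

# Hypothetical association between emotion categories and message styles;
# field values are illustrative, not taken from the patent.
@dataclass
class MessageStyle:
    container: str           # e.g. message bubble, message card, display frame
    expression_pattern: str  # expression pattern matching the emotion category

STYLE_BY_EMOTION = {
    "happy": MessageStyle("bubble", "happy-bee"),
    "sad":   MessageStyle("bubble", "sad-bee"),
    "angry": MessageStyle("bubble", "angry-bee"),
    "calm":  MessageStyle("bubble", "calm-bee"),
}

def target_style(target_emotion: str) -> MessageStyle:
    """Return the target message style for the recognized emotion category."""
    return STYLE_BY_EMOTION.get(target_emotion, MessageStyle("bubble", "default"))

print(target_style("sad"))  # MessageStyle(container='bubble', expression_pattern='sad-bee')
```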
In some embodiments, when the conversation message is a text conversation message, the terminal may receive the conversation message to be displayed by: presenting a message sending function item and a message editing frame in a session interface; presenting the edited text conversation message in a message edit box, and presenting at least one candidate expression in an associated area of the message edit box, wherein each candidate expression corresponds to one expression pattern; responding to the selection operation aiming at the target expression in at least one candidate expression, and taking the expression pattern corresponding to the target expression as the expression pattern of the target message style; in response to a triggering operation for a messaging function item, a session message is received.
In practical applications, the conversation messages include text conversation messages and voice conversation messages. And respectively providing an editing entry for the session messages of the corresponding type in the session interface so that a user can edit the session messages of the corresponding type through the corresponding editing entry.
When the conversation message is a text conversation message, namely the user needs to edit the text conversation message, the text conversation message can be edited through a message editing box of the conversation interface. Here, the terminal presents the message transmission function item and the message edit box in the conversation interface, through which the edited text conversation message is presented.
In the embodiment of the application, when the edited text conversation message is presented through the message edit box, at least one candidate expression is also presented in the association area of the message edit box, each candidate expression corresponds to an expression pattern, such as a bee pattern, a butterfly pattern, a flower pattern and the like, so that a user can select a required candidate expression, the conversation message is presented through the selected expression pattern of the candidate expression, and the interestingness of chatting interaction is improved.
In practical application, at least one candidate expression is presented each time the text conversation message is edited, so that a user can select the currently required candidate expression when sending the text conversation message, the sent conversation message can be displayed by adopting expression patterns of different candidate expressions, and user experience is improved.
And when a selection operation aiming at a target expression in the at least one candidate expression is received, responding to the selection operation, and taking an expression pattern corresponding to the target expression as an expression pattern of a target message style, so that when a sending instruction aiming at the text conversation message is received, the text conversation message is displayed in a conversation interface based on the target message style with the expression pattern. In practical applications, the sending instruction for the text conversation message may be received in response to the triggering operation by triggering the triggering operation for the message sending function item.
As an example, referring to fig. 4, fig. 4 is a first schematic diagram illustrating selection of candidate expressions provided in an embodiment of the present application. Here, the terminal presents the edited text conversation message in the message edit box, and the associated area of the message edit box, namely the upper right of the message edit box, presents 3 candidate expressions including a bee expression, a kitten expression, and a fishbone expression, as shown in A in fig. 4. On receiving a selection operation for the bee expression, the terminal takes the expression pattern corresponding to the bee expression, the "bee pattern", as the expression pattern of the target message style; when a message sending instruction triggered based on the message sending function item "send" is received, the terminal receives the text conversation message, generates a target message style (such as a message bubble) with the "bee pattern", and displays the text conversation message based on the target message style with the "bee pattern", as shown in B in fig. 4.
Continuing, as another example, referring to fig. 5, fig. 5 is a second schematic diagram illustrating selection of candidate expressions provided in an embodiment of the present application. Here, the terminal presents the edited text conversation message in the message edit box, and the associated area of the message edit box, namely the upper right of the message edit box, presents 3 candidate expressions including a bee expression, a kitten expression, and a fishbone expression, as shown in A in fig. 5. On receiving a selection operation for the kitten expression, the terminal takes the expression pattern corresponding to the kitten expression, the "kitten pattern", as the expression pattern of the target message style; when a message sending instruction triggered based on the message sending function item "send" is received, the terminal receives the text conversation message, generates a target message style (such as a message bubble) with the "kitten pattern", and displays the text conversation message based on the target message style with the "kitten pattern", as shown in B in fig. 5.
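The candidate-expression flow of figs. 4 and 5 can be summarized in a short sketch: selecting a candidate fixes the expression pattern of the target message style before the text message is sent. The names below are assumptions for illustration.

```python
# Hypothetical candidate expressions and their expression patterns.
CANDIDATES = {"bee": "bee-pattern", "kitten": "kitten-pattern", "fishbone": "fishbone-pattern"}

def select_candidate(name: str) -> str:
    """Selection operation: the chosen candidate's pattern becomes the target style's pattern."""
    return CANDIDATES[name]

def send_text_message(text: str, pattern: str) -> dict:
    """Trigger operation on the send function item: the session message is received."""
    return {"content": text, "style": {"container": "bubble", "pattern": pattern}}

pattern = select_candidate("bee")           # as in fig. 4
print(send_text_message("hello", pattern))  # bubble displayed with the bee pattern
```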
In some embodiments, when the conversation message is a voice conversation message, the terminal may receive the conversation message to be displayed by: presenting a voice function entry in a conversation interface; responding to the trigger operation aiming at the voice function inlet, presenting a voice input interface, and presenting a voice input function item in the voice input interface; and responding to a voice input operation triggered based on the voice input function item, receiving a voice conversation message, and presenting an expression pattern corresponding to the target emotion category in a voice input interface.
When the conversation message is a voice conversation message, that is, when the user needs to record a voice conversation message, the user can enter the voice recording interface through the voice function entry of the conversation interface to record the voice conversation message. Here, the terminal presents a voice function entry in the session interface, presents a voice entry interface in response to a trigger operation for the voice function entry, and presents a voice entry function item in the voice entry interface. In response to a voice input operation triggered based on the voice entry function item, the terminal collects and receives the user's voice conversation message. While collecting the voice conversation message, the terminal presents, in the voice entry interface, the expression pattern of the target emotion category corresponding to the message content of the voice conversation message, such as a bee expression pattern corresponding to a happy emotion. In practical applications, the expression pattern may be dynamic, for example presented with flashing or color changes, and the frequency of the flashing or color change may match the frequency and tone of the voice conversation message entered by the user.
In practical applications, the emotion of the user may change during the process of entering the voice conversation message, and thus the received voice conversation message may correspond to a plurality of different emotion categories. At this time, the expression patterns presented on the voice input interface can also change along with different emotion categories corresponding to the received conversation messages, so that the user can feel the emotion change of the user when inputting the voice conversation messages. For example, the emotion category corresponding to the voice conversation message received by the terminal is converted from calm to happy and then from happy to sad, the bee expression pattern presented by the voice input interface is also changed according to the change of the emotion category, namely the calm bee expression pattern is converted into the happy bee expression pattern, and then the happy bee expression pattern is converted into the sad bee expression pattern.
Based on this, the target emotion category corresponding to the voice conversation message may be determined by the proportion of the message content corresponding to each emotion category included in the voice conversation message, and specifically, the emotion category with the highest proportion of the corresponding message content may be determined as the target emotion category, so as to display the voice conversation message in a message style with an emoticon of the target emotion category.
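Picking the target emotion category by the proportion of message content per category can be sketched as follows; the per-segment labels are assumed to come from the real-time recognizer.

```python
from collections import Counter

def dominant_emotion(segment_labels: list[str]) -> str:
    """Return the emotion category covering the largest share of the message content."""
    return Counter(segment_labels).most_common(1)[0][0]

# e.g. a voice message whose successive segments were labeled calm -> happy -> sad
print(dominant_emotion(["calm", "happy", "happy", "sad"]))  # "happy"
```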
Referring to fig. 6 as an example, fig. 6 is a schematic diagram of the receiving flow of a voice conversation message according to an embodiment of the present application. Here, the terminal presents a voice function entry in the session interface, as shown in A in fig. 6; in response to the trigger operation for the voice function entry, it presents a voice entry interface and presents a voice entry function item "hold to talk" in that interface, as shown in B in fig. 6. In response to the voice input operation triggered by the "hold to talk" function item, the terminal receives the user's voice conversation message and, upon determining that the target emotion category corresponding to the voice conversation message is sadness, presents the expression pattern corresponding to the sad emotion, here a sad bee pattern, as shown in C in fig. 6. In practical implementation, the sad bee pattern can be presented dynamically, for example by flashing, color transformation, and fluctuation of the ripple on the bee's abdomen, and the frequency of the ripple fluctuation can match the voice frequency and tone of the voice conversation message entered by the user.
In some embodiments, the terminal can perform emotion recognition on the message content of the session message in real time during the process of receiving the session message; when the emotion category corresponding to the message content is identified as the target emotion category, presenting the emotional expression corresponding to the target emotion category; and taking the expression pattern corresponding to the emotional expression as the expression pattern of the target message style.
In practical application, the terminal can perform emotion recognition on the message content of the conversation message in real time in the process of receiving the conversation message to be displayed; when the emotion category corresponding to the message content is identified as the target emotion category, the emotion expression corresponding to the target emotion category is presented, for example, the target emotion category is identified as sadness, and the emotion expression corresponding to sadness (for example, sadness bee expression, sadness doggie expression, etc.) can be presented in the receiving interface of the session message, for example, the voice input interface, the associated region of the message edit box, etc. At this time, the expression pattern corresponding to the emotional expression is used as the expression pattern of the target message style, so that the received conversation message to be displayed is displayed based on the target message style with the expression pattern.
In practical application, there may be a part of users that do not need to perform automatic emotion recognition, and in this case, in order to reduce unnecessary resource waste of the device, a mode setting function of an emotion recognition mode of a conversation message may be provided, so that the users can perform setting of the emotion recognition mode as needed. Based on this, in some embodiments, the terminal may set the emotion recognition mode of the conversation message by: presenting a mode setting interface for setting an emotion recognition mode of a conversation message; receiving a setting operation aiming at an emotion automatic recognition mode triggered based on a mode setting interface; setting an emotion recognition mode of the conversation message to an emotion automatic recognition mode in response to the setting operation;
accordingly, the terminal can perform emotion recognition on the message content of the conversation message in real time by: acquiring an emotion recognition mode of a session message; and when the emotion recognition mode of the conversation message is determined to be the emotion automatic recognition mode, performing emotion recognition on the message content of the conversation message in real time.
Here, referring to fig. 7, fig. 7 is a schematic diagram of setting of an emotion recognition mode provided in an embodiment of the present application. The terminal presents a mode setting interface for setting an emotion recognition mode of a session message, and presents a mode setting function item 'automatically recognizing message emotion' in the mode setting interface, as shown in a in fig. 7, the mode setting function item 'automatically recognizing message emotion' is in an off state; when an on instruction for the mode setting function item "automatically recognize message emotion" is received, the mode setting function item "automatically recognize message emotion" is controlled to be in an on state, as shown in B in fig. 7, which is used to indicate that the emotion recognition mode of the session message is currently set to the emotion automatic recognition mode.
Based on the method, the terminal firstly acquires the emotion recognition mode of the conversation message, and when the emotion recognition mode of the conversation message is determined to be the emotion automatic recognition mode, emotion recognition is carried out on the message content of the conversation message in real time in the process of receiving the conversation message.
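The gating described above can be sketched as a simple settings check; the class and method names are hypothetical.

```python
class EmotionRecognitionSettings:
    """Holds the emotion recognition mode set through the mode setting interface."""
    def __init__(self) -> None:
        self.auto_mode = False  # off by default, as in A of fig. 7

    def set_auto_mode(self, enabled: bool) -> None:
        """Setting operation for the automatic emotion recognition mode."""
        self.auto_mode = enabled

def maybe_recognize(settings: EmotionRecognitionSettings, content: str) -> str | None:
    """Recognize in real time only in automatic mode; otherwise skip to save resources."""
    if not settings.auto_mode:
        return None
    return "happy"  # stand-in for the actual recognizer

settings = EmotionRecognitionSettings()
settings.set_auto_mode(True)  # setting operation, as in B of fig. 7
print(maybe_recognize(settings, "so glad to hear this"))
```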
In some embodiments, after receiving the conversation message to be displayed, the terminal may also present at least two emotion categories for selection; in response to an emotion category selection operation triggered based on at least two emotion categories, the selected emotion category is taken as a target emotion category.
In practical application, the terminal can also present at least two emotion categories for selection, so that the user can select an emotion category as needed; for example, the selectable emotion categories can be presented when the emotion recognition mode is not the automatic emotion recognition mode. The emotion categories include, for example, calm, anger, sadness, and joy. Specifically, the at least two emotion categories may be presented through the emotional expressions of the respective categories, such as bee expressions corresponding to calm, anger, sadness, and joy, or through words. In response to an emotion category selection operation triggered based on the at least two emotion categories, the emotion category selected by the operation is taken as the target emotion category.
By way of example, referring to fig. 8, fig. 8 is a schematic diagram of selecting from at least two emotion categories provided by an embodiment of the present application. Here, a voice entry interface for a voice conversation message presents four emotion categories, calm, anger, sadness, and joy, in the form of words. On receiving a selection operation for the target emotion category sadness, the terminal presents, in the voice entry interface, the expression pattern corresponding to the sad emotion category, namely a sad bee expression pattern.
In some embodiments, after the terminal receives the conversation message to be displayed, the emotion classification of the conversation message can be identified by the following method: when the conversation message is a voice conversation message, extracting message characteristics of message content; determining a target emotion category corresponding to the conversation message based on the message characteristics; wherein the message features include at least one of semantic features and voice features.
Here, the sound feature includes at least one of the frequency, volume, and tone of the voice. In practical application, when the target emotion category corresponding to the conversation message is determined based on the message features, the message features may be input into a pre-trained model (such as a Gaussian Mixture Model-Hidden Markov Model, GMM-HMM), so that the model predicts the emotion category corresponding to the message content based on the message features, yielding the prediction result, that is, the target emotion category corresponding to the message content.
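A rough sketch of this feature-based path, under assumptions, is shown below; the feature extractors and the threshold rule stand in for a trained model and are not a specific model API.

```python
def extract_semantic_features(transcript: str) -> list[float]:
    """Toy semantic features of the transcribed text (a real system would embed it)."""
    return [float(len(transcript)), float(transcript.count("!"))]

def extract_sound_features(volume: float, pitch: float, rate: float) -> list[float]:
    """Sound features such as volume, tone, and frequency, per the description above."""
    return [volume, pitch, rate]

def classify(features: list[float]) -> str:
    """Stand-in for a pre-trained model such as a GMM-HMM; here a simple
    threshold on the volume feature replaces learned parameters."""
    return "angry" if features[2] > 0.8 else "calm"

features = extract_semantic_features("stop doing that!") + \
           extract_sound_features(volume=0.9, pitch=0.7, rate=0.85)
print(classify(features))  # "angry"
```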
In some embodiments, after the terminal receives the conversation message to be displayed, the emotion classification of the conversation message can be identified by the following method: extracting message content contained in the session message; acquiring a first emotion keyword of message content and a second emotion keyword corresponding to each preset emotion category; matching the first emotion keywords with the second emotion keywords respectively to obtain the matching degree of the message content and each emotion category; and determining the emotion category corresponding to the highest matching degree, wherein the emotion category is a target emotion category corresponding to the message content of the session message.
In practical application, the message content of the session message can be analyzed to obtain a first emotion keyword in the message content; then, second emotion keywords corresponding to preset emotion categories are obtained; and matching the first emotion keywords with the second emotion keywords respectively to obtain the matching degree of the message content and each emotion category, so as to obtain the emotion category with the highest matching degree by screening, wherein the emotion category is the target emotion category corresponding to the message content of the session message.
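A minimal sketch of this keyword-matching path follows; the preset keyword lists are illustrative, and the matching degree is taken here as the keyword overlap count.

```python
# Hypothetical second emotion keywords preset for each emotion category.
PRESET_KEYWORDS = {
    "happy": {"glad", "great", "haha"},
    "sad":   {"miss", "cry", "alas"},
    "angry": {"annoyed", "hate", "furious"},
}

def match_emotion(first_keywords: set[str]) -> str:
    """Match the message's first emotion keywords against each category's
    second emotion keywords and return the category with the highest degree."""
    degrees = {cat: len(first_keywords & kws) for cat, kws in PRESET_KEYWORDS.items()}
    return max(degrees, key=degrees.get)

print(match_emotion({"great", "haha", "today"}))  # "happy"
```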
Step 103: and displaying the conversation message in a conversation interface by adopting a target message style corresponding to the target emotion category.
Wherein the target message style has an emoji pattern corresponding to the target emotion classification.
Here, when the terminal receives the conversation message to be displayed and determines that the conversation message corresponds to the target emotion category, in the conversation interface, the conversation message is displayed in a target message style corresponding to the target emotion category, and the target message style has an expression pattern corresponding to the target emotion category, and may be a message bubble having the expression pattern. By way of example, referring to fig. 9, fig. 9 is a schematic presentation diagram of a conversation message provided in an embodiment of the present application. Here, if the target emotion category corresponding to the conversation message 1 is "happy", the conversation message 1 is displayed in a message style having a bee expression pattern corresponding to "happy"; if the target emotion category corresponding to the conversation message 2 is "sad", the conversation message 2 is displayed in a message style with a bee emotion pattern corresponding to the "sad", wherein the message style is a message bubble, and each conversation message is displayed by the message bubble with the corresponding emotion pattern.
In some embodiments, the terminal may present the conversation message in a target message style corresponding to the target emotion category by: presenting an emotion recognition switch in the conversation interface; when the switch state of the emotion recognition switch is in an open state, displaying the conversation message by adopting a target message style corresponding to the target emotion type; the switch state comprises an opening state and a closing state.
Here, the on state is used to indicate that the conversation message is presented in a target message style corresponding to a target emotion category; the closed state is used to indicate that the session message is presented in a default message style.
In some embodiments, the terminal receives a state switching instruction for the emotion recognition switch; responding to a state switching instruction, and controlling the emotion recognition switch to be switched from an on state to an off state; and when receiving a new session message to be displayed, adopting a default message style to display the new session message.
By way of example, referring to fig. 10, fig. 10 is a schematic representation of an emotion recognition switch provided in an embodiment of the present application. Here, as shown in a in fig. 10, when the on-off state of the emotion recognition switch is the on state, the terminal presents the conversation message in a target message style corresponding to the target emotion category; as shown in B of fig. 10, when the on-off state of the emotion recognition switch is switched from the on state to the off state in response to the state switching instruction, the terminal displays the conversation message in a default message style, for example, in a message style without an emoticon corresponding to the target emotion category.
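The behavior of the switch in fig. 10 can be sketched as follows; the class is a hypothetical illustration of the on and off states.

```python
class ConversationInterface:
    """Tracks the emotion recognition switch presented in the session interface."""
    def __init__(self) -> None:
        self.emotion_switch_on = True  # on state, as in A of fig. 10

    def toggle_switch(self) -> None:
        """State switching instruction for the emotion recognition switch."""
        self.emotion_switch_on = not self.emotion_switch_on

    def style_for(self, target_emotion: str) -> str:
        """Target message style when on; default message style when off."""
        if self.emotion_switch_on:
            return f"bubble-with-{target_emotion}-expression"
        return "default-bubble"

ui = ConversationInterface()
print(ui.style_for("sad"))  # styled bubble while the switch is on
ui.toggle_switch()          # on -> off, as in B of fig. 10
print(ui.style_for("sad"))  # new messages fall back to the default bubble
```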
In some embodiments, the terminal may select the expression pattern corresponding to the target message style by: presenting an expression pattern selection interface, and presenting at least two types of candidate expression patterns for selection in the expression pattern selection interface; responding to an expression pattern selection operation triggered based on at least two types of candidate expression patterns, and taking the selected type of candidate expression patterns as the expression patterns of the target message style; the number of each type of candidate expression patterns is at least two, and each candidate expression pattern corresponds to one emotion category;
correspondingly, the terminal can display the conversation message in a target message style corresponding to the target emotion category in the following way: obtaining candidate expression patterns corresponding to the target emotion categories in the selected candidate expression patterns; and displaying the conversation message by adopting the target message style with the acquired candidate expression patterns.
Here, the terminal presents an expression pattern selection interface, and presents at least two types of candidate expression patterns for selection in the expression pattern selection interface, such as a honeybee type expression pattern, a kitten type expression pattern, a fishbone type expression pattern, and the like. The number of each type of candidate expression patterns is at least two, each candidate expression pattern corresponds to one emotion category, for example, the honeybee type expression patterns include a honeybee expression pattern corresponding to anger emotion, a honeybee expression pattern corresponding to joy emotion, a honeybee expression pattern corresponding to sadness emotion, and the like. And responding to the expression pattern selection operation, and taking the selected candidate expression pattern of the same type, such as the selected bee expression pattern, as the expression pattern of the target message style.
Continuously, when the terminal displays the conversation message by adopting the target message style corresponding to the target emotion category, the terminal determines the candidate expression pattern corresponding to the target emotion category from the selected candidate expression patterns, so that the conversation message is displayed by adopting the target message style with the candidate expression pattern.
By way of example, referring to fig. 11, fig. 11 is a schematic diagram of expression pattern selection according to an embodiment of the present application. Here, the terminal presents an expression pattern selection interface and presents at least two types of candidate expression patterns for selection, including bee-type, kitten-type, and fishbone-type expression patterns, as shown in A in fig. 11. The number of candidate expression patterns of each type is at least two, and each candidate expression pattern corresponds to one emotion category; for example, the bee-type expression patterns include a bee expression pattern corresponding to an angry emotion, a bee expression pattern corresponding to a happy emotion, a bee expression pattern corresponding to a sad emotion, and the like. In response to the expression pattern selection operation, the selected type of candidate expression patterns, namely the bee-type candidate expression patterns, are used as the expression patterns of the target message style, as shown in B in fig. 11.
Continuing, when the terminal displays the conversation message in the target message style corresponding to the target emotion category, it determines the candidate expression pattern corresponding to the target emotion category "sadness" from the bee-type candidate expression patterns, and displays the conversation message in the target message style carrying the "sad" bee expression pattern, as shown in C in fig. 11.
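For illustration only, the selected-set lookup described above can be sketched in TypeScript as follows; the type name, set names, asset paths, and the four category labels are assumptions of this sketch rather than details fixed by the embodiment:

```typescript
type EmotionCategory = "calm" | "anger" | "sadness" | "joy";

// Each candidate pattern set maps every emotion category to one expression asset.
const PATTERN_SETS: Record<string, Record<EmotionCategory, string>> = {
  bee:      { calm: "bee_calm.png",  anger: "bee_angry.png",  sadness: "bee_sad.png",  joy: "bee_happy.png" },
  kitten:   { calm: "cat_calm.png",  anger: "cat_angry.png",  sadness: "cat_sad.png",  joy: "cat_happy.png" },
  fishbone: { calm: "fish_calm.png", anger: "fish_angry.png", sadness: "fish_sad.png", joy: "fish_happy.png" },
};

// Set when the user picks a pattern family in the selection interface.
let selectedSet = "bee";

// When a message arrives, pick the pattern for its target emotion from the selected set.
function expressionPatternFor(targetEmotion: EmotionCategory): string {
  return PATTERN_SETS[selectedSet][targetEmotion];
}
```

Switching selectedSet to "kitten" or "fishbone" changes the pattern family while the per-emotion lookup stays the same.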
In some embodiments, the terminal may display the conversation message in the target message style corresponding to the target emotion category as follows: displaying the conversation message in a target message style whose expression pattern has a target color; or displaying the conversation message in a target message style whose expression pattern has at least two colors, the at least two colors changing cyclically according to a corresponding presentation order.
In practical applications, the conversation message may be displayed in a target message style whose expression pattern has a single target color (e.g., yellow), or in one whose expression pattern cycles through a plurality of different colors. In practical implementation, a presentation order and a presentation duration may be set for each color, so that the colors change cyclically according to the set order and durations, and the conversation message is displayed with an expression pattern of multiple different colors.
As an example, referring to fig. 12, fig. 12 is a schematic diagram of conversation message presentation provided in an embodiment of the present application. As shown in A in fig. 12, the terminal displays the conversation message in a target message style whose expression pattern has a target color (e.g., yellow). As shown in B in fig. 12, the conversation message is displayed in a target message style whose expression pattern has three colors that change cyclically according to a preset presentation order, each color having a corresponding presentation duration; for example, the order is yellow, pink, blue, and each color is presented for 1 s. On this basis, after the conversation message has been displayed with the yellow expression pattern for 1 s, it is displayed with the pink expression pattern; after 1 s with the pink expression pattern, it is displayed with the blue expression pattern, and the colors keep cycling in this order.
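A minimal sketch of this cyclic color change, assuming the yellow/pink/blue order and the 1 s presentation duration from the example (the function and parameter names are illustrative):

```typescript
const COLOR_CYCLE = ["yellow", "pink", "blue"]; // preset presentation order
const PRESENT_MS = 1000;                        // presentation duration per color

// Repeatedly applies the next color; returns a function that stops the cycle,
// e.g. when the message bubble leaves the visible conversation interface.
function startColorCycle(applyColor: (color: string) => void): () => void {
  let index = 0;
  applyColor(COLOR_CYCLE[index]);
  const timer: ReturnType<typeof setInterval> = setInterval(() => {
    index = (index + 1) % COLOR_CYCLE.length; // wrap around to keep cycling
    applyColor(COLOR_CYCLE[index]);
  }, PRESENT_MS);
  return () => clearInterval(timer);
}
```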
In some embodiments, the terminal may display the conversation message in the target message style corresponding to the target emotion category as follows: when the conversation message is a voice conversation message, displaying it in a target message style that corresponds to the target emotion category and has a target size, where the target size matches the voice parameters of the voice conversation message, the voice parameters including at least one of voice duration and voice volume.
In practical applications, the size of the target message style may be determined from voice parameters such as the voice duration and voice volume of the voice conversation message. As an example, referring to fig. 13, fig. 13 is a schematic diagram of conversation message presentation provided in an embodiment of the present application. As shown in A in fig. 13, a conversation message with a short voice duration is displayed in a target message style that corresponds to the target emotion category and has a first target length, while a conversation message with a longer voice duration is displayed in one having a second target length, where the first target length is less than the second target length.
Continuing, as shown in B in fig. 13, conversation messages whose voice volume is below the normal volume threshold range are displayed in a target message style that corresponds to the target emotion category and has a first target size; those within the normal volume threshold range, in one having a second target size; and those above the normal volume threshold range, in one having a third target size, where the third target size is larger than the second target size, which is larger than the first.
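The mapping from voice parameters to bubble dimensions might look as follows; the 10 s duration cutoff and the 41-60 dB "normal" range are assumptions of this sketch (concrete decibel buckets appear later, in the description of steps 209 and 210):

```typescript
interface VoiceParams { durationSec: number; volumeDb: number; }
interface BubbleAppearance { length: "first" | "second"; size: "first" | "second" | "third"; }

const LONG_DURATION_SEC = 10;                  // assumed cutoff between short and long messages
const NORMAL_VOLUME_DB = { min: 41, max: 60 }; // assumed normal volume threshold range

function bubbleAppearanceFor(p: VoiceParams): BubbleAppearance {
  // Shorter voice duration: first target length; longer: second target length.
  const length = p.durationSec <= LONG_DURATION_SEC ? "first" : "second";
  // Volume below / within / above the normal range selects the three target sizes.
  let size: BubbleAppearance["size"];
  if (p.volumeDb < NORMAL_VOLUME_DB.min) size = "first";
  else if (p.volumeDb <= NORMAL_VOLUME_DB.max) size = "second";
  else size = "third";
  return { length, size };
}
```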
In some embodiments, the terminal may display the conversation message in the target message style corresponding to the target emotion category as follows: displaying the conversation message in a target message style that corresponds to the target emotion category and has a target display style, where the target display style corresponds to the type of the message content.
Here, the type of the message content may include songs, poems, and the like; the target display style may be static or dynamic, such as a flashing expression pattern or a color change.
As an example, if the type of the message content of the conversation message is song and the corresponding emotion category is "happy", the conversation message is displayed in a target message style that corresponds to the "happy" emotion category and has a target display style corresponding to the song type; that is, the conversation message is displayed in a message style with a bee expression pattern corresponding to the happy emotion, and the bee expression pattern is presented in a flashing display style.
In some embodiments, the terminal may present a voice conversation message in the playing state as follows: when the conversation message is a voice conversation message, in response to a playing instruction for the voice conversation message, playing the voice conversation message and, during its playback, playing an animation special effect associated with the expression pattern.
Here, the animation special effect may be a flashing expression pattern, a color change, a ripple or light-spot effect within the pattern, and the like; the embodiment of the present application imposes no limitation.
As an example, the terminal presents the voice conversation message in a message style with a bee expression pattern of a "happy" emotion. In response to a playing instruction for the voice conversation message, an animation special effect associated with the bee expression pattern is played during playback; for example, the bee's wings swing according to the sound characteristics (such as sound frequency and pitch) of the voice conversation message, with the swing frequency matched to those characteristics.
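One way to realize such a matching, purely as a sketch, is to map the instantaneous audio frequency into a wing-flap rate; the frequency range and flap-rate range below are assumptions, not values from the embodiment:

```typescript
// Maps an audio frequency (Hz) to a wing-flap rate (flaps per second).
// The assumed speech range (~80-400 Hz) is normalized into [minFlap, maxFlap].
function wingFlapHz(audioFrequencyHz: number, minFlap = 2, maxFlap = 12): number {
  const t = Math.min(Math.max((audioFrequencyHz - 80) / (400 - 80), 0), 1);
  return minFlap + t * (maxFlap - minFlap);
}
```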
In some embodiments, the terminal may present the message text corresponding to a voice message as follows: when the conversation message is a voice conversation message, receiving a text conversion instruction for the voice conversation message; in response to the text conversion instruction, presenting a text display area having the shape of the expression pattern; and presenting, in the text display area, the message text corresponding to the voice conversation message.
Here, when the conversation message is a voice conversation message and a text conversion instruction for it is received, a text display area having the shape of the expression pattern is presented in response, and the message text corresponding to the voice conversation message is presented through that area. The expression pattern is the one of the target display style corresponding to the voice conversation message; for example, if that pattern is a bee expression pattern, the text display area may take the shape of the bee expression pattern, and likewise the emotion category of the expression pattern is the target emotion category corresponding to the voice conversation message.
As an example, referring to fig. 14, fig. 14 is a schematic diagram of presenting the message text of a voice conversation message provided by an embodiment of the present application. Here, the terminal presents the voice conversation message in a message style with a bee expression pattern of a "happy" emotion, as shown in A in fig. 14. In response to a text conversion instruction for the voice conversation message, a text display area in the shape of the "happy" bee expression pattern is presented, and in it the message text corresponding to the voice conversation message, "You are happy, and you are easy to recognize!", is presented, as shown in B in fig. 14.
Taking conversation interaction through a game social client as an example, the terminal runs the game social client and presents, in its view interface, function entries for a plurality of game discussion rooms, each room corresponding to a game discussion topic. In response to a trigger operation on the function entry of a target game discussion room, the terminal enters that room and presents a corresponding conversation interface in which users who have entered the room interact.
When a conversation message to be displayed is received, the target emotion category corresponding to its message content is determined, and the conversation message is displayed in the conversation interface in a target message style corresponding to that category, the style carrying an expression pattern corresponding to the target emotion category. In practical applications, the expression pattern may be a game character, selected by the user, related to the discussion topic of the target game discussion room; the character has a plurality of different emotional expressions, each corresponding to an emotion category, such as happiness, sadness, anger, and calmness.
By applying the embodiment of the present application, after a conversation message to be displayed is received, it is displayed in a target message style corresponding to the target emotion category, where the target emotion category corresponds to the message content of the conversation message and the target message style carries an expression pattern corresponding to that category. Displaying the conversation message in this way lets the user quickly grasp the emotion of the conversation object, makes conversation interaction more interesting and enjoyable, and improves the user experience.
An exemplary application of the embodiments of the present application in a practical application scenario will be described below.
As shown in fig. 15, fig. 15 is a presentation diagram of a conversation message provided in the related art. Here, the terminal displays the voice conversation message in the chat page as a plain, generic voice bubble: the user cannot learn anything about the message before playing it, the appearance is uniform and uninteresting, and the bubble conveys little information.
On this basis, embodiments of the present application provide a method for processing a conversation message that addresses at least these problems. Next, the method is described taking the conversation message being a voice conversation message as an example.
In the embodiment of the present application, the emotion category corresponding to the message content is determined from the voice message sent by the user, and a chat bubble with the corresponding expression pattern (such as a bee) is drawn according to the duration of the voice message, its volume in decibels, and whether the user is singing. The sender's emotion is thus conveyed through multiple dimensions, including the expression pattern, the variation frequency of the pattern's ripple, the bubble size, and the bubble color, so that the receiver can gauge the sender's emotion in advance; at the same time the bubbles gain visual variety, improving the playability of voice bubbles.
First, taking the emotional expression of a bee image as an example, the method for processing a conversation message provided in the embodiment of the present application is described at the product level.
When the user enters a voice conversation message through the voice entry function item, the terminal performs emotion recognition on the entered voice in real time and presents, in the voice entry interface, an expression pattern of the bee image's emotional expression corresponding to the recognized emotion category, as shown in fig. 6.
In other embodiments, the automatic emotion recognition mode may be turned off in the mode setting page for the emotion recognition mode, switching to an emotion self-selection mode. In that case, when the user enters a voice conversation message through the voice entry function item, a plurality of emotion categories are presented in the voice entry interface for selection, so that the user can choose the appropriate category as needed, as shown in fig. 8.
In practical applications, the expression pattern of the bee image may be displayed as follows. Referring to fig. 16, fig. 16 is a schematic view of the expression pattern of the bee image provided in the embodiment of the present application. The expression pattern presented during voice entry is determined from four dimensions:
(1) Emotion category of the user's voice message: the user's four emotion categories of calm, anger, sadness, and joy are conveyed through the bee's expression and also through the rhythm of the ripple on the bee's abdomen, a different ripple rhythm corresponding to each of the four emotions, as shown in the A, B, C, and D diagrams in fig. 16.
(2) Recording duration of voice message: the recording duration of the voice message is expressed by the length of the bee.
(3) Voice volume of voice message: the volume of the voice message is distinguished by the size of the bee.
(4) Message content of a target type (such as singing or poetry): a gradual change of the bee's body color conveys that the user is singing; the expression stays fixed while the color and ripple change, as shown in the E diagram in fig. 16.
Continuing, when the terminal receives a voice conversation message sent by the conversation object or by the user themselves, the voice conversation message is presented with the expression pattern that corresponded to it at entry, as shown in fig. 8. When the voice conversation message is in the playing state, the ripple vibration frequency, expression changes, and color changes of the bee image correspond one-to-one with those in the recording state. The specific changes of the bee image are shown in fig. 17, which is a schematic diagram of bee expression patterns corresponding to different voice conversation messages provided by an embodiment of the present application. Here: (1) for the same conversation message, the bee expression in the voice entry interface is the same as the bee expression in the message bubble, and so is the ripple vibration; (2) the length of the bee expression matches the voice duration; (3) the size of the bee expression matches the voice volume; (4) when the message content of the voice conversation message is of a target type (such as singing or poetry) and the message is playing, the colors of the bee expression cycle in a preset order.
In practical applications, the bee image of the above expression pattern may be replaced with other images, such as a seal or a butterfly, combined with ripple and expression patterns; the embodiment of the present application imposes no limitation.
Next, referring to fig. 18, fig. 18 is a flowchart of a method for processing a conversation message according to an embodiment of the present application; the method is now described at the technical level.
Step 201: is a trigger operation for a voice entry function item received? If not, go to step 202; if yes, go to step 203.
Here, it is determined whether the user performs a long-press operation on the "hold to talk" voice entry function item. When the user presses and holds the "hold to talk" button, the user's recording is captured and uploaded to the server.
Step 202: no data need be recorded.
Step 203: is the emotion automatic recognition mode turned on? If yes, go to step 204; if not, go to step 205.
Step 204: is the message content entered based on the voice-entry function term attributed to the target type? If yes, go to step 206; if not, go to step 207.
Step 205: is the message content entered based on the voice-entry function term attributed to the target type? If yes, go to step 206; if not, go to step 208.
Here, the target type in steps 204 and 205 is a type of message content, such as songs or poems.
Step 206: recording the expression pattern corresponding to the conversation message as a color-changing bee image.
Here, in practical applications, the expression pattern itself may stay fixed while its color changes cyclically according to the presentation order of the different colors.
Step 207: recording voice frequency and tone fluctuation of the conversation message, and judging the emotion type corresponding to the conversation message.
In practical applications, the emotion categories and their corresponding expression patterns include the following (see the sketch after this list):
Calm emotion: light ripple, calm expression
Anger emotion: heavy-amplitude ripple, angry expression
Sad emotion: slow-amplitude ripple, sad expression
Joyful emotion: bouncy-amplitude ripple, happy expression
Step 208: and presenting the expression patterns corresponding to the emotion categories of the session messages.
Here, the expression pattern is a bee image; in practical implementation, the abdominal ripple of the bee corresponding to the emotion category may be presented when displaying the conversation message.
In practical implementation, if one voice conversation message corresponds to multiple emotion categories, the expression pattern changes accordingly, for example by displaying expression patterns of different colors for the different emotion categories.
Step 209: recording the voice recording duration of the conversation message.
Here, in practical applications, a shortest and a longest bubble length are set, and bubbles grow upward from the shortest length. The span between the shortest and longest bubble lengths is divided into 20 equal units, and the message bubble grows by one unit for every additional 3 seconds of recording.
Specifically, if the longest chat bubble length determined by screen adaptation is X and the shortest bubble length is Y, then (X - Y) / 20 is the unit-length increase per 3 seconds.
Step 210: the voice sound decibels of the conversation message are recorded.
Here, in practical applications, the expression pattern corresponding to a conversation message with a volume of 20-40 dB is a first bee image; with a volume of 41-60 dB, a second bee image; and with a volume above 61 dB, a third bee image, where the first bee image is smaller than the second bee image, and the second is smaller than the third.
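Taken together, steps 209 and 210 amount to the following sketch; clamping the growth at 20 units and the handling of the 60-61 dB boundary are assumptions where the text leaves them open:

```typescript
// Step 209: bubble length grows by one unit, (X - Y) / 20, per 3 s of recording,
// starting from the shortest length Y and capped at the longest length X.
function bubbleLength(durationSec: number, longestX: number, shortestY: number): number {
  const unit = (longestX - shortestY) / 20;                // unit-length increase per 3 s
  const units = Math.min(Math.floor(durationSec / 3), 20); // cap at the longest bubble
  return shortestY + units * unit;
}

// Step 210: the volume in decibels selects one of the three bee image sizes.
function beeImageForVolume(volumeDb: number): "first" | "second" | "third" {
  if (volumeDb <= 40) return "first";  // 20-40 dB: smallest bee image
  if (volumeDb <= 60) return "second"; // 41-60 dB: medium bee image
  return "third";                      // above 61 dB per the text: largest bee image
}
```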
Step 211: and uploading the recorded data to a server.
Here, the data includes the entered voice conversation message and its related data (such as the voice recording duration, sound decibels, and emotion category).
Step 212: the receiving end downloads the data of the server and draws a corresponding message bubble to be presented in the session interface, so that the session message is presented through the message bubble.
By applying the embodiment of the present application, the expression pattern, such as that of the bee image, changes with the emotion carried in the voice sound wave. The receiver can quickly judge the emotion contained in a voice message from information such as the color and appearance of the message bubble, and the bubble's expression also changes while the voice conversation message is playing. Voice emotion can thus be judged in advance, the chat content becomes more enjoyable, and message bubbles are no longer uniform but visually varied, improving the playability of voice bubbles.
Continuing with the description of the processing device 555 for session messages provided in the embodiments of the present application, in some embodiments, the processing device for session messages may be implemented by using software modules. Referring to fig. 19, fig. 19 is a schematic structural diagram of a processing device 555 for a session message provided in an embodiment of the present application, where the processing device 555 for a session message provided in an embodiment of the present application includes:
a presentation module 5551 for presenting a session interface for conducting a message session;
a receiving module 5552, configured to receive a conversation message to be displayed, where a message content of the conversation message corresponds to a target emotion category;
a display module 5553, configured to display, in the conversation interface, the conversation message in a target message style corresponding to the target emotion category;
wherein the target message style has an emoji pattern corresponding to the target emotion classification.
In some embodiments, the receiving module 5552 is further configured to present a message sending function item and a message editing box in the session interface;
presenting the edited text conversation message in the message edit box, and presenting at least one candidate expression in an associated area of the message edit box, wherein each candidate expression corresponds to an expression pattern;
responding to the selection operation of a target expression in the at least one candidate expression, and taking an expression pattern corresponding to the target expression as an expression pattern of the target message style;
receiving the session message in response to a trigger operation for the messaging function item.
In some embodiments, the apparatus further comprises:
the emotion recognition module is used for carrying out emotion recognition on the message content of the conversation message in real time in the process of receiving the conversation message;
when the emotion category corresponding to the message content is identified as the target emotion category, presenting the emotional expression corresponding to the target emotion category;
and taking the expression pattern corresponding to the emotional expression as the expression pattern of the target message style.
In some embodiments, the apparatus further comprises:
the emotion recognition mode setting module is used for presenting a mode setting interface for setting an emotion recognition mode of the conversation message;
receiving a setting operation for an emotion automatic recognition mode triggered based on the mode setting interface;
setting an emotion recognition mode of the conversation message to an emotion automatic recognition mode in response to the setting operation;
correspondingly, the emotion recognition module is further configured to acquire an emotion recognition mode of the session message;
and when the emotion recognition mode of the conversation message is determined to be the emotion automatic recognition mode, performing emotion recognition on the message content of the conversation message in real time.
In some embodiments, the receiving module 5552 is further configured to present a voice function entry in the session interface;
responding to the trigger operation aiming at the voice function inlet, presenting a voice input interface, and presenting a voice input function item in the voice input interface;
and responding to a voice input operation triggered based on the voice input function item, receiving the voice conversation message, and presenting an expression pattern corresponding to the target emotion category in the voice input interface.
In some embodiments, the presentation module 5551 is further configured to present at least two emotion categories for selection;
in response to an emotion category selection operation triggered based on the at least two emotion categories, treating the selected emotion category as the target emotion category.
In some embodiments, the display module 5553 is further configured to present an emotion recognition switch in the conversation interface;
when the switch state of the emotion recognition switch is an on state, displaying the conversation message by adopting a target message style corresponding to the target emotion type;
the switch state comprises an opening state and a closing state.
In some embodiments, the display module 5553 is further configured to receive a state switching instruction for the emotion recognition switch;
responding to the state switching instruction, and controlling the emotion recognition switch to be switched from the on state to the off state;
and when receiving a new session message to be displayed, adopting a default message style to display the new session message.
In some embodiments, the apparatus further comprises:
the expression pattern selection module is used for presenting an expression pattern selection interface and presenting at least two types of candidate expression patterns for selection in the expression pattern selection interface;
the number of each type of candidate expression patterns is at least two, and each candidate expression pattern corresponds to one emotion category;
responding to an expression pattern selection operation triggered based on the at least two types of candidate expression patterns, and taking the selected type of candidate expression patterns as the expression patterns of the target message style;
correspondingly, the display module is further configured to obtain a candidate expression pattern corresponding to the target emotion category in the selected type of candidate expression patterns;
and displaying the conversation message by adopting the target message style with the acquired candidate expression pattern.
In some embodiments, the display module 5553 is further configured to present the conversation message in a target message style with an emoticon having a target color;
or, the conversation message is displayed by adopting a target message style with an expression pattern with at least two colors, wherein the at least two colors are cyclically changed according to the corresponding presentation sequence.
In some embodiments, the display module 5553 is further configured to present the conversation message in a target message style corresponding to the target emotion category and having a target size when the conversation message is a voice conversation message;
the target size is matched with voice parameters of the voice conversation message, and the voice parameters comprise at least one of voice duration and voice volume.
In some embodiments, the display module 5553 is further configured to present the conversation message in a target message style corresponding to the target emotion category and having a target display style;
wherein the target display style corresponds to a type of the message content.
In some embodiments, the apparatus further comprises:
a message playing module, configured to, when the session message is a voice session message, respond to a playing instruction for the voice session message, play the voice session message, and
and playing the animation special effect associated with the expression pattern in the playing process of the voice conversation message.
In some embodiments, the presentation module 5551 is further configured to receive a text conversion instruction for the voice conversation message when the conversation message is a voice conversation message;
presenting a text display area having a shape of the emoticon in response to the text conversion instruction;
and presenting the message text corresponding to the voice conversation message in the text display area.
In some embodiments, the emotion recognition module is further configured to extract a message feature of the message content when the conversation message is a voice conversation message, the message feature including at least one of a semantic feature and a voice feature;
and determining a target emotion category corresponding to the conversation message based on the message characteristics.
In some embodiments, the emotion recognition module is further configured to extract message content included in the conversation message;
acquiring a first emotion keyword of the message content and a second emotion keyword corresponding to each preset emotion category;
matching the first emotion keywords with the second emotion keywords respectively to obtain the matching degree of the message content and each emotion category;
and determining the emotion category corresponding to the highest matching degree, wherein the emotion category is the target emotion category corresponding to the message content of the session message.
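As a purely illustrative sketch of this keyword matching (the keyword lists, the overlap-based matching degree, and all names are assumptions of the sketch, not the patent's specified algorithm):

```typescript
// Preset second emotion keywords for each emotion category (illustrative lists).
const CATEGORY_KEYWORDS: Record<string, string[]> = {
  joy:     ["happy", "great", "haha"],
  sadness: ["sad", "cry", "upset"],
  anger:   ["angry", "furious", "annoyed"],
  calm:    ["ok", "fine", "alright"],
};

// Matches the message's first emotion keywords against each category's preset
// keywords and returns the category with the highest matching degree.
function targetEmotionFor(messageKeywords: string[]): string {
  let best = "calm";
  let bestDegree = -1;
  for (const [category, keywords] of Object.entries(CATEGORY_KEYWORDS)) {
    // Matching degree here: fraction of the message's keywords found in the category's list.
    const hits = messageKeywords.filter((k) => keywords.includes(k)).length;
    const degree = messageKeywords.length > 0 ? hits / messageKeywords.length : 0;
    if (degree > bestDegree) {
      bestDegree = degree;
      best = category;
    }
  }
  return best; // emotion category with the highest matching degree
}
```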
By applying the embodiment of the present application, after a conversation message to be displayed is received, it is displayed in a target message style corresponding to the target emotion category, where the target emotion category corresponds to the message content of the conversation message and the target message style carries an expression pattern corresponding to that category. Displaying the conversation message in this way lets the user quickly grasp the emotion of the conversation object, makes conversation interaction more interesting and enjoyable, and improves the user experience.
An embodiment of the present application further provides an electronic device, where the electronic device includes:
a memory for storing executable instructions;
and the processor is used for realizing the processing method of the session message provided by the embodiment of the application when the executable instruction stored in the memory is executed.
Embodiments of the present application also provide a computer program product or a computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the conversation message processing method provided by the embodiment of the application.
The embodiment of the present application further provides a computer-readable storage medium, which stores executable instructions, and when the executable instructions are executed by a processor, the method for processing the session message provided in the embodiment of the present application is implemented.
In some embodiments, the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, an optical disc, or CD-ROM; or it may be any of various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a Hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (19)

1. A method for processing a session message, the method comprising:
presenting a session interface for conducting a message session;
receiving a conversation message to be displayed, wherein the message content of the conversation message corresponds to a target emotion category;
displaying the conversation message in the conversation interface by adopting a target message style corresponding to the target emotion category;
wherein the target message style has an emoji pattern corresponding to the target emotion classification.
2. The method of claim 1, wherein when the conversation message is a text conversation message, the receiving a conversation message to be displayed comprises:
presenting a message sending function item and a message editing frame in the session interface;
presenting the edited text conversation message in the message edit box, and presenting at least one candidate expression in an associated area of the message edit box, wherein each candidate expression corresponds to an expression pattern;
responding to the selection operation of a target expression in the at least one candidate expression, and taking an expression pattern corresponding to the target expression as an expression pattern of the target message style;
receiving the session message in response to a trigger operation for the messaging function item.
3. The method of claim 1, wherein the method further comprises:
performing emotion recognition on the message content of the conversation message in real time in the process of receiving the conversation message;
when the emotion category corresponding to the message content is identified as the target emotion category, presenting the emotional expression corresponding to the target emotion category;
and taking the expression pattern corresponding to the emotional expression as the expression pattern of the target message style.
4. The method of claim 3, wherein prior to presenting a conversation interface for conducting a message conversation, the method further comprises:
presenting a mode setting interface for setting an emotion recognition mode of the conversation message;
receiving a setting operation for an emotion automatic recognition mode triggered based on the mode setting interface;
setting an emotion recognition mode of the conversation message to an emotion automatic recognition mode in response to the setting operation;
correspondingly, the emotion recognition of the message content of the conversation message in real time includes:
acquiring an emotion recognition mode of the session message;
and when the emotion recognition mode of the conversation message is determined to be the emotion automatic recognition mode, performing emotion recognition on the message content of the conversation message in real time.
5. The method of claim 1, wherein when the conversation message is a voice conversation message, the receiving the conversation message to be displayed comprises:
presenting a voice function entry in the conversation interface;
responding to the trigger operation aiming at the voice function inlet, presenting a voice input interface, and presenting a voice input function item in the voice input interface;
and responding to a voice input operation triggered based on the voice input function item, receiving the voice conversation message, and presenting an expression pattern corresponding to the target emotion category in the voice input interface.
6. The method of claim 1, wherein after receiving the conversation message to be displayed, the method further comprises:
presenting at least two emotion categories for selection;
in response to an emotion category selection operation triggered based on the at least two emotion categories, treating the selected emotion category as the target emotion category.
7. The method of claim 1, wherein said presenting the conversation message in a targeted message style corresponding to the targeted emotion classification comprises:
presenting an emotion recognition switch in the session interface;
when the switch state of the emotion recognition switch is an on state, displaying the conversation message by adopting a target message style corresponding to the target emotion type;
the switch state comprises an opening state and a closing state.
8. The method of claim 7, wherein the method further comprises:
receiving a state switching instruction for the emotion recognition switch;
responding to the state switching instruction, and controlling the emotion recognition switch to be switched from the on state to the off state;
and when receiving a new session message to be displayed, adopting a default message style to display the new session message.
9. The method of claim 1, wherein prior to presenting a conversation interface for conducting a message conversation, the method further comprises:
presenting an expression pattern selection interface, and presenting at least two types of candidate expression patterns for selection in the expression pattern selection interface;
the number of each type of candidate expression patterns is at least two, and each candidate expression pattern corresponds to one emotion category;
responding to an expression pattern selection operation triggered based on the at least two types of candidate expression patterns, and taking the selected type of candidate expression patterns as the expression patterns of the target message style;
correspondingly, the displaying the conversation message in the target message style corresponding to the target emotion category includes:
obtaining candidate expression patterns corresponding to the target emotion category in the selected candidate expression patterns;
and displaying the conversation message by adopting the target message style with the acquired candidate expression pattern.
10. The method of claim 1, wherein said presenting the conversation message in a targeted message style corresponding to the targeted emotion classification comprises:
displaying the conversation message by adopting a target message style of an expression pattern with a target color;
or, the conversation message is displayed by adopting a target message style with an expression pattern with at least two colors, wherein the at least two colors are cyclically changed according to the corresponding presentation sequence.
11. The method of claim 1, wherein said presenting the conversation message in a targeted message style corresponding to the targeted emotion classification comprises:
when the conversation message is a voice conversation message, adopting a target message style which corresponds to the target emotion category and has a target size to display the conversation message;
the target size is matched with voice parameters of the voice conversation message, and the voice parameters comprise at least one of voice duration and voice volume.
12. The method of claim 1, wherein said presenting the conversation message in a targeted message style corresponding to the targeted emotion classification comprises:
displaying the conversation message in a target message style which corresponds to the target emotion category and has a target display style;
wherein the target display style corresponds to a type of the message content.
13. The method of claim 1, wherein after presenting the conversation message in a targeted message style corresponding to the targeted emotion classification, the method further comprises:
when the conversation message is a voice conversation message, responding to a playing instruction aiming at the voice conversation message, playing the voice conversation message, and
and playing the animation special effect associated with the expression pattern in the playing process of the voice conversation message.
14. The method of claim 1, wherein after presenting the conversation message in a targeted message style corresponding to the targeted emotion classification, the method further comprises:
when the conversation message is a voice conversation message, receiving a text conversion instruction aiming at the voice conversation message;
presenting a text display area having a shape of the emoticon in response to the text conversion instruction;
and presenting the message text corresponding to the voice conversation message in the text display area.
15. The method of claim 1, wherein after receiving the conversation message to be displayed, the method further comprises:
when the conversation message is a voice conversation message, extracting message characteristics of the message content, wherein the message characteristics comprise at least one of semantic characteristics and sound characteristics;
and determining a target emotion category corresponding to the conversation message based on the message characteristics.
16. The method of claim 1, wherein after receiving the conversation message to be displayed, the method further comprises:
extracting message content contained in the session message;
acquiring a first emotion keyword of the message content and a second emotion keyword corresponding to each preset emotion category;
matching the first emotion keywords with the second emotion keywords respectively to obtain the matching degree of the message content and each emotion category;
and determining the emotion category corresponding to the highest matching degree, wherein the emotion category is the target emotion category corresponding to the message content of the session message.
17. An apparatus for processing a session message, the apparatus comprising:
the presentation module is used for presenting a session interface for carrying out message session;
the receiving module is used for receiving a conversation message to be displayed, wherein the message content of the conversation message corresponds to the target emotion category;
the display module is used for displaying the conversation message in the conversation interface by adopting a target message style corresponding to the target emotion category;
wherein the target message style has an emoji pattern corresponding to the target emotion classification.
18. An electronic device, characterized in that the electronic device comprises:
a memory for storing executable instructions;
a processor, configured to execute the executable instructions stored in the memory to implement the method for processing the session message according to any one of claims 1 to 16.
19. A computer-readable storage medium having stored thereon executable instructions for implementing a method of processing a conversational message as claimed in any one of claims 1 to 16 when executed.
CN202110218984.0A 2021-02-26 2021-02-26 Session message processing method and device, electronic equipment and storage medium Pending CN112883181A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110218984.0A CN112883181A (en) 2021-02-26 2021-02-26 Session message processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110218984.0A CN112883181A (en) 2021-02-26 2021-02-26 Session message processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112883181A true CN112883181A (en) 2021-06-01

Family

ID=76054774

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110218984.0A Pending CN112883181A (en) 2021-02-26 2021-02-26 Session message processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112883181A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107516533A (en) * 2017-07-10 2017-12-26 阿里巴巴集团控股有限公司 A kind of session information processing method, device, electronic equipment
CN109343919A (en) * 2018-08-30 2019-02-15 深圳市口袋网络科技有限公司 A kind of rendering method and terminal device, storage medium of bubble of chatting
CN109547332A (en) * 2018-11-22 2019-03-29 腾讯科技(深圳)有限公司 Communication session interaction method and device, and computer equipment
CN110187862A (en) * 2019-05-29 2019-08-30 北京达佳互联信息技术有限公司 Speech message display methods, device, terminal and storage medium
CN110417641A (en) * 2019-07-23 2019-11-05 上海盛付通电子支付服务有限公司 A kind of method and apparatus sending conversation message
CN110379430A (en) * 2019-07-26 2019-10-25 腾讯科技(深圳)有限公司 Voice-based cartoon display method, device, computer equipment and storage medium
CN111106995A (en) * 2019-12-26 2020-05-05 腾讯科技(深圳)有限公司 Message display method, device, terminal and computer readable storage medium

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116137617A (en) * 2021-11-17 2023-05-19 腾讯科技(深圳)有限公司 Expression pack display and associated sound acquisition methods, devices, equipment and storage medium
WO2023087888A1 (en) * 2021-11-17 2023-05-25 腾讯科技(深圳)有限公司 Emoticon display and associated sound acquisition methods and apparatuses, device and storage medium
CN116137617B (en) * 2021-11-17 2024-03-22 腾讯科技(深圳)有限公司 Expression pack display and associated sound acquisition methods, devices, equipment and storage medium
CN114780190A (en) * 2022-04-13 2022-07-22 脸萌有限公司 Message processing method and device, electronic equipment and storage medium
CN114780190B (en) * 2022-04-13 2023-12-22 脸萌有限公司 Message processing method, device, electronic equipment and storage medium
WO2023200397A3 (en) * 2022-04-13 2023-12-28 脸萌有限公司 Message processing method and apparatus, electronic device, and storage medium
WO2024114162A1 (en) * 2022-11-29 2024-06-06 腾讯科技(深圳)有限公司 Animation processing method and apparatus, computer device, storage medium, and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40046468

Country of ref document: HK