CN109039851B - Interactive data processing method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN109039851B
Authority
CN
China
Prior art keywords
real-time
asynchronous message
virtual session
expression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710438781.6A
Other languages
Chinese (zh)
Other versions
CN109039851A (en)
Inventor
李斌
陈晓波
罗程
陈郁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201710438781.6A
Publication of CN109039851A
Application granted
Publication of CN109039851B

Classifications

    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
          • H04L51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
            • H04L51/04 Real-time or near real-time messaging, e.g. instant messaging [IM]
            • H04L51/07 User-to-user messaging characterised by the inclusion of specific contents
              • H04L51/10 Multimedia information
          • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
            • H04L65/1066 Session management
              • H04L65/1083 In-session procedures

Abstract

The invention relates to an interactive data processing method, an interactive data processing apparatus, a computer device, and a storage medium, wherein the method comprises the following steps: performing real-time interaction through virtual session members in a virtual session scene; acquiring a trigger instruction of an asynchronous message; when the asynchronous message acquisition mode corresponding to the trigger instruction is mutually exclusive with the real-time interaction, interrupting the real-time interaction mutually exclusive with the asynchronous message acquisition mode; acquiring, according to the trigger instruction, an asynchronous message to be played in the virtual session scene; sending the acquired asynchronous message; and resuming the interrupted real-time interaction. Real-time interaction and asynchronous messages are thereby integrated, so that a user can increase the amount of interaction information by sending asynchronous messages during real-time interactive communication.

Description

Interactive data processing method and device, computer equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an interactive data processing method, an interactive data processing apparatus, a computer device, and a storage medium.
Background
With the rapid development of science and technology, communication technologies have become increasingly advanced, and people's requirements for forms of communication have become increasingly diverse. Among current modes of interactive communication, real-time interactive communication is popular with users owing to its timeliness and good interactivity.
Real-time interactive communication requires that both the information sender and the information receiver remain online, and after sending information the sender must wait synchronously for the receiver's response.
However, because current real-time interactive communication transmits and receives data in real time, each piece of real-time interaction information carries a relatively small amount of information in order to preserve real-time performance.
Disclosure of Invention
In view of the above, it is necessary to provide an interactive data processing method, an apparatus, a computer device, and a storage medium that address the problem that the interaction information of current real-time interactive communication carries a relatively small amount of information.
A method of interactive data processing, the method comprising:
real-time interaction is carried out through virtual session members in a virtual session scene;
acquiring a trigger instruction of an asynchronous message;
when the asynchronous message acquisition mode corresponding to the trigger instruction is mutually exclusive with the real-time interaction, interrupting the real-time interaction mutually exclusive with the asynchronous message acquisition mode;
acquiring asynchronous messages for playing in the virtual session scene according to the trigger instruction;
sending the obtained asynchronous message;
resuming the interrupted real-time interaction.
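The sender-side steps above can be sketched in Python; all names here (`RealTimeSession`, `handle_trigger`, and so on) are illustrative assumptions, not identifiers from the patent:

```python
# Hypothetical sketch of the sender-side flow; class and function
# names are illustrative, not from the patent text.

class RealTimeSession:
    """Represents ongoing real-time interaction in a virtual session scene."""
    def __init__(self):
        self.active = True

    def interrupt(self):
        self.active = False  # pause the interaction; do not terminate it

    def resume(self):
        self.active = True


def handle_trigger(session, acquisition_is_exclusive, acquire, send):
    """Insert asynchronous-message handling into ongoing real-time interaction."""
    if acquisition_is_exclusive:
        session.interrupt()   # pause the conflicting real-time interaction
    message = acquire()       # e.g. record voice or capture an image
    send(message)             # deliver the message for playback in the scene
    if acquisition_is_exclusive:
        session.resume()      # restore the interrupted interaction
    return message
```

Note that the interruption is a pause rather than a teardown: the session object survives and `resume` simply re-enables it after the asynchronous message has been sent.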
An interactive data processing apparatus, the apparatus comprising:
the real-time interaction module is used for carrying out real-time interaction through the virtual session members in the virtual session scene;
the instruction acquisition module is used for acquiring a trigger instruction of the asynchronous message;
a mutual exclusion processing module, configured to interrupt the real-time interaction mutually exclusive with the asynchronous message acquisition mode when the asynchronous message acquisition mode corresponding to the trigger instruction is mutually exclusive with the real-time interaction;
the asynchronous message acquisition module is used for acquiring asynchronous messages played in the virtual session scene according to the trigger instruction;
the sending module is used for sending the acquired asynchronous message;
the mutual exclusion processing module is further configured to resume the interrupted real-time interaction.
A computer device comprising a memory and a processor, the memory having stored therein computer-readable instructions that, when executed by the processor, cause the processor to perform the steps of:
real-time interaction is carried out through virtual session members in a virtual session scene;
acquiring a trigger instruction of an asynchronous message;
when the asynchronous message acquisition mode corresponding to the trigger instruction is mutually exclusive with the real-time interaction, interrupting the real-time interaction mutually exclusive with the asynchronous message acquisition mode;
acquiring asynchronous messages for playing in the virtual session scene according to the trigger instruction;
sending the obtained asynchronous message;
resuming the interrupted real-time interaction.
A storage medium storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of:
real-time interaction is carried out through virtual session members in a virtual session scene;
acquiring a trigger instruction of an asynchronous message;
when the asynchronous message acquisition mode corresponding to the trigger instruction is mutually exclusive with the real-time interaction, interrupting the real-time interaction mutually exclusive with the asynchronous message acquisition mode;
acquiring asynchronous messages for playing in the virtual session scene according to the trigger instruction;
sending the obtained asynchronous message;
resuming the interrupted real-time interaction.
According to the interactive data processing method, the interactive data processing apparatus, the computer device, and the storage medium, real-time interaction is performed through virtual session members in a virtual session scene; a trigger instruction of an asynchronous message is acquired; when the asynchronous message acquisition mode corresponding to the trigger instruction is mutually exclusive with the real-time interaction, the real-time interaction mutually exclusive with the asynchronous message acquisition mode is interrupted; an asynchronous message to be played in the virtual session scene is acquired according to the trigger instruction; the acquired asynchronous message is sent; and the interrupted real-time interaction is resumed. By inserting the sending of asynchronous messages into the real-time interaction, real-time interaction and asynchronous messages are integrated, and a user can increase the amount of interaction information by sending asynchronous messages during real-time interactive communication.
A method of interactive data processing, the method comprising:
real-time interaction is carried out through virtual session members in a virtual session scene;
receiving an asynchronous message;
when the playing mode of the asynchronous message is mutually exclusive with the real-time interaction, interrupting the real-time interaction mutually exclusive with the playing mode;
playing the asynchronous message in the virtual session scene;
resuming the interrupted real-time interaction after the asynchronous message has been played.
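A minimal sketch of this receiver-side flow, using hypothetical names (`PlaybackSession`, `on_asynchronous_message`) that do not come from the patent:

```python
# Hypothetical receiver-side sketch; names are illustrative,
# not from the patent text.

class PlaybackSession:
    """Receiver-side state for a virtual session scene."""
    def __init__(self):
        self.realtime_active = True   # real-time interaction in progress
        self.played = []              # messages played in the scene

    def on_asynchronous_message(self, message, playback_is_exclusive):
        if playback_is_exclusive:
            self.realtime_active = False   # interrupt the conflicting interaction
        self.played.append(message)        # play the message in the scene
        if playback_is_exclusive:
            self.realtime_active = True    # resume once playback finishes
```

As on the sender side, the real-time interaction is only paused for the duration of playback and is re-enabled immediately afterwards; non-exclusive messages play without touching the real-time state.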
An interactive data processing apparatus, the apparatus comprising:
the real-time interaction module is used for carrying out real-time interaction through the virtual session members in the virtual session scene;
a receiving module for receiving an asynchronous message;
a mutual exclusion processing module, configured to interrupt the real-time interaction mutually exclusive to the playing mode when the playing mode of the asynchronous message is mutually exclusive to the real-time interaction;
the asynchronous message playing module is used for playing the asynchronous message in the virtual session scene;
the mutual exclusion processing module is further configured to resume the interrupted real-time interaction after the asynchronous message has been played.
A computer device comprising a memory and a processor, the memory having stored therein computer-readable instructions that, when executed by the processor, cause the processor to perform the steps of:
real-time interaction is carried out through virtual session members in a virtual session scene;
receiving an asynchronous message;
when the playing mode of the asynchronous message is mutually exclusive with the real-time interaction, interrupting the real-time interaction mutually exclusive with the playing mode;
playing the asynchronous message in the virtual session scene;
resuming the interrupted real-time interaction after the asynchronous message has been played.
A storage medium storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of:
real-time interaction is carried out through virtual session members in a virtual session scene;
receiving an asynchronous message;
when the playing mode of the asynchronous message is mutually exclusive with the real-time interaction, interrupting the real-time interaction mutually exclusive with the playing mode;
playing the asynchronous message in the virtual session scene;
resuming the interrupted real-time interaction after the asynchronous message has been played.
According to the interactive data processing method, the interactive data processing apparatus, the computer device, and the storage medium, real-time interaction is performed through virtual session members in a virtual session scene; an asynchronous message is received; when the playing mode of the asynchronous message is mutually exclusive with the real-time interaction, the real-time interaction mutually exclusive with the playing mode is interrupted; the asynchronous message is played in the virtual session scene; and the interrupted real-time interaction is resumed after the asynchronous message has been played. By inserting the playing of asynchronous messages into the real-time interaction, real-time interaction and asynchronous messages are integrated, and a user can increase the amount of interaction information through asynchronous messages played during real-time interactive communication.
Drawings
FIG. 1 is a diagram of an application environment of a method for interactive data processing in one embodiment;
FIG. 2 is a schematic diagram showing an internal configuration of a computer device according to an embodiment;
FIG. 3 is a flowchart illustrating a method for interactive data processing according to an embodiment;
FIG. 4A is a diagram illustrating a virtual session scenario interface for real-time interaction in one embodiment;
FIG. 4B is a diagram illustrating an interface for obtaining asynchronous messages, according to an embodiment;
FIG. 5 is an architecture diagram illustrating an implementation of the interactive data processing method in one embodiment;
FIG. 6 is a schematic diagram of an interface for playing asynchronous messages in one embodiment;
FIGS. 7A-7B are schematic diagrams of interfaces for playing asynchronous messages in another embodiment;
FIG. 8 is a schematic flow chart diagram illustrating the steps of real-time interaction in one embodiment;
FIG. 9 is a timing diagram illustrating an implementation of the interactive data processing method in one embodiment;
FIG. 10 is a flowchart illustrating an interactive data processing method according to another embodiment;
FIG. 11 is a flowchart illustrating an interactive data processing method according to another embodiment;
FIG. 12 is a flowchart illustrating a preset interaction playback step according to an embodiment;
FIG. 13 is a block diagram showing an arrangement of an interactive data processing apparatus according to an embodiment;
FIG. 14 is a block diagram showing the construction of an interactive data processing apparatus according to another embodiment;
FIG. 15 is a block diagram showing a configuration of an interactive data processing apparatus according to still another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
FIG. 1 is a diagram of an application environment of a method for interactive data processing in one embodiment. Referring to fig. 1, an application environment of the interactive data processing method includes a first terminal 110, a second terminal 120, and a server 130. The first terminal 110 and the second terminal 120 have an application program installed therein, and the first terminal 110 and the second terminal 120 may be configured to send an asynchronous message and receive an asynchronous message. The server 130 may be an independent physical server or a server cluster including a plurality of physical servers. The server 130 may include an open service platform and may further include an access server to access the open service platform. The first terminal 110 and the second terminal 120 may be the same or different terminals. The terminal may be a mobile terminal, a desktop computer, or a vehicle-mounted device, and the mobile terminal may include at least one of a mobile phone, a tablet computer, a personal digital assistant, a wearable device, and the like.
In the virtual session scenario, the first terminal 110 may interact in real time, through its corresponding virtual session member, with the virtual session member corresponding to the second terminal 120, where the real-time interaction may include real-time voice interaction and/or real-time expression interaction. The first terminal 110 may obtain a trigger instruction of an asynchronous message and, when the asynchronous message acquisition mode corresponding to the trigger instruction is mutually exclusive with the real-time interaction, interrupt the real-time interaction mutually exclusive with the asynchronous message acquisition mode and acquire, according to the trigger instruction, an asynchronous message to be played in the virtual session scene. The first terminal 110 may send the acquired asynchronous message to the server 130 and resume the interrupted real-time interaction. Further, the server 130 may forward the asynchronous message to the second terminal 120, and the second terminal 120 may play the asynchronous message in the virtual session scene. There is at least one second terminal 120.
It is to be appreciated that in other embodiments, the first terminal 110 may send the asynchronous message directly to the second terminal 120 in a point-to-point manner without forwarding through the server 130.
FIG. 2 is a diagram showing an internal configuration of a computer device according to an embodiment. The computer devices may be the first terminal 110 and the second terminal 120 in fig. 1. Referring to fig. 2, the computer apparatus includes a processor, a non-volatile storage medium, an internal memory, a network interface, a display screen, and an input device, which are connected through a system bus. Among other things, the non-volatile storage medium of the computer device may store an operating system and computer readable instructions that, when executed, may cause a processor to perform an interactive data processing method. The processor of the computer device is used for providing calculation and control capability and supporting the operation of the whole computer device. The internal memory may have stored therein computer-readable instructions that, when executed by the processor, cause the processor to perform a method of interactive data processing. The network interface of the computer device is used for network communication, such as sending asynchronous messages and receiving asynchronous messages. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like. The touch layer and the display screen form a touch screen.
Those skilled in the art will appreciate that the architecture shown in fig. 2 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
FIG. 3 is a flowchart illustrating a method for interactive data processing according to an embodiment. The interactive data processing method may be applied to the first terminal 110 and/or the second terminal 120 in fig. 1. The embodiment is mainly illustrated by applying the method to the first terminal 110 in fig. 1. Referring to fig. 3, the method specifically includes the following steps:
s302, real-time interaction is carried out through virtual session members in a virtual session scene.
The virtual session scene is a conversation scene provided for virtual session members; members who join the virtual session scene are represented by virtual session member avatars when displayed. The virtual session scene can support both real-time communication and asynchronous communication.
In one embodiment, the virtual session scene may be a virtual room. The virtual session scene may be a three-dimensional virtual session scene or a two-dimensional virtual session scene. The virtual session scene may be created based on a session; specifically, it may be created based on a multi-person session (a session with three or more members) or based on a two-person session (a session with exactly two members).
A virtual session member is the avatar with which a member of the virtual session scene is displayed. It will be appreciated that this avatar is virtual and differs from a real portrait. Virtual session members include virtual character figures, and may also include avatars of animals, plants, or other things. A virtual session member may be a three-dimensional or a two-dimensional virtual session member, and may be a default avatar (e.g., a virtual session member initial model) or an avatar derived from the initial model in combination with user characteristics (e.g., user facial characteristics) and/or user-defined attributes (e.g., clothing attributes).
It is understood that in the virtual session scenario, a real-time communication connection has been established in advance between the first terminal and the second terminal to perform real-time interaction. Over this connection, the first terminal can trigger real-time interaction data through the virtual session members in the virtual session scene so as to interact with the second terminal in real time. The real-time interaction data includes at least one of real-time voice data, real-time expression data, and real-time interaction action data. Real-time interaction action data differs from real-time expression data; it may be generated by acquiring a preset real-time interaction action and is used to implement real-time interaction. A real-time interaction action may be a single-member action, performed by one virtual session member, or a multi-member action, performed jointly by two or more virtual session members. Real-time interaction actions include dancing, hugging, kissing, and the like.
In one embodiment, the first terminal may directly establish a real-time communication connection with the second terminal, or the first terminal and the second terminal may respectively establish a real-time communication connection with the server, and forward real-time interaction data therebetween in real time through the server, so as to implement real-time interaction. The real-time communication connection may be a connection established based on a UDP (User Datagram Protocol). In one embodiment, the first terminal and the second terminal can respectively establish real-time communication connection with a real-time interaction service program in the server.
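As a sketch of such a UDP-based real-time channel (the address and payload framing below are assumptions; a real implementation would add sequencing and loss handling):

```python
import socket

def send_realtime_frame(sock, payload: bytes, addr):
    """Send one frame of real-time interaction data (e.g. voice or
    expression data) as a single UDP datagram. UDP is connectionless
    and unacknowledged, favoring low latency over guaranteed delivery,
    which suits real-time interaction."""
    sock.sendto(payload, addr)
```

A receiver simply calls `recvfrom` on a bound datagram socket; if ordering or retransmission is needed it must be layered on top, which is why real-time media transports typically tolerate occasional loss rather than retransmit.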
S304, acquiring the trigger instruction of the asynchronous message.
The trigger instruction of the asynchronous message is used to trigger the acquisition and sending of the asynchronous message. An asynchronous message can be sent without requiring both parties to the interactive communication to be online at the same time, and after sending an asynchronous message the sender can continue sending the next message without waiting synchronously for the receiver's response.
The asynchronous message may be an instant messaging message, a short message, or the like. The asynchronous message may include a voice message or an expression message. An expression message is a message that can be used to obtain expression data, and expression data is data that can represent a corresponding expression action.
The asynchronous message may also include a preset interactive action message, a picture message, or a text message. It will be appreciated that an asynchronous message may also be a combination of the above, such as a voice expression message (a message containing both voice and an expression) or a text picture message (a message containing both text and a picture).
Specifically, the first terminal may run an application program having a virtual session scene, and display an application program interface. The user can perform asynchronous message triggering operation on the display interface of the application program, and the first terminal generates a triggering instruction of the asynchronous message in response to the asynchronous message triggering operation.
S306, when the asynchronous message acquisition mode corresponding to the trigger instruction is mutually exclusive with the real-time interaction, the real-time interaction mutually exclusive with the asynchronous message acquisition mode is interrupted.
The asynchronous message acquisition mode is a mode for acquiring asynchronous messages. The asynchronous message acquisition mode can comprise a voice acquisition mode, an image acquisition mode or an input (including selection input) mode and the like.
It is understood that the asynchronous message acquisition mode is determined according to the trigger instruction. Different types of asynchronous messages may be acquired in different ways. For example, a voice message may be acquired by voice capture, an expression message by image capture, and a text message by capturing input text. Trigger instructions for the same type of asynchronous message may also differ, so the corresponding acquisition modes may differ; for example, a picture message may be obtained by selecting an existing picture or by triggering the image capture device to take a photo.
Mutual exclusion means that two operations cannot proceed at the same time; when two parties need to use the same object, they have a mutually exclusive relationship. In this embodiment, when the asynchronous message acquisition mode needs to use an object that implements the real-time interaction, the asynchronous message acquisition mode is mutually exclusive with the real-time interaction. Objects that implement the real-time interaction include the real-time interaction data capture devices and/or the virtual session members.
In one embodiment, when the trigger instruction is an instruction for recording voice and/or an instruction for acquiring an image, the asynchronous message acquisition mode corresponding to the trigger instruction is voice acquisition and/or image acquisition, and the asynchronous acquisition mode is mutually exclusive with real-time interaction. It can be understood that the image obtained by image acquisition can be used for generating and outputting a picture (such as taking a picture), and can also be used for identifying expression features to obtain expression data.
Interrupting the real-time interaction that is mutually exclusive with the asynchronous message acquisition mode means suspending that real-time interaction and performing the acquisition of the asynchronous message instead. It is to be understood that the interruption does not completely end the real-time interaction; it is a pause that can be resumed later (for example, the interrupted real-time interaction can be resumed after the asynchronous message is sent). In one embodiment, the first terminal may suspend the real-time interaction by controlling the interface that implements it, and instead acquire the asynchronous message, thereby interrupting the real-time interaction.
It can be understood that the first terminal may carry out one or more kinds of real-time interaction through the virtual session members in the virtual session scene, and only the real-time interaction that is mutually exclusive with the asynchronous message acquisition mode is interrupted. In one embodiment, the real-time interaction includes real-time voice interaction and/or real-time expression interaction, where real-time expression interaction means transmitting the captured real-time expression data in real time.
For example, suppose the asynchronous message acquisition mode is voice capture, i.e., a voice capture device is needed to record voice data and generate an asynchronous voice message; real-time voice interaction also needs the voice capture device to collect and transmit voice data in real time, so the acquisition mode is mutually exclusive with real-time voice interaction. Because the acquisition mode does not need an image capture device, it is not mutually exclusive with real-time expression interaction.
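The device-based exclusivity test in this example can be expressed as a set intersection; the mode and device names below are illustrative assumptions, not terms from the patent:

```python
# Hypothetical mapping from interaction/acquisition modes to the
# capture devices they occupy; names are illustrative.
DEVICES_REQUIRED = {
    "voice_acquisition": {"microphone"},   # recording an asynchronous voice message
    "image_acquisition": {"camera"},       # capturing images for an expression message
    "real_time_voice": {"microphone"},     # real-time voice interaction
    "real_time_expression": {"camera"},    # real-time expression interaction
}

def is_mutually_exclusive(acquisition_mode: str, interaction: str) -> bool:
    """Two modes are mutually exclusive iff they need at least one common device."""
    return bool(DEVICES_REQUIRED[acquisition_mode] & DEVICES_REQUIRED[interaction])
```

With this mapping, voice acquisition conflicts with real-time voice interaction (both occupy the microphone) but not with real-time expression interaction, matching the example above.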
And S308, acquiring asynchronous messages for playing in the virtual session scene according to the triggering instruction.
In one embodiment, the first terminal may, according to the trigger instruction, collect images (such as head images) and/or record voice data, and generate the asynchronous message from the expression data obtained by recognizing expression features in the images and/or from the recorded voice data; here, expression data is data capable of representing a corresponding expression action. The first terminal may also take a picture according to the trigger instruction and generate an asynchronous message (i.e., a picture message) from the taken picture.
In an embodiment, when the first terminal collects and recognizes images according to the trigger instruction, it may focus the display on the virtual session member corresponding to the current user identifier in the displayed virtual session scene, and control that virtual session member to perform the expression action represented by the expression data. The current user identifier is the user identifier currently logged in to the first terminal through the application program that implements the virtual session scene. Focus display may mean highlighting the virtual session member corresponding to the current user identifier among the multiple virtual session members in the scene, or displaying only that virtual session member in the scene.
FIG. 4A is a diagram of a virtual session scene interface for real-time interaction in one embodiment. During real-time interaction, three virtual session members A, B, and C are shown in the virtual session scene. FIG. 4B is a diagram illustrating an interface for acquiring asynchronous messages in one embodiment. Virtual session member A in FIG. 4B corresponds to the current user identifier. The user may issue the trigger instruction of an expression message by pressing button i to collect expression data, and while expression data is being collected, only virtual session member A, corresponding to the current user identifier, is displayed in the virtual session scene (i.e., focus display).
The acquired asynchronous message is intended to be played in the virtual session scene. Playing means that the asynchronous message is embodied in the virtual session scene, in the form of voice playback and/or visual presentation. The presentation may be performed by controlling a virtual session member to perform a corresponding action, or by displaying the message in association with a virtual session member.
S310, sending the acquired asynchronous message.
Specifically, the first terminal may send the asynchronous message to terminals (hereinafter referred to as second terminals) corresponding to user identifiers other than the current user identifier in the virtual session scene. The second terminals may be all or some of these other terminals, and it can be understood that there is at least one second terminal. The current user identifier and the user identifiers corresponding to the second terminals each uniquely identify a corresponding member in the virtual session scene.
The first terminal may directly send the asynchronous message to the second terminal in a point-to-point manner, or the first terminal may send the asynchronous message to the server, so that the server forwards the asynchronous message to the second terminal in the virtual session scenario. In one embodiment, the first terminal may pre-establish an asynchronous message connection channel with an asynchronous message service program in the server (e.g., establish the asynchronous message connection channel in a TCP connection). Fig. 5 is an architecture diagram of an implementation of the interactive data processing method in one embodiment.
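The server-forwarding path described above can be sketched as a small envelope-and-fan-out routine. This is a minimal Python illustration, assuming invented field names and a JSON wire format; the patent itself only specifies that a pre-established channel (e.g., over a TCP connection) carries the message to the server for forwarding:

```python
import json

# Hypothetical sketch: the first terminal wraps an asynchronous message
# and the server forwards it to every other member of the virtual session.

def build_envelope(session_id: str, sender_id: str, payload: dict) -> str:
    """Terminal side: serialize the asynchronous message for the
    pre-established asynchronous message channel."""
    return json.dumps({
        "session": session_id,
        "sender": sender_id,
        "type": "async_message",
        "payload": payload,
    })

def forward_to_members(envelope: str, member_ids: list) -> dict:
    """Server side: deliver the envelope to all session members
    except the sender; returns {member_id: envelope}."""
    sender = json.loads(envelope)["sender"]
    return {m: envelope for m in member_ids if m != sender}
```

In a real deployment the returned mapping would drive per-connection writes on the server's side of each second terminal's channel.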
Further, the second terminal may play the asynchronous message in the virtual session scene. In one embodiment, the second terminal may obtain voice data and/or expression data according to the asynchronous message and play them in the virtual session scene.
In an embodiment, when voice data and expression data are obtained according to the asynchronous message, the second terminal may play the voice data in the virtual session scene and control the virtual session member corresponding to the expression data to trigger the expression action corresponding to the expression data. The virtual session member corresponding to the expression data is the one corresponding to the user identifier logged in at the first terminal. After receiving the asynchronous message, the second terminal may play it directly, or play it upon a corresponding trigger operation. In one embodiment, after receiving an asynchronous message, the second terminal displays an icon corresponding to the user identifier associated with the asynchronous message in the virtual session scene interface; upon receiving a trigger operation on the icon, it plays the voice data, focus-displays the virtual session member corresponding to that user identifier, and controls that member to trigger the expression action represented by the expression data. The icon may be an avatar icon of the corresponding virtual session member. It will be appreciated that playback of an asynchronous message may be triggered repeatedly.
FIG. 6 is a diagram illustrating an interface for playing asynchronous messages in one embodiment. Read together with FIG. 4A (the virtual session scene in the real-time interaction state before the asynchronous message is received) and FIG. 4B (the interface for recording the asynchronous message): virtual session member A displayed in FIG. 6 is the member corresponding to the asynchronous message (shown in the focus display state), and avatar icon a in FIG. 6 is the avatar icon of member A. Triggering avatar icon a plays the voice data and controls virtual session member A to trigger the expression action represented by the expression data.
S312, resuming the interrupted real-time interaction.
After resuming the interrupted real-time interaction, the first terminal may continue the previously interrupted real-time interaction through the virtual session members in the virtual session scene. For example, if real-time expression interaction was interrupted, then once it resumes the first terminal may continue real-time expression interaction with the second terminal through the virtual session members in the virtual session scene.
According to this interactive data processing method, the playing of asynchronous messages is inserted into real-time interaction, so that real-time interaction and asynchronous messaging are integrated, and playing asynchronous interactive information during a user's real-time interactive communication increases the amount of information conveyed.
In one embodiment, step S308 includes: acquiring a head image according to a trigger instruction; identifying expression characteristics in the head image to obtain expression data for controlling virtual session members in a virtual session scene to trigger expression actions; and generating an asynchronous message for playing in the virtual conversation scene according to the expression data.
Specifically, the user may initiate the triggering operation of the asynchronous message by pressing or clicking a button for triggering image acquisition, generating a corresponding asynchronous message trigger instruction. For example, button i in FIG. 4B may be used to trigger image acquisition.
The head image is image data obtained by imaging the head. The head image data may include face image data and head motion image data. Head motions include head-turning motions, such as lowering the head, raising the head, and turning left or right.
Specifically, the first terminal may capture the head image by calling its local camera, which may be a front or rear camera. It is to be understood that the captured head image data may come from imaging any head appearing in the capture area, not only the user identified by the current user identifier.
The expression features are features capable of expressing emotion or mood, and include facial expression features and posture expression features. A facial expression is expressed through facial organs, such as raising an eyebrow or blinking. A posture expression is expressed through a body movement, such as turning the head.
In one embodiment, the first terminal may parse the head image data and identify facial expression features and/or head motion expression features in it to obtain expression data. The expression data is data that can represent a corresponding expression action, and can control the corresponding virtual session member in the virtual session scene to trigger the represented expression action.
The first terminal may generate an asynchronous message for playing in the virtual session scene according to the expression data. Specifically, the first terminal may use the expression data directly as the message content, or use the download address of the expression data as the message content.
In the above embodiment, sending an asynchronous message that includes expression data controls the corresponding virtual session member in the virtual session scene to trigger the expression action corresponding to the expression data, which is another way of expressing a real expression and provides a new interaction mode.
In one embodiment, identifying expression features in the head images to obtain expression data for controlling a virtual session member in the virtual session scene to trigger expression actions includes: identifying expression features in head images collected in time order to obtain time-ordered expression data frames, where each expression data frame includes expression feature values corresponding to expression types. Generating an asynchronous message for playing in the virtual session scene from the expression data then includes: generating the asynchronous message from the time-ordered expression data frames, where the asynchronous message is used to control the corresponding virtual session member in the virtual session scene to trigger, in time order, the expression action corresponding to the feature values in each expression data frame.
An expression data frame is one frame of expression data. It can be understood that head images are collected frame by frame, and the expression features in each frame of head image are identified to obtain the corresponding expression data frame. Each expression data frame includes at least one expression type and the corresponding expression feature value; for example, one expression data frame may include the expression types "closed eyes" and "open mouth" together with their corresponding feature values.
The expression type is the category of the expression in the motion-expression dimension, including opening the mouth, blinking, laughing, crying, turning the head, nodding, and so on. It is to be understood that these expression types are only examples rather than a limitation on how expressions are classified; the expression types may be set according to actual needs. The expression feature value represents the amplitude and/or degree of the expression action corresponding to the expression type.
Specifically, the second terminal may control a corresponding virtual session member in the virtual session scene according to the expression type corresponding to the expression feature value in the expression data frame, and trigger an expression action corresponding to the expression feature value in each expression data frame according to a time sequence.
In one embodiment, the first terminal identifies the expression features in the head images acquired according to the time sequence, can also obtain the time intervals between expression data frames with the time sequence, and can generate asynchronous messages according to the expression data frames with the time sequence and the time intervals between the expression data frames with the time sequence. The second terminal can control the virtual conversation member to trigger the expression action corresponding to the expression characteristic value in each expression data frame according to the time sequence and the corresponding time interval. It will be appreciated that the time interval between different expression data frames may be different.
In the above embodiment, an asynchronous message carrying time-ordered expression data frames is sent, where each frame contains the feature values corresponding to its expression types, to control the corresponding virtual session member in the virtual session scene to trigger, in time order, the expression action corresponding to the feature values in each frame. This determines the expression actions more precisely and improves the accuracy of the controlled expression actions.
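The time-ordered frames and per-frame intervals described above can be turned into a playback schedule on the receiving side. The following is a minimal Python sketch; the frame representation (a dict of expression types to feature values) and millisecond intervals are invented for illustration, since the patent does not specify a data format:

```python
# Hypothetical sketch: compute when each expression data frame should be
# triggered, honoring possibly unequal intervals between frames.

def build_schedule(frames: list, intervals_ms: list) -> list:
    """frames: time-ordered list of {expression_type: feature_value} dicts.
    intervals_ms: milliseconds between consecutive frames
    (len(intervals_ms) == len(frames) - 1).
    Returns (play_time_ms, frame) pairs the second terminal can act on."""
    schedule, t = [], 0
    for i, frame in enumerate(frames):
        schedule.append((t, frame))
        if i < len(intervals_ms):
            t += intervals_ms[i]  # intervals may differ, per the text
    return schedule
```

A playback loop would then wait until each scheduled time and drive the virtual session member's corresponding expression action.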
In one embodiment, step S308 further comprises: recording voice data. Generating an asynchronous message for playing in the virtual session scene according to the expression data then comprises: generating the asynchronous message according to the expression data and the voice data.
In this embodiment, the user may trigger one control key, or several control keys in combination, to generate a trigger instruction for a combined voice-and-expression message. The trigger instruction causes the first terminal to record voice data, collect head images, and recognize the head images to obtain expression data. The first terminal can then generate an asynchronous message from the obtained expression data and voice data, to be played in the virtual session scene.
In an embodiment, when the second terminal that receives the asynchronous message plays a message generated from expression data and voice data, it may play the voice data in the virtual session scene while controlling the virtual session member corresponding to the asynchronous message to trigger the expression action corresponding to the expression data.
In this embodiment, expression data and voice data are collected through one trigger instruction, and the asynchronous message is generated from both, which increases the information content of the asynchronous message.
In one embodiment, generating an asynchronous message for playing in the virtual session scene from the voice data and the expression data includes: uploading the voice data and the expression data; acquiring a voice data download address for the voice data and an expression data download address for the expression data; and generating the asynchronous message according to the voice data download address and the expression data download address.
Specifically, the first terminal may upload the recorded voice data and the recognized expression data to the server for storage. The server can return the voice data downloading address of the voice data and the expression data downloading address of the expression data to the first terminal. The first terminal can take the voice data download address and the expression data download address as message contents to generate asynchronous messages. The asynchronous message is intended to be played in a virtual session context by a second terminal that receives the asynchronous message.
Further, the second terminal receiving the asynchronous message may download the voice data and the expression data from the voice data download address and the expression data download address, and play them in the virtual session scene.
In this embodiment, generating the asynchronous message from the voice data download address and the expression data download address reduces the data volume of the asynchronous message and improves its sending efficiency.
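The address-based flow above can be sketched in a few lines. This Python illustration assumes invented message field names and treats the downloader as a pluggable lookup; the patent only requires that the message carry download addresses rather than the media itself:

```python
# Hypothetical sketch: build a lightweight asynchronous message holding
# only download addresses, then resolve it on the receiving side.

def make_address_message(voice_addr: str, expression_addr: str,
                         sender_id: str) -> dict:
    """First terminal: the message content holds addresses,
    not the (comparatively large) voice and expression payloads."""
    return {
        "sender": sender_id,
        "voice_addr": voice_addr,
        "expression_addr": expression_addr,
    }

def resolve_message(message: dict, downloader) -> tuple:
    """Second terminal: fetch both payloads before playing them
    together in the virtual session scene."""
    return (downloader(message["voice_addr"]),
            downloader(message["expression_addr"]))
```

Here `downloader` stands in for whatever HTTP or channel fetch the client actually uses against the server's storage addresses.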
In one embodiment, after step S304, the method further comprises: when the asynchronous message acquisition mode corresponding to the trigger instruction is not mutually exclusive with the real-time interaction, performing, during the real-time interaction, the step of acquiring the asynchronous message for playing in the virtual session scene according to the trigger instruction and the step of sending the acquired asynchronous message.
In one embodiment, when the asynchronous message acquisition mode corresponding to the trigger instruction does not need to use an object for realizing real-time interaction, the asynchronous message acquisition mode and the real-time interaction are not mutually exclusive. The object for realizing the real-time interaction comprises a real-time interaction data acquisition device and/or a virtual conversation member.
In one embodiment, when the trigger instruction is an instruction to acquire a text message and/or a preset interaction message, the corresponding acquisition mode is acquiring input text or acquiring the identifier of a preset interaction action, and this acquisition mode is not mutually exclusive with real-time interaction. Likewise, when the trigger instruction is an instruction to select an existing picture and generate a picture message, the corresponding acquisition mode is not mutually exclusive with real-time interaction.
When the asynchronous message acquisition mode corresponding to the trigger instruction is not mutually exclusive with the real-time interaction, the first terminal may acquire, while the real-time interaction proceeds, the asynchronous message for playing in the virtual session scene according to the trigger instruction. Specifically, the first terminal may take a selected existing picture to generate a picture message, or take input text to generate a text message.
In one embodiment, when the asynchronous message acquisition mode corresponding to the trigger instruction is mutually exclusive with the real-time interaction, acquiring the asynchronous message for playing in the virtual session scene according to the trigger instruction includes: acquiring an identifier of a preset interaction action according to the trigger instruction; and generating an asynchronous message according to the identifier of the preset interaction action, where the asynchronous message is used to control the corresponding virtual session member in the virtual session scene to implement the preset interaction action.
A preset interaction action is a preset, general-purpose interaction action. It may be a single-member action, implemented by one virtual session member, or a multi-member (two or more) action, implemented jointly by several virtual session members. Preset interaction actions include dancing, hugging, kissing, and so on.
In one embodiment, when the asynchronous message obtaining manner corresponding to the trigger instruction is mutually exclusive from the real-time interaction, obtaining the asynchronous message for playing in the virtual session scene according to the trigger instruction further includes: acquiring a current user identifier; acquiring a target user identifier aimed at by a preset interaction action; generating an asynchronous message according to the identifier of the preset interaction action, comprising: generating an asynchronous message according to the current user identifier, the acquired target user identifier and the identifier of the preset interactive action; and the asynchronous message is used for controlling the virtual session member corresponding to the current user identifier in the virtual session scene so as to implement the preset interaction action aiming at the virtual session member corresponding to the target user identifier.
The target user identifier for the preset interaction action is a user identifier corresponding to a virtual session member which is to be matched with a virtual session member corresponding to the current user identifier to jointly realize the preset interaction action. It can be understood that the virtual session member corresponding to the target user identifier may directly implement the corresponding preset interaction action, or may passively receive the preset interaction action.
For example, if the preset interaction action is "cheek", virtual session member A is controlled in the virtual session scene to perform the cheek action on virtual session member B, so member B passively receives the preset interaction action. For another example, if the preset interaction action is "hug", virtual session members A and B are controlled in the virtual session scene to jointly implement the hug action, so member B directly implements the corresponding preset interaction action.
In the above embodiment, the asynchronous message is generated by presetting the identifier of the interactive action, and when the asynchronous message is played, the virtual session members can be controlled to trigger the corresponding interactive action, so that a new interactive mode is provided.
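A preset interaction message like the ones described above can be represented as the action identifier plus the acting and (optionally) targeted user identifiers. The following Python sketch is illustrative only; the action names and the single/multi-member split are assumptions consistent with the examples in the text:

```python
# Hypothetical sketch: build an asynchronous message from a preset
# interaction action identifier, enforcing that multi-member actions
# name the target user identifier they are aimed at.

SINGLE_ACTIONS = {"dance"}        # implemented by one member alone
MULTI_ACTIONS = {"hug", "kiss"}   # implemented jointly / aimed at a target

def make_interaction_message(action_id: str, current_user: str,
                             target_user: str = None) -> dict:
    if action_id in MULTI_ACTIONS and target_user is None:
        raise ValueError("multi-member action needs a target user identifier")
    return {"action": action_id, "actor": current_user, "target": target_user}
```

On playback, the second terminal would look up `action` and drive the `actor` member's animation, involving the `target` member when one is named.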
In an embodiment, when the second terminal receives an asynchronous message whose acquisition mode is mutually exclusive with the real-time interaction, the message may be displayed in association with the corresponding virtual session member in the virtual session scene. Specifically, the asynchronous message may be displayed in association with the virtual session member itself, or with an icon (such as an avatar icon) corresponding to that member. As shown in FIGS. 7A and 7B, FIG. 7A shows a picture message displayed in association with avatar icon a of virtual session member A, and FIG. 7B shows a text message displayed in association with avatar icon a of virtual session member A.
As shown in fig. 8, in an embodiment, the step S302 (referred to as a real-time interaction step for short) specifically includes the following steps:
S802, acquiring head images in real time.
S804, identifying expression characteristics in the head image collected in real time to obtain real-time expression data.
In one embodiment, the first terminal can identify expression features in the head image acquired in real time to obtain expression types and corresponding expression feature values; and generating real-time expression data comprising expression characteristic values corresponding to the identified expression types.
The expression type is the category of the expression in the action-expression dimension, including opening the mouth, blinking, laughing, crying, turning the head, nodding, and so on. The first terminal identifies at least one expression type from the expression features in the head image data. The expression feature value represents the amplitude and/or degree of the expression action corresponding to the expression type. For example, for the expression type "cry", different expression feature values correspond to different degrees of crying, such as quiet sobbing versus loud crying. For another example, for the expression type "turn left", the expression feature value may be the turning angle, and the larger the angle, the larger the amplitude of the turn.
In one embodiment, generating real-time expression data including expression feature values corresponding to the identified expression types includes: and combining the expression characteristic values corresponding to the expression types obtained by identification to obtain real-time expression data.
Specifically, the first terminal may directly combine the identified expression types with their corresponding expression feature values to obtain the real-time expression data. Alternatively, the first terminal may place the expression feature value of each identified expression type at the position assigned to that type to generate the real-time expression data; the terminal corresponding to the second user identifier can then determine the expression type from the position of each value. For example, if the expression type "open mouth" corresponds to position 1, the feature value "10 degrees" for "open mouth" is placed at position 1; if "turn left" corresponds to position 2, the feature value "15 degrees" for "turn left" is placed at position 2; and so on, the feature values are combined to generate the corresponding real-time expression data.
It is to be understood that, in this embodiment, the expression feature values included in the generated real-time expression data may be only expression feature values corresponding to the identified expression types. For example, only the expression types "head left turn" and "mouth open" are recognized, and the expression feature values included in the real-time expression data are only the expression feature values corresponding to the expression types "head left turn" and "mouth open".
In another embodiment, the identified expression types belong to a preset expression type set. Generating real-time expression data including the expression feature values corresponding to the identified expression types includes: assigning, to the unrecognized expression types in the preset set, expression feature values indicating that the corresponding expression actions are not triggered; and combining the expression feature values of all expression types, in the preset order of the types within the set, to form the real-time expression data.
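The fixed-order combination just described can be sketched as a positional encoding. In this Python illustration, the preset type set, its ordering, and the use of 0 as the "not triggered" value are all assumptions made for the example:

```python
# Hypothetical sketch: every expression type in a preset set occupies a
# fixed position; unrecognized types get a "not triggered" value, and
# the receiver decodes expression types purely by position.

PRESET_TYPES = ["open_mouth", "blink", "laugh", "cry", "turn_left", "nod"]
NOT_TRIGGERED = 0  # assumed sentinel for "expression action not triggered"

def encode_frame(recognized: dict) -> list:
    """recognized: {expression_type: feature_value} for identified types."""
    return [recognized.get(t, NOT_TRIGGERED) for t in PRESET_TYPES]

def decode_frame(values: list) -> dict:
    """Recover only the triggered expression actions from positions."""
    return {t: v for t, v in zip(PRESET_TYPES, values) if v != NOT_TRIGGERED}
```

Because the ordering is fixed, the wire format needs no per-value type labels, which keeps each real-time expression data frame compact.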
S806, sending the real-time expression data in real time, where the sent data is used to control the corresponding virtual session member in the virtual session scene to trigger, in real time, the expression action corresponding to the real-time expression data.
Specifically, the first terminal may send the real-time expression data in real time to the terminals (hereinafter referred to as second terminals) corresponding to user identifiers other than the current user identifier in the virtual session scene. The second terminals may be all or some of these other terminals, and it can be understood that there is at least one second terminal. The current user identifier and the user identifiers corresponding to the second terminals each uniquely identify a member in the virtual session scene.
The first terminal can directly send the real-time expression data to the second terminal in a point-to-point mode, and the first terminal can also send the real-time expression data to the server, so that the server forwards the real-time expression data to the second terminal in the virtual session scene.
Further, the second terminal receiving the real-time expression data can control the corresponding virtual session member in the virtual session scene to trigger the expression action corresponding to the real-time expression data in real time according to the real-time expression data.
In one embodiment, the first terminal may further acquire real-time voice data and send it to the second terminal in real time, where the sent real-time voice data is played in the virtual session scene displayed by the second terminal. In addition, the first terminal may acquire and transmit real-time interactive action data, which is used to control the corresponding virtual session member in the second terminal's virtual session scene to trigger the corresponding interactive action in real time. It can be understood that the user may select a real-time interactive action icon, whereupon the first terminal obtains the corresponding real-time interactive action data and sends it in real time.
In this embodiment, real-time expression data is identified and sent to control the corresponding virtual session member in the virtual session scene to trigger, in real time, the corresponding expression action. This is another way of expressing a user's real expression during interactive communication and provides a new interaction mode.
In one embodiment, before real-time interaction by the virtual session members in the virtual session scenario, the method further comprises: acquiring a current user identifier; acquiring a multi-user session identifier corresponding to a current user identifier; and adding the current user identification into a member list of the virtual session scene identified by the multi-person session identification.
Specifically, the first terminal may obtain a current login user identifier, that is, a current user identifier, and obtain a multi-user session identifier corresponding to the current user identifier; and sending the multi-person session identifier and the current user identifier to the server, so that the server adds the current user identifier into a member list of the virtual session scene identified by the multi-person session identifier.
The multi-person session identifier uniquely identifies a multi-person session, whose number of members is greater than or equal to 3. The multi-person session may be a group or a temporary multi-person chat session, or another type of multi-person session.
It is to be understood that the current user identity is a member of the multi-person session corresponding to the corresponding multi-person session identity. The virtual session scene identified by the multi-person session identifier is equivalent to a virtual session scene created based on the multi-person session corresponding to the multi-person session identifier. The virtual session scene identified by the multi-person session identifier may be a virtual session scene with the multi-person session identifier as a direct identifier, that is, the unique identifier of the virtual session scene is the multi-person session identifier itself. The virtual session scene identified by the multi-person session identifier can also be a virtual session scene with the multi-person session identifier as an indirect identifier, namely, the unique identifier of the virtual session scene is a virtual session scene identifier uniquely corresponding to the multi-person session identifier, and the virtual session scene identifier can be determined according to the multi-person session identifier, so that the corresponding virtual session scene can be indirectly and uniquely identified by the multi-person session identifier.
Specifically, a user may log in to the application program implementing the virtual session scene with the current user identifier. After login succeeds, a multi-person session interface is opened on the first terminal, namely the interface of the multi-person session corresponding to the multi-person session identifier associated with the current user identifier. The user may initiate an operation to join the virtual session scene in this interface. In response, the first terminal acquires the multi-person session identifier corresponding to the currently logged-in user identifier and sends it, together with the current user identifier, to the server; the server adds the current user identifier to the member list of the virtual session scene identified by the multi-person session identifier, thereby joining the user to that virtual session scene.
In one embodiment, the server may return access information of the virtual session scene identified by the multi-person session identifier to the first terminal, and the first terminal may join the virtual session scene according to the access information. The access information includes an access IP address and a port.
In this embodiment, the currently logged-in user identifier is added to the virtual session scene created based on the corresponding multi-person session. This enables an interactive communication mode in which virtual session members in the virtual session scene trigger the expression actions corresponding to expression data, improving on the plain multi-person session and providing a new interaction mode.
Fig. 9 is a timing diagram of implementing the interactive data processing method in an embodiment, which specifically includes the following steps:
1) The terminals that need to interact in real time through the virtual session scene open the session and apply to join the corresponding virtual session scene.
2) Each terminal starts its audio device and image acquisition device to collect real-time voice data and real-time expression data.
3) Each terminal sends the real-time voice data and real-time expression data to the server.
4) The server forwards the real-time voice data and real-time expression data to the terminals corresponding to the other members of the virtual session scene.
5) Each terminal receiving the real-time voice data and real-time expression data plays them in the virtual session scene.
6) The first terminal receives a trigger instruction for an asynchronous voice and/or expression message and interrupts the real-time voice interaction or real-time expression interaction that is mutually exclusive with the asynchronous message acquisition mode.
7) The first terminal acquires the asynchronous voice and/or expression message according to the trigger instruction.
8) The first terminal sends the asynchronous voice and/or expression message to the server.
9) The first terminal resumes the real-time voice interaction or real-time expression interaction.
10) The server forwards the asynchronous voice and/or expression message to the terminals corresponding to the other members of the virtual session scene.
11) Each terminal receiving the asynchronous voice and/or expression message plays it in the virtual session scene.
12) The first terminal receives a trigger instruction for a picture and/or text message and acquires the picture and/or text message during the real-time interaction.
13) The first terminal sends the picture and/or text message to the server.
14) The server forwards the picture and/or text message to the terminals corresponding to the other members of the virtual session scene.
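The server-side role in steps 4), 10) and 14) is the same relay pattern: whatever one member's terminal sends is forwarded to the terminals of all other members of the scene. A minimal Python sketch, with class and field names that are illustrative assumptions:

```python
class RelayServer:
    """Relays per-scene data from one member's terminal to all the others."""

    def __init__(self):
        self.scenes = {}  # scene id -> {user id -> outbound message queue}

    def register(self, scene_id, user_id):
        """Add a member's terminal to a virtual session scene."""
        self.scenes.setdefault(scene_id, {})[user_id] = []

    def forward(self, scene_id, sender_id, payload):
        """Relay real-time data or an asynchronous message to the other members."""
        for user_id, queue in self.scenes.get(scene_id, {}).items():
            if user_id != sender_id:  # the sender does not receive its own data
                queue.append(payload)
```

The same `forward` path serves real-time voice/expression data and asynchronous messages alike; only the payload differs.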
As shown in fig. 10, in one embodiment, another interactive data processing method is provided, the method comprising the steps of:
S1002, acquiring a multi-person session identifier corresponding to the current user identifier, and adding the current user identifier to the member list of the virtual session scene identified by the multi-person session identifier.
S1004, performing real-time interaction through the virtual session members in the virtual session scene.
In one embodiment, real-time interaction by virtual session members in a virtual session scenario includes: acquiring a head image in real time; identifying expression characteristics in the head image acquired in real time to obtain real-time expression data; and sending real-time expression data in real time, wherein the sent real-time expression data is used for controlling a corresponding virtual session member in a virtual session scene to trigger an expression action corresponding to the real-time expression data in real time.
S1006, acquiring a trigger instruction of the asynchronous message.
S1008, determining whether the asynchronous message acquisition mode corresponding to the trigger instruction is mutually exclusive with the real-time interaction, if so, entering step S1010, and if not, entering step S1020.
S1010, interrupting the mutually exclusive real-time interaction with the asynchronous message acquisition mode.
S1012, acquiring head images according to the trigger instruction, and identifying expression features in the head images acquired in time sequence to obtain expression data frames with a time sequence.
Each expression data frame includes expression feature values corresponding to expression types.
S1014, recording voice data, and uploading the voice data and the expression data frames.
S1016, acquiring a voice data downloading address of the voice data and an expression data downloading address of the expression data frame; and generating an asynchronous message for playing in the virtual session scene according to the voice data downloading address and the expression data downloading address.
The asynchronous message is used for controlling corresponding virtual session members in a virtual session scene, triggering expression actions corresponding to expression characteristic values in each expression data frame according to a time sequence, and playing voice data in the virtual session scene.
S1018, sending the acquired asynchronous message and resuming the interrupted real-time interaction.
S1020, acquiring an identifier of a preset interaction action according to the trigger instruction during the real-time interaction;
S1022, acquiring the current user identifier and the target user identifier for the preset interaction action.
S1024, generating an asynchronous message according to the current user identifier, the acquired target user identifier, and the identifier of the preset interaction action.
The asynchronous message is used for controlling the virtual session member corresponding to the current user identifier in the virtual session scene so as to implement the preset interaction action aiming at the virtual session member corresponding to the target user identifier.
S1026, sending the acquired asynchronous message.
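The branch from S1006 to S1026 can be sketched as a small dispatcher: if the acquisition mode conflicts with the ongoing real-time interaction, interrupt, acquire, send, and resume; otherwise acquire and send while the interaction continues. The message-type names and callback structure below are assumptions for illustration:

```python
# Message types whose acquisition conflicts with real-time interaction (assumed).
EXCLUSIVE_TYPES = {"voice", "expression", "voice_expression"}

def handle_trigger(msg_type, acquire, send, interrupt, resume):
    """Acquire and send an asynchronous message around the real-time interaction."""
    if msg_type in EXCLUSIVE_TYPES:
        interrupt()          # S1010: pause the conflicting real-time interaction
        message = acquire()  # S1012/S1014: record expression frames / voice data
        send(message)        # S1018: send the asynchronous message...
        resume()             # ...and resume the interrupted interaction
    else:
        message = acquire()  # S1020-S1024: acquire during real-time interaction
        send(message)        # S1026
    return message
```

The ordering matters: the interrupted interaction is resumed only after the exclusive message has been acquired and sent.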
According to this interactive data processing method, the playing of asynchronous messages is inserted into the real-time interaction, integrating real-time interaction with asynchronous messages; playing asynchronous interactive information during a user's real-time interactive communication increases the amount of information exchanged.
Secondly, by sending an asynchronous message including expression data, the corresponding virtual session member in the virtual session scene is controlled to trigger the expression action corresponding to the expression data; this is another way of expressing real expressions and provides a new interaction mode.
Then, by sending an asynchronous message carrying expression data frames with a time sequence, where each expression data frame includes feature values corresponding to expression types, the corresponding virtual session member in the virtual session scene is controlled to trigger, in time sequence, the expression action corresponding to the expression feature values in each frame; the expression action can thus be determined more precisely, improving the accuracy of the controlled expression action.
Furthermore, an asynchronous message is generated from both expression data and voice data, increasing the information content of the asynchronous message.
Finally, generating the asynchronous message for playing in the virtual session scene from the voice data download address and the expression data download address reduces the data volume of the asynchronous message and improves its sending efficiency.
As shown in fig. 11, in one embodiment, yet another interactive data processing method is provided, which may be applied to the first terminal 110 and/or the second terminal 120 in fig. 1. This embodiment is mainly illustrated by applying the method to the second terminal 120 in fig. 1. Referring to fig. 11, the method specifically includes the following steps:
S1102, performing real-time interaction through the virtual session members in the virtual session scene.
S1104, an asynchronous message is received.
Specifically, the second terminal may directly receive the asynchronous message sent by the first terminal in a point-to-point manner, or may receive an asynchronous message sent by the first terminal and forwarded by the server.
The asynchronous message is at least one of a voice message, an expression message, a voice expression message, a preset interactive action message, a picture message, or a text message.
S1106, when the playing mode of the asynchronous message is mutually exclusive with the real-time interaction, the mutually exclusive real-time interaction with the playing mode is interrupted.
The playing mode of an asynchronous message is the manner in which the asynchronous message is played. It may include at least one of voice playback, controlling a virtual session member to trigger a corresponding action, display in the form of a picture or text, and the like.
When the playing mode of the asynchronous message needs to use an object that realizes the real-time interaction, the playing mode and the real-time interaction are mutually exclusive. The objects realizing the real-time interaction include the playing device for real-time interaction data and/or the virtual session member.
In one embodiment, when the asynchronous message is a voice message, an expression message, a voice expression message, or a preset interactive action message, its playing mode is mutually exclusive with the real-time interaction.
It can be understood that playing a voice message or a voice expression message requires the voice playing device and is therefore mutually exclusive with the real-time voice interaction; playing an expression message, a voice expression message, or a preset interactive action message requires controlling a virtual session member to execute the corresponding expression action or interaction action and is therefore mutually exclusive with the real-time expression interaction. In addition, when a preset interactive action message has a corresponding background sound, the second terminal also needs the voice playing device to play it, making it mutually exclusive with the real-time voice interaction as well.
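The exclusivity rule above amounts to a resource-conflict test: a playing mode conflicts with real-time interaction exactly when it needs an object (the voice playing device and/or the virtual session member) that the interaction is already using. A hedged Python sketch, with resource and message-type names chosen for illustration:

```python
# Assumed resource requirements per message type; "picture" and "text" are
# displayed alongside the interaction and need neither shared object.
PLAYBACK_RESOURCES = {
    "voice": {"audio_device"},
    "expression": {"session_member"},
    "voice_expression": {"audio_device", "session_member"},
    "preset_action": {"session_member"},  # may also need audio for background sound
    "picture": set(),
    "text": set(),
}

def is_exclusive(msg_type, active_resources):
    """True if playing this message type conflicts with the current interaction."""
    return bool(PLAYBACK_RESOURCES.get(msg_type, set()) & active_resources)
```

With this test, the receiver decides between the "interrupt, play, resume" path (S1106 to S1110) and playing the message alongside the ongoing interaction.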
Interrupting the real-time interaction that is mutually exclusive with the playing mode means suspending the execution of that real-time interaction and then performing the playing of the asynchronous message. It can be understood that the interruption does not completely end the real-time interaction; it is a pause, and execution can be resumed later (for example, the interrupted real-time interaction can be resumed after the asynchronous message finishes playing).
S1108, the asynchronous message is played in the virtual session scene.
Specifically, after receiving the asynchronous message, the second terminal may directly play the asynchronous message, or may play the asynchronous message through a corresponding trigger operation.
In an embodiment, after receiving the asynchronous message, the second terminal may display an icon corresponding to the user identifier corresponding to the asynchronous message in a virtual session scene interface, trigger playing of voice data according to the received trigger operation on the icon, trigger focus display of a virtual session member corresponding to the user identifier corresponding to the asynchronous message, and control the virtual session member to trigger an expressive action represented by the expressive data. The icon corresponding to the user identifier corresponding to the asynchronous message may be an icon of a head portrait of the corresponding virtual session member. It will be appreciated that asynchronous messages may repeatedly trigger play.
In an embodiment, the second terminal may control a virtual session member corresponding to the asynchronous message in a virtual session scene to trigger an action represented by the asynchronous message, may also perform associated display on the asynchronous message and the corresponding virtual session member in the virtual session scene, and may also directly play the asynchronous message in the virtual session scene.
In one embodiment, when the asynchronous message is a voice message, the second terminal may simultaneously control the corresponding virtual session member to trigger the preset emotive action when playing the voice message.
S1110, after the asynchronous message is played, resuming the interrupted real-time interaction.
The interrupted real-time interaction is resumed and continues to be executed. After resuming it, the second terminal may continue the previously interrupted real-time interaction through the virtual session members in the virtual session scene. For example, if the real-time expression interaction was interrupted, then after resuming it the second terminal may continue the real-time expression interaction with the terminals corresponding to the other members in the virtual session scene through the virtual session member. It is to be understood that the terminals corresponding to the other members here are not limited to the first terminal that sent the asynchronous message; they may also be other second terminals.
In this embodiment, the playing of asynchronous messages is inserted into the real-time interaction, integrating real-time interaction with asynchronous messages; playing asynchronous interactive information during a user's real-time interactive communication increases the amount of information exchanged.
In one embodiment, step S1108 includes: acquiring corresponding expression data according to the asynchronous message; determining a virtual session member corresponding to the asynchronous message in a virtual session scene; and controlling the determined virtual conversation member to trigger the expressive action represented by the expressive data.
The virtual session member corresponding to the asynchronous message is a virtual session member corresponding to the user identifier logged on the first terminal sending the asynchronous message. For example, a user logs in a first terminal through a user identifier P and sends an asynchronous message to a second terminal, and a virtual session member corresponding to the asynchronous message determined by the second terminal is a virtual session member corresponding to the user identifier P.
In one embodiment, determining a virtual session member in a virtual session context corresponding to an asynchronous message comprises: and determining the user identification for sending the asynchronous message, and determining the virtual session member corresponding to the user identification for sending the asynchronous message in the virtual session scene. The user identifier for sending the asynchronous message may be carried in the asynchronous message, or the user identifier for sending the asynchronous message may be determined by the second terminal according to an asynchronous message channel corresponding to the asynchronous message.
Controlling the determined virtual session member to trigger the expression action represented by the expression data includes: in the virtual session scene, controlling the determined virtual session member to implement the expression action represented by the expression data, or generating corresponding texture information according to the expression data and displaying the generated texture information on the expression display part of the corresponding virtual session member.
In the above embodiment, the asynchronous message including the emotion data is played to control the corresponding virtual session member in the virtual session scene to trigger the emotion action corresponding to the emotion data, which is another expression mode for the real emotion and provides a new interaction mode.
In one embodiment, acquiring the corresponding expression data according to the asynchronous message includes: acquiring corresponding expression data frames with a time sequence according to the asynchronous message, where each expression data frame includes expression feature values corresponding to expression types. Controlling the determined virtual session member to trigger the expression action represented by the expression data includes: in the virtual session scene, controlling the determined virtual session member to trigger, in time sequence, the expression action corresponding to the expression feature values in each expression data frame.
An expression data frame is one frame of expression data. It can be understood that head images are acquired frame by frame, and the expression features in each frame of head image are identified to obtain the corresponding expression data frame. Each expression data frame includes at least one expression type and the corresponding expression feature value. For example, one expression data frame may include the expression types closed eyes and open mouth together with their corresponding expression feature values.
The expression type is the category of an expression in the action-expression dimension, such as opening the mouth, blinking, laughing, crying, turning the head, or nodding. It is to be understood that these expression types are only examples and do not limit the classification of expressions; the set of expression types may be defined according to actual needs. The expression feature value represents the amplitude and/or degree of the expression action corresponding to the expression type.
Specifically, the second terminal may control a corresponding virtual session member in the virtual session scene according to the expression type corresponding to the expression feature value in the expression data frame, and trigger an expression action corresponding to the expression feature value in each expression data frame according to a time sequence.
In an embodiment, the second terminal may further obtain a time interval between expression data frames with a time sequence from the asynchronous message, and may control the virtual session member to trigger an expression action corresponding to the expression feature value in each expression data frame according to the time sequence and the corresponding time interval. It will be appreciated that the time interval between different expression data frames may be different.
In the above embodiment, by playing an asynchronous message carrying expression data frames with a time sequence, where each frame includes the feature values corresponding to expression types, the corresponding virtual session member in the virtual session scene is controlled to trigger, in time sequence, the expression action corresponding to the expression feature values in each frame; the expression action can thus be determined more precisely, improving the accuracy of the controlled expression action.
In one embodiment, acquiring the corresponding expression data according to the asynchronous message includes: extracting the expression data download address in the asynchronous message, and downloading the expression data according to the expression data download address.
Specifically, the asynchronous message includes an expression data download address, the second terminal can extract the expression data download address in the asynchronous message, send an expression data download request to the server according to the expression data download address, and the server sends expression data corresponding to the expression data download address to the second terminal.
In one embodiment, the method further comprises: extracting the voice data download address in the asynchronous message; downloading the voice data according to the voice data download address; and playing the voice data in the virtual session scene.
Here, the asynchronous message further includes a voice data download address.
The second terminal can extract the voice data download address in the asynchronous message, send the voice data download request to the server according to the voice data download address, and the server sends the voice data corresponding to the voice data download address to the second terminal. The second terminal may play the voice data in the virtual session scene.
In this embodiment, the corresponding expression data and voice data are obtained according to the voice data download address and the expression data download address carried in the asynchronous message, which reduces the data volume of the asynchronous message and improves its receiving efficiency.
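The download-address scheme can be sketched as follows; the message field names are hypothetical, and `fetch` stands in for the HTTP request the second terminal would send to the server (no real endpoint is implied):

```python
def download_payloads(async_message, fetch):
    """Return (expression_data, voice_data) downloaded via the message's URLs."""
    expr_url = async_message["expression_data_url"]   # hypothetical field name
    voice_url = async_message["voice_data_url"]       # hypothetical field name
    return fetch(expr_url), fetch(voice_url)
```

Because the asynchronous message carries only the two addresses rather than the media itself, it stays small; the bulk data moves over the separate download requests.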
As shown in fig. 12, in an embodiment, the step S1108 (referred to as a preset interactive action playing step for short) specifically includes the following steps:
S1108a, acquiring the first user identifier that sent the asynchronous message.
Specifically, the second terminal may directly obtain the first user identifier included in the asynchronous message, or may determine the first user identifier according to the asynchronous message channel corresponding to the asynchronous message.
S1108b, extracting the identifier of the preset interaction action in the asynchronous message.
The asynchronous message further includes an identifier of a preset interaction action. A preset interaction action is a predefined, general-purpose interaction action. It may be a single-person action, implemented by one virtual session member, or a multi-person (two or more) action, implemented jointly by two or more virtual session members. Preset interaction actions include dancing, hugging, kissing, and the like.
It can be understood that the preset interaction action may also have corresponding background sound, and when the preset interaction action is played, the corresponding virtual session member in the virtual session scene may be controlled to implement the preset interaction action and play the corresponding background sound.
S1108c, in the virtual session scenario, controlling a virtual session member corresponding to the first user identifier to trigger a preset interaction action.
Specifically, the second terminal may obtain a corresponding preset interaction logic code according to the identifier of the preset interaction, and in the virtual session scene, according to the preset interaction logic code, control the virtual session member corresponding to the first user identifier to trigger the preset interaction.
In one embodiment, controlling the virtual session member corresponding to the first user identifier to trigger a preset interaction action includes: in a virtual session scene, controlling a virtual session member corresponding to the first user identifier to implement the preset interaction action according to the preset interaction action logic code, or acquiring corresponding preset texture information according to the preset interaction action logic code, and displaying the preset texture information on a preset display part of the virtual session member corresponding to the first user identifier.
In one embodiment, step S1108 further includes: extracting a second user identifier from the asynchronous message. Step S1108c then includes: in the virtual session scene, controlling the virtual session member corresponding to the first user identifier to implement the preset interaction action with respect to the virtual session member corresponding to the second user identifier.
The second user identifier is a user identifier for the preset interaction action, that is, a user identifier corresponding to a virtual session member that is to cooperate with a virtual session member corresponding to the first user identifier to jointly implement the preset interaction action. The virtual session member corresponding to the second user identifier may directly implement the corresponding preset interaction action, or may passively receive the preset interaction action. It should be noted that the second user identifier and the second terminal do not necessarily correspond to each other here.
Specifically, the second terminal may control, in the virtual session scene, a virtual session member corresponding to the first user identifier and a virtual session member corresponding to the second user identifier in the virtual session scene to jointly implement a corresponding preset interaction action. For example, the interaction action is preset as "hug", and in the virtual session scenario, the virtual session member corresponding to the first user identifier and the virtual session member corresponding to the second user identifier are controlled to jointly implement the "hug" action.
The second terminal may also control the virtual session member corresponding to the first user identifier in the virtual session scene to implement the corresponding preset interaction action on the virtual session member corresponding to the second user identifier. For example, if the preset interaction action is "pinch cheek", then in the virtual session scene the virtual session member corresponding to the first user identifier is controlled to perform the cheek-pinching action on the virtual session member corresponding to the second user identifier.
A preset interaction action may also have a corresponding background sound. The second terminal can acquire the corresponding background sound according to the identifier of the preset interaction action, and when playing the preset interaction action, control the corresponding virtual session member in the virtual session scene to implement the action while playing the background sound.
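Steps S1108a to S1108c, including the optional second user identifier and background sound, can be sketched as below. All field names are assumptions; the returned tuple stands in for driving the member(s) in the scene:

```python
def play_preset_action(message, members, play_sound=None):
    """Trigger a preset interaction action on the corresponding member(s).

    members: mapping of user identifier -> virtual session member object
    play_sound: optional callback for the action's background sound
    """
    actor = members[message["first_user_id"]]      # S1108a: sender's member
    action = message["action_id"]                  # S1108b: e.g. "hug", "dance"
    target_id = message.get("second_user_id")      # present for two-person actions
    if target_id is not None:
        performed = (actor, members[target_id], action)  # joint or targeted action
    else:
        performed = (actor, None, action)                # single-person action
    if play_sound and message.get("background_sound"):
        play_sound(message["background_sound"])    # optional background sound
    return performed
```

Whether the two members perform the action jointly (e.g. "hug") or the first acts on the second is decided by the action's own logic code; the dispatcher only resolves who is involved.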
In the above embodiment, the asynchronous message is generated by presetting the identifier of the interactive action, and when the asynchronous message is played, the virtual session members can be controlled to trigger the corresponding interactive action, so that a new interactive mode is provided.
In one embodiment, the method further comprises: and when the playing mode of the asynchronous message is not mutually exclusive with the real-time interaction, executing the step of playing the asynchronous message in the virtual session scene during the real-time interaction.
When the playing mode of the asynchronous message does not need to use any object that realizes the real-time interaction, the playing mode is not mutually exclusive with the real-time interaction. The objects realizing the real-time interaction include the playing device for real-time interaction data and/or the virtual session member.
When the playing mode of the asynchronous message is not mutually exclusive with the real-time interaction, the asynchronous message for playing in the virtual session scene can be acquired according to the triggering instruction when the second terminal performs the real-time interaction.
In one embodiment, when the asynchronous message is a picture message or a text message, the playing mode of the asynchronous message is not mutually exclusive to the real-time interaction.
In one embodiment, when the playing mode of the asynchronous message is not mutually exclusive with the real-time interaction, playing the asynchronous message in the virtual session scene comprises: acquiring the corresponding text and/or picture according to the asynchronous message, and displaying the text and/or picture in association with the corresponding virtual session member in the virtual session scene.
Specifically, the second terminal may display the asynchronous message in association with the virtual session member's body, or in association with an icon (such as an avatar icon) corresponding to the virtual session member.
In this embodiment, displaying asynchronous messages in association with virtual session members presents the interactive asynchronous messages more clearly and improves the efficiency with which users obtain information.
In one embodiment, performing real-time interaction through the virtual session members in the virtual session scene includes: acquiring head images in real time; identifying expression features in the head images acquired in real time to obtain real-time expression data; and sending the real-time expression data in real time, where the sent real-time expression data is used to control the corresponding virtual session member in the virtual session scene to trigger, in real time, the expression action corresponding to the real-time expression data.
Specifically, the first terminal can identify expression features in the head image acquired in real time to obtain expression types and corresponding expression feature values; and generating expression data comprising expression characteristic values corresponding to the identified expression types.
The expression type is the category of an expression in the action-expression dimension, such as opening the mouth, blinking, laughing, crying, turning the head, or nodding. The first terminal identifies at least one expression type from the expression features in the head image data. The expression feature value represents the amplitude and/or degree of the expression action corresponding to the expression type. For example, for the expression type "cry", different expression feature values correspond to different degrees of crying, such as sobbing or wailing. As another example, for the expression type "turn head left", the expression feature value may be the turning angle: the larger the angle, the larger the amplitude of the turn.
In one embodiment, generating expression data including expression feature values corresponding to the identified expression types includes: and combining the expression characteristic values corresponding to the expression types obtained by identification to obtain expression data.
Specifically, the first terminal may directly combine the identified expression types and the corresponding expression feature values to obtain the expression data. The first terminal may also add the expression feature value corresponding to each identified expression type at the position corresponding to that expression type to generate the expression data. It can be understood that the terminal corresponding to the second user identifier may determine the expression type corresponding to each value according to its position in the expression data. For example, if the expression type "open mouth" corresponds to the 1st position, the expression feature value "10 degrees" for "open mouth" is added at the 1st position; if the expression type "turn head left" corresponds to the 2nd position, the expression feature value "15 degrees" for "turn head left" is added at the 2nd position; and so on, the expression feature values are combined to generate the corresponding expression data.
It can be understood that, in this embodiment, the generated expression data may include only the expression feature values corresponding to the identified expression types. For example, if only the expression types "head left turn" and "mouth open" are recognized, the expression data includes only the expression feature values corresponding to those two expression types.
In another embodiment, the identified expression types belong to a preset expression type set. Generating the expression data including the expression feature values corresponding to the identified expression types includes: assigning, to each expression type in the preset expression type set that was not recognized, an expression feature value indicating that the corresponding expression action is not triggered; and combining the expression feature values corresponding to the expression types, in the preset order of the expression types in the preset expression type set, to form the expression data.
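A minimal sketch of this embodiment, assuming hypothetical type names, a preset order, and 0.0 as the "not triggered" feature value:

```python
# Sketch (assumed names/values): unrecognized types in the preset set get a
# feature value of 0.0, meaning the expression action is not triggered, and
# values are combined in the preset order of the set.
PRESET_TYPES = ["open_mouth", "blink", "turn_head_left", "nod"]  # assumed order
NOT_TRIGGERED = 0.0

def build_expression_data(recognized: dict) -> list:
    """Combine feature values in the preset order, defaulting to 0.0."""
    return [recognized.get(t, NOT_TRIGGERED) for t in PRESET_TYPES]

data = build_expression_data({"turn_head_left": 15.0, "open_mouth": 10.0})
print(data)  # [10.0, 0.0, 15.0, 0.0]
```

Because every frame now has a fixed length and order, the receiving terminal needs no per-message type labels to interpret the values.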
Specifically, the first terminal may send the real-time expression data in real time to terminals (hereinafter referred to as second terminals) corresponding to user identifiers other than the current user identifier in the virtual session scene. The second terminals may be all or some of those terminals, and there is at least one second terminal. It can be understood that the current user identifier and the user identifiers corresponding to the second terminals each uniquely identify a member in the virtual session scene.
The first terminal may send the real-time expression data directly to the second terminals in a point-to-point manner, or may send the real-time expression data to a server, so that the server forwards it to the second terminals in the virtual session scene.
Further, the second terminal receiving the real-time expression data can control the corresponding virtual session member in the virtual session scene to trigger the expression action corresponding to the real-time expression data in real time according to the real-time expression data.
In one embodiment, the first terminal may further acquire real-time voice data and send it to the second terminal in real time, where the sent real-time voice data is played in the virtual session scene displayed by the second terminal. In addition, the first terminal may also acquire and send real-time interactive action data in real time, where the sent real-time interactive action data is used to control the corresponding virtual session member, in the virtual session scene displayed by the second terminal, to trigger in real time the interactive action corresponding to the real-time interactive action data. It can be understood that when the user selects a real-time interactive action icon, the first terminal obtains the corresponding real-time interactive action data.
In this embodiment, real-time expression data is identified and sent to control the corresponding virtual session member in the virtual session scene to trigger, in real time, the expression action corresponding to the real-time expression data. This provides another way of conveying the user's real expression for interactive communication, and thus a new interaction mode.
As shown in fig. 13, in one embodiment, an interactive data processing apparatus 1300 is provided, wherein the apparatus 1300 comprises: a real-time interaction module 1302, an instruction obtaining module 1304, a mutual exclusion processing module 1306, an asynchronous message obtaining module 1308, and a sending module 1310, where:
A real-time interaction module 1302, configured to perform real-time interaction through the virtual session members in the virtual session scene.
An instruction obtaining module 1304, configured to obtain a trigger instruction of an asynchronous message.
The mutual exclusion processing module 1306 is configured to interrupt the real-time interaction that is mutually exclusive with the asynchronous message acquisition mode when the asynchronous message acquisition mode corresponding to the trigger instruction is mutually exclusive with the real-time interaction.
An asynchronous message obtaining module 1308, configured to obtain, according to the trigger instruction, an asynchronous message for playing in the virtual session scene.
A sending module 1310 configured to send the acquired asynchronous message.
The mutual exclusion processing module 1306 is further configured to resume the interrupted real-time interaction.
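The interplay of modules 1302–1310 might be sketched as follows; the class, method names, and log strings are hypothetical, and the mutual-exclusion check is reduced to a boolean flag:

```python
# Hypothetical sender-side sketch of apparatus 1300: when the asynchronous
# message acquisition mode is mutually exclusive with the ongoing real-time
# interaction (e.g. both need the camera), interrupt the interaction,
# acquire and send the asynchronous message, then resume.
class InteractiveSender:
    def __init__(self):
        self.realtime_active = True   # real-time interaction module 1302 running
        self.log = []

    def handle_trigger(self, mode_exclusive: bool) -> list:
        if mode_exclusive and self.realtime_active:
            self.realtime_active = False       # mutual exclusion module 1306
            self.log.append("interrupt")
        message = "async-message"              # acquisition module 1308
        self.log.append(f"send:{message}")     # sending module 1310
        if not self.realtime_active:
            self.realtime_active = True        # module 1306 resumes interaction
            self.log.append("resume")
        return self.log

print(InteractiveSender().handle_trigger(mode_exclusive=True))
# ['interrupt', 'send:async-message', 'resume']
```

When the acquisition mode is not exclusive, the interrupt and resume steps are skipped and the message is simply acquired and sent during the interaction, matching the non-exclusive embodiment.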
In one embodiment, the asynchronous message acquisition module 1308 is further configured to acquire a head image according to a trigger instruction; identify expression features in the head image to obtain expression data for controlling a virtual session member in the virtual session scene to trigger an expression action; and generate an asynchronous message for playing in the virtual session scene according to the expression data.
In one embodiment, the asynchronous message obtaining module 1308 is further configured to identify expression features in the head images collected according to the time sequence, to obtain expression data frames with the time sequence, where each expression data frame includes an expression feature value corresponding to an expression type; and generating an asynchronous message according to the expression data frames with the time sequence, wherein the asynchronous message is used for controlling corresponding virtual session members in the virtual session scene, and triggering expression actions corresponding to the expression characteristic values in each expression data frame according to the time sequence.
In one embodiment, the asynchronous message acquisition module 1308 is further configured to record voice data; and generating an asynchronous message for playing in the virtual conversation scene according to the voice data and the expression data.
In one embodiment, the asynchronous message acquisition module 1308 is further configured to upload voice data and emotion data; acquiring a voice data downloading address of voice data and an expression data downloading address of expression data; and generating an asynchronous message for playing in the virtual session scene according to the voice data downloading address and the expression data downloading address.
In an embodiment, when the asynchronous message obtaining manner corresponding to the trigger instruction is not mutually exclusive from the real-time interaction, the asynchronous message obtaining module 1308 is further configured to obtain an asynchronous message for playing in a virtual session scene according to the trigger instruction during the real-time interaction, and notify the sending module 1310 to send the obtained asynchronous message after obtaining the asynchronous message.
In an embodiment, when the asynchronous message acquisition mode corresponding to the trigger instruction is not mutually exclusive with the real-time interaction, the asynchronous message acquisition module 1308 is further configured to acquire an identifier of the preset interaction action according to the trigger instruction; generating an asynchronous message according to the identifier of the preset interaction action; and the asynchronous message is used for controlling corresponding virtual session members in the virtual session scene so as to implement preset interaction actions.
In an embodiment, when the asynchronous message obtaining mode corresponding to the trigger instruction is not mutually exclusive with the real-time interaction, the asynchronous message obtaining module 1308 is further configured to obtain a current user identifier; acquiring a target user identifier aimed at by a preset interaction action; generating an asynchronous message according to the current user identifier, the acquired target user identifier and the identifier of the preset interactive action; and the asynchronous message is used for controlling the virtual session member corresponding to the current user identifier in the virtual session scene so as to implement the preset interaction action aiming at the virtual session member corresponding to the target user identifier.
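One possible shape for such an asynchronous message, with assumed field and identifier names:

```python
# Hypothetical sketch: the asynchronous message carries the current user
# identifier (the actor), the target user identifier, and the identifier of
# the preset interaction action, so the receiver can render, for example,
# member "user-1" performing a wave directed at member "user-2".
def build_action_message(current_uid: str, target_uid: str, action_id: str) -> dict:
    return {
        "from": current_uid,   # virtual session member performing the action
        "to": target_uid,      # member the action is directed at
        "action": action_id,   # identifier of the preset interaction action
    }

msg = build_action_message("user-1", "user-2", "wave")
print(msg)  # {'from': 'user-1', 'to': 'user-2', 'action': 'wave'}
```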
In one embodiment, the real-time interaction module 1302 is further configured to capture the head image in real-time; identifying expression characteristics in the head image acquired in real time to obtain real-time expression data; and sending real-time expression data in real time, wherein the sent real-time expression data is used for controlling a corresponding virtual session member in a virtual session scene to trigger an expression action corresponding to the real-time expression data in real time.
As shown in fig. 14, in one embodiment, the apparatus 1300 further comprises:
a virtual session context adding module 1301, configured to obtain a current user identifier; acquiring a multi-user session identifier corresponding to a current user identifier; and adding the current user identification into a member list of the virtual session scene identified by the multi-person session identification.
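A sketch of module 1301 under assumed data shapes (a user-to-session map and a session member list):

```python
# Hypothetical sketch of virtual session scene adding module 1301: look up
# the multi-person session identifier for the current user identifier, then
# add the current user identifier to that scene's member list.
sessions = {"user-1": "group-42"}        # user id -> multi-person session id
member_lists = {"group-42": ["user-2"]}  # session id -> member list

def join_scene(current_uid: str) -> list:
    session_id = sessions[current_uid]        # obtain multi-person session id
    members = member_lists[session_id]
    if current_uid not in members:            # add only once
        members.append(current_uid)
    return members

print(join_scene("user-1"))  # ['user-2', 'user-1']
```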
As shown in fig. 15, in one embodiment, another interactive data processing apparatus 1500 is provided, the apparatus 1500 comprising: a real-time interaction module 1502, a receiving module 1504, a mutual exclusion processing module 1506, and an asynchronous message play module 1508, wherein:
a real-time interaction module 1502, configured to perform real-time interaction through virtual session members in a virtual session scene;
a receiving module 1504 for receiving an asynchronous message.
The mutual exclusion processing module 1506 is configured to interrupt the real-time interaction mutually exclusive to the playing mode when the playing mode of the asynchronous message is mutually exclusive to the real-time interaction.
An asynchronous message playing module 1508, configured to play an asynchronous message in a virtual session scenario;
the mutual exclusion processing module 1506 is further configured to resume the interrupted real-time interaction after the asynchronous message is played.
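The receiving-side flow of modules 1502–1508 can be sketched similarly; names and the boolean exclusivity flag are assumptions:

```python
# Hypothetical receiver-side sketch of apparatus 1500: if the asynchronous
# message's playing mode is mutually exclusive with real-time interaction,
# interrupt first, play the message, then resume after playback finishes.
def on_asynchronous_message(play_mode_exclusive: bool) -> list:
    log = []
    realtime_active = True
    if play_mode_exclusive and realtime_active:
        realtime_active = False      # mutual exclusion module 1506 interrupts
        log.append("interrupt")
    log.append("play")               # asynchronous message playing module 1508
    if not realtime_active:
        realtime_active = True       # module 1506 resumes after playback
        log.append("resume")
    return log

print(on_asynchronous_message(True))   # ['interrupt', 'play', 'resume']
print(on_asynchronous_message(False))  # ['play']
```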
In one embodiment, the asynchronous message playing module 1508 is further configured to obtain corresponding expression data according to the asynchronous message; determine the virtual session member corresponding to the asynchronous message in the virtual session scene; and control the determined virtual session member to trigger the expression action represented by the expression data.
In one embodiment, the asynchronous message playing module 1508 is further configured to obtain, according to the asynchronous message, corresponding expression data frames with a time sequence, where each expression data frame includes an expression feature value corresponding to an expression type; and, in the virtual session scene, control the determined virtual session member to trigger the expression actions corresponding to the expression feature values in each expression data frame according to the time sequence.
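Time-sequenced playback of the expression data frames might look like the following sketch, assuming each frame is a (timestamp, {expression type: feature value}) pair:

```python
# Sketch of time-sequenced playback (assumed frame format): frames are
# applied to the virtual session member in timestamp order, and each
# feature value triggers the corresponding expression action.
def play_frames(frames: list) -> list:
    triggered = []
    for ts, frame in sorted(frames):                  # play in time sequence
        for expr_type, value in frame.items():
            triggered.append((ts, expr_type, value))  # trigger the action
    return triggered

frames = [(2, {"open_mouth": 10.0}), (1, {"blink": 1.0})]
print(play_frames(frames))  # [(1, 'blink', 1.0), (2, 'open_mouth', 10.0)]
```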
In one embodiment, the asynchronous message playing module 1508 is further configured to extract an expression data download address from the asynchronous message, and download the expression data according to the expression data download address.
In one embodiment, the asynchronous message playing module 1508 is also configured to extract a voice data download address in the asynchronous message; downloading voice data according to the voice data downloading address; in the virtual session scenario, voice data is played.
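A sketch of the download-address scheme, with hypothetical message fields and an in-memory stand-in for the HTTP download:

```python
# Hypothetical sketch: the asynchronous message carries download addresses
# for the voice data and expression data rather than the payloads
# themselves; the receiver extracts the addresses and fetches the data.
def extract_addresses(message: dict) -> tuple:
    return message["voice_url"], message["expression_url"]

def fetch(url: str, store: dict) -> bytes:
    return store[url]   # stand-in for an actual HTTP download

store = {"cdn/v.bin": b"voice", "cdn/e.bin": b"expr"}
msg = {"voice_url": "cdn/v.bin", "expression_url": "cdn/e.bin"}
v_url, e_url = extract_addresses(msg)
print(fetch(v_url, store), fetch(e_url, store))  # b'voice' b'expr'
```

Keeping only addresses in the message keeps the asynchronous message small, and lets the voice and expression payloads be downloaded lazily when playback actually occurs.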
In one embodiment, the asynchronous message playing module 1508 is further configured to obtain a first user identification for sending the asynchronous message; extracting the identifier of the preset interaction action in the asynchronous message; and in the virtual session scene, controlling the virtual session member corresponding to the first user identifier to trigger a preset interaction action.
In one embodiment, the asynchronous message playing module 1508 is further configured to extract a second user identifier from the asynchronous message; and, in the virtual session scene, control the virtual session member corresponding to the first user identifier to implement the preset interaction action directed at the virtual session member corresponding to the second user identifier.
In one embodiment, the asynchronous message playing module 1508 is further configured to play the asynchronous message in the virtual session context when the playing mode of the asynchronous message is not mutually exclusive with the real-time interaction.
In one embodiment, when the playing manner of the asynchronous message is not mutually exclusive from the real-time interaction, the asynchronous message playing module 1508 is further configured to obtain corresponding text and/or pictures according to the asynchronous message, and to display, in the virtual session scene, the text and/or pictures in association with the corresponding virtual session member.
In one embodiment, the real-time interaction module 1502 is further configured to capture head images in real-time; identifying expression characteristics in the head image acquired in real time to obtain real-time expression data; and sending real-time expression data in real time, wherein the sent real-time expression data is used for controlling a corresponding virtual session member in a virtual session scene to trigger an expression action corresponding to the real-time expression data in real time.
A computer device comprising a memory and a processor, the memory having stored therein computer-readable instructions that, when executed by the processor, cause the processor to perform the steps of: real-time interaction is carried out through virtual session members in a virtual session scene; acquiring a trigger instruction of an asynchronous message; when the asynchronous message acquisition mode corresponding to the trigger instruction is mutually exclusive with the real-time interaction, the real-time interaction mutually exclusive with the asynchronous message acquisition mode is interrupted; acquiring asynchronous messages for playing in a virtual session scene according to the triggering instruction; sending the obtained asynchronous message; resuming the interrupted real-time interaction.
In one embodiment, the obtaining of an asynchronous message for playing in a virtual session context according to a triggering instruction executed by a processor comprises: acquiring a head image according to a trigger instruction; identifying expression characteristics in the head image to obtain expression data for controlling virtual session members in a virtual session scene to trigger expression actions; and generating an asynchronous message for playing in the virtual conversation scene according to the expression data.
In one embodiment, the identifying expressive features in the head image performed by the processor to obtain the expressive data for controlling the triggering of an expressive action by a virtual session member in a virtual session scene comprises: identifying expression characteristics in the head images acquired according to the time sequence to obtain expression data frames with the time sequence, wherein each expression data frame comprises an expression characteristic value corresponding to an expression type; the generating of the asynchronous message for playing in the virtual conversation scene according to the emotion data executed by the processor comprises: and generating an asynchronous message according to the expression data frames with the time sequence, wherein the asynchronous message is used for controlling corresponding virtual session members in the virtual session scene, and triggering expression actions corresponding to the expression characteristic values in each expression data frame according to the time sequence.
In one embodiment, the obtaining of the asynchronous message for playing in the virtual session scene according to the triggering instruction executed by the processor further comprises: voice data is recorded. The generating of the asynchronous message for playing in the virtual conversation scene according to the emotion data executed by the processor comprises: and generating an asynchronous message for playing in the virtual conversation scene according to the voice data and the expression data.
In one embodiment, the processor executed to generate an asynchronous message for playing in a virtual conversation scene from the speech data and the emotion data includes: uploading voice data and expression data; acquiring a voice data downloading address of voice data and an expression data downloading address of expression data; and generating an asynchronous message for playing in the virtual session scene according to the voice data downloading address and the expression data downloading address.
In one embodiment, after obtaining the triggering instruction for the asynchronous message, the computer readable instructions further cause the processor to perform the steps of: and when the asynchronous message acquisition mode corresponding to the trigger instruction is not mutually exclusive with the real-time interaction, executing the step of acquiring the asynchronous message for playing in the virtual session scene according to the trigger instruction and the step of sending the acquired asynchronous message during the real-time interaction.
In one embodiment, when an asynchronous message acquisition manner corresponding to a trigger instruction and a real-time interaction are mutually exclusive, which is executed by a processor, acquiring an asynchronous message for playing in a virtual session scene according to the trigger instruction, includes: acquiring an identifier of a preset interaction action according to a trigger instruction; generating an asynchronous message according to the identifier of the preset interaction action; and the asynchronous message is used for controlling corresponding virtual session members in the virtual session scene so as to implement preset interaction actions.
In one embodiment, when the asynchronous message acquisition mode corresponding to the trigger instruction and the real-time interaction are mutually exclusive, the acquiring, according to the trigger instruction, the asynchronous message for playing in the virtual session scene further includes: acquiring a current user identifier; acquiring a target user identifier aimed at by a preset interaction action; the processor generates an asynchronous message according to the identifier of the preset interaction action, and the asynchronous message comprises the following steps: generating an asynchronous message according to the current user identifier, the acquired target user identifier and the identifier of the preset interactive action; and the asynchronous message is used for controlling the virtual session member corresponding to the current user identifier in the virtual session scene so as to implement the preset interaction action aiming at the virtual session member corresponding to the target user identifier.
In one embodiment, the real-time interaction by the virtual session members in the virtual session scenario performed by the processor comprises: acquiring a head image in real time; identifying expression characteristics in the head image acquired in real time to obtain real-time expression data; and sending real-time expression data in real time, wherein the sent real-time expression data is used for controlling a corresponding virtual session member in a virtual session scene to trigger an expression action corresponding to the real-time expression data in real time.
In one embodiment, prior to real-time interaction by a virtual session member in a virtual session scenario, the computer readable instructions further cause the processor to perform the steps of: acquiring a current user identifier; acquiring a multi-user session identifier corresponding to a current user identifier; and adding the current user identification into a member list of the virtual session scene identified by the multi-person session identification.
In one embodiment, a storage medium is provided having computer-readable instructions stored thereon which, when executed by one or more processors, cause the one or more processors to perform the steps of: real-time interaction is carried out through virtual session members in a virtual session scene; acquiring a trigger instruction of an asynchronous message; when the asynchronous message acquisition mode corresponding to the trigger instruction is mutually exclusive with the real-time interaction, the real-time interaction mutually exclusive with the asynchronous message acquisition mode is interrupted; acquiring asynchronous messages for playing in a virtual session scene according to the triggering instruction; sending the obtained asynchronous message; resuming the interrupted real-time interaction.
In one embodiment, the obtaining of an asynchronous message for playing in a virtual session context according to a triggering instruction executed by a processor comprises: acquiring a head image according to a trigger instruction; identifying expression characteristics in the head image to obtain expression data for controlling virtual session members in a virtual session scene to trigger expression actions; and generating an asynchronous message for playing in the virtual conversation scene according to the expression data.
In one embodiment, the identifying expressive features in the head image performed by the processor to obtain the expressive data for controlling the triggering of an expressive action by a virtual session member in a virtual session scene comprises: identifying expression characteristics in the head images acquired according to the time sequence to obtain expression data frames with the time sequence, wherein each expression data frame comprises an expression characteristic value corresponding to an expression type; the generating of the asynchronous message for playing in the virtual conversation scene according to the emotion data executed by the processor comprises: and generating an asynchronous message according to the expression data frames with the time sequence, wherein the asynchronous message is used for controlling corresponding virtual session members in the virtual session scene, and triggering expression actions corresponding to the expression characteristic values in each expression data frame according to the time sequence.
In one embodiment, the obtaining of the asynchronous message for playing in the virtual session scene according to the triggering instruction executed by the processor further comprises: voice data is recorded. The generating of the asynchronous message for playing in the virtual conversation scene according to the emotion data executed by the processor comprises: and generating an asynchronous message for playing in the virtual conversation scene according to the voice data and the expression data.
In one embodiment, the processor executed to generate an asynchronous message for playing in a virtual conversation scene from the speech data and the emotion data includes: uploading voice data and expression data; acquiring a voice data downloading address of voice data and an expression data downloading address of expression data; and generating an asynchronous message for playing in the virtual session scene according to the voice data downloading address and the expression data downloading address.
In one embodiment, after obtaining the triggering instruction for the asynchronous message, the computer readable instructions further cause the processor to perform the steps of: and when the asynchronous message acquisition mode corresponding to the trigger instruction is not mutually exclusive with the real-time interaction, executing the step of acquiring the asynchronous message for playing in the virtual session scene according to the trigger instruction and the step of sending the acquired asynchronous message during the real-time interaction.
In one embodiment, when an asynchronous message acquisition manner corresponding to a trigger instruction and a real-time interaction are mutually exclusive, which is executed by a processor, acquiring an asynchronous message for playing in a virtual session scene according to the trigger instruction, includes: acquiring an identifier of a preset interaction action according to a trigger instruction; generating an asynchronous message according to the identifier of the preset interaction action; and the asynchronous message is used for controlling corresponding virtual session members in the virtual session scene so as to implement preset interaction actions.
In one embodiment, when the asynchronous message acquisition mode corresponding to the trigger instruction and the real-time interaction are mutually exclusive, the acquiring, according to the trigger instruction, the asynchronous message for playing in the virtual session scene further includes: acquiring a current user identifier; acquiring a target user identifier aimed at by a preset interaction action; the processor generates an asynchronous message according to the identifier of the preset interaction action, and the asynchronous message comprises the following steps: generating an asynchronous message according to the current user identifier, the acquired target user identifier and the identifier of the preset interactive action; and the asynchronous message is used for controlling the virtual session member corresponding to the current user identifier in the virtual session scene so as to implement the preset interaction action aiming at the virtual session member corresponding to the target user identifier.
In one embodiment, the real-time interaction by the virtual session members in the virtual session scenario performed by the processor comprises: acquiring a head image in real time; identifying expression characteristics in the head image acquired in real time to obtain real-time expression data; and sending real-time expression data in real time, wherein the sent real-time expression data is used for controlling a corresponding virtual session member in a virtual session scene to trigger an expression action corresponding to the real-time expression data in real time.
In one embodiment, prior to real-time interaction by a virtual session member in a virtual session scenario, the computer readable instructions further cause the processor to perform the steps of: acquiring a current user identifier; acquiring a multi-user session identifier corresponding to a current user identifier; and adding the current user identification into a member list of the virtual session scene identified by the multi-person session identification.
In one embodiment, another computer device is provided, comprising a memory and a processor, the memory having stored therein computer-readable instructions that, when executed by the processor, cause the processor to perform the steps of: real-time interaction is carried out through virtual session members in a virtual session scene; receiving an asynchronous message; when the playing mode of the asynchronous message is mutually exclusive with the real-time interaction, the real-time interaction mutually exclusive with the playing mode is interrupted; playing the asynchronous message in the virtual session scene; and after the asynchronous message is played, recovering the interrupted real-time interaction.
In one embodiment, the playing of asynchronous messages in a virtual session scene performed by a processor comprises: acquiring corresponding expression data according to the asynchronous message; determining a virtual session member corresponding to the asynchronous message in a virtual session scene; and controlling the determined virtual conversation member to trigger the expressive action represented by the expressive data.
In one embodiment, the obtaining of the corresponding emotion data according to the asynchronous message executed by the processor includes: and acquiring corresponding expression data frames with time sequence according to the asynchronous message, wherein each expression data frame comprises an expression characteristic value corresponding to the expression type. The processor-executed control of the determined virtual session member triggers an emotive action represented by the emotive data, including: and in the virtual conversation scene, controlling the determined virtual conversation members, and triggering expression actions corresponding to the expression characteristic values in each expression data frame according to a time sequence.
In one embodiment, the obtaining of the corresponding emotion data according to the asynchronous message executed by the processor includes: extracting an expression data downloading address in the asynchronous message; and downloading the expression data according to the expression data downloading address.
In one embodiment, the computer readable instructions further cause the processor to perform the steps of: extracting a voice data download address in the asynchronous message; downloading voice data according to the voice data downloading address; in the virtual session scenario, voice data is played.
In one embodiment, playing the asynchronous message in the virtual session scene performed by the processor comprises: acquiring a first user identifier for sending an asynchronous message; extracting the identifier of the preset interaction action in the asynchronous message; and in the virtual session scene, controlling the virtual session member corresponding to the first user identifier to trigger a preset interaction action.
In one embodiment, the playing of the asynchronous message in the virtual session scene performed by the processor further comprises: from the asynchronous message, a second subscriber identity is extracted. The method for controlling the virtual session member corresponding to the first user identifier to trigger the preset interaction action in the virtual session scene executed by the processor includes: and in the virtual session scene, controlling the virtual session member corresponding to the first user identifier in the virtual session scene so as to implement a preset interaction action aiming at the virtual session member corresponding to the second user identifier.
In one embodiment, the computer readable instructions further cause the processor to perform the steps of: and when the playing mode of the asynchronous message is not mutually exclusive with the real-time interaction, executing the step of playing the asynchronous message in the virtual session scene during the real-time interaction.
In one embodiment, when the playing mode of the asynchronous message is mutually exclusive from the real-time interaction, the playing of the asynchronous message in the virtual session scene executed by the processor includes: acquiring corresponding characters and/or pictures according to the asynchronous messages; and in the virtual conversation scene, displaying the characters and/or pictures and corresponding virtual conversation members in an associated manner.
In one embodiment, the real-time interaction by the virtual session members in the virtual session scenario performed by the processor comprises: acquiring a head image in real time; identifying expression characteristics in the head image acquired in real time to obtain real-time expression data; and sending real-time expression data in real time, wherein the sent real-time expression data is used for controlling a corresponding virtual session member in a virtual session scene to trigger an expression action corresponding to the real-time expression data in real time.
In one embodiment, another storage medium is provided that stores computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of: performing real-time interaction through virtual session members in a virtual session scene; receiving an asynchronous message; when the playing mode of the asynchronous message is mutually exclusive with the real-time interaction, interrupting the real-time interaction mutually exclusive with the playing mode; playing the asynchronous message in the virtual session scene; and after the asynchronous message is played, resuming the interrupted real-time interaction.
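The receive-side interrupt/play/resume flow above can be sketched as follows. Class, method, and field names (`VirtualSessionClient`, `needs_member`, `needs_speaker`) are illustrative assumptions, not taken from the patent; the exclusivity test mirrors the rule that playback conflicts when it needs an object already used by the real-time interaction.

```python
class VirtualSessionClient:
    """Receive-side sketch: if an asynchronous message's playing mode is
    mutually exclusive with the ongoing real-time interaction, interrupt
    the interaction, play the message, then resume."""

    def __init__(self):
        self.realtime_active = True
        self.events = []          # observable trace of what happened, in order

    def is_mutually_exclusive(self, message):
        # Exclusive when playback needs an object already used by the
        # real-time interaction: the member itself or the playback device.
        return message.get("needs_member", False) or message.get("needs_speaker", False)

    def play(self, message):
        self.events.append(("play", message["id"]))

    def receive(self, message):
        if self.is_mutually_exclusive(message):
            self.realtime_active = False       # interrupt the conflicting interaction
            self.events.append("interrupted")
            self.play(message)
            self.realtime_active = True        # resume once playback completes
            self.events.append("resumed")
        else:
            self.play(message)                 # non-exclusive: play during interaction

client = VirtualSessionClient()
client.receive({"id": 1, "needs_member": True})   # exclusive message
client.receive({"id": 2})                          # non-exclusive message
```

The trace makes the contract visible: an exclusive message is bracketed by interrupt and resume events, while a non-exclusive one plays through without touching the real-time state.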
In one embodiment, the playing of asynchronous messages in a virtual session scene performed by a processor comprises: acquiring corresponding expression data according to the asynchronous message; determining, in the virtual session scene, a virtual session member corresponding to the asynchronous message; and controlling the determined virtual session member to trigger the expression action represented by the expression data.
In one embodiment, the obtaining of the corresponding expression data according to the asynchronous message executed by the processor includes: acquiring, according to the asynchronous message, corresponding time-sequenced expression data frames, wherein each expression data frame comprises an expression characteristic value corresponding to an expression type. The processor-executed controlling of the determined virtual session member to trigger the expression action represented by the expression data includes: in the virtual session scene, controlling the determined virtual session member to trigger, in time-sequence order, the expression actions corresponding to the expression characteristic values in each expression data frame.
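Time-sequenced playback of expression data frames can be sketched as below. The frame layout (a `timestamp` plus a `features` map from expression type to characteristic value) and the `trigger` callback standing in for the member-animation interface are assumptions for illustration, since the patent does not fix a concrete format.

```python
def play_expression_frames(frames, trigger):
    """Apply each expression data frame to the determined virtual session
    member in time-sequence order; within a frame, every expression
    characteristic value is triggered via the injected `trigger` callback."""
    for frame in sorted(frames, key=lambda f: f["timestamp"]):
        # Sort expression types for a deterministic within-frame order.
        for expression_type, value in sorted(frame["features"].items()):
            trigger(frame["timestamp"], expression_type, value)

# Frames arrive out of order; playback re-establishes the time sequence.
calls = []
play_expression_frames(
    [{"timestamp": 2.0, "features": {"smile": 0.2}},
     {"timestamp": 1.0, "features": {"blink": 1.0, "smile": 0.8}}],
    lambda *args: calls.append(args))
```

A real player would also pace the triggers by the timestamp deltas; the sketch only preserves the ordering, which is the property the embodiment requires.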
In one embodiment, the obtaining of the corresponding emotion data according to the asynchronous message executed by the processor includes: extracting an expression data downloading address in the asynchronous message; and downloading the expression data according to the expression data downloading address.
In one embodiment, the computer readable instructions further cause the processor to perform the steps of: extracting a voice data download address in the asynchronous message; downloading voice data according to the voice data downloading address; in the virtual session scenario, voice data is played.
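The two download-address embodiments above (expression data and voice data) can be sketched together. The message field names (`expression_url`, `voice_url`) are assumed for illustration, and the fetcher is injected rather than making real HTTP requests, so the sketch stays self-contained.

```python
def play_from_download_addresses(message, download, apply_expression, play_voice):
    """Extract the expression-data and voice-data download addresses from
    the asynchronous message, download each, then present both in the
    virtual session scene. `download` is an injected fetcher (a real
    client might use an HTTP library here)."""
    expression_data = download(message["expression_url"])
    voice_data = download(message["voice_url"])
    apply_expression(expression_data)   # drive the member's expression actions
    play_voice(voice_data)              # play the voice in the session scene

# Stub "CDN": a dict lookup plays the role of the network fetch.
store = {"https://cdn.example/e1": "expr-bytes",
         "https://cdn.example/v1": "voice-bytes"}
applied, played = [], []
play_from_download_addresses(
    {"expression_url": "https://cdn.example/e1",
     "voice_url": "https://cdn.example/v1"},
    store.__getitem__, applied.append, played.append)
```

Carrying download addresses instead of the payload keeps the asynchronous message small, which matches the pattern of extracting an address and then downloading described in these embodiments.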
In one embodiment, playing the asynchronous message in the virtual session scene performed by the processor comprises: acquiring a first user identifier for sending an asynchronous message; extracting the identifier of the preset interaction action in the asynchronous message; and in the virtual session scene, controlling the virtual session member corresponding to the first user identifier to trigger a preset interaction action.
In one embodiment, the playing of the asynchronous message in the virtual session scene performed by the processor further comprises: extracting a second user identifier from the asynchronous message. The controlling of the virtual session member corresponding to the first user identifier to trigger the preset interaction action in the virtual session scene executed by the processor includes: in the virtual session scene, controlling the virtual session member corresponding to the first user identifier to implement the preset interaction action aimed at the virtual session member corresponding to the second user identifier.
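The directed preset-interaction embodiment above can be sketched as follows: the member of the sending (first) user identifier performs the preset action aimed at the member of the second user identifier. The `Member` class, its `perform` method, and the message field names are hypothetical stand-ins.

```python
class Member:
    """Minimal stand-in for a virtual session member."""
    def __init__(self, name):
        self.name = name
        self.performed = []   # (action_id, target_name) pairs, for inspection

    def perform(self, action_id, target):
        self.performed.append((action_id, target.name))

def play_preset_interaction(message, members):
    """Resolve both user identifiers carried by the asynchronous message
    and have the first member implement the preset interaction action
    aimed at the second member."""
    actor = members[message["first_user_id"]]
    target = members[message["second_user_id"]]
    actor.perform(message["action_id"], target)

members = {"u1": Member("alice"), "u2": Member("bob")}
play_preset_interaction(
    {"first_user_id": "u1", "second_user_id": "u2", "action_id": "wave"},
    members)
```

Keeping only an action identifier in the message (rather than animation data) is what lets this playing mode stay lightweight: the receiver resolves the identifier against its local set of preset actions.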
In one embodiment, the computer readable instructions further cause the processor to perform the steps of: and when the playing mode of the asynchronous message is not mutually exclusive with the real-time interaction, executing the step of playing the asynchronous message in the virtual session scene during the real-time interaction.
In one embodiment, when the playing mode of the asynchronous message is not mutually exclusive with the real-time interaction, the playing of the asynchronous message in the virtual session scene executed by the processor includes: acquiring corresponding text and/or pictures according to the asynchronous message; and, in the virtual session scene, displaying the text and/or pictures in association with the corresponding virtual session members.
In one embodiment, the real-time interaction performed by the processor through the virtual session members in the virtual session scene comprises: acquiring head images in real time; identifying expression features in the head images acquired in real time to obtain real-time expression data; and sending the real-time expression data in real time, the sent real-time expression data being used to control the corresponding virtual session member in the virtual session scene to trigger, in real time, the expression action corresponding to the real-time expression data.
Those skilled in the art will understand that all or part of the processes of the methods of the above embodiments can be implemented by a computer program; the program can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disc, or a read-only memory (ROM), or may be a random access memory (RAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above examples express only several embodiments of the present invention, and their description is specific and detailed, but should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the inventive concept, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (15)

1. A method of interactive data processing, the method comprising:
performing real-time interaction through virtual session members in a virtual session scene; the real-time interaction is an interactive communication mode in which the information receiver and the information sender that require interactive communication are both in an online state, and the information sender synchronously waits for the response of the information receiver after sending information to the information receiver; the virtual session scene is a session scene provided for the virtual session members; and a member that joins the virtual session scene is displayed using the image of a virtual session member;
acquiring a trigger instruction of an asynchronous message;
when an asynchronous message acquisition mode corresponding to the trigger instruction needs to use an acquisition device for real-time interactive data, judging that the asynchronous message acquisition mode and the real-time interaction are mutually exclusive, and
interrupting the real-time interaction that is mutually exclusive with the asynchronous message acquisition mode;
acquiring data based on the acquisition device according to the trigger instruction to generate an asynchronous message, the asynchronous message being used for playing in the virtual session scene, wherein playing refers to presenting the asynchronous message in the virtual session scene;
sending the asynchronous message;
resuming the interrupted real-time interaction.
2. The method of claim 1, wherein the acquiring data based on the acquisition device according to the trigger instruction to generate an asynchronous message comprises:
acquiring a head image based on an image acquisition device according to the trigger instruction;
identifying expression characteristics in the head image to obtain expression data for controlling a virtual session member in the virtual session scene to trigger an expression action;
and generating an asynchronous message for playing in the virtual session scene according to the expression data.
3. The method of claim 2, wherein the recognizing expressive features in the head image to obtain expressive data for controlling a virtual conversation member in the virtual conversation scene to trigger an expressive action comprises:
identifying expression characteristics in the head images acquired according to the time sequence to obtain expression data frames with the time sequence, wherein each expression data frame comprises an expression characteristic value corresponding to an expression type;
the generating of the asynchronous message for playing in the virtual session scene according to the expression data includes:
and generating an asynchronous message according to the expression data frames with the time sequence, wherein the asynchronous message is used for controlling corresponding virtual session members in the virtual session scene, and triggering expression actions corresponding to the expression characteristic values in each expression data frame according to the time sequence.
4. The method of claim 2, wherein the acquiring data based on the acquisition device according to the trigger instruction to generate an asynchronous message further comprises:
recording voice data based on a voice acquisition device;
the generating of the asynchronous message for playing in the virtual session scene according to the expression data includes:
and generating an asynchronous message for playing in the virtual session scene according to the voice data and the expression data.
5. The method of claim 1, wherein after the acquiring of the trigger instruction of the asynchronous message, the method further comprises:
when the asynchronous message acquisition mode corresponding to the trigger instruction is not mutually exclusive with the real-time interaction,
performing, during the real-time interaction, the step of acquiring the asynchronous message for playing in the virtual session scene according to the trigger instruction, and performing the step of sending the asynchronous message.
6. The method according to claim 5, wherein when the asynchronous message acquisition mode corresponding to the trigger instruction is not mutually exclusive with the real-time interaction, the acquiring of the asynchronous message for playing in the virtual session scene according to the trigger instruction comprises:
acquiring an identifier of a preset interaction action according to the trigger instruction;
generating an asynchronous message according to the identifier of the preset interaction action; the asynchronous message is used for controlling corresponding virtual session members in the virtual session scene so as to implement the preset interaction action.
7. A method of interactive data processing, the method comprising:
performing real-time interaction through virtual session members in a virtual session scene; the real-time interaction is an interactive communication mode in which the information receiver and the information sender that require interactive communication are both in an online state, and the information sender synchronously waits for the response of the information receiver after sending information to the information receiver; the virtual session scene is a session scene provided for the virtual session members; and a member that joins the virtual session scene is displayed using the image of a virtual session member;
receiving an asynchronous message, wherein, if an asynchronous message acquisition mode corresponding to the asynchronous message needs to use an acquisition device for real-time interactive data, the asynchronous message is generated based on data acquired by the acquisition device, the information sender having interrupted the real-time interaction mutually exclusive with the asynchronous message acquisition mode after judging that the asynchronous message acquisition mode and the real-time interaction are mutually exclusive;
when a playing mode of the asynchronous message needs to use an object for realizing the real-time interaction, judging that the playing mode of the asynchronous message and the real-time interaction are mutually exclusive, and
interrupting the real-time interaction that is mutually exclusive with the playing mode; the object for realizing the real-time interaction comprises a playing device for real-time interactive data and/or a virtual session member;
playing the asynchronous message in the virtual session scene, wherein playing the asynchronous message refers to presenting the asynchronous message in the virtual session scene;
and after the asynchronous message is played, resuming the interrupted real-time interaction.
8. The method of claim 7, wherein playing the asynchronous message in the virtual session context comprises:
acquiring corresponding expression data according to the asynchronous message;
determining a virtual session member corresponding to the asynchronous message in the virtual session scene;
and controlling the determined virtual session member to trigger the expression action represented by the expression data.
9. The method of claim 8, wherein the obtaining the corresponding emotion data according to the asynchronous message comprises:
acquiring, according to the asynchronous message, corresponding time-sequenced expression data frames, wherein each expression data frame comprises an expression characteristic value corresponding to an expression type;
the controlling of the determined virtual session member to trigger the expression action represented by the expression data comprises:
in the virtual session scene, controlling the determined virtual session member to trigger, in time-sequence order, the expression actions corresponding to the expression characteristic values in each expression data frame.
10. The method of claim 7, wherein playing the asynchronous message in the virtual session context comprises:
acquiring a first user identifier for sending the asynchronous message;
extracting the identifier of the preset interaction action in the asynchronous message;
and in the virtual session scene, controlling the virtual session member corresponding to the first user identifier to trigger the preset interaction action.
11. The method of claim 10, wherein playing the asynchronous message in the virtual session context further comprises:
extracting a second user identification from the asynchronous message;
in the virtual session scene, controlling the virtual session member corresponding to the first user identifier to trigger the preset interaction action, including:
and in the virtual session scene, controlling the virtual session member corresponding to the first user identifier to implement the preset interaction action aimed at the virtual session member corresponding to the second user identifier.
12. An interactive data processing apparatus, characterized in that the apparatus comprises:
the real-time interaction module is configured to perform real-time interaction through the virtual session members in the virtual session scene; the real-time interaction is an interactive communication mode in which the information receiver and the information sender that require interactive communication are both in an online state, and the information sender synchronously waits for the response of the information receiver after sending information to the information receiver; the virtual session scene is a session scene provided for the virtual session members; and a member that joins the virtual session scene is displayed using the image of a virtual session member;
the instruction acquisition module is used for acquiring a trigger instruction of the asynchronous message;
a mutual exclusion processing module, configured to determine that an asynchronous message acquisition mode corresponding to the trigger instruction is mutually exclusive with the real-time interaction if the asynchronous message acquisition mode needs to use an acquisition device for real-time interactive data, and interrupt the real-time interaction mutually exclusive with the asynchronous message acquisition mode;
the asynchronous message acquisition module is configured to acquire data based on the acquisition device according to the trigger instruction to generate an asynchronous message, the asynchronous message being used for playing in the virtual session scene, wherein playing refers to presenting the asynchronous message in the virtual session scene;
the sending module is used for sending the acquired asynchronous message;
the mutual exclusion processing module is further configured to resume the interrupted real-time interaction.
13. An interactive data processing apparatus, characterized in that the apparatus comprises:
the real-time interaction module is configured to perform real-time interaction through the virtual session members in the virtual session scene; the real-time interaction is an interactive communication mode in which the information receiver and the information sender that require interactive communication are both in an online state, and the information sender synchronously waits for the response of the information receiver after sending information to the information receiver; the virtual session scene is a session scene provided for the virtual session members; and a member that joins the virtual session scene is displayed using the image of a virtual session member;
a receiving module configured to receive an asynchronous message, wherein, if an asynchronous message acquisition mode corresponding to the asynchronous message needs to use an acquisition device for real-time interactive data, the asynchronous message is generated based on data acquired by the acquisition device, the information sender having interrupted the real-time interaction mutually exclusive with the asynchronous message acquisition mode after judging that the asynchronous message acquisition mode and the real-time interaction are mutually exclusive;
a mutual exclusion processing module configured to, when the playing mode of the asynchronous message needs to use an object for realizing the real-time interaction, judge that the playing mode of the asynchronous message and the real-time interaction are mutually exclusive, and interrupt the real-time interaction mutually exclusive with the playing mode; the object for realizing the real-time interaction comprises a playing device for real-time interactive data and/or a virtual session member;
the asynchronous message playing module is configured to play the asynchronous message in the virtual session scene, wherein playing the asynchronous message refers to presenting the asynchronous message in the virtual session scene;
and the mutual exclusion processing module is further configured to resume the interrupted real-time interaction after the asynchronous message is played.
14. A computer device comprising a memory and a processor, the memory having stored therein computer-readable instructions that, when executed by the processor, cause the processor to perform the steps of the method of any of claims 1 to 11.
15. A storage medium storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the method of any one of claims 1 to 11.
CN201710438781.6A 2017-06-12 2017-06-12 Interactive data processing method and device, computer equipment and storage medium Active CN109039851B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710438781.6A CN109039851B (en) 2017-06-12 2017-06-12 Interactive data processing method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710438781.6A CN109039851B (en) 2017-06-12 2017-06-12 Interactive data processing method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109039851A CN109039851A (en) 2018-12-18
CN109039851B true CN109039851B (en) 2020-12-29

Family

ID=64629230

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710438781.6A Active CN109039851B (en) 2017-06-12 2017-06-12 Interactive data processing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109039851B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110401810B (en) * 2019-06-28 2021-12-21 广东虚拟现实科技有限公司 Virtual picture processing method, device and system, electronic equipment and storage medium
CN111515970B (en) * 2020-04-27 2023-07-14 腾讯科技(深圳)有限公司 Interaction method, mimicry robot and related device
CN115878648B (en) * 2023-02-22 2023-05-05 成都成电医星数字健康软件有限公司 Automatic adjustment method and device for data logic time sequence, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103237237A (en) * 2013-03-28 2013-08-07 四三九九网络股份有限公司 Method and device for video playing picture book
CN104683402A (en) * 2013-11-29 2015-06-03 华为终端有限公司 Communication method and user equipment
US9401987B2 (en) * 2013-05-07 2016-07-26 Yellowpages.Com Llc Systems and methods to provide connections to users in different geographic regions
CN106249607A (en) * 2016-07-28 2016-12-21 桂林电子科技大学 Virtual Intelligent household analogue system and method
CN106422324A (en) * 2015-08-11 2017-02-22 腾讯科技(深圳)有限公司 Multi-terminal real-time communication method, device and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102571631B (en) * 2011-12-23 2016-08-31 上海量明科技发展有限公司 The sending method of motion images information, terminal and system in instant messaging
CN103368816A (en) * 2012-03-29 2013-10-23 深圳市腾讯计算机系统有限公司 Instant communication method based on virtual character and system
CN106683501B (en) * 2016-12-23 2019-05-14 武汉市马里欧网络有限公司 A kind of AR children scene plays the part of projection teaching's method and system


Also Published As

Publication number Publication date
CN109039851A (en) 2018-12-18

Similar Documents

Publication Publication Date Title
US10089793B2 (en) Systems and methods for providing real-time composite video from multiple source devices featuring augmented reality elements
JP6616288B2 (en) Method, user terminal, and server for information exchange in communication
WO2018107918A1 (en) Method for interaction between avatars, terminals, and system
CN108322832B (en) Comment method and device and electronic equipment
CN110401810B (en) Virtual picture processing method, device and system, electronic equipment and storage medium
KR20130022434A (en) Apparatus and method for servicing emotional contents on telecommunication devices, apparatus and method for recognizing emotion thereof, apparatus and method for generating and matching the emotional contents using the same
CN109039851B (en) Interactive data processing method and device, computer equipment and storage medium
JP2001245269A (en) Device and method for generating communication data, device and method for reproducing the data and program storage medium
US20160006772A1 (en) Information-processing device, communication system, storage medium, and communication method
CN109150690B (en) Interactive data processing method and device, computer equipment and storage medium
US20220398816A1 (en) Systems And Methods For Providing Real-Time Composite Video From Multiple Source Devices Featuring Augmented Reality Elements
CN111583355A (en) Face image generation method and device, electronic equipment and readable storage medium
CN112351327A (en) Face image processing method and device, terminal and storage medium
CN111629222B (en) Video processing method, device and storage medium
CN109819341B (en) Video playing method and device, computing equipment and storage medium
CN114374880B (en) Joint live broadcast method, joint live broadcast device, electronic equipment and computer readable storage medium
CN109525483A (en) The generation method of mobile terminal and its interactive animation, computer readable storage medium
CN114915852B (en) Video call interaction method, device, computer equipment and storage medium
CN111614926B (en) Network communication method, device, computer equipment and storage medium
CN115396390A (en) Interaction method, system and device based on video chat and electronic equipment
CN114425162A (en) Video processing method and related device
US20160166921A1 (en) Integrating interactive games and video calls
US20230328012A1 (en) Virtual-figure-based data processing method and apparatus, computer device, and storage medium
CN114629869B (en) Information generation method, device, electronic equipment and storage medium
WO2023074898A1 (en) Terminal, information processing method, program, and recording medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant