CN109150690B - Interactive data processing method and device, computer equipment and storage medium


Info

Publication number
CN109150690B
CN109150690B (application CN201710458909.5A)
Authority
CN
China
Prior art keywords: expression, dimensional virtual, scene, session, virtual session
Prior art date
Legal status
Active
Application number
CN201710458909.5A
Other languages
Chinese (zh)
Other versions
CN109150690A (en)
Inventor
Li Bin (李斌)
Chen Xiaobo (陈晓波)
Li Lei (李磊)
Wang Junshan (王俊山)
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201710458909.5A
Publication of CN109150690A
Application granted
Publication of CN109150690B
Legal status: Active
Anticipated expiration

Classifications

    • H04L 51/04 — Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H04L 51/52 — User-to-user messaging in packet-switching networks for supporting social networking services
    • H04L 67/131 — Protocols for games, networked simulations or virtual reality
    • G06T 19/00 — Manipulating 3D models or images for computer graphics
    • G06T 19/006 — Mixed reality
    • G06V 40/174 — Facial expression recognition

Abstract

The invention relates to an interactive data processing method and apparatus, a computer device, and a storage medium, wherein the method includes: joining a corresponding virtual session scene through a currently logged-in first user identifier; collecting head image data; identifying expression features in the head image data to obtain expression data; and sending the expression data to a terminal corresponding to a second user identifier that has joined the virtual session scene, so that the terminal controls a virtual session member corresponding to the first user identifier to trigger, in the virtual session scene, an expression action represented by the expression data. Because interactive communication is realized by controlling a virtual session member to trigger expression actions rather than by transmitting the user's real image, the method improves privacy security during interactive communication to a certain extent.

Description

Interactive data processing method and device, computer equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an interactive data processing method, an interactive data processing apparatus, a computer device, and a storage medium.
Background
With the rapid development of science and technology, communication technologies have become increasingly advanced, and people's requirements for forms of communication have become increasingly diverse. Among current modes of interactive communication, video communication is popular with users because it conveys the actions and moods of both parties and is less monotonous than voice-only or plain-text communication.
However, in current video communication, the displayed user images may be maliciously captured or recorded, and may be further disseminated. A user's image is relatively private information; if it is maliciously recorded or spread, the user's privacy can be seriously compromised. Video communication therefore presents certain privacy security problems.
Disclosure of Invention
Based on this, it is necessary to provide an interactive data processing method, an apparatus, a computer device, and a storage medium that address the privacy security problem present in current video communication.
A method of interactive data processing, the method comprising:
joining a corresponding virtual session scene through a currently logged-in first user identifier;
collecting head image data;
identifying expression features in the head image data to obtain expression data; and
sending the expression data to a terminal corresponding to a second user identifier that has joined the virtual session scene, so that the terminal controls a virtual session member corresponding to the first user identifier to trigger, in the virtual session scene, an expression action represented by the expression data.
An interactive data processing apparatus, the apparatus comprising:
a joining module, configured to join a corresponding virtual session scene through a currently logged-in first user identifier;
an image collection module, configured to collect head image data;
an expression recognition module, configured to identify expression features in the head image data to obtain expression data; and
a control module, configured to send the expression data to a terminal corresponding to a second user identifier that has joined the virtual session scene, so that the terminal controls a virtual session member corresponding to the first user identifier to trigger, in the virtual session scene, an expression action represented by the expression data.
A computer device, comprising a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the following steps:
joining a corresponding virtual session scene through a currently logged-in first user identifier;
collecting head image data;
identifying expression features in the head image data to obtain expression data; and
sending the expression data to a terminal corresponding to a second user identifier that has joined the virtual session scene, so that the terminal controls a virtual session member corresponding to the first user identifier to trigger, in the virtual session scene, an expression action represented by the expression data.
A storage medium storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
joining a corresponding virtual session scene through a currently logged-in first user identifier;
collecting head image data;
identifying expression features in the head image data to obtain expression data; and
sending the expression data to a terminal corresponding to a second user identifier that has joined the virtual session scene, so that the terminal controls a virtual session member corresponding to the first user identifier to trigger, in the virtual session scene, an expression action represented by the expression data.
According to the interactive data processing method and apparatus, the computer device, and the storage medium above, the currently logged-in first user identifier joins the corresponding virtual session scene, head image data is collected and recognized to obtain expression data, and the expression data is sent to the terminal corresponding to the second user identifier that has joined the virtual session scene. The terminal receiving the expression data controls the virtual session member corresponding to the first user identifier to trigger, in the virtual session scene, the expression action represented by the expression data. Interactive communication is realized by controlling the virtual session member to trigger the expression action represented by the expression data; compared with interactive communication based on the user's real image, this improves privacy security during interactive communication to a certain extent.
A method of interactive data processing, the method comprising:
joining a corresponding virtual session scene through a currently logged-in second user identifier;
receiving expression data sent by a terminal corresponding to a first user identifier that has joined the virtual session scene;
extracting expression feature values from the expression data; and
in the virtual session scene, controlling a virtual session member corresponding to the first user identifier to trigger the expression action represented by the expression feature values.
An interactive data processing apparatus, the apparatus comprising:
a joining module, configured to join a corresponding virtual session scene through a currently logged-in second user identifier;
an expression feature extraction module, configured to receive expression data sent by a terminal corresponding to a first user identifier that has joined the virtual session scene, and to extract expression feature values from the expression data; and
a control module, configured to control, in the virtual session scene, a virtual session member corresponding to the first user identifier to trigger the expression action represented by the expression feature values.
A computer device, comprising a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the following steps:
joining a corresponding virtual session scene through a currently logged-in second user identifier;
receiving expression data sent by a terminal corresponding to a first user identifier that has joined the virtual session scene;
extracting expression feature values from the expression data; and
in the virtual session scene, controlling a virtual session member corresponding to the first user identifier to trigger the expression action represented by the expression feature values.
A storage medium storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
joining a corresponding virtual session scene through a currently logged-in second user identifier;
receiving expression data sent by a terminal corresponding to a first user identifier that has joined the virtual session scene;
extracting expression feature values from the expression data; and
in the virtual session scene, controlling a virtual session member corresponding to the first user identifier to trigger the expression action represented by the expression feature values.
According to the interactive data processing method and apparatus, the computer device, and the storage medium above, the currently logged-in second user identifier joins the corresponding virtual session scene, expression data sent by the terminal corresponding to the first user identifier that has joined the virtual session scene is received, expression feature values are extracted from the expression data, and, in the virtual session scene, the virtual session member corresponding to the first user identifier is controlled to trigger the expression action represented by the expression feature values. Interactive communication is realized by controlling the virtual session member to trigger the expression action represented by the expression data; compared with interactive communication based on the user's real image, this improves privacy security during interactive communication to a certain extent.
Drawings
FIG. 1 is a diagram of an application environment of a method for interactive data processing in one embodiment;
FIG. 2 is a schematic diagram showing an internal configuration of a computer device according to an embodiment;
FIG. 3 is a flowchart illustrating a method for interactive data processing according to an embodiment;
FIG. 4A is a diagram illustrating an interface of a virtual session scene in one embodiment;
FIG. 4B is a diagram illustrating an interface of a virtual session scene in another embodiment;
FIG. 5 is a timing diagram of a method of interactive data processing in one embodiment;
FIG. 6 is an architecture diagram of a method of interactive data processing in one embodiment;
FIG. 7 is a flowchart illustrating the virtual session scene presentation step in one embodiment;
FIG. 8 is a flowchart illustrating the steps of the perspective conversion operation in one embodiment;
FIG. 9 is a flowchart illustrating an interactive data processing method according to another embodiment;
FIG. 10 is a flowchart illustrating an interactive data processing method according to still another embodiment;
FIG. 11 is a flowchart illustrating a virtual session scene displaying step in another embodiment;
FIG. 12 is a block diagram showing an arrangement of an interactive data processing apparatus according to an embodiment;
FIG. 13 is a block diagram showing the construction of an interactive data processing apparatus according to another embodiment;
FIG. 14 is a block diagram showing a configuration of an interactive data processing apparatus according to still another embodiment;
FIG. 15 is a block diagram showing a configuration of an interactive data processing apparatus according to still another embodiment;
FIG. 16 is a block diagram showing a configuration of an interactive data processing apparatus according to still another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
FIG. 1 is a diagram of an application environment of a method for interactive data processing in one embodiment. Referring to fig. 1, the application environment of the interactive data processing method includes a first terminal 110, a second terminal 120, and a server 130. The first terminal 110 and the second terminal 120 are terminals on which an application program implementing the virtual session scene function is installed, and both may be used to send and receive expression data. The server 130 may be an independent physical server or a server cluster including a plurality of physical servers. The server 130 may include an open service platform, and may further include an access server for accessing the open service platform. The first terminal 110 and the second terminal 120 may be the same or different terminals. A terminal may be a mobile terminal or a desktop computer, and the mobile terminal may include at least one of a mobile phone, a tablet computer, a personal digital assistant, a wearable device, and the like.
The first terminal 110 may join the corresponding virtual session scene through the currently logged-in first user identifier. The first terminal 110 may collect head image data and identify expression features in the head image data to obtain expression data. The first terminal 110 may send the expression data to the server 130, and the server 130 forwards the expression data to the second terminal 120 corresponding to the second user identifier that has joined the virtual session scene. The second terminal 120 controls the virtual session member corresponding to the first user identifier to trigger, in the virtual session scene, the expression action represented by the expression data.
It can be understood that, in other embodiments, the first terminal 110 may directly transmit the expression data to the second terminal 120 in a point-to-point manner without forwarding through the server 130.
FIG. 2 is a diagram showing an internal configuration of a computer device according to an embodiment. The computer device may be the first terminal 110 or the second terminal 120 in fig. 1. Referring to fig. 2, the computer device includes a processor, a non-volatile storage medium, an internal memory, a network interface, a display screen, an input device, and an image collection device, which are connected by a system bus. The non-volatile storage medium of the computer device may store an operating system and computer-readable instructions that, when executed, may cause the processor to perform an interactive data processing method. The processor of the computer device provides computing and control capabilities and supports the operation of the entire computer device. The internal memory may store computer-readable instructions that, when executed by the processor, cause the processor to perform an interactive data processing method. The network interface of the computer device is used for network communication, such as transmitting expression data. The display screen of the computer device may be a liquid-crystal display screen or an electronic-ink display screen, and the input device of the computer device may be a touch layer covering the display screen; a key, trackball, or touchpad arranged on the housing of the computer device; or an external keyboard, touchpad, or mouse. The touch layer and the display screen form a touch screen. The image collection device may be a camera.
Those skilled in the art will appreciate that the architecture shown in fig. 2 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
FIG. 3 is a flowchart illustrating a method for interactive data processing according to an embodiment. The interactive data processing method may be applied to the first terminal 110 and/or the second terminal 120 in fig. 1. This embodiment is mainly illustrated by applying the method to the first terminal 110 in fig. 1. Referring to fig. 3, the method specifically includes the following steps:
s302, adding the corresponding virtual session scene through the currently logged first user identifier.
The virtual conversation scene is a conversation scene provided for virtual conversation members, and members joining the virtual conversation scene are displayed with the images of the virtual conversation members when displaying the images.
In one embodiment, the virtual session scene may be a virtual room. The virtual session scene may be a three-dimensional virtual session scene or a two-dimensional virtual session scene. The virtual session scene may be created based on a session; specifically, it may be created based on a multi-person session (a session with 3 or more session members) or based on a two-person session (a session with exactly 2 session members). In one embodiment, the virtual session scene may further include presented background information, where the presented background information may include a background picture or a three-dimensional background model, and the background picture may be a two-dimensional picture or a three-dimensional picture. The presented background information may be real background information or virtual background information.
In one embodiment, the virtual session scene may be a real-time virtual session scene, that is, a virtual session scene used to realize real-time communication. For example, a WeChat group is a multi-person session; when a real-time call is created in the WeChat group and all members joining the call are displayed as avatars, that is, as virtual session members, the real-time call constitutes a virtual session scene.
A virtual session member is the avatar with which a member of the virtual session scene is displayed. It can be understood that the avatar is a virtualized image, different from the member's real image. A virtual session member may be a virtual character figure, or an avatar of an animal, a plant, or another thing. The virtual session member may be three-dimensional or two-dimensional. It may be a default avatar (for example, a virtual session member initial model) or an avatar derived from the virtual session member initial model in combination with user characteristics (for example, the user's facial characteristics) and/or user-defined attributes (for example, clothing attributes).
The currently logged-in first user identifier is the first user identifier with which the application program implementing the virtual session scene is currently logged in. The application program implementing the virtual session scene may be an instant messaging application, a social application, a game application, or the like. The terminal corresponding to the currently logged-in first user identifier may be referred to as the "first terminal".
In one embodiment, the first terminal may request the server to add the currently logged-in first user identifier to the member list of the corresponding virtual session scene, thereby joining the corresponding virtual session scene through the currently logged-in first user identifier. After joining the virtual session scene, the first terminal may communicate with the terminals corresponding to the other user identifiers that have joined the virtual session scene, for example, by sending expression data to them. It can be understood that a user identifier that has joined the virtual session scene is a user identifier in the member list of the virtual session scene.
In one embodiment, the first terminal may further display, in the virtual session scene, the members of the virtual session scene as avatar-based virtual session members. The virtual session members displayed by the first terminal may or may not include the virtual session member corresponding to the first user identifier currently logged in on the first terminal. Not displaying the virtual session member corresponding to the first user identifier in the virtual session scene displayed by the first terminal does not affect interactive communication between the first terminal and the terminals of the other members, and can save the computing and display resources of the system.
S304, collecting head image data.
Head image data is image data obtained by capturing a real-time image of the head. The head image data may include face image data and head-motion image data. Head motions include head-turning motions such as lowering the head, raising the head, and turning left or right.
Specifically, the first terminal may collect the head image data by calling the local camera, which may be a front-facing or rear-facing camera. It can be understood that the collected head image data may be captured from any head appearing in the image collection region and is not limited to the user corresponding to the first user identifier.
In one embodiment, the first terminal may detect the number of members in the virtual session scene and perform the head image data collection of step S304 only when the number of members in the virtual session scene is not less than a preset number threshold, where the preset number threshold may be 2. It can be understood that the number of members in the virtual session scene referred to here may include the currently logged-in first user identifier itself.
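For illustration, the following is a minimal sketch of this capture gate, assuming OpenCV ("cv2") for camera access; MEMBER_THRESHOLD and the get_member_count callback are illustrative names, not fixed by the patent.

```python
# Sketch: collect one head image frame only when enough members have joined.
import cv2

MEMBER_THRESHOLD = 2  # the preset number threshold from the embodiment

def collect_head_image(get_member_count):
    """Grab one frame from the local camera, gated on the member count."""
    if get_member_count() < MEMBER_THRESHOLD:
        return None  # too few members: skip collection to save resources
    cap = cv2.VideoCapture(0)  # 0 = default camera (front-facing on phones)
    try:
        ok, frame = cap.read()
        return frame if ok else None
    finally:
        cap.release()
```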
S306, identifying expression features in the head image data to obtain expression data.
Expression features are features capable of expressing mood or emotion, and include facial expression features and posture expression features. A facial expression is an expression expressed by a facial organ, such as an eyebrow raise or a blink. A posture expression is an expression expressed by a body action, such as a head-turning action.
In one embodiment, the first terminal may parse the head image data and identify facial expression features and/or head-motion expression features in it to obtain expression data. Expression data is data that can represent a corresponding expression action.
The expression data may include a string of expression feature values arranged in order. In one embodiment, the position or sequence number of each expression feature value represents the corresponding expression type. For example, if the expression type at the first position is "cry", the expression feature value at the first position represents the degree of crying.
The expression data may also include expression type identifications and corresponding expression feature values. An expression type is the category of an expression in the action-expression dimension, such as opening the mouth, blinking, laughing, crying, turning the head, or nodding. It can be understood that the above expression types are only examples and do not limit the classification of expressions; the set of expression types may be configured according to actual needs.
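A short sketch can make the two layouts just described concrete; the type names, the example ordering, and the field names are illustrative assumptions, not fixed by the patent.

```python
# Sketch of the two expression-data layouts described above.
from dataclasses import dataclass

# Layout 1: an ordered string of feature values, where the position encodes
# the expression type (e.g. position 0 = "cry", value = degree of crying).
EXPRESSION_ORDER = ["cry", "open_mouth", "blink", "laugh", "turn_head", "nod"]

def decode_by_position(values: list) -> dict:
    """Recover (type -> value) pairs from a position-encoded value list."""
    return dict(zip(EXPRESSION_ORDER, values))

# Layout 2: explicit (expression type identification, feature value) pairs.
@dataclass
class ExpressionEntry:
    type_id: str   # e.g. "open_mouth"
    value: float   # amplitude/degree, e.g. mouth opening in degrees
```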
S308, sending the expression data to a terminal corresponding to a second user identifier that has joined the virtual session scene, so that the terminal controls a virtual session member corresponding to the first user identifier to trigger, in the virtual session scene, the expression action represented by the expression data.
The second user identifiers that have joined the virtual session scene are all or some of the user identifiers, other than the first user identifier, in the member list of the virtual session scene. In this embodiment, there may be one or more second user identifiers.
The first terminal may send the expression data to the server, which forwards it to the terminal corresponding to the second user identifier that has joined the virtual session scene. The first terminal may also send the expression data directly, in a point-to-point manner, to the terminal corresponding to the second user identifier; for example, when the virtual session scene is established for a two-person session on a point-to-point basis, the first terminal may send the expression data directly to the terminal corresponding to the second user identifier that has joined the virtual session scene.
In one embodiment, sending the expression data to the terminal corresponding to the second user identifier that has joined the virtual session scene, so that the terminal controls the virtual session member corresponding to the first user identifier to trigger, in the virtual session scene, the expression action represented by the expression data, includes: sending the expression data to the terminal corresponding to the second user identifier that has joined the virtual session scene, so that the terminal extracts, from the expression data, the expression feature values corresponding to the identified expression types and controls the virtual session member corresponding to the first user identifier to trigger, in the virtual session scene, the expression action represented by the extracted expression feature values.
In an embodiment, the terminal corresponding to the second user identifier may determine the expression type corresponding to each extracted expression feature value and, in the virtual session scene, control the virtual session member corresponding to the first user identifier to implement the corresponding expression action according to the expression control logic code corresponding to the determined expression type and the extracted expression feature value. For example, if the expression action represented by the expression data is "open mouth 10 degrees", the virtual session member corresponding to the first user identifier is controlled to perform an "open mouth 10 degrees" action.
In an embodiment, the terminal corresponding to the second user identifier may further generate corresponding texture information according to the expression feature value and the corresponding expression type, and display the texture information, in the virtual session scene, at the expression display part of the virtual session member corresponding to the first user identifier. For example, when the expression action represented by the expression data is "cry", the terminal corresponding to the second user identifier may generate "teardrop" texture information corresponding to "cry" and display it below the eyes of the virtual session member corresponding to the first user identifier.
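The following is a minimal sketch of how a receiving terminal might dispatch decoded expression data per the two variants above (driving the avatar's motion, or overlaying texture information such as teardrops for "cry"); the member object and its methods are hypothetical stand-ins for the terminal's rendering API.

```python
# Sketch: dispatch decoded (type -> value) expression entries to the avatar.
def apply_expression(member, entries: dict) -> None:
    for type_id, value in entries.items():
        if value == 0.0:
            continue  # assumed sentinel: this action is not triggered
        if type_id == "open_mouth":
            member.set_mouth_angle(value)    # e.g. "open mouth 10 degrees"
        elif type_id == "turn_head":
            member.set_head_yaw(value)       # angle of the head turn
        elif type_id == "cry":
            member.show_texture("teardrop")  # texture at the eyes
```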
FIG. 4A is an interface diagram of a virtual session scene in one embodiment. The virtual session scene currently has only two members: user A and user B. Assume that FIG. 4A is the interface of the virtual session scene displayed on the terminal corresponding to user A: the image collection region in the upper-left corner of FIG. 4A shows the real head image of the user operating that terminal, and the virtual session member B corresponding to user B is displayed in the interface.
FIG. 4B is an interface diagram of a virtual session scene in another embodiment. The virtual session scene currently has multiple members. Assume that FIG. 4B is the interface of the virtual session scene displayed on the terminal corresponding to user A: the image collection region in the upper-left corner of FIG. 4B shows the real head image of the user operating that terminal, the three virtual character figures displayed in the interface are virtual session members, virtual session member B is the virtual session member corresponding to user B, and virtual session member B triggers the "blinking right eye" expression action represented by the expression data.
According to the interactive data processing method, the currently logged-in first user identifier joins the corresponding virtual session scene, head image data is collected and recognized to obtain expression data, and the expression data is sent to the terminal corresponding to the second user identifier that has joined the virtual session scene. The terminal receiving the expression data controls the virtual session member corresponding to the first user identifier to trigger, in the virtual session scene, the expression action represented by the expression data. Interactive communication is realized by controlling the virtual session member to trigger the expression action represented by the expression data; compared with interactive communication based on the user's real image, this improves privacy security during interactive communication to a certain extent.
In addition, controlling the virtual session member to trigger the expression action represented by the expression data is another way of expressing the user's real expressions during interactive communication, so that users can recognize online users through their expression actions, providing a new mode of interaction.
In one embodiment, step S302 includes: acquiring the multi-person session identifier corresponding to the currently logged-in first user identifier; and sending the multi-person session identifier and the first user identifier to the server, so that the server adds the first user identifier to the member list of the virtual session scene identified by the multi-person session identifier.
The multi-person session identifier uniquely identifies a multi-person session, that is, a session with 3 or more members. The multi-person session may be a group or a temporary multi-person chat session, but may also be another type of multi-person session.
It can be understood that the currently logged-in first user identifier is a member of the multi-person session corresponding to the multi-person session identifier. The virtual session scene identified by the multi-person session identifier is equivalent to a virtual session scene created based on the multi-person session corresponding to that identifier. The multi-person session identifier may identify the virtual session scene directly, that is, the unique identifier of the virtual session scene is the multi-person session identifier itself. It may also identify the virtual session scene indirectly, that is, the unique identifier of the virtual session scene is a virtual session scene identifier uniquely corresponding to the multi-person session identifier, so that the virtual session scene identifier can be determined from the multi-person session identifier and the corresponding virtual session scene can thus be indirectly and uniquely identified.
Specifically, a user may log in to the application program implementing the virtual session scene with the first user identifier and, after logging in successfully, open a multi-person session interface in the first terminal, that is, the interface of the multi-person session corresponding to the multi-person session identifier corresponding to the first user identifier. The user may initiate an operation to join the virtual session scene in the open multi-person session interface. In response to the operation, the first terminal acquires the multi-person session identifier corresponding to the currently logged-in first user identifier and sends the multi-person session identifier and the first user identifier to the server, and the server adds the first user identifier to the member list of the virtual session scene identified by the multi-person session identifier, thereby joining the first user identifier to the corresponding virtual session scene.
In one embodiment, the server may return, to the first terminal, the access information of the virtual session scene identified by the multi-person session identifier, and the first terminal may join the virtual session scene according to the access information. The access information includes an access IP address and a port.
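A minimal sketch of this join exchange follows, under stated assumptions: the terminal sends the multi-person session identifier and its user identifier, and the server replies with the access information (IP address and port) for the data channel. The JSON message shapes are hypothetical; the patent does not fix a wire format.

```python
# Sketch: request to join a virtual session scene and receive access info.
import json
import socket

def request_join(server: socket.socket, session_id: str, user_id: str):
    server.sendall(json.dumps(
        {"op": "join", "session_id": session_id, "user_id": user_id}
    ).encode() + b"\n")
    reply = json.loads(server.makefile().readline())
    return reply["ip"], reply["port"]  # access information for the data channel
```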
In the above embodiment, the currently logged-in first user identifier joins a virtual session scene created based on the corresponding multi-person session, realizing an interactive communication mode in which virtual session members in the virtual session scene trigger the expression actions corresponding to the expression data. This amounts to an improvement on the multi-person session and provides a new mode of interaction.
Fig. 5 is a timing diagram of an implementation of the interactive data processing method in an embodiment, which specifically includes the following steps (a server-side sketch of steps 8-10 follows the list):
1) The first terminal opens the session and sends the multi-person session identifier corresponding to the first user identifier to the server to apply to join the virtual session scene.
2) The server creates the virtual session scene identified by the multi-person session identifier and allocates corresponding access information to it.
3) The server returns the allocated access information to the first terminal.
4) The first terminal establishes a data channel with the server according to the access information, thereby joining the virtual session scene.
5) The terminals corresponding to the other members of the multi-person session each establish a data channel with the server in the same manner and join the virtual session scene.
6) The server notifies the terminal of each member that has joined the virtual session scene of the number of members that have joined.
7) When the number of members in the current virtual session scene is greater than or equal to 2, each terminal starts its local image collection device, then collects and identifies head image data to obtain expression data.
8) Each terminal sends its expression data to the server.
9) The server forwards the expression data to the terminals corresponding to the other members of the virtual session scene.
10) Each terminal receiving expression data controls, in the virtual session scene, the virtual session member corresponding to that expression data to trigger the expression action represented by the expression data.
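The following sketch illustrates steps 8-10 on the server side: expression data received from one member is forwarded to every other member of the same virtual session scene. The connection bookkeeping is simplified and all names are illustrative assumptions.

```python
# Sketch: per-scene relay of expression data to all other members.
from typing import Dict, List

class SceneRelay:
    def __init__(self) -> None:
        # scene id -> data channels of members that have joined the scene
        self.members: Dict[str, List["Connection"]] = {}

    def forward(self, scene_id: str, sender_id: str, expression_data: bytes) -> None:
        for conn in self.members.get(scene_id, []):
            if conn.user_id != sender_id:  # do not echo back to the sender
                conn.send(expression_data)
```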
In one embodiment, sending the multi-person session identifier and the first user identifier to the server, so that the server joins the first user identifier to the member list of the virtual session scene identified by the multi-person session identifier, includes: sending the multi-person session identifier and the first user identifier to the server, so that the server, when the virtual session scene identified by the multi-person session identifier already exists, adds the first user identifier to the member list of that virtual session scene; or sending the multi-person session identifier and the first user identifier to the server, so that the server, when the virtual session scene identified by the multi-person session identifier does not exist, creates that virtual session scene and adds the first user identifier to the member list of the created virtual session scene.
Specifically, the first terminal sends the multi-person session identifier and the first user identifier to the server and requests the server to add the first user identifier to the virtual session scene identified by the multi-person session identifier. The server may detect whether a virtual session scene corresponding to the multi-person session identifier exists.
When the virtual session scene corresponding to the multi-person session identifier already exists, the server may add the first user identifier to the member list of the virtual session scene identified by the multi-person session identifier, so that the first user identifier joins that virtual session scene.
When the virtual session scene corresponding to the multi-person session identifier does not exist, the server may create a new virtual session scene according to the multi-person session identifier and either use the multi-person session identifier as a direct identifier to uniquely identify the newly created virtual session scene, or generate a virtual session scene identifier uniquely corresponding to the multi-person session identifier, that is, use the multi-person session identifier as an indirect identifier to uniquely identify the created virtual session scene. The server may then add the first user identifier to the member list of the created virtual session scene.
In one embodiment, the virtual session scene is a real-time virtual session scene. The first terminal may send the multi-person session identifier and the first user identifier to a real-time signaling service program in the server through a real-time signaling channel, requesting the real-time signaling service program to join the real-time virtual session scene. After receiving the request, the real-time signaling service program detects whether a real-time virtual session scene corresponding to the multi-person session identifier currently exists. When it does not exist, the real-time signaling service program may create, according to the multi-person session identifier, a new real-time virtual session scene identified by it, apply to the real-time data service program for the access information corresponding to the virtual session scene, and return the access information to the first terminal. When a real-time virtual session scene corresponding to the multi-person session identifier already exists, the real-time signaling service program may directly return the access information corresponding to that scene to the first terminal. The access information includes an access IP address and a port.
The first terminal then establishes a data channel with the real-time data service program according to the access information, and the real-time signaling service program adds the first user identifier to the member list of the created virtual session scene. The terminal corresponding to the second user identifier may also join the real-time virtual session scene in the above manner, through a data channel between itself and the real-time data service program. The first terminal and the terminal corresponding to the second user identifier may then send and receive expression data over the established data channels. Fig. 6 is an architecture diagram of an implementation of the interactive data processing method in one embodiment.
In this embodiment, the creation of and joining to the virtual session scene are integrated: the user only needs to send the corresponding multi-person session identifier to the server through the first terminal to request to join the virtual session scene, and can join the virtual session scene identified by the multi-person session identifier regardless of whether that virtual session scene currently exists on the server, which removes the separate step of creating the virtual session scene. In addition, the same rule set applies to all users, both the user who first creates the virtual session scene and the users who apply to join after it has been created, which avoids the redundancy of multiple rule sets and improves the applicability of the logic rules.
In one embodiment, step S306 includes: identifying the expression features in the head image data to obtain expression types and corresponding expression feature values; and generating expression data including the expression feature values corresponding to the identified expression types.
An expression type is the category of an expression in the action-expression dimension, such as opening the mouth, blinking, laughing, crying, turning the head, or nodding. Identifying the expression features in the head image data yields at least one expression type. An expression feature value represents the amplitude and/or degree of the expression action corresponding to the expression type. For example, different expression feature values for the expression type "cry" represent different degrees of crying, which may range from sobbing to bawling. For another example, for the expression type "turn head left", the expression feature value may be the angle of the turn: the larger the angle, the larger the amplitude of the turn.
In one embodiment, generating expression data including the expression feature values corresponding to the identified expression types includes: combining the expression feature values corresponding to the identified expression types to obtain the expression data.
Specifically, the first terminal may directly combine each identified expression type with its corresponding expression feature value to obtain the expression data. Alternatively, the first terminal may add the expression feature value corresponding to each identified expression type at the position corresponding to that expression type to generate the expression data. It can be understood that the terminal corresponding to the second user identifier can then determine the expression type from the position of each value in the expression data. For example, if the expression type "open mouth" corresponds to the 1st position, the expression feature value "10 degrees" corresponding to "open mouth" is added at the 1st position; if the expression type "turn head left" corresponds to the 2nd position, the expression feature value "15 degrees" corresponding to "turn head left" is added at the 2nd position; and so on, the expression feature values are combined to generate the corresponding expression data.
It can be understood that, in this embodiment, the generated expression data may include only the expression feature values corresponding to the identified expression types. For example, if only the expression types "turn head left" and "open mouth" are recognized, the expression data includes only the expression feature values corresponding to "turn head left" and "open mouth".
In another embodiment, the identified expression types belong to a preset expression type set, and generating expression data including the expression feature values corresponding to the identified expression types includes: assigning, to the expression types in the preset expression type set that were not identified, expression feature values indicating that the corresponding expression action is not triggered; and combining the expression feature values corresponding to the expression types, in the preset order of the expression types in the preset expression type set, to form the expression data.
In this embodiment, an expression type set is preset in the first terminal, and the identified expression types belong to this preset set. One or more expression types may be identified.
An expression feature value indicating that the corresponding expression action is not triggered causes the target virtual session member not to trigger that expression action.
Specifically, for the expression types in the preset expression type set that were not identified, the first terminal may assign expression feature values indicating that the corresponding expression action is not triggered, and combine the expression feature values corresponding to the expression types, in the preset order of the expression types in the preset expression type set, to form the expression data. It can be understood that the resulting expression data may include only expression feature values, without expression types or expression type identifications. When the terminal corresponding to the second user identifier controls the virtual session member corresponding to the first user identifier to trigger the expression action represented by the expression data, it can determine the corresponding expression type from the sequence position of each expression feature value in the expression data and then trigger the expression action that value represents. The expression data can therefore be very small, which guarantees its transmission efficiency and transmission quality.
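A sketch of this fixed-order encoding follows; the preset order and the choice of 0.0 as the "not triggered" value are assumptions for illustration.

```python
# Sketch: encode recognized expressions as a fixed-order value list, with a
# "not triggered" sentinel for every preset type that was not recognized.
PRESET_TYPES = ["open_mouth", "blink", "laugh", "cry", "turn_head", "nod"]
NOT_TRIGGERED = 0.0  # assumed sentinel: do not trigger this action

def encode(recognized: dict) -> list:
    return [recognized.get(t, NOT_TRIGGERED) for t in PRESET_TYPES]

# e.g. only "open_mouth" (10 degrees) and "turn_head" (15 degrees) recognized:
# encode({"open_mouth": 10.0, "turn_head": 15.0})
# -> [10.0, 0.0, 0.0, 0.0, 15.0, 0.0]   (no type identifiers are carried)
```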
In one embodiment, the method further comprises: acquiring expression data sent by a terminal corresponding to the second user identifier; and in the virtual session scene, controlling the virtual session member corresponding to the second user identifier to trigger the expression action represented by the acquired expression data.
Specifically, the terminal corresponding to the second user identifier may also acquire head image data, recognize expression features from the acquired head image data, obtain corresponding expression data, and send the expression data to the first terminal. The first terminal may directly receive the expression data sent by the terminal corresponding to the second user identifier in a point-to-point manner, or may receive the expression data sent by the terminal corresponding to the second user identifier and forwarded by the server.
In one embodiment, the first terminal may control, in the virtual session scene, the virtual session member corresponding to the second user identifier to implement the expression action represented by the expression data. For example, if the expression action represented by the expression data is "open mouth 10 degrees", the virtual session member corresponding to the second user identifier is controlled to perform an "open mouth 10 degrees" action.
In one embodiment, the first terminal may further generate corresponding texture information according to the expression data, and in the virtual session scene, the texture information is displayed at an expression display part of the virtual session member corresponding to the second user identifier. For example, when the expression motion represented by the expression data is "crying", the first terminal may generate "teardrop" texture information corresponding to the "crying" according to the expression data, and display the "teardrop" texture information under the eyes of the virtual session member corresponding to the second user identifier.
In one embodiment, the first terminal may extract an expression feature value from the expression data, and in the virtual session scene, control a virtual session member corresponding to the second user identifier to trigger an expression action represented by the expression feature value.
Specifically, the first terminal may determine the expression type corresponding to each extracted expression feature value and, in the virtual session scene, control the virtual session member corresponding to the second user identifier to implement the corresponding expression action according to the expression control logic code corresponding to the determined expression type and the extracted expression feature value. The first terminal may also generate corresponding texture information according to the expression feature value and the corresponding expression type, and display the texture information, in the virtual session scene, at the expression display part of the virtual session member corresponding to the second user identifier.
It can be understood that, when the expression data sent by the terminal corresponding to one second user identifier includes expression feature values corresponding to multiple identified expression types, the first terminal may control the virtual session member corresponding to that second user identifier to trigger the expression actions represented by the multiple expression feature values at the same time. For example, if the expression data includes expression feature values corresponding to "turn head left" and "open mouth", the virtual session member may be controlled to trigger the "turn head left" and "open mouth" expression actions simultaneously.
In the above embodiment, controlling the virtual session member to trigger the expression action represented by the expression data is another way of expressing the user's real expressions during interactive communication, so that users can recognize online users through their expression actions, providing a new mode of interaction.
In one embodiment, before joining the corresponding virtual session scene through the currently logged-in first user identifier, the method further includes: acquiring user face image data corresponding to the currently logged-in first user identifier; and generating the virtual session member corresponding to the first user identifier according to the user face image data and the virtual session member initial model.
The virtual session member initial model is a default avatar model.
Specifically, the first terminal may capture the face of the user corresponding to the currently logged-in first user identifier in real time to obtain the user face image data. The first terminal may also obtain a picture (for example, a photograph) of the user corresponding to the currently logged-in first user identifier and extract facial image data from it to obtain the user face image data. The user corresponding to the first user identifier is the user uniquely identified by the first user identifier.
The first terminal may send the user face image data to the server, and the server generates the virtual session member corresponding to the first user identifier according to the user face image data and the virtual session member initial model. Alternatively, the first terminal may itself hold the virtual session member initial model and generate the virtual session member corresponding to the first user identifier according to the user face image data and the initial model.
In one embodiment, generating the virtual session member corresponding to the first user identifier from the user face image data and the virtual session member initial model includes: parsing the user face image data to generate corresponding face texture information, and overlaying the generated face texture information on the virtual session member initial model to obtain the virtual session member corresponding to the first user identifier.
It can be understood that the virtual session members corresponding to the other user identifiers in the virtual session scene may likewise be obtained according to the user image data corresponding to each user identifier and the virtual session member initial model.
In the above embodiment, the corresponding virtual session member is obtained according to the user image data corresponding to each user identifier and the virtual session member initial model, so that the virtual session member can more clearly represent the corresponding member, identity recognition in the interactive communication process is facilitated, and further improvement of the efficiency and effectiveness of user interactive communication is facilitated.
In one embodiment, the method further includes: acquiring the virtual session members corresponding to the second user identifiers; determining the distribution positions of the virtual session members in the virtual session scene; acquiring a background picture corresponding to the virtual session scene; and distributing the acquired virtual session members at the corresponding distribution positions, superimposed on the background picture for display, to form the virtual session scene.
The background picture is a picture as a display background. The background picture may be a two-dimensional background picture or a three-dimensional background picture. The background picture may be a virtual background picture or a real background picture. The virtual background picture is a picture for displaying a virtual scene, for example, a cartoon is a virtual scene. The real background picture is a picture showing a real scene, for example, a picture obtained by photographing the real scene shows the real scene.
Specifically, the first terminal may acquire the virtual session member corresponding to the second user identifier from the server, that is, acquire the avatar data corresponding to the second user identifier. The virtual session member may be a three-dimensional virtual session member or a two-dimensional virtual session member.
The first terminal may determine, according to the number of second user identifiers in the member list of the virtual session scene, the size of a geometric figure used for distributing the virtual session members, and select that number of positions in the geometric figure, thereby determining the distribution positions, in the virtual session scene, of the virtual session members corresponding to the second user identifiers. The positions may be selected from the geometric figure at random, or according to a preset position-selection rule.
For example, if the number of second user identifiers is 5, the size of the geometric figure for distributing the virtual session members is determined according to that number, and 5 positions are then selected in the geometric figure, each being the distribution position in the virtual session scene of the virtual session member corresponding to one second user identifier.
In one embodiment, the first terminal may instead determine the size of the geometric figure according to the number of all user identifiers in the member list of the virtual session scene, select that number of positions in the figure, and so determine the distribution positions, in the virtual session scene, of the virtual session members corresponding to the second user identifiers.
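For instance, one possible preset position-selection rule — purely an illustrative assumption, since the description leaves the geometric figure unspecified — places the members evenly on a circle whose size grows with the member count:

```python
# Sketch of one preset position-selection rule: distribute N members evenly
# on a circle whose radius grows with N. The spacing constant and the choice
# of a circle are assumptions; any geometric figure would do.

import math

def distribution_positions(n, spacing=1.0):
    if n == 0:
        return []
    # Radius grows with member count so avatars keep roughly equal spacing.
    radius = max(spacing, n * spacing / (2 * math.pi))
    return [
        (radius * math.cos(2 * math.pi * i / n),
         radius * math.sin(2 * math.pi * i / n))
        for i in range(n)
    ]

# Five second user identifiers -> five positions in the scene.
for pos in distribution_positions(5):
    print(round(pos[0], 2), round(pos[1], 2))
```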
The first terminal may place the acquired virtual session members corresponding to the second user identifiers at the corresponding distribution positions, superimpose them on the acquired background picture to form the virtual session scene, and output and display it.
In this embodiment, the virtual session members and the background picture are superimposed and displayed to form the displayed virtual session scene, enriching the scene so that the virtual session scene displayed during interactive communication is closer to a real-life conversation scene, and diversifying the modes of interactive communication.
In one embodiment, the virtual session members are three-dimensional virtual session members and the virtual session scene is a three-dimensional virtual session scene. As shown in fig. 7, the method further includes a virtual session scene displaying step, which specifically includes the following steps:
S702, acquiring a three-dimensional virtual session member corresponding to the second user identifier.
Specifically, the first terminal may obtain the three-dimensional virtual session member corresponding to the second user identifier from the server, that is, obtain the three-dimensional avatar data corresponding to the second user identifier.
S704, determining the distribution positions of the three-dimensional virtual session members in the three-dimensional virtual session scene.
The first terminal may determine, according to the number of second user identifiers in the member list of the three-dimensional virtual session scene, the size of a geometric figure used for distributing the three-dimensional virtual session members, and select that number of positions in the geometric figure, thereby determining the distribution positions, in the three-dimensional virtual session scene, of the three-dimensional virtual session members corresponding to the second user identifiers. The positions may be selected from the geometric figure at random, or according to a preset position-selection rule.
For example, if the number of second user identifiers is 5, the size of the geometric figure for distributing the three-dimensional virtual session members is determined according to that number, and 5 positions are then selected in the geometric figure, each being the distribution position in the three-dimensional virtual session scene of the three-dimensional virtual session member corresponding to one second user identifier.
In one embodiment, the first terminal may instead determine the size of the geometric figure according to the number of all user identifiers in the member list of the three-dimensional virtual session scene, select that number of positions in the figure, and so determine the distribution positions, in the three-dimensional virtual session scene, of the three-dimensional virtual session members corresponding to the second user identifiers.
S706, acquiring a three-dimensional background model corresponding to the three-dimensional virtual session scene.
The three-dimensional background model is a three-dimensional model serving as a display background. The three-dimensional background model can be a three-dimensional virtual background model or a three-dimensional real background model. The three-dimensional virtual background model is a model for displaying a three-dimensional virtual scene. The three-dimensional real background model is a model for displaying a three-dimensional real scene.
S708, distributing the three-dimensional virtual session members at the corresponding distribution positions and displaying them in combination with the three-dimensional background model to form the three-dimensional virtual session scene.
The first terminal may place the acquired three-dimensional virtual session members corresponding to the second user identifiers at the corresponding distribution positions, combine them with the three-dimensional background model to form the three-dimensional virtual session scene, and output and display it.
In this embodiment, the three-dimensional virtual session members and the three-dimensional background model are combined and displayed to form the displayed three-dimensional virtual session scene, bringing the three-dimensional virtual session scene displayed during interactive communication still closer to a real-life conversation scene and diversifying the modes of interactive communication.
As shown in fig. 8, in an embodiment, the method further includes a view angle conversion step, which specifically includes the following steps:
S802, detecting a touch operation acting on the three-dimensional virtual session scene to obtain a touch track.
Specifically, a user may perform a touch operation on a three-dimensional virtual session scene displayed on a first terminal interface to obtain a touch trajectory. The touch operation comprises pressing and dragging operations.
S804, mapping the touch track to a movement track of an observation point in the three-dimensional virtual session scene.
It can be understood that the three-dimensional virtual session scene finally presented on the display screen of the first terminal is obtained by projecting, according to the observation point, the three-dimensional virtual session scene formed by the three-dimensional background model and the three-dimensional virtual session members onto the display screen. Different observation points therefore yield different projections of the three-dimensional virtual session scene on the display screen.
Specifically, the first terminal may map the touch trajectory to a movement trajectory of the observation point in the three-dimensional virtual session scene according to a mapping relationship between the touch point and the observation point.
S806, determining the position of the moved observation point according to the movement track.
Specifically, the first terminal determines the position of the moved observation point according to the determined movement track of the observation point.
S808, performing projection display of the three-dimensional background model and the three-dimensional virtual session members according to the position of the moved observation point.
Specifically, the first terminal takes the position of the moved observation point as the new observation point and re-projects the three-dimensional background model and the three-dimensional virtual session members onto the first terminal's display screen. The re-projected image of the three-dimensional background model thus differs from the image displayed before the touch operation, and the display angle of the re-projected three-dimensional virtual session members differs from the angle displayed before the touch operation, so that the re-projected three-dimensional background model and three-dimensional virtual session members form the three-dimensional virtual session scene under a new viewing angle.
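One compact way to realize such a mapping — a sketch under assumed conventions, with made-up sensitivity constants and function names — is to treat the drag distances of the touch track as changes in the observation point's orbit angles around the scene center:

```python
# Sketch: map a touch track to movement of the observation point by orbiting
# it around the scene center. Sensitivities and names are illustrative
# assumptions; only some touch-to-viewpoint mapping is required.

import math

def orbit_observation_point(touch_track, radius=10.0,
                            yaw=0.0, pitch=0.3, sensitivity=0.01):
    """touch_track: list of (x, y) screen points from press to release."""
    if len(touch_track) >= 2:
        dx = touch_track[-1][0] - touch_track[0][0]
        dy = touch_track[-1][1] - touch_track[0][1]
        yaw += dx * sensitivity                          # drag right -> orbit right
        pitch = max(-1.2, min(1.2, pitch + dy * sensitivity))  # clamp tilt
    # Spherical-to-Cartesian: the moved observation point looks at the origin.
    x = radius * math.cos(pitch) * math.sin(yaw)
    y = radius * math.sin(pitch)
    z = radius * math.cos(pitch) * math.cos(yaw)
    return (x, y, z)  # feed this to the projection step as the new viewpoint

print(orbit_observation_point([(100, 200), (220, 180)]))
```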
This embodiment realizes view angle conversion for the three-dimensional virtual session scene, allowing the viewing angle to be adjusted as the user requires, making the display of the three-dimensional virtual session scene during interactive communication more flexible, letting the displayed scene better meet user needs, and improving the effectiveness of the displayed three-dimensional virtual session scene.
As shown in fig. 9, in an embodiment, another interactive data processing method is provided, which specifically includes the following steps:
S902, obtaining user face image data corresponding to the currently logged-in first user identifier.
S904, generating a three-dimensional virtual session member corresponding to the first user identifier according to the user face image data and the virtual session member initial model.
S906, acquiring a multi-person session identifier corresponding to the currently logged-in first user identifier, and sending the multi-person session identifier and the first user identifier to the server, so that the server adds the first user identifier to the member list of the virtual session scene identified by the multi-person session identifier.
S908, collecting head image data, and identifying expression features in the head image data to obtain expression types and corresponding expression feature values.
S910, for expression types in the preset expression type set that are not identified, assigning expression feature values indicating that the corresponding expression actions are not triggered.
S912, combining the expression feature values corresponding to the expression types, in the preset order of the expression types in the preset expression type set, to form expression data.
S914, sending the expression data to the terminal corresponding to a second user identifier that has joined the virtual session scene, so that that terminal extracts the expression feature values corresponding to the identified expression types from the expression data and, in the virtual session scene, controls the three-dimensional virtual session member corresponding to the first user identifier to trigger the expression actions represented by the extracted expression feature values.
S916, acquiring the three-dimensional virtual session members corresponding to the second user identifiers, and determining the distribution positions of the three-dimensional virtual session members in the three-dimensional virtual session scene.
S918, acquiring a three-dimensional background model corresponding to the three-dimensional virtual session scene, distributing the three-dimensional virtual session members at the corresponding distribution positions, and displaying them in combination with the three-dimensional background model to form the three-dimensional virtual session scene.
S920, detecting a touch operation acting on the three-dimensional virtual session scene to obtain a touch track.
S922, mapping the touch track to a movement track of the observation point in the three-dimensional virtual session scene, and determining the position of the moved observation point according to the movement track.
S924, performing projection display of the three-dimensional background model and the three-dimensional virtual session members according to the position of the moved observation point.
S926, obtaining the expression data sent by the terminal corresponding to the second user identifier.
S928, in the virtual session scene, controlling the virtual session member corresponding to the second user identifier to trigger the expression actions represented by the acquired expression data.
According to this interactive data processing method, interactive communication is achieved by controlling virtual session members to trigger the expression actions represented by expression data; compared with interactive communication based on users' real images, this improves privacy security in the interactive communication process to a certain extent. Moreover, controlling a virtual session member to trigger the expression actions represented by expression data offers another way to convey a user's real expressions during interactive communication, so that users can perceive the emotions of online users through the expression actions, providing a new mode of interaction.
Secondly, the currently logged-in first user identifier is added to a virtual session scene created based on the corresponding multi-person session, realizing an interactive communication mode in which virtual session members in the virtual session scene trigger the expression actions corresponding to expression data. This amounts to an improvement on the multi-person session and provides a new mode of interaction.
Then, for expression types in the preset expression type set that are not identified, the first terminal may assign expression feature values indicating that the corresponding expression actions are not triggered, and combine the expression feature values corresponding to the expression types, in the preset order of the expression types in the preset expression type set, to form the expression data. The expression data can therefore be very small, which helps ensure its transmission efficiency and transmission quality.
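To make the size claim concrete, the sketch below encodes one frame of expression data as a fixed-order list of feature values, using zero as an assumed "not triggered" value; the particular type order and the zero convention are illustrative assumptions, not a prescribed format:

```python
# Sketch: encode expression data as feature values in a preset type order.
# Unrecognized types get 0.0, assumed here to mean "action not triggered".

PRESET_EXPRESSION_TYPES = ["cry", "laugh", "open_mouth", "blink",
                           "turn_left_head", "nod"]  # assumed preset order

def encode_expression_data(recognized):
    """recognized: {expression_type: feature_value} identified this frame."""
    return [recognized.get(t, 0.0) for t in PRESET_EXPRESSION_TYPES]

data = encode_expression_data({"open_mouth": 10.0, "turn_left_head": 30.0})
print(data)  # [0.0, 0.0, 10.0, 0.0, 30.0, 0.0] -- a few values per frame,
             # so the payload stays small and cheap to transmit
```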
Moreover, each virtual session member is obtained from the user image data corresponding to its user identifier and the virtual session member initial model, so the virtual session member represents the corresponding member more recognizably, facilitating identity recognition during interactive communication and further improving the efficiency and effectiveness of user interaction.
Finally, view angle conversion for the three-dimensional virtual session scene is realized, allowing the viewing angle to be adjusted as the user requires, making the display of the three-dimensional virtual session scene during interactive communication more flexible, letting the displayed scene better meet user needs, and improving the effectiveness of the displayed three-dimensional virtual session scene.
As shown in fig. 10, in one embodiment, another interactive data processing method is provided, which may be applied to the first terminal 110 and/or the second terminal 120 in fig. 1. The embodiment is mainly illustrated by applying the method to the second terminal 120 in fig. 1. The method comprises the following steps:
S1002, joining the corresponding virtual session scene through the currently logged-in second user identifier.
The virtual session scene is a conversation scene provided for virtual session members; when the members that have joined the virtual session scene are displayed, they are displayed with the images of virtual session members. In one embodiment, the virtual session scene may be a virtual room. The virtual session scene may be a three-dimensional virtual session scene or a two-dimensional virtual session scene. The virtual session scene may be created based on a session; specifically, it may be created based on a multi-person session (a session with three or more members) or based on a two-person session (a session with exactly two members). In one embodiment, the virtual session scene may further include presented background information, where the presented background information may include a background picture or a three-dimensional background model, and the background picture may be a two-dimensional picture or a three-dimensional picture.
In one embodiment, the virtual session scene may be a real-time virtual session scene, that is, a virtual session scene used for realizing real-time communication. For example, a WeChat group is a multi-person session; a real-time call may be created in the WeChat group, and all members joining the real-time call may be displayed as avatars, that is, displayed as virtual session members to realize the real-time call, forming a real-time virtual session scene.
A virtual session member is the avatar with which a member of the virtual session scene is displayed. It will be appreciated that the avatar is a virtualized image, unlike a real image. A virtual session member includes a virtual character figure, and may also be an avatar of an animal, a plant, or another thing. The virtual session member may be a three-dimensional virtual session member or a two-dimensional virtual session member, and may be a default avatar (e.g., the virtual session member initial model) or an avatar derived from the virtual session member initial model in combination with user characteristics (e.g., user facial characteristics) and/or user-defined attributes (e.g., clothing attributes).
The currently logged-in second user identifier is the user identifier currently logged in to the application program used for implementing the virtual session scene. The application program may be an instant messaging application, a social application, a game application, or the like. The terminal corresponding to the currently logged-in second user identifier may be referred to as the "second terminal".
In one embodiment, the second terminal may request the server to add the currently logged-in second user identifier to the member list of the corresponding virtual session scene, so that the second user identifier joins the corresponding virtual session scene. After joining, the second terminal may communicate with the terminals corresponding to the other user identifiers that have joined the virtual session scene, for example by sending expression data to them. It is to be understood that a user identifier that has joined the virtual session scene is a user identifier in the member list of that scene.
In one embodiment, the second terminal may further display the members of the virtual session scene as avatar-based virtual session members in the virtual session scene. The displayed virtual session members may or may not include the virtual session member corresponding to the second user identifier currently logged in through the second terminal. Omitting the virtual session member corresponding to the second user identifier from the displayed scene does not affect interactive communication with the terminals of other members, and saves the system's computing and display resources.
S1004, receiving expression data sent by a terminal corresponding to a first user identifier that has joined the virtual session scene.
The expression data is data that can represent corresponding expression actions.
The expression data may include a string of expression feature values arranged in order. In one embodiment, the position or order of each expression feature value indicates the corresponding expression type. For example, if the expression type in the first position is "crying", the expression feature value in the first position represents the degree of crying.
The expression data may also include expression type identifications and corresponding expression feature values. An expression type is the category of an expression in the action dimension, such as opening the mouth, blinking, laughing, crying, turning the head, or nodding. It is to be understood that these expression types are merely examples and do not limit the classification of expressions; the set of expression types may be defined according to actual needs.
The second terminal may receive the expression data forwarded by the server from the terminal corresponding to the first user identifier that has joined the virtual session scene, or may directly receive, in a point-to-point manner, the expression data transmitted by the terminal corresponding to the first user identifier that has joined the virtual session scene.
S1006, extracting expression feature values from the expression data.
S1008, in the virtual session scene, controlling the virtual session member corresponding to the first user identifier to trigger the expression actions represented by the expression feature values.
In an embodiment, the second terminal may, in the virtual session scene, control the virtual session member corresponding to the first user identifier to implement the corresponding expression action according to the expression control logic code corresponding to the determined expression type and the extracted expression feature value. For example, if the expression action represented by the expression data is "open the mouth 10 degrees", the virtual session member corresponding to the first user identifier is controlled to perform an "open the mouth 10 degrees" action.
In another embodiment, the second terminal may instead generate corresponding texture information according to the expression feature value and the corresponding expression type, and display the texture information on the expression display part of the virtual session member corresponding to the first user identifier in the virtual session scene. For example, when the expression action represented by the expression data is "crying", the terminal corresponding to the second user identifier may generate "teardrop" texture information corresponding to "crying" according to the expression data, and display the "teardrop" texture below the eyes of the virtual session member corresponding to the first user identifier.
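A receiving-side sketch combining both behaviors might look as follows; the position-based decoding mirrors the fixed encoding order assumed earlier, and the texture table ("teardrop" for crying) is an illustrative assumption:

```python
# Sketch of the receiving side: extract feature values by their position in
# the preset order, then either drive an expression action or overlay texture
# information. The texture table and zero convention are assumptions.

PRESET_EXPRESSION_TYPES = ["cry", "laugh", "open_mouth", "blink",
                           "turn_left_head", "nod"]  # must match the sender

TEXTURE_FOR_TYPE = {"cry": "teardrop_below_eyes"}    # assumed mapping

def handle_expression_data(values):
    actions, textures = [], []
    for expr_type, value in zip(PRESET_EXPRESSION_TYPES, values):
        if value == 0.0:                 # "not triggered" per the encoding
            continue
        texture = TEXTURE_FOR_TYPE.get(expr_type)
        if texture:
            textures.append(texture)     # shown at the expression display part
        else:
            actions.append((expr_type, value))  # e.g. open mouth 10 degrees
    return actions, textures

print(handle_expression_data([5.0, 0.0, 10.0, 0.0, 0.0, 0.0]))
# ([('open_mouth', 10.0)], ['teardrop_below_eyes'])
```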
According to the above interactive data processing method, apparatus, and computer device, the corresponding virtual session scene is joined through the currently logged-in second user identifier; expression data sent by the terminal corresponding to the first user identifier that has joined the virtual session scene is received; expression feature values are extracted from the expression data; and, in the virtual session scene, the virtual session member corresponding to the first user identifier is controlled to trigger the expression actions represented by the expression feature values. Interactive communication is realized by controlling the virtual session member to trigger the expression actions represented by the expression data, which, compared with interactive communication based on users' real images, improves privacy security in the interactive communication process to a certain extent.
In addition, controlling the virtual session member to trigger the expression actions represented by the expression data offers another way to convey a user's real expressions during interactive communication, so that users can perceive the emotions of online users through the expression actions, providing a new mode of interaction.
In one embodiment, step S1002 includes: acquiring a multi-person session identifier corresponding to the currently logged-in second user identifier; and sending the multi-person session identifier and the second user identifier to the server, so that the server adds the second user identifier to the member list of the virtual session scene identified by the multi-person session identifier.
Wherein the multi-person session identifier is used for uniquely identifying the multi-person session. The number of members in the multi-person session is greater than or equal to 3. The multi-person session may be a group or temporary multi-person chat session, but may also be other types of multi-person sessions.
It is to be understood that the currently logged-in second user identifier is a member of the multi-person session corresponding to the multi-person session identifier. The virtual session scene identified by the multi-person session identifier may be identified directly, the multi-person session identifier itself serving as the scene's unique identifier. It may also be identified indirectly: the scene's unique identifier is a virtual session scene identifier that corresponds one-to-one with the multi-person session identifier and can be determined from it, so the multi-person session identifier still uniquely, if indirectly, identifies the corresponding virtual session scene.
Specifically, the user may log in to the application program for implementing the virtual session scene through the second user identifier and, after a successful login, open a multi-person session interface on the second terminal, that is, the interface of the multi-person session corresponding to the multi-person session identifier associated with the second user identifier. The user may initiate an operation of joining a virtual session scene in the opened interface; in response, the second terminal obtains the multi-person session identifier corresponding to the currently logged-in second user identifier and sends the multi-person session identifier and the second user identifier to the server, and the server adds the second user identifier to the member list of the virtual session scene identified by the multi-person session identifier, so that the second user identifier joins the corresponding virtual session scene.
In one embodiment, the server may return, to the second terminal, access information of the virtual session scene identified by the multi-person session identifier, and the second terminal may join the virtual session scene according to the access information. The access information includes an access IP address and port.
In the above embodiment, the currently logged-in second user identifier is added to the virtual session scene created based on the corresponding multi-person session, realizing an interactive communication mode in which virtual session members in the virtual session scene trigger the expression actions corresponding to expression data. This amounts to an improvement on the multi-person session and provides a new mode of interaction.
In one embodiment, sending the multi-person session identifier and the second user identifier to the server, so that the server adds the second user identifier to the member list of the virtual session scene identified by the multi-person session identifier, includes: sending the multi-person session identifier and the second user identifier to the server, so that the server, when the virtual session scene identified by the multi-person session identifier already exists, adds the second user identifier to the member list of that virtual session scene; or sending the multi-person session identifier and the second user identifier to the server, so that the server, when the virtual session scene identified by the multi-person session identifier does not exist, creates that virtual session scene and adds the second user identifier to the member list of the created virtual session scene.
Specifically, the second terminal sends the multi-person session identifier and the second user identifier to the server, requesting that the second user identifier be added to the virtual session scene identified by the multi-person session identifier. The server may detect whether a virtual session scene corresponding to the multi-person session identifier exists.
When the virtual session scene corresponding to the multi-person session identifier already exists, the server may add the second user identifier to the member list of that virtual session scene, so that the second user identifier joins the virtual session scene identified by the multi-person session identifier.
When the virtual session scene corresponding to the multi-person session identifier does not exist, the server may create a new virtual session scene according to the multi-person session identifier. It may use the multi-person session identifier as the direct identifier uniquely identifying the newly created scene, or generate a virtual session scene identifier corresponding one-to-one with the multi-person session identifier to identify the scene, that is, use the multi-person session identifier as an indirect identifier. The server may then add the second user identifier to the member list of the created virtual session scene.
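A server-side sketch of this unified join-or-create rule (an in-memory illustration only — the storage, naming, and return shape are all assumptions) could be:

```python
# Sketch: one request path covers both the first joiner (which triggers scene
# creation) and every later joiner. A real server would persist this state.

virtual_session_scenes = {}  # multi-person session id -> member list

def join_virtual_session_scene(multi_person_session_id, user_id):
    # Create the scene on first use; here the multi-person session identifier
    # doubles as the scene's direct identifier.
    members = virtual_session_scenes.setdefault(multi_person_session_id, [])
    if user_id not in members:
        members.append(user_id)
    return {"scene_id": multi_person_session_id, "members": list(members)}

print(join_virtual_session_scene("group_42", "user_b"))  # creates the scene
print(join_virtual_session_scene("group_42", "user_c"))  # joins existing one
```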
In this embodiment, the creation of and joining to a virtual session scene are unified: the user only needs to send the corresponding multi-person session identifier to the server through the second terminal to request to join, and the virtual session scene identified by that multi-person session identifier is joined whether or not it already exists on the server, so no separate scene-creation step is needed. In addition, the same rule applies to all users, both the user who first triggers creation of the virtual session scene and users who apply to join afterwards, avoiding the redundancy of multiple rule sets and improving the applicability of the logic.
In one embodiment, the method further includes: acquiring the virtual session members corresponding to the user identifiers in the member list of the virtual session scene; determining the distribution positions of the virtual session members in the virtual session scene; acquiring a background picture corresponding to the virtual session scene; and distributing the acquired virtual session members at the corresponding distribution positions, superimposed on the background picture for display, to form the virtual session scene.
The background picture is a picture as a display background. The background picture may be a two-dimensional background picture or a three-dimensional background picture. The background picture may be a virtual background picture or a real background picture. The virtual background picture is a picture for displaying a virtual scene, for example, a cartoon is a virtual scene. The real background picture is a picture showing a real scene, for example, a picture obtained by photographing the real scene shows the real scene.
Specifically, the second terminal may obtain, from the server, a virtual session member corresponding to the user identifier in the member list of the virtual session scene, that is, obtain avatar data corresponding to the user identifier in the member list of the virtual session scene. The virtual session member may be a three-dimensional virtual session member or a two-dimensional virtual session member.
The second terminal may determine, according to the number of user identifiers in the member list of the virtual session scene, the size of a geometric figure used for distributing the virtual session members, and select that number of positions in the geometric figure, thereby determining the distribution positions, in the virtual session scene, of the virtual session members corresponding to the user identifiers in the member list. The positions may be selected from the geometric figure at random, or according to a preset position-selection rule.
For example, if the number of user identifiers in the member list of the virtual session scene is 5, the size of the geometric figure for distributing the virtual session members is determined according to that number, and 5 positions are then selected in the geometric figure, each being the distribution position in the virtual session scene of the virtual session member corresponding to one user identifier in the member list.
In an embodiment, the second terminal may instead determine the size of the geometric figure according to the number of user identifiers in the member list of the virtual session scene excluding the second user identifier, select that number of positions in the figure, and so determine the distribution positions, in the virtual session scene, of the virtual session members corresponding to the user identifiers other than the second user identifier.
The second terminal may place the acquired virtual session members at the corresponding distribution positions, superimpose them on the acquired background picture to form the virtual session scene, and output and display it.
In this embodiment, the virtual session members and the background picture are superimposed and displayed to form the displayed virtual session scene, enriching the scene so that the virtual session scene displayed during interactive communication is closer to a real-life conversation scene, and diversifying the modes of interactive communication.
In one embodiment, the virtual session members are three-dimensional virtual session members and the virtual session scene is a three-dimensional virtual session scene. As shown in fig. 11, the method further includes a virtual session scene displaying step, which specifically includes the following steps:
S1102, acquiring three-dimensional virtual session members corresponding to the user identifiers in the member list of the virtual session scene.
Specifically, the second terminal may obtain, from the server, a three-dimensional virtual session member corresponding to the user identifier in the member list of the virtual session scene, that is, obtain three-dimensional avatar data corresponding to the user identifier in the member list of the virtual session scene.
S1104, determining the distribution positions of the three-dimensional virtual session members in the three-dimensional virtual session scene.
The second terminal may determine, according to the number of user identifiers in the member list of the three-dimensional virtual session scene, the size of a geometric figure used for distributing the three-dimensional virtual session members, and select that number of positions in the geometric figure, thereby determining the distribution positions, in the three-dimensional virtual session scene, of the three-dimensional virtual session members corresponding to the user identifiers in the member list. The positions may be selected from the geometric figure at random, or according to a preset position-selection rule.
For example, if the number of user identifiers in the member list of the virtual session scene is 5, the size of the geometric figure for distributing the three-dimensional virtual session members is determined according to that number, and 5 positions are then selected in the geometric figure, each being the distribution position in the three-dimensional virtual session scene of the three-dimensional virtual session member corresponding to one user identifier in the member list.
In an embodiment, the second terminal may instead determine the size of the geometric figure according to the number of user identifiers in the member list of the three-dimensional virtual session scene excluding the second user identifier, select that number of positions in the figure, and so determine the distribution positions, in the three-dimensional virtual session scene, of the three-dimensional virtual session members corresponding to the user identifiers other than the second user identifier.
S1106, acquiring a three-dimensional background model corresponding to the three-dimensional virtual session scene.
The three-dimensional background model is a three-dimensional model serving as a display background. The three-dimensional background model can be a three-dimensional virtual background model or a three-dimensional real background model. The three-dimensional virtual background model is a model for displaying a three-dimensional virtual scene. The three-dimensional real background model is a model for displaying a three-dimensional real scene.
S1108, distributing the three-dimensional virtual session members at the corresponding distribution positions and displaying them in combination with the three-dimensional background model to form the three-dimensional virtual session scene.
The second terminal may distribute the acquired three-dimensional virtual session members at the corresponding distribution positions, combine the three-dimensional virtual session members with the three-dimensional background model to form a three-dimensional virtual session scene, and output and display the three-dimensional virtual session scene.
In this embodiment, the three-dimensional virtual session members and the three-dimensional background model are combined and displayed to form the displayed three-dimensional virtual session scene, bringing the three-dimensional virtual session scene displayed during interactive communication still closer to a real-life conversation scene and diversifying the modes of interactive communication.
In one embodiment, the method further includes a view angle conversion step, specifically including: detecting a touch operation acting on the three-dimensional virtual session scene to obtain a touch track; mapping the touch track to a movement track of an observation point in the three-dimensional virtual session scene; determining the position of the moved observation point according to the movement track; and performing projection display of the three-dimensional background model and the three-dimensional virtual session members according to the position of the moved observation point.
Specifically, the user may perform a touch operation on the three-dimensional virtual session scene displayed on the second terminal interface to obtain a touch trajectory. The touch operation comprises pressing and dragging operations.
It can be understood that the three-dimensional virtual session scene finally presented on the display screen of the second terminal is obtained by projecting, according to the observation point, the three-dimensional virtual session scene formed by the three-dimensional background model and the three-dimensional virtual session members onto the display screen. Different observation points therefore yield different projections of the three-dimensional virtual session scene on the display screen.
The second terminal may map the touch trajectory to a movement trajectory of the observation point in the three-dimensional virtual session scene according to a mapping relationship between the touch point and the observation point. And the second terminal determines the position of the moved observation point according to the determined movement track of the observation point.
The second terminal takes the position of the moved observation point as the new observation point and re-projects the three-dimensional background model and the three-dimensional virtual session members onto the second terminal's display screen. The re-projected image of the three-dimensional background model thus differs from the image displayed before the touch operation, and the display angle of the re-projected three-dimensional virtual session members differs from the angle displayed before the touch operation, so that the re-projected three-dimensional background model and three-dimensional virtual session members form the three-dimensional virtual session scene under a new viewing angle.
This embodiment realizes view angle conversion for the three-dimensional virtual session scene, allowing the viewing angle to be adjusted as the user requires, making the display of the three-dimensional virtual session scene during interactive communication more flexible, letting the displayed scene better meet user needs, and improving the effectiveness of the displayed three-dimensional virtual session scene.
As shown in fig. 12, in one embodiment, there is provided an interactive data processing apparatus 1200, including: a joining module 1202, an image acquisition module 1204, an expression recognition module 1206, and a control module 1208, wherein:
a joining module 1202, configured to join the corresponding virtual session scene through the currently logged-in first user identifier;
an image acquisition module 1204, configured to acquire head image data;
an expression recognition module 1206, configured to recognize expression features in the head image data to obtain expression data; and
a control module 1208, configured to send the expression data to the terminal corresponding to a second user identifier that has joined the virtual session scene, so that that terminal controls the virtual session member corresponding to the first user identifier to trigger the expression actions represented by the expression data in the virtual session scene.
In an embodiment, the joining module 1202 is further configured to obtain a multi-person session identifier corresponding to the currently logged-in first user identifier, and send the multi-person session identifier and the first user identifier to the server, so that the server adds the first user identifier to the member list of the virtual session scene identified by the multi-person session identifier.
In one embodiment, the joining module 1202 is further configured to send the multi-person session identifier and the first user identifier to the server, so that the server joins the first user identifier to the member list of the virtual session scene identified by the multi-person session identifier when that virtual session scene already exists; or to send the multi-person session identifier and the first user identifier to the server, so that the server creates the virtual session scene identified by the multi-person session identifier when it does not exist, and joins the first user identifier to the member list of the created virtual session scene.
In one embodiment, the expression recognition module 1206 is further configured to recognize expression features in the head image data, and obtain expression types and corresponding expression feature values; and generating expression data comprising expression characteristic values corresponding to the identified expression types.
In one embodiment, the expression recognition module 1206 is further configured to assign, to an expression type that is not recognized in the preset expression type set, an expression feature value that indicates that a corresponding expression action is not triggered; and combining the expression characteristic values corresponding to the expression types according to the preset sequence of the expression types in the preset expression type set to form expression data.
In an embodiment, the control module 1208 is further configured to send the expression data to the terminal corresponding to a second user identifier that has joined the virtual session scene, so that that terminal extracts the expression feature values corresponding to the identified expression types from the expression data and, in the virtual session scene, controls the virtual session member corresponding to the first user identifier to trigger the expression actions represented by the extracted expression feature values.
In an embodiment, the control module 1208 is further configured to obtain expression data sent by a terminal corresponding to the second user identifier; and in the virtual session scene, controlling the virtual session member corresponding to the second user identifier to trigger the expression action represented by the acquired expression data.
As shown in fig. 13, in one embodiment, the apparatus 1200 further comprises:
a virtual session member generation module 1201, configured to obtain user facial image data corresponding to a currently logged-in first user identifier; and generating a virtual conversation member corresponding to the first user identification according to the user face image data and the virtual conversation member initial model.
In one embodiment, the apparatus 1200 further comprises:
a virtual session scene display module (not shown in the figure) configured to obtain a virtual session member corresponding to the second user identifier; determining the distribution positions of the virtual session members in the virtual session scene; acquiring a background picture corresponding to the virtual session scene; and distributing the acquired virtual session members at the corresponding distribution positions, and displaying the acquired virtual session members in an overlapping manner with the background picture to form a virtual session scene.
As shown in fig. 14, in one embodiment, the virtual session members are three-dimensional virtual session members and the virtual session scene is a three-dimensional virtual session scene. The apparatus 1200 further comprises:
a three-dimensional virtual session scene display module 1210, configured to obtain a three-dimensional virtual session member corresponding to the second user identifier; determining the distribution positions of the three-dimensional virtual session members in the three-dimensional virtual session scene; acquiring a three-dimensional background model corresponding to a three-dimensional virtual session scene; and distributing the three-dimensional virtual conversation members at the corresponding distribution positions, and performing combined display with the three-dimensional background model to form a three-dimensional virtual conversation scene.
As shown in fig. 15, in one embodiment, the apparatus 1200 further comprises:
the view angle adjusting module 1212 is configured to detect a touch operation applied to the three-dimensional virtual session scene to obtain a touch trajectory; mapping the touch track into a moving track of an observation point in a three-dimensional virtual session scene; determining the position of the moved observation point according to the moving track; and performing projection display on the three-dimensional background model and the three-dimensional virtual conversation members according to the position of the moved observation point.
As shown in fig. 16, in one embodiment, an interactive data processing apparatus 1600 is provided, the apparatus comprising a joining module 1602, an expression feature extraction module 1604, and a control module 1606, wherein:
a joining module 1602, configured to join the corresponding virtual session scene through the currently logged-in second user identifier;
an expression feature extraction module 1604, configured to receive expression data sent by the terminal corresponding to the first user identifier that has joined the virtual session scene, and extract expression feature values from the expression data; and
a control module 1606, configured to control, in the virtual session scene, the virtual session member corresponding to the first user identifier to trigger the expression actions represented by the expression feature values.
In one embodiment, the control module 1606 is further configured to determine the expression type corresponding to each extracted expression feature value; control, in the virtual session scene, the virtual session member corresponding to the first user identifier to implement the corresponding expression action according to the expression control logic code corresponding to the determined expression type and the extracted expression feature value; and/or generate corresponding texture information according to the expression feature value and the corresponding expression type, and display the texture information on the expression display part of the virtual session member corresponding to the first user identifier in the virtual session scene.
In an embodiment, the joining module 1602 is further configured to obtain a multi-person session identifier corresponding to the currently logged-in second user identifier; and send the multi-person session identifier and the second user identifier to the server, so that the server adds the second user identifier to the member list of the virtual session scene identified by the multi-person session identifier.
In one embodiment, the joining module 1602 is further configured to send the multi-person session identifier and the second user identifier to the server, so that the server, when the virtual session scene identified by the multi-person session identifier already exists, joins the second user identifier to the member list of that scene; or to send the multi-person session identifier and the second user identifier to the server, so that the server, when the virtual session scene identified by the multi-person session identifier does not exist, creates that scene and joins the second user identifier to the member list of the created scene.
In one embodiment, apparatus 1600 further comprises:
a virtual session scene display module (not shown in the figure), configured to obtain the virtual session members corresponding to the user identifiers in the member list of the virtual session scene; determine the distribution positions of the virtual session members in the virtual session scene; acquire a background picture corresponding to the virtual session scene; and distribute the acquired virtual session members at the corresponding distribution positions, superimposed on the background picture for display, to form the virtual session scene.
In one embodiment, the virtual session members are three-dimensional virtual session members and the virtual session scene is a three-dimensional virtual session scene. The apparatus 1600 further comprises:
a three-dimensional virtual session scene display module (not shown in the figure) for acquiring three-dimensional virtual session members corresponding to the user identifiers in the member list of the virtual session scene; determining the distribution positions of the three-dimensional virtual session members in the three-dimensional virtual session scene; acquiring a three-dimensional background model corresponding to a three-dimensional virtual session scene; and distributing the three-dimensional virtual conversation members at the corresponding distribution positions, and performing combined display with the three-dimensional background model to form a three-dimensional virtual conversation scene.
In one embodiment, apparatus 1600 further comprises:
the visual angle adjusting module is used for detecting touch operation acting on the three-dimensional virtual session scene to obtain a touch track; mapping the touch track into a moving track of an observation point in a three-dimensional virtual session scene; determining the position of the moved observation point according to the moving track; and performing projection display on the three-dimensional background model and the three-dimensional virtual conversation members according to the position of the moved observation point.
In one embodiment, the interactive data processing apparatus provided in the present application may be implemented in the form of a computer program executable on the computer device shown in fig. 2, and a non-volatile storage medium of the computer device may store the program modules constituting the interactive data processing apparatus, such as the joining module 1202, the image acquisition module 1204, the expression recognition module 1206, and the control module 1208 shown in fig. 12. Each program module includes computer-readable instructions for causing the computer device to execute the steps of the interactive data processing methods of the embodiments described in this specification. For example, the computer device may join the corresponding virtual session scene through the currently logged-in first user identifier via the joining module 1202 of the interactive data processing apparatus 1200 shown in fig. 12, collect head image data through the image acquisition module 1204, and recognize expression features in the head image data through the expression recognition module 1206 to obtain expression data. The expression data is then sent, through the control module 1208, to the terminal corresponding to a second user identifier that has joined the virtual session scene, so that that terminal controls the virtual session member corresponding to the first user identifier to trigger the expression actions represented by the expression data in the virtual session scene.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein computer-readable instructions that, when executed by the processor, cause the processor to perform the steps of: joining a corresponding virtual session scene through a currently logged-in first user identifier; acquiring head image data; identifying expression features in the head image data to obtain expression data; and sending the expression data to a terminal corresponding to a second user identifier added into the virtual session scene, so that the terminal controls a virtual session member corresponding to the first user identifier to trigger an expression action represented by the expression data in the virtual session scene.
In one embodiment, the joining of the corresponding virtual session scene through the currently logged-in first user identifier, executed by the processor, includes: acquiring a multi-person session identifier corresponding to the currently logged-in first user identifier; and sending the multi-person session identifier and the first user identifier to a server, so that the server adds the first user identifier into a member list of the virtual session scene identified by the multi-person session identifier.
In one embodiment, said sending said multi-person session identification and said first user identification to a server executed by a processor to cause said server to join said first user identification to a member list of a virtual session context identified by said multi-person session identification comprises: sending the multi-person session identifier and the first user identifier to a server, so that the server adds the first user identifier to a member list of a virtual session scene identified by the multi-person session identifier when the virtual session scene identified by the multi-person session identifier exists; or, the multi-person session identifier and the first user identifier are sent to a server, so that the server creates a virtual session scene identified by the multi-person session identifier when the virtual session scene identified by the multi-person session identifier does not exist, and adds the first user identifier to a member list of the created virtual session scene.
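A minimal server-side sketch of this join-or-create behavior, with the data structures as illustrative assumptions only: if the virtual session scene identified by the multi-person session identifier already exists, the user identifier is appended to its member list; otherwise the scene is created first.

    scenes = {}  # multi-person session identifier -> member list of that scene

    def join_scene(session_id, user_id):
        # setdefault creates the scene's member list when the scene does not exist yet.
        members = scenes.setdefault(session_id, [])
        if user_id not in members:
            members.append(user_id)
        return members

For example, join_scene("session-42", "first-user") creates the scene and returns ["first-user"], and a later join_scene("session-42", "second-user") joins the existing member list.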
In one embodiment, the identifying, performed by the processor, of expression features in the head image data to obtain expression data comprises: identifying expression features in the head image data to obtain expression types and corresponding expression characteristic values; and generating expression data comprising the expression characteristic values corresponding to the identified expression types.
In one embodiment, the identified expression type belongs to a preset expression type set; the generating, performed by the processor, of expression data including expression characteristic values corresponding to the identified expression types includes: assigning, to the expression types in the preset expression type set that are not recognized, expression characteristic values indicating that the corresponding expression actions are not triggered; and combining the expression characteristic values corresponding to the expression types according to the preset sequence of the expression types in the preset expression type set to form the expression data.
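Because the ordering is preset, the expression data can be a purely positional sequence of characteristic values. The following minimal Python sketch illustrates the encoding; the concrete expression type set, its order, and the use of 0.0 as the value meaning "expression action not triggered" are assumptions for the example.

    # Illustrative preset expression type set; the real set and its order are configuration.
    PRESET_EXPRESSION_TYPES = ("mouth_open", "eye_blink", "head_yaw", "smile")
    NOT_TRIGGERED = 0.0  # assumed value indicating the corresponding action is not triggered

    def encode_expression_data(recognized):
        # recognized: dict mapping recognized expression types to characteristic values,
        # e.g. {"mouth_open": 0.7}; unrecognized types receive the not-triggered value.
        return [recognized.get(t, NOT_TRIGGERED) for t in PRESET_EXPRESSION_TYPES]

With this ordering, encode_expression_data({"smile": 0.9}) yields [0.0, 0.0, 0.0, 0.9], so a receiving terminal can recover each characteristic value by position without the expression type names being transmitted.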
In one embodiment, the sending, executed by the processor, of the expression data to a terminal corresponding to a second user identifier added into the virtual session scene, so that the terminal controls, in the virtual session scene, a virtual session member corresponding to the first user identifier to trigger an expression action represented by the expression data, includes: sending the expression data to the terminal corresponding to the second user identifier added into the virtual session scene, so that the terminal extracts an expression characteristic value corresponding to the identified expression type from the expression data and controls, in the virtual session scene, the virtual session member corresponding to the first user identifier to trigger the expression action represented by the extracted expression characteristic value.
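A minimal sketch of this relay, in which the transport callable and the message format are assumptions: the encoded expression data is tagged with the first user identifier and forwarded toward the terminals of the other user identifiers in the scene, which decode it positionally and trigger the represented expression action.

    import json

    def relay_expression_data(send, first_user_id, expression_data, member_ids):
        # send: assumed transport callable taking (member_id, message); supplied by the caller.
        message = json.dumps({"from": first_user_id, "expr": expression_data})
        for member_id in member_ids:
            if member_id != first_user_id:   # deliver to every other member's terminal
                send(member_id, message)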
In one embodiment, the computer readable instructions further cause the processor to perform the steps of: obtaining expression data sent by a terminal corresponding to the second user identifier; and in the virtual session scene, controlling a virtual session member corresponding to the second user identifier to trigger the expression action represented by the acquired expression data.
In one embodiment, the computer readable instructions further cause the processor to, before the joining of the corresponding virtual session scene through the currently logged-in first user identifier, perform the steps of: acquiring user face image data corresponding to the currently logged-in first user identifier; and generating a virtual session member corresponding to the first user identifier according to the user face image data and the virtual session member initial model.
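How the member could be derived from the initial model can be sketched as follows, assuming a parameterized initial model whose shape parameters are adjusted by features extracted from the user face image data and whose face texture comes from the image itself; the AvatarModel fields and the pre-extracted parameters are assumptions, and the feature extraction step is outside the sketch.

    from dataclasses import dataclass, field, replace

    @dataclass
    class AvatarModel:
        shape_params: dict = field(default_factory=dict)  # e.g. jaw width, eye spacing
        face_texture: bytes = b""                         # texture mapped onto the face region

    def generate_session_member(face_image, initial_model, extracted_params):
        # extracted_params stands in for features recognized from the face image data.
        return replace(initial_model,
                       shape_params={**initial_model.shape_params, **extracted_params},
                       face_texture=face_image)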
In one embodiment, the computer readable instructions further cause the processor to perform the steps of: acquiring a virtual session member corresponding to the second user identifier; determining the distribution positions of the virtual session members in the virtual session scene; acquiring a background picture corresponding to the virtual session scene; and distributing the acquired virtual session members at the corresponding distribution positions, and displaying the acquired virtual session members and the background picture in an overlapping manner to form the virtual session scene.
In one embodiment, the virtual session member is a three-dimensional virtual session member, and the virtual session scene is a three-dimensional virtual session scene; the computer readable instructions further cause the processor to perform the steps of: acquiring a three-dimensional virtual conversation member corresponding to the second user identifier; determining the distribution positions of the three-dimensional virtual session members in the three-dimensional virtual session scene; acquiring a three-dimensional background model corresponding to the three-dimensional virtual session scene; and distributing the three-dimensional virtual session members at the corresponding distribution positions, and performing combined display on the three-dimensional virtual session members and the three-dimensional background model to form the three-dimensional virtual session scene.
In one embodiment, the computer readable instructions further cause the processor to perform the steps of: detecting touch operation acting on the three-dimensional virtual session scene to obtain a touch track; mapping the touch track to a movement track of an observation point in the three-dimensional virtual session scene; determining the position of the moved observation point according to the movement track; and performing projection display on the three-dimensional background model and the three-dimensional virtual conversation members according to the position of the moved observation point.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein computer-readable instructions that, when executed by the processor, cause the processor to perform the steps of: joining a corresponding virtual session scene through a currently logged-in second user identifier; receiving expression data sent by a terminal corresponding to a first user identifier added into the virtual session scene; extracting expression characteristic values from the expression data; and in the virtual session scene, controlling a virtual session member corresponding to the first user identifier to trigger the expression action represented by the expression characteristic value.
In one embodiment, the controlling, executed by the processor, of a virtual session member corresponding to the first user identifier in the virtual session scene to trigger the expression action represented by the expression characteristic value includes: determining an expression type corresponding to the extracted expression characteristic value; in the virtual session scene, controlling the virtual session member corresponding to the first user identifier to implement a corresponding expression action according to an expression control logic code corresponding to the determined expression type and the extracted expression characteristic value; and/or generating corresponding texture information according to the expression characteristic value and the corresponding expression type, and displaying the texture information on an expression display part of the virtual session member corresponding to the first user identifier in the virtual session scene.
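The two display paths can be illustrated with a minimal sketch, in which the expression types, the member's dictionary representation, and the texture file names are assumptions: a characteristic value may parameterize control logic for the member's model, and/or select texture information shown on the member's expression display part.

    def apply_expression(member, expression_type, characteristic_value):
        # member: dict standing in for the virtual session member's display state.
        if expression_type == "mouth_open":
            # Control-logic path: the characteristic value parameterizes the action.
            member["mouth_open_amount"] = max(0.0, min(1.0, characteristic_value))
        elif expression_type == "smile":
            # Texture path: choose texture information matching the type and value.
            member["face_texture"] = ("smile_big.png" if characteristic_value > 0.5
                                      else "smile_small.png")
        return member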
In one embodiment, the joining of the corresponding virtual session scene through the currently logged-in second user identifier, executed by the processor, includes: acquiring a multi-person session identifier corresponding to the currently logged-in second user identifier; and sending the multi-person session identifier and the second user identifier to a server, so that the server adds the second user identifier into a member list of the virtual session scene identified by the multi-person session identifier.
In one embodiment, the sending the multi-person session identifier and the second user identifier to a server executed by a processor, so that the server joins the second user identifier to a member list of a virtual session scenario identified by the multi-person session identifier, includes: sending the multi-person session identifier and the second user identifier to a server, so that the server adds the second user identifier to a member list of a virtual session scene identified by the multi-person session identifier when the virtual session scene identified by the multi-person session identifier exists; or, the multi-person session identifier and the second user identifier are sent to a server, so that the server creates a virtual session scene identified by the multi-person session identifier when the virtual session scene identified by the multi-person session identifier does not exist, and adds the second user identifier to a member list of the created virtual session scene.
In one embodiment, the computer readable instructions further cause the processor to perform the steps of: acquiring virtual session members corresponding to user identifications in a member list of the virtual session scene; determining the distribution positions of the virtual session members in the virtual session scene; acquiring a background picture corresponding to the virtual session scene; and distributing the acquired virtual session members at the corresponding distribution positions, and displaying the acquired virtual session members and the background picture in an overlapping manner to form the virtual session scene.
In one embodiment, the virtual session member is a three-dimensional virtual session member, and the virtual session scene is a three-dimensional virtual session scene. The computer readable instructions further cause the processor to perform the steps of: acquiring three-dimensional virtual session members corresponding to user identifications in a member list of the virtual session scene; determining the distribution positions of the three-dimensional virtual session members in the three-dimensional virtual session scene; acquiring a three-dimensional background model corresponding to the three-dimensional virtual session scene; and distributing the three-dimensional virtual session members at the corresponding distribution positions, and performing combined display on the three-dimensional virtual session members and the three-dimensional background model to form the three-dimensional virtual session scene.
In one embodiment, the computer readable instructions further cause the processor to perform the steps of: detecting touch operation acting on the three-dimensional virtual session scene to obtain a touch track; mapping the touch track to a movement track of an observation point in the three-dimensional virtual session scene; determining the position of the moved observation point according to the movement track; and performing projection display on the three-dimensional background model and the three-dimensional virtual conversation members according to the position of the moved observation point.
In one embodiment, a non-transitory readable storage medium is provided that stores computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of: joining a corresponding virtual session scene through a currently logged-in first user identifier; acquiring head image data; identifying expression features in the head image data to obtain expression data; and sending the expression data to a terminal corresponding to a second user identifier added into the virtual session scene, so that the terminal controls a virtual session member corresponding to the first user identifier to trigger an expression action represented by the expression data in the virtual session scene.
In one embodiment, the joining of the corresponding virtual session scene through the currently logged-in first user identifier, executed by the processor, includes: acquiring a multi-person session identifier corresponding to the currently logged-in first user identifier; and sending the multi-person session identifier and the first user identifier to a server, so that the server adds the first user identifier into a member list of the virtual session scene identified by the multi-person session identifier.
In one embodiment, said sending said multi-person session identification and said first user identification to a server executed by a processor to cause said server to join said first user identification to a member list of a virtual session context identified by said multi-person session identification comprises: sending the multi-person session identifier and the first user identifier to a server, so that the server adds the first user identifier to a member list of a virtual session scene identified by the multi-person session identifier when the virtual session scene identified by the multi-person session identifier exists; or, the multi-person session identifier and the first user identifier are sent to a server, so that the server creates a virtual session scene identified by the multi-person session identifier when the virtual session scene identified by the multi-person session identifier does not exist, and adds the first user identifier to a member list of the created virtual session scene.
In one embodiment, the identifying, performed by the processor, of expression features in the head image data to obtain expression data comprises: identifying expression features in the head image data to obtain expression types and corresponding expression characteristic values; and generating expression data comprising the expression characteristic values corresponding to the identified expression types.
In one embodiment, the identified expression type belongs to a preset expression type set; the generating, performed by the processor, of expression data including expression characteristic values corresponding to the identified expression types includes: assigning, to the expression types in the preset expression type set that are not recognized, expression characteristic values indicating that the corresponding expression actions are not triggered; and combining the expression characteristic values corresponding to the expression types according to the preset sequence of the expression types in the preset expression type set to form the expression data.
In one embodiment, the sending, executed by the processor, of the expression data to a terminal corresponding to a second user identifier added into the virtual session scene, so that the terminal controls, in the virtual session scene, a virtual session member corresponding to the first user identifier to trigger an expression action represented by the expression data, includes: sending the expression data to the terminal corresponding to the second user identifier added into the virtual session scene, so that the terminal extracts an expression characteristic value corresponding to the identified expression type from the expression data and controls, in the virtual session scene, the virtual session member corresponding to the first user identifier to trigger the expression action represented by the extracted expression characteristic value.
In one embodiment, the computer readable instructions further cause the processor to perform the steps of: obtaining expression data sent by a terminal corresponding to the second user identifier; and in the virtual session scene, controlling a virtual session member corresponding to the second user identifier to trigger the expression action represented by the acquired expression data.
In one embodiment, the computer readable instructions further cause the processor to, before the joining of the corresponding virtual session scene through the currently logged-in first user identifier, perform the steps of: acquiring user face image data corresponding to the currently logged-in first user identifier; and generating a virtual session member corresponding to the first user identifier according to the user face image data and the virtual session member initial model.
In one embodiment, the computer readable instructions further cause the processor to perform the steps of: acquiring a virtual session member corresponding to the second user identifier; determining the distribution positions of the virtual session members in the virtual session scene; acquiring a background picture corresponding to the virtual session scene; and distributing the acquired virtual session members at the corresponding distribution positions, and displaying the acquired virtual session members and the background picture in an overlapping manner to form the virtual session scene.
In one embodiment, the virtual session member is a three-dimensional virtual session member, and the virtual session scene is a three-dimensional virtual session scene; the computer readable instructions further cause the processor to perform the steps of: acquiring a three-dimensional virtual conversation member corresponding to the second user identifier; determining the distribution positions of the three-dimensional virtual session members in the three-dimensional virtual session scene; acquiring a three-dimensional background model corresponding to the three-dimensional virtual session scene; and distributing the three-dimensional virtual session members at the corresponding distribution positions, and performing combined display on the three-dimensional virtual session members and the three-dimensional background model to form the three-dimensional virtual session scene.
In one embodiment, the computer readable instructions further cause the processor to perform the steps of: detecting touch operation acting on the three-dimensional virtual session scene to obtain a touch track; mapping the touch track to a movement track of an observation point in the three-dimensional virtual session scene; determining the position of the moved observation point according to the movement track; and performing projection display on the three-dimensional background model and the three-dimensional virtual conversation members according to the position of the moved observation point.
In one embodiment, a non-transitory readable storage medium is provided that stores computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of: joining a corresponding virtual session scene through a currently logged-in second user identifier; receiving expression data sent by a terminal corresponding to a first user identifier added into the virtual session scene; extracting expression characteristic values from the expression data; and in the virtual session scene, controlling a virtual session member corresponding to the first user identifier to trigger the expression action represented by the expression characteristic value.
In one embodiment, the controlling, executed by the processor, of a virtual session member corresponding to the first user identifier in the virtual session scene to trigger the expression action represented by the expression characteristic value includes: determining an expression type corresponding to the extracted expression characteristic value; in the virtual session scene, controlling the virtual session member corresponding to the first user identifier to implement a corresponding expression action according to an expression control logic code corresponding to the determined expression type and the extracted expression characteristic value; and/or generating corresponding texture information according to the expression characteristic value and the corresponding expression type, and displaying the texture information on an expression display part of the virtual session member corresponding to the first user identifier in the virtual session scene.
In one embodiment, the joining of the corresponding virtual session scene through the currently logged-in second user identifier, executed by the processor, includes: acquiring a multi-person session identifier corresponding to the currently logged-in second user identifier; and sending the multi-person session identifier and the second user identifier to a server, so that the server adds the second user identifier into a member list of the virtual session scene identified by the multi-person session identifier.
In one embodiment, the sending the multi-person session identifier and the second user identifier to a server executed by a processor, so that the server joins the second user identifier to a member list of a virtual session scenario identified by the multi-person session identifier, includes: sending the multi-person session identifier and the second user identifier to a server, so that the server adds the second user identifier to a member list of a virtual session scene identified by the multi-person session identifier when the virtual session scene identified by the multi-person session identifier exists; or, the multi-person session identifier and the second user identifier are sent to a server, so that the server creates a virtual session scene identified by the multi-person session identifier when the virtual session scene identified by the multi-person session identifier does not exist, and adds the second user identifier to a member list of the created virtual session scene.
In one embodiment, the computer readable instructions further cause the processor to perform the steps of: acquiring virtual session members corresponding to user identifications in a member list of the virtual session scene; determining the distribution positions of the virtual session members in the virtual session scene; acquiring a background picture corresponding to the virtual session scene; and distributing the acquired virtual session members at the corresponding distribution positions, and displaying the acquired virtual session members and the background picture in an overlapping manner to form the virtual session scene.
In one embodiment, the virtual session member is a three-dimensional virtual session member, and the virtual session scene is a three-dimensional virtual session scene. The computer readable instructions further cause the processor to perform the steps of: acquiring three-dimensional virtual session members corresponding to user identifications in a member list of the virtual session scene; determining the distribution positions of the three-dimensional virtual session members in the three-dimensional virtual session scene; acquiring a three-dimensional background model corresponding to the three-dimensional virtual session scene; and distributing the three-dimensional virtual session members at the corresponding distribution positions, and performing combined display on the three-dimensional virtual session members and the three-dimensional background model to form the three-dimensional virtual session scene.
In one embodiment, the computer readable instructions further cause the processor to perform the steps of: detecting touch operation acting on the three-dimensional virtual session scene to obtain a touch track; mapping the touch track to a movement track of an observation point in the three-dimensional virtual session scene; determining the position of the moved observation point according to the movement track; and performing projection display on the three-dimensional background model and the three-dimensional virtual conversation members according to the position of the moved observation point.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program. The computer program can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, or a Read-Only Memory (ROM), or may be a Random Access Memory (RAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered to fall within the scope of this specification.
The above examples express only several embodiments of the present invention, and their descriptions are specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present invention, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (26)

1. A method of interactive data processing, the method comprising:
joining a corresponding three-dimensional virtual session scene through a currently logged-in first user identifier, wherein the three-dimensional virtual session scene is a session scene provided for three-dimensional virtual session members; the three-dimensional virtual session scene is created based on a session;
acquiring head image data;
identifying expression features in the head image data to obtain expression data;
sending the expression data to a terminal corresponding to a second user identifier added into the three-dimensional virtual session scene, so that the terminal controls a three-dimensional virtual session member corresponding to the first user identifier to trigger an expression action represented by the expression data in the three-dimensional virtual session scene;
acquiring a three-dimensional virtual conversation member corresponding to the second user identifier;
determining the size of a geometric figure used for distributing the three-dimensional virtual session members according to the number of the user identifiers in the member list of the three-dimensional virtual session scene, and selecting, on the geometric figure, positions whose number matches the number of the user identifiers, so as to determine the distribution positions of the three-dimensional virtual session members in the three-dimensional virtual session scene;
acquiring a three-dimensional background model corresponding to the three-dimensional virtual session scene;
distributing the three-dimensional virtual session members to the corresponding distribution positions, and performing combined display with the three-dimensional background model to form, output, and display the three-dimensional virtual session scene;
detecting touch operation acting on the three-dimensional virtual session scene to obtain a touch track;
mapping the touch track into a movement track of an observation point in the three-dimensional virtual session scene according to a mapping relation between the touch point and the observation point;
determining the position of the moved observation point according to the movement track, wherein the three-dimensional virtual session scene is obtained by projecting the three-dimensional background model and the three-dimensional virtual session members onto a display screen for display according to the observation point, and different observation points result in different three-dimensional virtual session scenes projected onto the display screen;
and performing projection display on the three-dimensional background model and the three-dimensional virtual session members according to the position of the moved observation point to form the three-dimensional virtual session scene under a new visual angle.
2. The method of claim 1, wherein joining the corresponding three-dimensional virtual session scene through the currently logged-in first user identifier comprises:
acquiring a multi-person session identifier corresponding to the currently logged-in first user identifier;
and sending the multi-person session identifier and the first user identifier to a server, so that the server adds the first user identifier into a member list of the three-dimensional virtual session scene identified by the multi-person session identifier.
3. The method of claim 2, wherein said sending the multi-person session identifier and the first user identifier to a server to cause the server to join the first user identifier to a list of members of a three-dimensional virtual session scene identified by the multi-person session identifier comprises:
sending the multi-person session identifier and the first user identifier to a server, so that the server adds the first user identifier to a member list of the three-dimensional virtual session scene identified by the multi-person session identifier when the three-dimensional virtual session scene identified by the multi-person session identifier exists; or,
sending the multi-person session identifier and the first user identifier to a server, so that the server creates the three-dimensional virtual session scene identified by the multi-person session identifier when the three-dimensional virtual session scene identified by the multi-person session identifier does not exist, and adds the first user identifier into a member list of the created three-dimensional virtual session scene.
4. The method of claim 1, wherein the identifying expressive features in the head image data, resulting in expression data, comprises:
identifying expression features in the head image data to obtain expression types and corresponding expression characteristic values;
and generating expression data comprising expression characteristic values corresponding to the identified expression types.
5. The method of claim 4, wherein the identified expression type belongs to a preset expression type set;
the generating expression data including expression feature values corresponding to the expression types obtained through recognition includes:
assigning, to the expression types in the preset expression type set that are not recognized, expression characteristic values indicating that the corresponding expression actions are not triggered;
and combining the expression characteristic values corresponding to the expression types according to the preset sequence of the expression types in the preset expression type set to form expression data.
6. The method according to claim 4 or 5, wherein the sending the expression data to a terminal corresponding to a second user identifier added to the three-dimensional virtual session scene to enable the terminal to control a three-dimensional virtual session member corresponding to the first user identifier to trigger an expression action represented by the expression data in the three-dimensional virtual session scene comprises:
sending the expression data to a terminal corresponding to a second user identifier added into the three-dimensional virtual session scene, so that the terminal extracts an expression characteristic value corresponding to the identified expression type from the expression data and controls, in the three-dimensional virtual session scene, a three-dimensional virtual session member corresponding to the first user identifier to trigger the expression action represented by the extracted expression characteristic value.
7. The method according to any one of claims 1 to 5, further comprising:
obtaining expression data sent by a terminal corresponding to the second user identifier;
and in the three-dimensional virtual session scene, controlling a three-dimensional virtual session member corresponding to the second user identifier to trigger the expression action represented by the acquired expression data.
8. The method according to any of claims 1 to 5, wherein before said joining the corresponding three-dimensional virtual session context by the currently logged-on first user identifier, the method further comprises:
acquiring user face image data corresponding to a first user identifier which is currently logged in;
and generating a three-dimensional virtual session member corresponding to the first user identifier according to the user face image data and the virtual session member initial model.
9. A method of interactive data processing, the method comprising:
joining a corresponding three-dimensional virtual session scene through a currently logged-in second user identifier; the three-dimensional virtual session scene is created based on a session;
receiving expression data sent by a terminal corresponding to a first user identifier added into the three-dimensional virtual session scene;
extracting expression characteristic values from the expression data;
in the three-dimensional virtual session scene, controlling a three-dimensional virtual session member corresponding to the first user identifier to trigger an expression action represented by the expression characteristic value;
acquiring three-dimensional virtual session members corresponding to user identifications in a member list of the three-dimensional virtual session scene;
determining the size of a geometric figure used for distributing the three-dimensional virtual session members according to the number of the user identifiers in the member list of the three-dimensional virtual session scene, and selecting, on the geometric figure, positions whose number matches the number of the user identifiers, so as to determine the distribution positions of the three-dimensional virtual session members in the three-dimensional virtual session scene;
acquiring a three-dimensional background model corresponding to the three-dimensional virtual session scene;
distributing the three-dimensional virtual session members to the corresponding distribution positions, and performing combined display with the three-dimensional background model to form, output, and display the three-dimensional virtual session scene; detecting a touch operation acting on the three-dimensional virtual session scene to obtain a touch track;
mapping the touch track into a movement track of an observation point in the three-dimensional virtual session scene according to a mapping relation between the touch point and the observation point;
determining the position of the moved observation point according to the movement track, wherein the three-dimensional virtual session scene is obtained by projecting the three-dimensional background model and the three-dimensional virtual session members onto a display screen for display according to the observation point, and different observation points result in different three-dimensional virtual session scenes projected onto the display screen;
and performing projection display on the three-dimensional background model and the three-dimensional virtual session members according to the position of the moved observation point to form the three-dimensional virtual session scene under a new visual angle.
10. The method according to claim 9, wherein, in the three-dimensional virtual session scene, controlling a three-dimensional virtual session member corresponding to the first user identifier to trigger the expression action represented by the expression characteristic value comprises:
determining an expression type corresponding to the extracted expression characteristic value;
in the three-dimensional virtual session scene, controlling the three-dimensional virtual session member corresponding to the first user identifier to implement a corresponding expression action according to an expression control logic code corresponding to the determined expression type and the extracted expression characteristic value; and/or,
generating corresponding texture information according to the expression characteristic value and the corresponding expression type, and displaying the texture information on an expression display part of the three-dimensional virtual session member corresponding to the first user identifier in the three-dimensional virtual session scene.
11. The method of claim 9, wherein joining the corresponding three-dimensional virtual session scene through the currently logged-in second user identifier comprises:
acquiring a multi-person session identifier corresponding to the currently logged-in second user identifier;
and sending the multi-person session identifier and the second user identifier to a server, so that the server adds the second user identifier into a member list of the three-dimensional virtual session scene identified by the multi-person session identifier.
12. The method of claim 11, wherein the sending the multi-person session identifier and the second user identifier to a server to enable the server to join the second user identifier to a member list of a three-dimensional virtual session scene identified by the multi-person session identifier comprises:
sending the multi-person session identifier and the second user identifier to a server, so that the server adds the second user identifier into a member list of the three-dimensional virtual session scene identified by the multi-person session identifier when the three-dimensional virtual session scene identified by the multi-person session identifier exists; or,
sending the multi-person session identifier and the second user identifier to a server, so that the server creates the three-dimensional virtual session scene identified by the multi-person session identifier when the three-dimensional virtual session scene identified by the multi-person session identifier does not exist, and adds the second user identifier to a member list of the created three-dimensional virtual session scene.
13. An interactive data processing apparatus, characterized in that the apparatus comprises:
the joining module is used for joining the corresponding three-dimensional virtual session scene through the currently logged-in first user identifier; the three-dimensional virtual session scene is created based on a session;
the image acquisition module is used for acquiring head image data;
the expression recognition module is used for recognizing expression features in the head image data to obtain expression data;
the control module is used for sending the expression data to a terminal corresponding to a second user identifier added into the three-dimensional virtual session scene, so that the terminal controls a three-dimensional virtual session member corresponding to the first user identifier to trigger an expression action represented by the expression data in the three-dimensional virtual session scene;
the virtual session scene display module is used for acquiring a three-dimensional virtual session member corresponding to the second user identifier; determining the size of a geometric figure used for distributing the three-dimensional virtual session members according to the number of the user identifiers in the member list of the three-dimensional virtual session scene, and selecting, on the geometric figure, positions whose number matches the number of the user identifiers, so as to determine the distribution positions of the three-dimensional virtual session members in the three-dimensional virtual session scene; acquiring a three-dimensional background model corresponding to the three-dimensional virtual session scene; and distributing the three-dimensional virtual session members to the corresponding distribution positions, and performing combined display with the three-dimensional background model to form, output, and display the three-dimensional virtual session scene;
the visual angle adjusting module is used for detecting a touch operation acting on the three-dimensional virtual session scene to obtain a touch track; mapping the touch track into a movement track of an observation point in the three-dimensional virtual session scene according to a mapping relation between the touch point and the observation point; determining the position of the moved observation point according to the movement track, wherein the three-dimensional virtual session scene is obtained by projecting the three-dimensional background model and the three-dimensional virtual session members onto a display screen for display according to the observation point, and different observation points result in different three-dimensional virtual session scenes projected onto the display screen; and performing projection display on the three-dimensional background model and the three-dimensional virtual session members according to the position of the moved observation point to form the three-dimensional virtual session scene under a new visual angle.
14. The apparatus of claim 13, wherein the joining module is further configured to obtain a multi-person session identifier corresponding to the currently logged-in first user identifier; and send the multi-person session identifier and the first user identifier to a server, so that the server adds the first user identifier into a member list of the three-dimensional virtual session scene identified by the multi-person session identifier.
15. The apparatus of claim 14, wherein the joining module is further configured to send the multi-person session identifier and the first user identifier to a server, so that the server adds the first user identifier to a member list of the three-dimensional virtual session scene identified by the multi-person session identifier when the three-dimensional virtual session scene identified by the multi-person session identifier already exists; or to send the multi-person session identifier and the first user identifier to a server, so that the server creates the three-dimensional virtual session scene identified by the multi-person session identifier when the three-dimensional virtual session scene identified by the multi-person session identifier does not exist, and adds the first user identifier into a member list of the created three-dimensional virtual session scene.
16. The apparatus of claim 13, wherein the expression recognition module is further configured to recognize expression features in the head image data to obtain expression types and corresponding expression characteristic values, and to generate expression data comprising the expression characteristic values corresponding to the identified expression types.
17. The apparatus of claim 16, wherein the identified expression type belongs to a preset expression type set; the expression recognition module is further configured to assign, to the expression types in the preset expression type set that are not recognized, expression characteristic values indicating that the corresponding expression actions are not triggered, and to combine the expression characteristic values corresponding to the expression types according to the preset sequence of the expression types in the preset expression type set to form the expression data.
18. The apparatus according to claim 16 or 17, wherein the control module is further configured to send the expression data to a terminal corresponding to a second user identifier added into the three-dimensional virtual session scene, so that the terminal extracts an expression characteristic value corresponding to the identified expression type from the expression data and, in the three-dimensional virtual session scene, controls a three-dimensional virtual session member corresponding to the first user identifier to trigger the expression action represented by the extracted expression characteristic value.
19. The apparatus according to any one of claims 13 to 17, wherein the control module is further configured to obtain expression data sent by a terminal corresponding to the second user identifier and, in the three-dimensional virtual session scene, control a three-dimensional virtual session member corresponding to the second user identifier to trigger the expression action represented by the acquired expression data.
20. The apparatus of any one of claims 13 to 17, further comprising:
the virtual session member generation module is used for acquiring user face image data corresponding to the currently logged-in first user identifier; and generating a three-dimensional virtual session member corresponding to the first user identifier according to the user face image data and the virtual session member initial model.
21. An interactive data processing apparatus, characterized in that the apparatus comprises:
the joining module is used for joining the corresponding three-dimensional virtual session scene through the currently logged-in second user identifier; the three-dimensional virtual session scene is created based on a session;
the expression feature extraction module is used for receiving expression data sent by a terminal corresponding to a first user identifier added into the three-dimensional virtual session scene; extracting expression characteristic values from the expression data;
the control module is used for controlling, in the three-dimensional virtual session scene, a three-dimensional virtual session member corresponding to the first user identifier to trigger the expression action represented by the expression characteristic value;
the virtual session scene display module is used for acquiring three-dimensional virtual session members corresponding to the user identifiers in a member list of the three-dimensional virtual session scene; determining the size of a geometric figure used for distributing the three-dimensional virtual session members according to the number of the user identifiers in the member list of the three-dimensional virtual session scene, and selecting, on the geometric figure, positions whose number matches the number of the user identifiers, so as to determine the distribution positions of the three-dimensional virtual session members in the three-dimensional virtual session scene; acquiring a three-dimensional background model corresponding to the three-dimensional virtual session scene; and distributing the three-dimensional virtual session members to the corresponding distribution positions, and performing combined display with the three-dimensional background model to form, output, and display the three-dimensional virtual session scene;
the visual angle adjusting module is used for detecting a touch operation acting on the three-dimensional virtual session scene to obtain a touch track; mapping the touch track into a movement track of an observation point in the three-dimensional virtual session scene according to a mapping relation between the touch point and the observation point; determining the position of the moved observation point according to the movement track, wherein the three-dimensional virtual session scene is obtained by projecting the three-dimensional background model and the three-dimensional virtual session members onto a display screen for display according to the observation point, and different observation points result in different three-dimensional virtual session scenes projected onto the display screen; and performing projection display on the three-dimensional background model and the three-dimensional virtual session members according to the position of the moved observation point to form the three-dimensional virtual session scene under a new visual angle.
22. The apparatus of claim 21, wherein the control module is further configured to determine an expression type corresponding to the extracted expression characteristic value; control, in the three-dimensional virtual session scene, the three-dimensional virtual session member corresponding to the first user identifier to implement a corresponding expression action according to an expression control logic code corresponding to the determined expression type and the extracted expression characteristic value; and/or generate corresponding texture information according to the expression characteristic value and the corresponding expression type, and display the texture information on an expression display part of the three-dimensional virtual session member corresponding to the first user identifier in the three-dimensional virtual session scene.
23. The apparatus of claim 21, wherein the joining module is further configured to obtain a multi-person session identifier corresponding to the currently logged-in second user identifier; and send the multi-person session identifier and the second user identifier to a server, so that the server adds the second user identifier into a member list of the three-dimensional virtual session scene identified by the multi-person session identifier.
24. The apparatus of claim 23, wherein the joining module is further configured to send the multi-person session identifier and the second user identifier to a server, so that the server adds the second user identifier to a member list of the three-dimensional virtual session scene identified by the multi-person session identifier when the three-dimensional virtual session scene identified by the multi-person session identifier already exists; or to send the multi-person session identifier and the second user identifier to a server, so that the server creates the three-dimensional virtual session scene identified by the multi-person session identifier when the three-dimensional virtual session scene identified by the multi-person session identifier does not exist, and adds the second user identifier into a member list of the created three-dimensional virtual session scene.
25. A computer device comprising a memory and a processor, the memory having stored therein computer-readable instructions that, when executed by the processor, cause the processor to perform the steps of the method of any of claims 1 to 5 or 9 to 12.
26. A storage medium storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the method of any one of claims 1 to 5 or 9 to 12.
CN201710458909.5A 2017-06-16 2017-06-16 Interactive data processing method and device, computer equipment and storage medium Active CN109150690B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710458909.5A CN109150690B (en) 2017-06-16 2017-06-16 Interactive data processing method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710458909.5A CN109150690B (en) 2017-06-16 2017-06-16 Interactive data processing method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109150690A CN109150690A (en) 2019-01-04
CN109150690B true CN109150690B (en) 2021-05-25

Family

ID=64830555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710458909.5A Active CN109150690B (en) 2017-06-16 2017-06-16 Interactive data processing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109150690B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110418095B (en) * 2019-06-28 2021-09-14 广东虚拟现实科技有限公司 Virtual scene processing method and device, electronic equipment and storage medium
CN110401810B (en) * 2019-06-28 2021-12-21 广东虚拟现实科技有限公司 Virtual picture processing method, device and system, electronic equipment and storage medium
CN111444389A (en) * 2020-03-27 2020-07-24 焦点科技股份有限公司 Conference video analysis method and system based on target detection
CN116129006A (en) * 2021-11-12 2023-05-16 腾讯科技(深圳)有限公司 Data processing method, device, equipment and readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101127737A (en) * 2007-09-25 2008-02-20 腾讯科技(深圳)有限公司 Implementation method of UI, user terminal and instant communication system
CN101635705A (en) * 2008-07-23 2010-01-27 上海赛我网络技术有限公司 Interaction method based on three-dimensional virtual map and figure and system for realizing same
CN102142154A (en) * 2011-05-10 2011-08-03 中国科学院半导体研究所 Method and device for generating virtual face image
CN103368816A (en) * 2012-03-29 2013-10-23 深圳市腾讯计算机系统有限公司 Instant communication method based on virtual character and system
CN103368929A (en) * 2012-04-11 2013-10-23 腾讯科技(深圳)有限公司 Video chatting method and system
CN105797376A (en) * 2014-12-31 2016-07-27 深圳市亿思达科技集团有限公司 Method and terminal for controlling role model behavior according to expression of user
CN105797374A (en) * 2014-12-31 2016-07-27 深圳市亿思达科技集团有限公司 Method for giving out corresponding voice in following way by being matched with face expressions and terminal
CN106652015A (en) * 2015-10-30 2017-05-10 深圳超多维光电子有限公司 Virtual figure head portrait generation method and apparatus

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2653820A1 (en) * 2008-02-11 2009-08-11 Ganz Friends list management
CN101908232B (en) * 2010-07-30 2012-09-12 重庆埃默科技有限责任公司 Interactive scene simulation system and scene virtual simulation method
CN105653012A (en) * 2014-08-26 2016-06-08 蔡大林 Multi-user immersion type full interaction virtual reality project training system
CN106326678A (en) * 2016-09-13 2017-01-11 捷开通讯(深圳)有限公司 Sample room experiencing method, equipment and system based on virtual reality
CN106598438A (en) * 2016-12-22 2017-04-26 腾讯科技(深圳)有限公司 Scene switching method based on mobile terminal, and mobile terminal

Also Published As

Publication number Publication date
CN109150690A (en) 2019-01-04

Similar Documents

Publication Publication Date Title
US11595617B2 (en) Communication using interactive avatars
TWI650675B (en) Method and system for group video session, terminal, virtual reality device and network device
WO2013027893A1 (en) Apparatus and method for emotional content services on telecommunication devices, apparatus and method for emotion recognition therefor, and apparatus and method for generating and matching the emotional content using same
CN109150690B (en) Interactive data processing method and device, computer equipment and storage medium
US9357174B2 (en) System and method for avatar management and selection
CN108874114B (en) Method and device for realizing emotion expression of virtual object, computer equipment and storage medium
WO2018107918A1 (en) Method for interaction between avatars, terminals, and system
US11151796B2 (en) Systems and methods for providing real-time composite video from multiple source devices featuring augmented reality elements
CN110418095B (en) Virtual scene processing method and device, electronic equipment and storage medium
JP2016521929A (en) Method, user terminal, and server for information exchange in communication
WO2022252866A1 (en) Interaction processing method and apparatus, terminal and medium
US11423627B2 (en) Systems and methods for providing real-time composite video from multiple source devices featuring augmented reality elements
CN109428859B (en) Synchronous communication method, terminal and server
US20220398816A1 (en) Systems And Methods For Providing Real-Time Composite Video From Multiple Source Devices Featuring Augmented Reality Elements
CN110536095A (en) Call method, device, terminal and storage medium
CN109039851B (en) Interactive data processing method and device, computer equipment and storage medium
KR20130082693A (en) Apparatus and method for video chatting using avatar
US20230386147A1 (en) Systems and Methods for Providing Real-Time Composite Video from Multiple Source Devices Featuring Augmented Reality Elements
CN116437137B (en) Live broadcast processing method and device, electronic equipment and storage medium
CN114915852B (en) Video call interaction method, device, computer equipment and storage medium
CN111614926B (en) Network communication method, device, computer equipment and storage medium
CN112749357A (en) Interaction method and device based on shared content and computer equipment
WO2023082737A1 (en) Data processing method and apparatus, and device and readable storage medium
WO2023071556A1 (en) Virtual image-based data processing method and apparatus, computer device, and storage medium
JP2023114548A (en) Image processing apparatus, image processing method, presence display apparatus, presence display method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant