CN108933723B - Message display method and device and terminal


Info

Publication number
CN108933723B
Authority
CN
China
Prior art keywords
interactive
animation
image
message
virtual
Prior art date
Legal status
Active
Application number
CN201710355687.4A
Other languages
Chinese (zh)
Other versions
CN108933723A (en)
Inventor
李斌
张玖林
陈郁
刘文婷
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201710355687.4A
Publication of CN108933723A
Application granted
Publication of CN108933723B

Classifications

    • H04L 51/063 — User-to-user messaging in packet-switching networks; message adaptation to terminal or network requirements; content adaptation, e.g. replacement of unsuitable content
    • H04L 51/066 — User-to-user messaging in packet-switching networks; message adaptation to terminal or network requirements; format adaptation, e.g. format conversion or compression
    • H04L 51/10 — User-to-user messaging in packet-switching networks; messages characterised by the inclusion of specific contents; multimedia information
    • H04L 51/52 — User-to-user messaging in packet-switching networks; messaging for supporting social networking services

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a message display method, a message display device, and a terminal, and belongs to the field of computers. The method comprises the following steps: receiving an interactive message; determining an interactive image corresponding to the interactive message, wherein the interactive image is generated in advance from personalized data and a three-dimensional virtual model and comprises a first virtual image corresponding to the first client and/or a second virtual image corresponding to the second client; the interactive message corresponds to at least one interactive animation, and each interactive animation is used for displaying the message content of the interactive message through the interactive image; and displaying the interactive animation through the interactive image in a virtual scene. The method, device, and terminal solve the problem that a terminal displaying only a two-dimensional image of an emoticon message has a single display mode for such messages, and thereby expand the ways in which emoticon messages can be displayed.

Description

Message display method and device and terminal
Technical Field
Embodiments of the invention relate to the field of computers, and in particular to a message display method, a message display device, and a terminal.
Background
With the development of computer technology, network-based social clients have appeared, and these clients generally provide a function of sending interactive messages. Interactive messages are used for interaction between different social clients.
If the interactive message sent by a first social client to a second social client is an emoticon message, both the first social client and the second social client display the emoticon message in the chat interface in the form of a dynamic picture.
Because the social client displays emoticon messages only as dynamic pictures in the chat interface, the display mode of emoticon messages is single and limited.
Disclosure of Invention
To solve the problem that emoticon messages have a single display mode, the embodiments of the invention provide a message display method, a message display device, and a terminal. The technical solution is as follows:
in a first aspect, a message display method is provided, where the method includes:
receiving an interactive message, wherein the interactive message is used for realizing interaction between a first client and a second client, the first client is a client for sending the interactive message, and the second client is a client for receiving the interactive message;
determining an interactive image corresponding to the interactive message, wherein the interactive image is generated in advance according to personalized data and a three-dimensional virtual model, and the interactive image comprises a first virtual image corresponding to the first client and/or a second virtual image corresponding to the second client; the interactive message corresponds to at least one interactive animation, and each interactive animation is used for displaying the message content of the interactive message through the interactive image;
and displaying the interactive animation through the interactive image in a virtual scene.
In a second aspect, there is provided a message presentation apparatus, the apparatus comprising:
the message receiving module is used for receiving an interactive message, wherein the interactive message is used for realizing interaction between a first client and a second client, the first client is a client for sending the interactive message, and the second client is a client for receiving the interactive message;
the image determining module is used for determining an interactive image corresponding to the interactive message, the interactive image is generated in advance according to personalized data and a three-dimensional virtual model, and the interactive image comprises a first virtual image corresponding to the first client and/or a second virtual image corresponding to the second client; the interactive message corresponds to at least one interactive animation, and each interactive animation is used for displaying the message content of the interactive message through the interactive image;
and the animation display module is used for displaying the interactive animation through the interactive image in a virtual scene.
In a third aspect, a terminal is provided. The terminal includes a processor and a memory, the memory stores at least one instruction, and the instruction is loaded and executed by the processor to perform the message display method provided in the first aspect.
In a fourth aspect, a computer-readable storage medium is provided. The medium stores instructions which, when executed on a terminal, cause the terminal to perform the message display method provided in the first aspect.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
Because the first client and/or the second client displays the interactive animation corresponding to the interactive message through the interactive image in the virtual scene, an emoticon message is presented through a three-dimensional interactive image. This solves the problem that a terminal displaying only a two-dimensional image of an emoticon message has a single display mode, and expands the display of emoticon messages into three dimensions.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a terminal acquiring an image according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of adjusting the position of feature points provided by one embodiment of the present invention;
FIG. 3 is another schematic diagram of adjusting the positions of feature points provided by one embodiment of the present invention;
FIG. 4 is a schematic diagram of setting personalization data provided by one embodiment of the present invention;
FIG. 5 is a block diagram of a message display system according to an embodiment of the present invention;
FIG. 6 is a flow chart of a message presentation method provided by an embodiment of the invention;
FIG. 7 is a diagram illustrating selectable messages provided by one embodiment of the present invention;
FIG. 8 is a diagram illustrating an interactive animation according to an embodiment of the invention;
FIG. 9 is another diagram for showing an interactive animation according to an embodiment of the invention;
FIG. 10 is a schematic diagram of an adjusted interactive figure provided by an embodiment of the present invention;
FIG. 11 is another schematic diagram of an adjusted interactive figure provided by an embodiment of the present invention;
FIG. 12 is a diagram illustrating switching between different types of interactive animations provided by one embodiment of the present invention;
FIG. 13 is a flowchart of a message presentation method according to an embodiment of the present invention;
FIG. 14 is a block diagram of a message presentation device provided by one embodiment of the present invention;
fig. 15 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
First, a number of terms related to embodiments of the present invention will be described.
Interactive image: refers to a three-dimensional virtual image generated by the terminal from personalized data and a preset three-dimensional (3D) virtual model. An interactive image is used to represent a real person in a virtual scene. The interactive image can be any one of a cartoon character, a three-dimensional virtual character, a cartoon animal, a three-dimensional virtual animal, or a three-dimensional virtual object in a three-dimensional virtual environment; its specific form is not limited. The interactive image may also be referred to as an Internet virtual character (Avatar), an interactive character, a virtual character, an avatar, and the like, which is not limited in this embodiment.
Virtual model: refers to a skeleton used to generate an interactive character and/or an interactive animation. Living beings generally support limbs by a skeleton, and control the movement of the skeleton by contraction and stretching of muscles, thereby generating various postures and movements. In the virtual scene, in order to ensure that the movement effect of the interactive image is more real, the virtual skeleton is used for controlling the movement of the corresponding points on the limbs of the interactive image, and the skeleton of each part forms a virtual model. By adjusting the virtual model, the interactive image can make various shapes and actions. Alternatively, different interactive figures of the same type may share the same set of virtual models. The virtual model may also be referred to as: a model skeleton, a basic model, a main model, etc., which are not limited in this embodiment.
Optionally, the virtual model is stored in an animation program for use by a developer when designing the interactive animation; alternatively, the virtual model is drawn by the developer himself through the animation program.
Illustratively, Biped in Character Studio (CS) of 3D Studio Max (3DSMAX) provides a skeletal system with the features of a human skeleton. The system integrates a forward kinematics (FK) system and an inverse kinematics (IK) system, and any type of action can be set for the skeleton. Biped has the following advantages: the skeleton can be adjusted, its branches can be simplified or elaborated, and its shape can be changed through translation, rotation, scaling, and other transformations. Biped can also save the motions of an interactive image to external files, and these motion files can be applied to Biped skeletons with different structures, automatically reconciling the structural differences to obtain smooth motion.
3DSMAX is three-dimensional animation, rendering, and production software. A typical use of 3DSMAX is character animation: it integrates a complete set of character-creation and animation tools, and incorporates the strengths of third-party plug-ins.
Optionally, the above animation program is only illustrative; in actual implementation, the animation program may also be a plug-in developed based on the 3DSMAX SDK, Maya, ZBrush, or the like, which is not limited in this embodiment.
Personalized data: used for characterizing the appearance of the interactive image. The personalized data may also be referred to as basic data, appearance data, etc., which is not limited in this embodiment. Optionally, the personalized data comprises at least one of facial texture data, dressing information, posture information, and gender information.
The facial texture data is used to characterize the facial biometrics of a user (a real-world user). The dressing information includes, but is not limited to, information representing at least one of clothing, hats, shoes, earrings, necklaces, scarves, jewelry, headwear, glasses, hairstyles, and skin tones. The posture information includes, but is not limited to, information representing a posture (e.g., standing) or an action (e.g., a gesture).
The terminal acquires the facial texture data in ways including, but not limited to, the following two:
in the first mode, a terminal calls an image acquisition component to acquire an image containing a face; facial texture data is obtained from the image.
Optionally, the image acquisition assembly is a camera assembly provided in the terminal.
Optionally, before the terminal invokes the image acquisition component, a reference line is displayed in the acquisition interface, and the reference line is used for prompting that a preset part of the face of the user displayed in the acquisition interface is aligned with the corresponding reference line. The acquisition interface is used for displaying the face image acquired by the image acquisition assembly.
Optionally, the terminal plays prompt information for prompting alignment of the preset part with the corresponding reference line. The prompt information may be text, video, and/or image guidance displayed in the acquisition interface; alternatively, it may be voice information played while the acquisition interface is displayed, which is not limited in this embodiment.
Referring to fig. 1, the terminal displays reference lines 11 of eyes and a nose and prompt information 12 of 'click to take a picture after aligning the eyes and the nose with the reference lines' in a collection interface, wherein the prompt information 12 is text information.
Optionally, after acquiring the image of the face, the terminal displays the image and the n feature points; receiving a position adjusting instruction, wherein the position adjusting instruction is used for adjusting the position of at least one characteristic point; adjusting the position of at least one characteristic point in the n characteristic points according to the position adjusting instruction; and carrying out face recognition on the image according to the adjusted positions of the n feature points to obtain face texture data.
Wherein n is an integer greater than or equal to 2, and the n feature points include feature points corresponding to at least one facial feature of eyes, nose, eyebrows, mouth, or facial contour.
Optionally, the position adjustment instruction is a drag instruction generated according to an event of dragging the feature point.
Optionally, after receiving the position adjustment instruction, the terminal enlarges and displays the position of the feature point indicated by the position adjustment instruction in the image, so as to improve the accuracy of the user in aligning the feature point with the position of the face in the image.
Optionally, after receiving the position adjustment instruction, the terminal displays a prompt message for prompting to adjust the feature point to the target position. The prompt information may be at least one of text information, voice information, image information and video information, which is not limited in this embodiment.
Referring to fig. 2, after receiving a position adjustment instruction for adjusting the position of the feature point 21, the terminal presents a prompt message 22 of "align with chin", where the prompt message 22 is a text message.
Referring to fig. 3, after receiving a position adjustment instruction for adjusting the position of the feature point 31, the terminal displays a prompt message 32, where the prompt message 32 is picture information.
Optionally, the picture information is displayed superimposed on the acquired image.
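As an illustration of this first mode, the following Unity C# sketch shows one way the feature-point adjustment could be wired up. The names FeaturePoint, IFaceRecognizer, and ExtractFacialTexture are hypothetical; the patent does not specify an API.

```csharp
using UnityEngine;

// A minimal sketch of the mode-one flow, assuming a hypothetical face
// recognizer; FeaturePoint and IFaceRecognizer are illustrative names,
// not part of the patent or any real SDK.
[System.Serializable]
public struct FeaturePoint
{
    public string name;      // e.g. "chin", "left eye corner"
    public Vector2 position; // position in image coordinates
}

public interface IFaceRecognizer
{
    // Runs face recognition seeded with the user-corrected feature points
    // and returns the facial texture data.
    byte[] ExtractFacialTexture(Texture2D image, FeaturePoint[] points);
}

public class FeaturePointEditor : MonoBehaviour
{
    public Texture2D capturedImage;    // image from the camera component
    public FeaturePoint[] points;      // the n displayed feature points
    public IFaceRecognizer recognizer; // hypothetical recognizer

    // Called when the user drags feature point i (the "position
    // adjustment instruction" in the text above).
    public void OnPointDragged(int i, Vector2 newPosition)
    {
        points[i].position = newPosition;
    }

    // Called once all points are aligned with the face in the image.
    public byte[] ConfirmAndRecognize()
    {
        return recognizer.ExtractFacialTexture(capturedImage, points);
    }
}
```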
In the second mode, the terminal calls an image storage program; receiving an image selection instruction, wherein the image selection instruction is used for indicating at least one image stored in an image storage program; facial texture data is acquired from at least one image.
Optionally, a manner of acquiring the facial texture data according to the at least one image by the terminal is similar to the manner of acquiring the facial texture data according to the acquired image in the first manner, and this embodiment is not described herein again.
Optionally, the facial texture data is one of at least one set of facial texture data pre-stored by the terminal. For example: the terminal stores 5 sets of facial texture data in advance, and the user selects the 3rd set. In this implementation, the terminal presents the pre-stored facial texture data in the user interface for selection by the user.
Optionally, the dressing information and the posture information are information set by the user; or, default information set in the terminal.
Optionally, the gender information is information set by the user; or, after the terminal acquires the face image, the information is identified according to a face identification algorithm.
When at least one of facial texture data, dressing information, posture information and gender information in the personalized data is user-defined information, the terminal acquires the facial texture data, the dressing information, the posture information or the gender information, and the method comprises the following steps: receiving a setting instruction, and generating personalized data according to the setting instruction.
Referring to fig. 4, after the terminal acquires an image, the image is recognized to generate the facial texture data in the personalized data. Optionally, when presenting the image (see fig. 4(1)), an option 42 representing the female gender and an option 41 representing the male gender are presented; a setting instruction acting on option 41 is received; and the gender information 'male' in the personalized data is generated according to the setting instruction. The interface shown in fig. 4(2) provides a first setting entrance 43 for setting the upper garment, a second setting entrance 44 for setting the lower garment, and a third setting entrance 45 for setting the glasses. After a selection instruction acting on the first setting entrance 43 is received, the candidates corresponding to the first setting entrance 43 are displayed, and a setting instruction acting on a candidate is received; the dressing information in the personalized data is generated according to that setting instruction. The terminal then generates the interactive image according to the personalized data and the preset virtual model.
Optionally, before the terminal receives the user-defined personalized data, default personalized data is displayed, such as: default facial textures, skin, clothing, etc. are presented.
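The following Unity C# sketch shows one plausible shape for the personalized data and the generation step; the field names and the InteractiveCharacterBuilder class are assumptions for illustration, not the patent's own schema.

```csharp
using UnityEngine;

// A sketch under the assumption that personalized data is a plain data
// object applied to a shared virtual model; all names are illustrative.
[System.Serializable]
public class PersonalizedData
{
    public byte[] facialTextureData; // from camera capture or a stored image
    public string gender = "male";   // user-set, or inferred by face recognition
    public string[] dressing;        // e.g. upper garment, lower garment, glasses
    public string posture = "standing";
}

public class InteractiveCharacterBuilder : MonoBehaviour
{
    public GameObject virtualModelPrefab; // the preset 3D virtual model (skeleton)

    // Generates an interactive image by instantiating the shared virtual
    // model; applying the facial texture and dressing to the instance is
    // asset-specific and omitted here.
    public GameObject GenerateInteractiveImage(PersonalizedData data)
    {
        GameObject avatar = Instantiate(virtualModelPrefab);
        avatar.name = "InteractiveImage-" + data.gender;
        return avatar;
    }
}
```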
Interactive message: used for realizing interaction between a first client and a second client, wherein the first client is the client that sends the interactive message and the second client is the client that receives the interactive message.
The interactive message comprises at least one of character information, a picture message, a video message, and an audio message. In the embodiment of the invention, the specific content indicated by the interactive message is displayed through the interactive image. In this case, the interactive message further comprises a field indicating the message type; the field indicates whether the client displays the specific content through the interactive image, and whether it does so through a single interactive image or through multiple interactive images.
The specific content indicated by the interactive message may be a content carried by the interactive message, or a content corresponding to a message identifier in the interactive message, such as: the interactive message carries a message identifier '001', and the message identifier '001' is used for indicating the picture message 'laugh'.
Such as: the interactive message is used for indicating the picture message "laugh", the interactive message comprises a field 1, the field 1 is used for indicating that the interactive message is displayed through a single interactive image, and then the client displays the laugh according to the single interactive image indicated by the interactive message.
For another example: the interactive message indicates the picture message 'sending flowers' and comprises a field 2, and field 2 indicates that the interactive message is displayed through two interactive images; the client then displays the flower-sending according to the two interactive images indicated by the interactive message, with one interactive image displaying the animation of receiving the flowers and the other displaying the animation of sending them.
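A message payload consistent with the fields just described might look like the following C# sketch; the field names, numeric codes, and JSON framing are assumptions, since the patent only requires a message identifier, a type field, and the participating identifiers.

```csharp
using UnityEngine;

// A sketch of an interactive message payload; all names and codes are
// illustrative, not the patent's wire format.
[System.Serializable]
public class InteractiveMessage
{
    public string messageId;  // e.g. "001" -> the picture message "laugh"
    public int displayMode;   // the message-type field described above:
                              // 0 = do not display through an interactive image,
                              // 1 = display through a single interactive image,
                              // 2 = display through multiple interactive images
    public string[] interactionList; // identifiers of the participating images
}

public static class InteractiveMessageDemo
{
    public static string Serialize()
    {
        var msg = new InteractiveMessage
        {
            messageId = "001",
            displayMode = 1,
            interactionList = new[] { "first-id" }
        };
        // Unity's built-in JSON serializer, used here only for illustration.
        return JsonUtility.ToJson(msg);
    }
}
```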
Optionally, the client may be a social client, a game client, a shopping client, or the like, which is not limited in this embodiment. In the embodiment of the present invention, a client is a social client as an example for explanation, and accordingly, a virtual scene in the embodiment of the present invention refers to a virtual social scene.
The client sends the interactive message and displays the interactive message.
Interactive animation: used for displaying the message content of the interactive message through the interactive image. That is, the message content of the interactive message is presented through the actions and/or postures of the interactive image in the virtual environment.
At this time, the client also has the functions of generating an interactive character and displaying an interactive message through the interactive character, such as: a client supporting the Unity3D engine is installed.
Unity3D is an engine developed by Unity Technologies that supports the creation of interactive content such as three-dimensional video games, architectural visualizations, and real-time three-dimensional animations. Interactive content created with Unity3D can be published to platforms such as Windows, Mac, Wii, iPhone, Windows Phone 8, and Android.
Optionally, the interactive animation in the client's Unity3D runtime is exported from 3DSMAX to Unity3D after the developer creates the interactive animation in 3DSMAX.
Optionally, since the scale at which Unity3D displays the interactive animation may differ from the scale at which the animation was produced in 3DSMAX, a scale factor (Scale Factor) needs to be preset in Unity3D to ensure that an animation produced in 3DSMAX is displayed normally through Unity3D, so that Unity3D displays the interactive image and the interactive animation at the correct scale.
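As a concrete example of presetting the scale factor, the editor-side Unity sketch below adjusts the import scale for models exported from 3DSMAX. ModelImporter.globalScale is Unity's import-time scale setting; the value 0.01 is a placeholder, not a figure from the patent.

```csharp
using UnityEditor;

// Editor-only sketch (place in an Editor folder): presets a scale factor
// on import so content authored in 3DSMAX displays at the correct scale
// in Unity3D. The concrete value is an assumption for illustration.
public class AnimationImportScale : AssetPostprocessor
{
    void OnPreprocessModel()
    {
        ModelImporter importer = (ModelImporter)assetImporter;
        importer.globalScale = 0.01f; // placeholder scale factor
    }
}
```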
An application scenario of the embodiment of the present invention is described below.
Referring to fig. 5, a schematic structural diagram of a message presentation system according to an embodiment of the present invention is shown. The system includes a first terminal 510 and a second terminal 520.
The first terminal 510 and the second terminal 520 may be a mobile phone, a tablet computer, a wearable device, an Augmented Reality (AR) device, a Virtual Reality (VR) device, a laptop computer, a desktop computer, and the like, which is not limited in this embodiment.
The first terminal 510 has a first client 511 installed therein. The first client 511 has a function of sending the interactive message 512. The interactive message 512 is generated by the first client 511 according to the received input message, and the interactive message 512 includes a field for indicating a message type.
Optionally, the input message is character information, picture information, and the like received by the first terminal 510 through the human-computer interaction interface 513; alternatively, the input message is character information, picture information, etc. from the third-party application 514, such as: the input information is picture information obtained from the third-party application 514 by the first client 511 calling the third-party application 514, which is not limited in this embodiment.
Optionally, the first client 511 has a function of presenting the interactive message 512. The first client 511 determines, according to the field in the interactive message 512, whether to display the content indicated by the interactive message through the interactive image; when it determines that the content is to be displayed through the interactive image, it acquires the interactive image indicated by the interactive message 512 and displays the message content through that image.
The first terminal 510 establishes a communication connection with the second terminal 520 through a wireless network manner or a wired network manner. The first client 511 sends the interactive message 512 to the second terminal 520 via the communication connection.
The second terminal 520 is installed with a second client 521, and the second client 521 receives the interactive message 512 and displays the interactive message 512. The description of the second terminal displaying interactive message 512 is the same as that of the first terminal displaying interactive message 512, and is not repeated herein in this embodiment.
The first client 511 and the second client 521 are the same type of client. Optionally, the second client 521 belongs to the friend relationship chain of the first client 511.
Optionally, the first terminal 510 establishes the communication connection with the second terminal 520 through the server 530. The server 530 has the function of receiving and forwarding the interactive message 512.
Optionally, the server 530 may be a server cluster or a single server, which is not limited in this embodiment.
Optionally, the wireless or wired networks described above use standard communication techniques and/or protocols. The network is typically the Internet, but may be any network, including but not limited to a Local Area Network (LAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), a mobile, wired, or wireless network, a private network, or any combination of virtual private networks. In some embodiments, data exchanged over the network is represented using techniques and/or formats including HyperText Markup Language (HTML), Extensible Markup Language (XML), and so on. All or some of the links may also be encrypted using conventional encryption techniques such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), Virtual Private Network (VPN), and Internet Protocol Security (IPsec). In other embodiments, custom and/or dedicated data communication techniques may also be used in place of, or in addition to, the data communication techniques described above.
The message display methods provided in the following embodiments are used in a terminal, which may be the first terminal 510 or the second terminal 520; this embodiment does not limit this.
Referring to fig. 6, a flowchart of a message presentation method according to an embodiment of the present invention is shown. The message display method is applied to the message display system shown in fig. 5, and may include the following steps:
step 601, the first client receives an interactive message.
The first client receiving an interactive message includes: the first client displays all of, or at least one of, the interactive images; receives a selection operation on at least one interactive image; displays a virtual social scene according to the selection operation, the virtual social scene including an identifier corresponding to the at least one selected interactive image; and, in the virtual social scene, receives the interactive message.
The identification corresponding to at least one interactive object comprises a first identification and a second identification, and an association relationship exists between the first identification and the second identification. The second identifier is used for logging in the second client, and the first identifier is used for logging in the first client. The first identifier and the second identifier may be a mobile phone number, an identification number, a bank card number, a character string allocated by a server, a user-defined nickname, and the like of the user, which is not limited in this embodiment.
The server stores the association relationship between the first identifier and the second identifiers, and the interactive image corresponding to each second identifier. Therefore, before the first client displays the interactive images, the server can determine all the second identifiers from the first identifier, determine the corresponding interactive images from those second identifiers, and send the interactive image corresponding to each second identifier to the first client; the first client then displays all of the interactive images, or the one or more interactive images in the current chat.
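Server-side, the lookups described above amount to two maps: identifiers to identifiers, and identifiers to interactive images. The C# sketch below uses in-memory dictionaries as a stand-in for whatever storage the real server uses; all names are illustrative.

```csharp
using System.Collections.Generic;

// A minimal server-side sketch of the association lookups; storage and
// naming are assumptions, not the patent's design.
public class RelationshipStore
{
    // first identifier -> all associated second identifiers
    private readonly Dictionary<string, List<string>> relations =
        new Dictionary<string, List<string>>();

    // identifier -> serialized interactive image for that identifier
    private readonly Dictionary<string, byte[]> images =
        new Dictionary<string, byte[]>();

    // Determines all second identifiers from the first identifier and
    // returns the interactive image stored for each one.
    public Dictionary<string, byte[]> GetImagesFor(string firstId)
    {
        var result = new Dictionary<string, byte[]>();
        if (relations.TryGetValue(firstId, out List<string> secondIds))
        {
            foreach (string id in secondIds)
            {
                if (images.TryGetValue(id, out byte[] data))
                {
                    result[id] = data;
                }
            }
        }
        return result;
    }
}
```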
Optionally, the terminal displays all interactive images in a three-dimensional virtual scene.
Optionally, after acquiring the interactive image corresponding to the second identifier (hereinafter referred to as a second avatar), the first client stores all the interactive images in the terminal.
Optionally, the terminal stores an interactive character (hereinafter, referred to as a first avatar) corresponding to the first identifier. The first virtual avatar is generated by the first client from the virtual model and the personalization data.
Optionally, the association relationship includes, but is not limited to, at least one of a binding relationship (friend relationship) between the first identifier and the second identifier, an attention relationship, a distance between the first client and the second client being smaller than a preset distance, and the first client and the second client paying attention to the same information.
Optionally, after receiving the selection operation, the terminal switches the virtual scene to the virtual social scene; that is, the virtual scene and the virtual social scene are not displayed in the same user interface. Alternatively, after receiving the selection operation, the terminal controls the first avatar and the selected second avatar to move together into the virtual social scene; in this case the virtual social scene is connected to the virtual scene, that is, the two are displayed in the same user interface.
The virtual social scene is used for displaying the interactive messages between the first client and the second client corresponding to the identifier of the interactive image. Optionally, the virtual social scene is a three-dimensional social scene, and it may be generated from a real scene acquired by the terminal, or it may be a default scene preset in the terminal.
In a virtual social scenario, receiving an interactive message through a first client, including: presenting a message selection option in a virtual social scene; receiving a trigger operation acting on a message selection option; displaying at least one selectable message according to the triggering operation; receiving a selection operation of a selectable message; and generating an interactive message according to the selectable message indicated by the selection operation.
Alternatively,
in a virtual social scenario, receiving an interactive message through a first client, including: presenting a message input option in a virtual social scene; receiving a trigger operation acting on a message input option; displaying a message input box according to the triggering operation; receiving an input message through the message input box; and generating an interactive message according to the input message.
Referring to fig. 7, 3 second identifiers 701, 702, and 703 are shown in the virtual scene, and interactive characters 704, 705, and 706 corresponding to each second identifier are shown; the first client receives selection operation of the interactive character 705 and the interactive character 706; according to the selection operation, a virtual social scene 707 is displayed, and a first identifier 708 and second identifiers 702 and 703 are displayed in the virtual social scene 707.
In the virtual social scene 707, the first client receives a trigger operation acting on the message selection option 709, displays at least one selectable message 710 (an emoticon within a curve box indicated by 710) according to the trigger operation, receives a selection operation on the selectable message 710, and generates an interactive message according to the selectable message "flower" indicated by the selection operation.
Step 602, the first client determines an interactive image corresponding to the interactive message.
The interactive image corresponding to the interactive message is an interactive image for displaying the message content of the interactive message.
In this embodiment, the determination manner of the interactive image is different according to the type of the interactive message.
In the first scenario, the type of the interactive message is used to instruct the terminal to display the message content through the interactive images of multiple persons, where 'multiple' means two or more.
In this scenario, two cases are included, which are: 1. the type of the interactive message is used for indicating the terminal to display the message content through the first virtual image and at least one second virtual image; 2. the type of the interactive message is used for indicating the terminal to display the message content through at least two second avatars.
For case 1: the first client determines an interactive image corresponding to the interactive message, and the method comprises the following steps: acquiring the type of the received selectable message or input message; and determining the type of the interactive message according to the type, receiving a selection operation acted on at least one second virtual image in a virtual social scene when the type of the interactive message is the type of the message displayed by the multi-person interactive image, and the multi-person interactive image comprises a first virtual image and a second virtual image, and determining the first virtual image and the at least one second virtual image as interactive objects of the interactive message.
At this time, a field in the interactive message generated by the first client is used for indicating the second social client to display the message content through an interactive image of multiple persons, and the interactive image of multiple persons comprises a first virtual image and at least one second virtual image.
Optionally, the interactive message carries an identifier of an interactive image corresponding to the interactive message, that is, a first identifier corresponding to the first client and a second identifier corresponding to the at least one second client.
For case 2: the first client determines an interactive image corresponding to the interactive message, and the method comprises the following steps: acquiring the type of the received selectable message or input message; and determining the type of the interactive message according to the type, and when the type of the interactive message is that the message content is displayed through the multi-person interactive image, and the multi-person interactive image comprises at least two second avatars, receiving selection operation acted on the at least two second avatars in the virtual social scene, and determining the at least two second avatars as interactive objects of the interactive message.
At this time, a field in the interactive message generated by the first client is used for indicating the second social client to display the message content through the interactive image of multiple persons, and the interactive image of the multiple persons is at least two second virtual images.
Optionally, the interactive message carries identifiers of the interactive images corresponding to the interactive message, that is, second identifiers corresponding to at least two second clients.
In each scenario, the correspondence between selectable messages and their types is pre-stored in the terminal, and/or the correspondence between input messages and their types is pre-stored in the terminal.
Optionally, the terminal displays the selectable messages belonging to the same category in the same area when displaying the selectable messages. In this way, the terminal can determine the type of the selectable message when receiving the selection operation acting on the selectable message in a certain area.
Referring to the selectable messages shown in fig. 7: selectable messages presented through the first avatar are displayed in area 720; selectable messages presented through the second avatar are displayed in area 720; selectable messages presented through the first avatar and the second avatar are displayed in area 730; and selectable messages presented through at least two second avatars are displayed in area 740.
Optionally, the terminal displays different types of selectable messages through different display interfaces.
The operation of selecting the second avatar may be a click operation, a sliding operation in a predetermined direction, a long-press operation, a voice input operation, etc. which are applied to the second avatar, but the embodiment is not limited thereto.
Referring to fig. 7, after the terminal receives a selection operation on a selectable message 710, an aperture 750 is displayed in the head region of the second avatars 705 and 706; when the terminal receives a click operation on the aperture 750 of the second avatar 705, it determines that the second avatar 705 is the interactive image corresponding to the interactive message.
In a second scenario, the type of the interactive message is used to instruct the first social client to display the message content through the single interactive character.
In this scenario, two cases are included, which are: 1. the type of the interactive message is used for indicating the terminal to display the message content through the first virtual image; 2. the type of the interactive message is used for indicating the terminal to display the message content through the second virtual image.
For case 1, the first client determining the interactive image corresponding to the interactive message includes: acquiring the type of the received selectable message or input message; determining the type of the interactive message according to that type; and, when the type of the interactive message is displaying the message content through a single interactive image and the single interactive image is the first avatar, determining the first avatar as the interactive object of the interactive message.
At this time, a field in the interactive message generated by the first client is used for indicating the second social client to display the message content through the single interactive image, and the single interactive image is the first virtual image.
Optionally, the interactive message carries an identifier of an interactive image corresponding to the interactive message, that is, a first identifier corresponding to the first client.
For case 2, the first client determining the interactive image corresponding to the interactive message includes: acquiring the type of the received selectable message or input message; determining the type of the interactive message according to that type; and, when the type of the interactive message is displaying the message content through a single interactive image and the single interactive image is a second avatar, receiving a selection operation acting on the second avatar in the virtual social scene, and determining that second avatar as the interactive object of the interactive message.
At this time, a field in the interactive message generated by the first client is used for indicating the second social client to display the message content through the single interactive image, and the single interactive image is the second virtual image.
Optionally, the interactive message carries the identifier of the interactive image corresponding to the interactive message, that is, the second identifier corresponding to the second client.
Optionally, when the virtual social scene only includes one second identifier, the terminal directly determines a second avatar corresponding to the second identifier as an interactive object of the interactive message without receiving a selection operation.
Optionally, the identifiers of the interactive images corresponding to the interactive message are carried in the interactive message as an interaction list, InteractionList.
When the interactive message is displayed through a single interactive image, the InteractionList comprises a first identifier or a second identifier; when the interactive message is displayed through the interactive images of multiple persons, the InteractionList includes the first identifier and at least one second identifier, or at least two second identifiers.
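One possible layout for the InteractionList, covering both the single-image and multi-image cases and the per-identifier animation type used later in this embodiment, is sketched below; the entry structure is an assumption, not the patent's exact format.

```csharp
// A sketch of the InteractionList; the per-entry animation type
// anticipates the setting/random assignment modes described later.
[System.Serializable]
public class InteractionEntry
{
    public string identifier; // a first identifier or a second identifier
    public int animationType; // e.g. 1 = first interactive animation,
                              // 2 = second interactive animation
}

[System.Serializable]
public class InteractionList
{
    // Single-image display: one entry (a first OR a second identifier).
    // Multi-image display: the first identifier plus at least one second
    // identifier, or at least two second identifiers.
    public InteractionEntry[] entries;
}
```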
Step 603, in the virtual scene, the first client displays the interactive animation through the interactive image.
In this embodiment, each interactive message corresponds to at least one interactive animation, and each interactive animation is used for displaying the message content of the interactive message through the corresponding interactive image.
Optionally, the at least one interactive animation corresponding to the interactive message is stored in the terminal; alternatively, it is delivered by the server after the terminal sends the interactive message to the server.
For an interactive animation displayed through a single interactive image, the terminal directly displays the interactive animation through the determined first avatar or second avatar.
Optionally, for an interactive animation displayed by the interactive character of the plurality of people, if the interactive message corresponds to one interactive animation, the interactive animation is displayed by the interactive character of the plurality of people.
Optionally, for an interactive animation displayed by interactive images of multiple persons, if the interactive message corresponds to at least two interactive animations, for each interactive animation, the terminal further needs to determine an interactive image for executing the corresponding interactive animation from the determined multiple interactive images.
When the plurality of interactive characters include a first virtual character and at least one second virtual character, the terminal determines an interactive character for executing a corresponding interactive animation from the determined plurality of interactive characters, including: executing a first interactive animation by the first avatar; a second interactive animation is performed by the at least one second avatar.
The first interactive animation refers to an interactive animation executed by a first avatar corresponding to a first client sending the interactive message, and the second interactive animation refers to an interactive animation executed by a second avatar corresponding to a second client receiving the interactive message.
Optionally, when the number of the second avatars is plural, the second interactive animations performed by the different second avatars are the same or different.
Referring to fig. 8, the terminal determines that the interactive character corresponding to the interactive message includes a first avatar 801 and a second avatar 802, and the interactive animation corresponding to the interactive message includes a first type animation and a second type animation. If the interactive message is 'fresh flowers', the terminal executes interactive animation for sending flowers through the first avatar 801 and executes interactive animation for receiving flowers through the second avatar 802.
Referring to fig. 9, the terminal determines that the interactive character corresponding to the interactive message includes a first avatar 901 and a second avatar 902, and the interactive animation corresponding to the interactive message includes a first type animation and a second type animation. If the interactive message is "hug", the terminal executes the interactive animation of the hugger through the first avatar 901, and executes the interactive animation of the hugee through the second avatar 902.
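The assignment rule of the flower and hug examples above can be sketched in Unity C# as follows; the Animator state names are placeholders for clips exported from 3DSMAX.

```csharp
using UnityEngine;

// A sketch of the rule above: the sender's avatar plays the first
// interactive animation, every receiver avatar plays the second.
// State names are illustrative placeholders.
public class MultiImageAnimationPlayer : MonoBehaviour
{
    public Animator firstAvatar;     // avatar of the sending client
    public Animator[] secondAvatars; // avatars of the receiving clients

    public void PlayFlowerInteraction()
    {
        firstAvatar.Play("SendFlowers");   // first interactive animation
        foreach (Animator avatar in secondAvatars)
        {
            avatar.Play("ReceiveFlowers"); // second interactive animation
        }
    }
}
```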
When the multiple interactive images include at least two second avatars, in a first manner, the terminal determining the interactive image that executes each interactive animation from the determined interactive images includes: receiving a first setting operation for setting the second avatar that executes the first interactive animation; receiving a second setting operation for setting the second avatar that executes the second interactive animation; determining the second avatar indicated by the first setting operation as the interactive image that executes the first interactive animation; and determining the second avatar indicated by the second setting operation as the interactive image that executes the second interactive animation.
At this time, the InteractionList carried in the interactive message generated by the terminal includes not only the at least two second identifiers, but also the type of the interactive animation corresponding to each second identifier.
When the multiple interactive images include at least two second avatars, in a second manner, the terminal determining the interactive image that executes each interactive animation from the determined interactive images includes: randomly selecting some of the second avatars from the at least two second avatars and determining them as the interactive images that execute the first interactive animation; and determining the remaining second avatars as the interactive images that execute the second interactive animation.
At this time, the InteractionList carried in the interactive message generated by the terminal includes not only the at least two second identifiers, but also the type of the interactive animation corresponding to each second identifier.
Step 604, the first client sends the interactive message to the second client.
Optionally, this step may be performed after steps 602 and 603, before steps 602 and 603, or simultaneously with steps 602 and 603, which is not limited in this embodiment.
Step 605, the second client receives the interactive message sent by the first client.
Step 606, the second client determines the interactive image corresponding to the interactive message.
Optionally, the interactive message carries an identifier of the interactive image corresponding to the interactive message, and the second client determines the corresponding interactive image according to the identifier.
Illustratively, the interactive message carries an InteractionList, which includes the first identifier and/or the second identifier.
Optionally, the correspondence between the identifier in the second client and the interactive image may be obtained by the second client from the server; or may be pre-stored in the second client, which is not limited in this embodiment.
Step 607, in the virtual scene, the second client displays the interactive animation through the interactive image.
The virtual scene is a virtual social scene, and the virtual social scene comprises a first identifier and a second identifier.
Optionally, the virtual social scenario in the second client is the same as or different from the virtual social scenario in the first client.
When the interactive image corresponding to the interactive message is a single image, the interactive animation is displayed through that image.
When the interactive images corresponding to the interactive message are multiple and the interactive message corresponds to one interactive animation, the same interactive animation is displayed through the multiple interactive images.
When the interactive images corresponding to the interactive message are multiple and the interactive message corresponds to at least two interactive animations, the first interactive animation is displayed through the first avatar and the second interactive animation through the second avatar; alternatively, according to the correspondence in the interactive message between each second avatar and the type of interactive animation, the second avatar for displaying the first interactive animation is determined and displays the first interactive animation, and the second avatar for displaying the second interactive animation is determined and displays the second interactive animation.
In summary, in the message display method provided by this embodiment, the first client and/or the second client displays, in the virtual scene, the interactive animation corresponding to the interactive message through the interactive image. When the interactive message is an emoticon message, the emoticon message is displayed through the interactive image, which expands the display forms of emoticon messages.
Alternatively, steps 601 to 604 may be implemented separately as a method embodiment of the first client side, and steps 605 to 607 may be implemented separately as a method embodiment of the second client side, which is not limited in this embodiment.
Optionally, based on the embodiment shown in fig. 6, when the interactive image corresponding to the interactive message is a plurality of people, and the interactive image of the plurality of people includes a first avatar and at least one second avatar, the first social client and/or the second social client displays an interactive animation through the interactive image, including: displaying the first interactive animation through the first virtual image according to the first action parameter in the first interactive animation; and displaying the second interactive animation through the second virtual image according to the second action parameter in the second interactive animation.
The interactive animation that Unity3D in the terminal imports from 3DSMAX includes node indication data and action parameters of the virtual model. The node indication data indicates a particular node (or bone) in the virtual model; the action parameters indicate the motion track of the bone indicated by the node indication data when the virtual model executes an action.
For example: if the interactive animation exported from 3DSMAX is the virtual model raising its index finger, the interactive animation comprises the node indication data for the index finger and the corresponding upward-raising action parameters.
Since the first avatar and the second avatar are generated in advance by Unity3D from the virtual model and the personalized parameters, and that virtual model is generally the same as the virtual model in the interactive animation, the node indication data in the interactive animation applies equally to the virtual model in the first avatar and to the virtual model in the second avatar. The terminal can therefore control the virtual model in the first avatar to display the first interactive animation according to the first action parameters, thereby displaying the first interactive animation through the first avatar; and control the virtual model in the second avatar to display the second interactive animation according to the second action parameters, thereby displaying the second interactive animation through the second avatar.
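Because both avatars are built on the same virtual model, one clip can drive either of them. The Unity C# sketch below illustrates this with the legacy Animation component, assuming the clip was imported as a legacy clip; clip and state names are placeholders.

```csharp
using UnityEngine;

// A sketch of sharing one set of action parameters across two avatars
// that use the same virtual model (skeleton). Assumes raiseIndexFinger
// was imported as a legacy AnimationClip; all names are illustrative.
public class SharedSkeletonDemo : MonoBehaviour
{
    public AnimationClip raiseIndexFinger; // node indication data + motion track
    public Animation firstAvatar;          // legacy Animation component
    public Animation secondAvatar;

    void Start()
    {
        // The same clip animates both avatars' skeletons.
        firstAvatar.AddClip(raiseIndexFinger, "raise");
        secondAvatar.AddClip(raiseIndexFinger, "raise");
        firstAvatar.Play("raise");
        secondAvatar.Play("raise");
    }
}
```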
Alternatively, for multiple interactive animations displayed through the interactive images of multiple persons, different interactive images often need to be a suitable distance apart in the virtual social scene before the interactive animations are executed, and/or need a suitable relative rotation angle between them. For example: before the first avatar executes the hugging animation and the second avatar executes the hugged animation, the suitable relative angle between them is 0 degrees and the suitable distance is 0.3 m. In that case, the terminal needs to move the interactive images to the suitable distance and rotation angle.
At this point, the terminal acquires initial relative parameters between the first avatar and the second avatar, the initial relative parameters including an initial relative position; determines a first initial display position corresponding to the first avatar and a second initial display position corresponding to the second avatar according to the initial relative parameters; displays the first interactive animation from the first initial display position through the first avatar while displaying the second interactive animation from the second initial display position through the second avatar; determines a first initial rotation angle corresponding to the first avatar and a second initial rotation angle corresponding to the second avatar according to the initial relative parameters; and displays the first interactive animation at the first initial rotation angle through the first avatar while displaying the second interactive animation at the second initial rotation angle through the second avatar.
The initial relative parameters are recorded in Unity3D when the interactive animation is exported from 3DSMAX to Unity3D.
The initial relative position in the initial relative parameters means: the position of each point of the virtual model in the first avatar relative to the corresponding point of the virtual model in the second avatar; the initial relative rotation angle means: the rotation angle of each point of the virtual model in the first avatar relative to the corresponding point of the virtual model in the second avatar.
Optionally, each interactive animation displayed through a multi-person interactive image corresponds to one set of initial relative parameters.
The terminal determines a first initial display position corresponding to the first avatar and a second initial display position corresponding to the second avatar according to the initial relative parameters, including but not limited to the following ways.
In the first mode, the terminal controls the second avatar to move, taking the virtual model in the first avatar as the reference frame.
In this mode, when the second interactive animation is produced through 3DSMAX, the position and rotation angle of the virtual model corresponding to the second interactive animation are recorded with the coordinate axes of the virtual model corresponding to the first interactive animation as the reference frame. When the second interactive animation is exported from 3DSMAX to Unity3D, the animation identifier of the second interactive animation and the corresponding initial relative parameters are recorded in Unity3D.
Accordingly, before the terminal displays the interactive animation through Unity3D, it keeps the position and rotation angle of the first avatar fixed, takes the coordinate axes of the virtual model in the first avatar as the reference frame, controls the second avatar to move to the coordinate position indicated by the position in the initial relative parameters (i.e., the initial relative position), and rotates the second avatar according to the rotation angle in the initial relative parameters (i.e., the initial relative rotation angle). At this point, the position of the first avatar is the first initial display position, and the position of the second avatar after moving is the second initial display position; the current rotation angle of the first avatar is the first initial rotation angle, and the rotation angle of the second avatar after rotating is the second initial rotation angle.
Referring to fig. 10, the terminal takes the coordinate axes 1002 of the virtual model in the first avatar 1001 as the reference frame, controls the second avatar 1003 to move to the second initial display position according to the initial relative parameters, and adjusts the rotation angle of the second avatar 1003 to the second initial rotation angle according to the initial relative parameters.
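As a rough 2D illustration of the first mode (assumed conventions; place_second_avatar and the yaw representation are illustrative, not the patent's actual computation), the initial relative position recorded in the first avatar's local frame is rotated into world coordinates and added to the first avatar's position:

    import math


    def place_second_avatar(first_pos, first_yaw_deg, rel_pos, rel_yaw_deg):
        """Return the second avatar's initial display position and rotation
        angle in world coordinates, given the initial relative parameters
        recorded in the first avatar's local frame (ground plane only)."""
        yaw = math.radians(first_yaw_deg)
        dx, dz = rel_pos
        # rotate the relative offset out of the first avatar's local frame
        world_dx = dx * math.cos(yaw) - dz * math.sin(yaw)
        world_dz = dx * math.sin(yaw) + dz * math.cos(yaw)
        second_pos = (first_pos[0] + world_dx, first_pos[1] + world_dz)
        second_yaw = first_yaw_deg + rel_yaw_deg  # second initial rotation angle
        return second_pos, second_yaw


    # hug example from the text: initial relative position 0.3 m, relative angle 0
    pos, yaw = place_second_avatar((0.0, 0.0), 90.0, (0.0, 0.3), 0.0)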
In the second mode, the terminal controls the first avatar to move, taking the virtual model in the second avatar as the reference frame.
In this mode, the description of the terminal controlling the movement of the first avatar is the same as that of the terminal controlling the movement of the second avatar in the first mode, except that the coordinate axes of the virtual model in the second avatar are used as the reference frame; this is not repeated here.
In the third mode, the terminal controls both the first avatar and the second avatar to move. In this mode, the terminal may use any position as the reference frame when displaying the interactive animation, which is not limited in this embodiment.
In this mode, when the first interactive animation and the second interactive animation are produced through 3DSMAX, the coordinate axes of the virtual model corresponding to the first interactive animation, or those of the virtual model corresponding to the second interactive animation, are taken as the reference frame, and the initial relative position and initial relative rotation angle between the two virtual models are recorded. When the first interactive animation and the second interactive animation are exported from 3DSMAX to Unity3D, the animation identifier of the first interactive animation, the animation identifier of the second interactive animation, and the initial relative parameters are recorded in Unity3D.
Correspondingly, before displaying the interactive animation through Unity3D, the terminal selects any position to establish a reference frame, and controls the first avatar and the second avatar to move until the distance between the virtual model in the first avatar and the virtual model in the second avatar reaches the initial relative position; it then controls the first avatar and the second avatar to rotate until the rotation angle between the two virtual models reaches the initial relative rotation angle. At this point, the position of the first avatar after moving is the first initial display position, and the position of the second avatar after moving is the second initial display position; the rotation angle of the first avatar after rotating is the first initial rotation angle, and the rotation angle of the second avatar after rotating is the second initial rotation angle.
Referring to fig. 11, the terminal establishes a reference frame at a position 1101, controls the first avatar 1102 to move to the first initial display position and the second avatar 1103 to move to the second initial display position according to the initial relative parameters, adjusts the rotation angle of the first avatar 1102 to the first initial rotation angle according to the initial relative parameters, and adjusts the rotation angle of the second avatar 1103 to the second initial rotation angle according to the initial relative parameters.
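The third mode admits a similarly hedged sketch (same assumed 2D conventions as above; splitting the separation evenly about the reference point is one arbitrary choice, since the embodiment leaves the reference frame unrestricted):

    def place_both_avatars(reference, rel_distance, rel_yaw_deg):
        """Move both avatars about an arbitrary reference point until their
        separation and relative rotation match the initial relative parameters."""
        rx, rz = reference
        half = rel_distance / 2.0
        first_pos = (rx - half, rz)             # first initial display position
        second_pos = (rx + half, rz)            # second initial display position
        first_yaw = 0.0                         # first initial rotation angle
        second_yaw = first_yaw + rel_yaw_deg    # second initial rotation angle
        return (first_pos, first_yaw), (second_pos, second_yaw)


    # hug example again: 0.3 m apart, 0 degrees of relative rotation
    first, second = place_both_avatars((0.0, 0.0), 0.3, 0.0)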
Optionally, in some scenarios, the terminal adjusts only the position of the first avatar and/or the second avatar; or, the terminal adjusts only the rotation angle of the first avatar and/or the second avatar, which is not limited in this embodiment.
Optionally, based on the embodiment shown in fig. 6, when the interactive image corresponding to the interactive message is a multi-person interactive image that includes at least two second avatars, the step in which the first social client and/or the second social client displays an interactive animation through the interactive image includes: displaying the first interactive animation through part of the second avatars according to the first action parameter in the first interactive animation; and displaying the second interactive animation through another part of the second avatars according to the second action parameter in the second interactive animation.
Wherein the interactive message indicates that a portion of the second avatar corresponds to the first interactive animation and another portion of the second avatar corresponds to the second interactive animation.
The related description in this case is the same as that for the multi-person interactive image including a first avatar and at least one second avatar, except that the first avatar is replaced with part of the second avatars and the at least one second avatar is replaced with another part of the second avatars; it is not repeated here.
Alternatively, in the above embodiment, the manner in which the terminal controls the first avatar and/or the second avatar to move may be in a walking form, a running form, a jumping form, etc., which is not limited in this embodiment.
Optionally, based on the embodiment shown in fig. 6, when the interactive image corresponding to the interactive message is a single-person interactive image, the step in which the first social client and/or the second social client displays an interactive animation through the interactive image includes: displaying, through the interactive image, the third interactive animation according to the third action parameter in the third interactive animation.
The third interactive animation is an interactive animation displayed through a single-person interactive image; the interactive image is the first avatar or the second avatar.
The third interactive animation needs only one interactive image for display; in this case, when the third interactive animation is produced through 3DSMAX, no relative parameters of the interactive image need to be recorded.
Optionally, for multiple interactive animations displayed through a multi-person interactive image, the interactive animations displayed by the different interactive images often need to be performed simultaneously. For example: two interactive images perform the hugging animation and the hugged animation respectively; the terminal then needs to ensure that the multi-person interactive image displays the multiple interactive animations synchronously.
In the embodiment of the invention, in order to ensure that the terminal can synchronously display multiple interactive animations through a multi-person interactive image, the terminal displays the multiple interactive animations through a direct play (Play) method.
The Play method is an animation switching method in Unity3D, used to switch the current animation state to the next animation state within a preset time length. The preset time length is usually short, close to 0, such as: 0.1 ms.
Optionally, the terminal displays the third interactive animation through the direct Play method; alternatively, the third interactive animation is displayed through the cross fade (CrossFade) method, which is not limited in this embodiment.
The CrossFade method is also an animation switching method in Unity3D, used to transition the current animation state smoothly to the next animation state. In general, the CrossFade method is implemented by linear interpolation: between the current animation state and the next animation state, the terminal generates at least one intermediate state by itself, thereby achieving the smooth transition.
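The difference between the two switching styles can be sketched as follows (illustrative Python only; poses are simplified here to per-bone rotation tuples, whereas the real Play and CrossFade methods in Unity3D operate on animation states):

    def play_switch(current_pose, next_pose):
        """Direct switch: jump to the next animation state almost immediately."""
        return [next_pose]


    def crossfade_switch(current_pose, next_pose, steps=5):
        """Smooth switch: linearly interpolate intermediate states between
        the current animation state and the next animation state."""
        frames = []
        for i in range(1, steps + 1):
            t = i / steps                        # interpolation weight in (0, 1]
            frames.append({
                bone: tuple((1 - t) * c + t * n
                            for c, n in zip(current_pose[bone], next_pose[bone]))
                for bone in current_pose
            })
        return frames


    standing = {"spine": (0.0, 0.0, 0.0)}
    hugging = {"spine": (30.0, 0.0, 0.0)}
    assert len(play_switch(standing, hugging)) == 1       # no intermediate states
    assert len(crossfade_switch(standing, hugging)) == 5  # generated in-betweens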
Optionally, based on the embodiment shown in fig. 6, when the terminal receives another interactive message before it has finished displaying the current interactive animation through the interactive image, the terminal displays the other interactive animation corresponding to that interactive message after the current interactive animation has been displayed.
The relevant description of the terminal displaying other interactive animations is the same as the relevant description of the terminal displaying the current interactive animation, and is not described herein again in this embodiment.
In one implementation, the displaying, by the terminal, another interactive animation corresponding to the other interactive message after displaying the current interactive animation includes: after the terminal displays the current interactive animation, controlling the interactive image to display default animation; and then controlling the interactive image to display the next interactive animation.
Optionally, if the current interactive animation is an interactive animation displayed by an interactive image of multiple people, the terminal controls the interactive image to display a default animation after the current interactive animation is displayed; and then controlling the interactive image to display the next interactive animation.
In another implementation manner, the displaying, by the terminal, another interactive animation corresponding to the other interactive message after the displaying of the current interactive animation includes: and after the terminal displays the current interactive animation, directly controlling the interactive image to display the next interactive animation.
Optionally, if the current interactive animation is an interactive animation displayed through a single interactive image, the terminal directly controls the interactive image to display a next interactive animation after displaying the current interactive animation.
Optionally, when the terminal does not receive other interactive messages, the terminal displays the default animation after displaying the current interactive animation.
In an embodiment of the invention, the default animation comprises at least one of a default standing position, a default rotation angle and default motion parameters.
Referring to the schematic diagram of switching between different types of interactive animations shown in fig. 12: when the interactive animation currently played by the terminal is displayed through a multi-person interactive image (abbreviated as multi-person animation in the figure), the terminal plays the default animation after playing that interactive animation, and then plays the next interactive animation displayed through a single-person interactive image (abbreviated as single-person animation in the figure) or the next interactive animation displayed through a multi-person interactive image.
When the interactive animation currently played by the terminal is displayed through a single-person interactive image, the terminal directly plays the next interactive animation displayed through a multi-person interactive image after playing the current interactive animation.
When the interactive animation currently played by the terminal is displayed through a single-person interactive image and the next interactive animation is also displayed through a single-person interactive image, the terminal plays the default animation after playing the current interactive animation, and then plays the next interactive animation.
When the terminal does not need to play a next interactive animation, it plays the default animation after the current interactive animation has been played.
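These switching rules condense into a small sketch (hypothetical representation; 'multi', 'single' and 'default' are illustrative labels for the animation types of fig. 12):

    def playback_sequence(current, next_animation=None):
        """Return the animations to play after `current` finishes, following
        the switching rules of fig. 12. Animation types are 'multi' or
        'single'; next_animation is None when no further message arrived."""
        if next_animation is None:
            return ["default"]                   # nothing queued: go idle
        if current == "multi":
            return ["default", next_animation]   # multi -> default -> next
        if next_animation == "multi":
            return [next_animation]              # single -> multi, directly
        return ["default", next_animation]       # single -> default -> single


    assert playback_sequence("multi", "single") == ["default", "single"]
    assert playback_sequence("single", "multi") == ["multi"]
    assert playback_sequence("single", "single") == ["default", "single"]
    assert playback_sequence("single") == ["default"]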
The message display method provided by the embodiment of the invention is described below with a specific example. Referring to fig. 13, the method includes the following steps.
Step 1301, the developer makes an interactive animation through 3DSMAX.
The developer (animation designer) produces the interactive animation through 3DSMAX. Illustratively, the skeleton system is obtained through Biped in 3DSMAX, and the action of each animation frame in the interactive animation is designed through Biped, such as: walking up stairs, jumping over obstacles, dancing to a tempo, etc.
In step 1302, the developer exports the interactive animation to Unity3D through 3DSMAX and records the initial relative parameters of the interactive animation.
In 3DSMAX, the animation identifier of each interactive animation and the corresponding initial relative parameters are recorded; the developer exports this record to Unity3D, so that Unity3D likewise records the animation identifiers and the corresponding initial relative parameters.
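Conceptually, the record exported to Unity3D amounts to a lookup table keyed by animation identifier (a sketch with assumed field names; the hug values are the example figures used throughout this document):

    # illustrative export-time record: animation identifier -> initial relative
    # parameters that Unity3D consults before playback
    INITIAL_RELATIVE_PARAMETERS = {
        "hug": {"relative_position_m": 0.3, "relative_rotation_deg": 0.0},
        # further multi-person interactive animations would be registered here
    }


    def lookup_initial_parameters(animation_id):
        """Fetch the recorded initial relative parameters; returns None for
        single-person animations, which record no relative parameters."""
        return INITIAL_RELATIVE_PARAMETERS.get(animation_id)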
In step 1303, the developer sets the scale factor for Unity3D.
Since the scale used to produce the interactive animation in 3DSMAX may differ from the scale used to display it in Unity3D (for example: the scale of 3DSMAX is 1:10 and the scale of Unity3D is 1:20), the developer needs to adjust the scale of Unity3D to be consistent with that of 3DSMAX in order to ensure that Unity3D displays the interactive animation produced in 3DSMAX correctly, without problems such as limb distortion or limb decomposition. To this end, the developer sets a scale factor for Unity3D in the client, such as: a scale factor of 2; the scale of 3DSMAX is then consistent with the scale of Unity3D, ensuring that Unity3D correctly displays the interactive animation produced in 3DSMAX.
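A trivial sketch of the adjustment (illustrative names; the direction of the conversion is an assumption based on the 1:10 versus 1:20 example above):

    import math


    def to_unity_units(vertex_3dsmax, scale_factor=2.0):
        """Rescale a 3DSMAX coordinate triple so that it displays at the
        correct size in Unity3D; with 3DSMAX at 1:10 and Unity3D at 1:20,
        the scale factor is 2."""
        return tuple(coord * scale_factor for coord in vertex_3dsmax)


    # every bone position in the exported animation is rescaled consistently,
    # which avoids limb distortion when the animation is played back
    scaled = to_unity_units((0.15, 0.0, 0.75))
    assert math.isclose(scaled[0], 0.3)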
In step 1304, the client receives the interactive message.
The interactive message may be input by the user or sent by other clients, which is not limited in this embodiment.
Step 1305, the client controls the plurality of interactive images to move to the corresponding initial display positions and initial rotation angles according to the initial relative parameters.
For example: the interactive animation is a hug animation displayed through two interactive images, and the initial relative parameters recorded in Unity3D include an initial relative position of 0.3 m and an initial relative rotation angle of 0 degrees. In the virtual social scene, the terminal controls the two interactive images indicated by the interactive message to move, in a walking form, to their corresponding initial display positions, where the distance between the initial display positions of the two interactive images equals the initial relative position of 0.3 m. The terminal also controls the rotation angles of the two interactive images to reach their corresponding initial rotation angles, where the relative rotation angle between the two interactive images equals the initial relative rotation angle of 0 degrees.
Step 1306, the client controls a plurality of interactive images to start playing the corresponding interactive animations at the same time.
The client controls the multiple interactive images to simultaneously start playing the corresponding interactive animations through the direct Play method in Unity3D.
Steps 1301 to 1303 belong to a development phase, and steps 1304 to 1306 belong to an application phase.
The following are embodiments of the apparatus of the present invention that may be used to perform embodiments of the method of the present invention. For details which are not disclosed in the embodiments of the apparatus of the present invention, reference is made to the embodiments of the method of the present invention.
Referring to fig. 14, a block diagram of a message presentation apparatus according to an embodiment of the present invention is shown, where the message presentation apparatus has a function of executing the above method example, and the function may be implemented by hardware, or by hardware executing corresponding software. The apparatus may include: a message receiving module 1410, a character determining module 1420, and an animation display module 1430.
a message receiving module 1410, configured to perform the message receiving functions in steps 601 and 605 above and implied in the other steps;
an image determining module 1420, configured to perform the image determining functions in steps 602 and 606 above and implied in the other steps;
an animation display module 1430, configured to perform the animation display functions in steps 603 and 607 above and implied in the other steps.
Reference may be made to the method embodiment shown in fig. 6 for details.
Optionally, the interactive character comprises a first avatar and a second avatar, and the at least one interactive animation corresponding to the interactive message comprises: a first interactive animation corresponding to the first virtual image and a second interactive animation corresponding to the second virtual image;
an animation display module comprising:
the first display unit is used for displaying the first interactive animation according to the first action parameter in the first interactive animation through the first virtual image; and displaying the second interactive animation through the second virtual image according to the second action parameter in the second interactive animation.
Optionally, the first display unit is further configured to:
acquiring an initial relative parameter between the first virtual image and the second virtual image, wherein the initial relative parameter comprises an initial relative position;
determining a first initial display position corresponding to the first avatar and a second initial display position corresponding to the second avatar according to the initial relative positions;
and displaying the first interactive animation from the first initial display position through the first virtual image, and simultaneously displaying the second interactive animation from the second initial display position through the second virtual image.
Optionally, the first display unit is further configured to:
acquiring an initial relative parameter between the first virtual image and the second virtual image, wherein the initial relative parameter comprises an initial relative rotation angle;
determining a first initial rotation angle corresponding to the first avatar and a second initial rotation angle corresponding to the second avatar according to the initial relative rotation angles;
and displaying the first interactive animation by the first virtual image at a first initial rotation angle, and simultaneously displaying the second interactive animation by the second virtual image at a second initial rotation angle.
Optionally, the interactive character comprises a first avatar or a second avatar, and the at least one interactive animation corresponding to the interactive message comprises: a third interactive animation corresponding to the interactive image;
Optionally, the animation display module comprises:
and the second display unit is used for displaying the third interactive animation according to the third action parameter in the third interactive animation through the interactive image.
Optionally, the animation display module is further configured to: display a default animation of the interactive image; or, display a default animation of the interactive image and then display the next interactive animation through the interactive image; or, directly display the next interactive animation through the interactive image.
Reference may be made to the method embodiment shown in fig. 6 for details.
Referring to fig. 15, a schematic structural diagram of a terminal according to an embodiment of the present invention is shown. The terminal 1500 is configured to implement the message display method provided in the foregoing embodiment. Specifically, the method comprises the following steps:
the terminal 1500 may include components such as a RF (Radio Frequency) circuit 1510, a memory 1520 including one or more computer-readable storage media, an input unit 1530, a display unit 1540, a sensor 1550, an audio circuit 1560, a WiFi (wireless fidelity) module 1570, a processor 1580 including one or more processing cores, and a power supply 1590. Those skilled in the art will appreciate that the terminal structure shown in fig. 15 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the RF circuit 1510 may be configured to receive and transmit signals during a message transmission or communication process, and in particular, receive downlink messages from a base station and then send the received downlink messages to the one or more processors 1580 for processing; in addition, data relating to uplink is transmitted to the base station. In general, RF circuit 1510 includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, and the like. In addition, RF circuit 1510 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), e-mail, SMS (Short Messaging Service), and the like.
The memory 1520 may be used to store software programs and modules, and the processor 1580 performs various functional applications and data processing by running the software programs and modules stored in the memory 1520. The memory 1520 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the terminal 1500, and the like. Further, the memory 1520 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 1520 may also include a memory controller to provide the processor 1580 and the input unit 1530 with access to the memory 1520.
The input unit 1530 may be used to receive input numeric or character messages and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, the input unit 1530 may include an image input device 1531 and other input devices 1532. The image input device 1531 may be a camera or a photo scanning device. The input unit 1530 may include other input devices 1532 in addition to the image input device 1531. In particular, other input devices 1532 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 1540 can be used to display messages entered by or provided to the user as well as various graphical user interfaces of the terminal 1500, which can be made up of graphics, text, icons, video, and any combination thereof. The Display unit 1540 may include a Display panel 1541, and the Display panel 1541 may be optionally configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like.
The terminal 1500 can also include at least one sensor 1550, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel 1541 according to the brightness of ambient light and a proximity sensor that may turn off the display panel 1541 and/or backlight when the terminal 1500 is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when the mobile phone is stationary, and can be used for applications of recognizing the posture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured in the terminal 1500, detailed descriptions thereof are omitted.
Audio circuit 1560, speaker 1561, and microphone 1562 may provide an audio interface between a user and terminal 1500. The audio circuit 1560 may transmit the electrical signal converted from received audio data to the speaker 1561, which converts it into a sound signal for output; conversely, the microphone 1562 converts collected sound signals into electrical signals, which are received by the audio circuit 1560 and converted into audio data; the audio data is then processed by the processor 1580 and either transmitted via the RF circuit 1510 to, for example, another terminal, or output to the memory 1520 for further processing. The audio circuit 1560 may also include an earbud jack to provide communication between peripheral headphones and the terminal 1500.
WiFi belongs to short distance wireless transmission technology, and the terminal 1500 can help the user send and receive e-mail, browse web pages, access streaming media, etc. through the WiFi module 1570, which provides the user with wireless broadband internet access. Although fig. 15 shows WiFi module 1570, it is understood that it does not belong to the essential constitution of terminal 1500 and may be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 1580 is a control center of the terminal 1500, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions of the terminal 1500 and processes data by operating or executing software programs and/or modules stored in the memory 1520 and calling data stored in the memory 1520, thereby integrally monitoring the mobile phone. Alternatively, the processor 1580 may include one or more processing cores; preferably, the processor 1580 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communications. It is to be appreciated that the modem processor may not be integrated into the processor 1580.
The terminal 1500 also includes a power supply 1590 (e.g., a battery) for powering the various components. Preferably, the power supply may be logically coupled to the processor 1580 via a power management system, so that charging, discharging, and power consumption management functions are managed through the power management system. The power supply 1590 may also include one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other components.
Although not shown, the terminal 1500 may further include a bluetooth module or the like, which is not described in detail herein.
In this embodiment, the terminal 1500 further comprises a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors. The one or more programs include instructions for performing the above-described methods.
An embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium may be a computer-readable storage medium contained in the memory in the foregoing embodiment; or it may be a separate computer-readable storage medium not incorporated in the terminal. The computer readable storage medium stores instructions executed by the processor to implement the message presentation method provided by the above embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (14)

1. A method for message presentation, the method comprising:
displaying at least one interactive image in a virtual scene, wherein an identifier corresponding to the at least one interactive image comprises a first identifier and a second identifier, an association relationship exists between the first identifier and the second identifier, the second identifier is used for logging in a second client, and the first identifier is used for logging in a first client;
in response to receiving a selection operation of the at least one interactive character, switching the virtual scene to a virtual social scene, the virtual scene and the virtual social scene not being visually displayed in the same user interface; or, in response to receiving the selection operation of the at least one interactive image, controlling the first avatar and the selected second avatar to move to the virtual social scene, the virtual social scene being linked with the virtual scene, the virtual scene and the virtual social scene being visually displayed in the same user interface; the virtual scene and the virtual social scene are three-dimensional scenes;
receiving an interactive message, wherein the interactive message is used for realizing interaction between the first client and the second client, the first client is a client for sending the interactive message, and the second client is a client for receiving the interactive message;
determining an interactive image corresponding to the interactive message, wherein the interactive image is generated in advance according to personalized data and a three-dimensional virtual model, and the interactive image comprises the first virtual image corresponding to the first client and/or the second virtual image corresponding to the second client; the interactive message corresponds to at least one interactive animation, and each interactive animation is used for displaying the message content of the interactive message through the interactive image;
and displaying the interactive animation through the interactive image in the virtual social scene.
2. The method of claim 1, wherein the interactive character comprises the first avatar and the second avatar, and wherein the at least one interactive animation to which the interactive message corresponds comprises: a first interactive animation corresponding to the first avatar and a second interactive animation corresponding to the second avatar;
in the virtual social scene, the interactive animation is displayed through the interactive image, and the method comprises the following steps:
in the virtual social scene, displaying the first interactive animation through the first virtual image according to a first action parameter in the first interactive animation; and displaying the second interactive animation through the second virtual image according to the second action parameter in the second interactive animation.
3. The method of claim 2, wherein the first interactive animation is presented by the first avatar according to a first motion parameter in the first interactive animation; and displaying the second interactive animation through the second virtual image according to a second action parameter in the second interactive animation, wherein the displaying comprises:
acquiring an initial relative parameter between the first avatar and the second avatar, wherein the initial relative parameter comprises an initial relative position;
determining a first initial display position corresponding to the first avatar and a second initial display position corresponding to the second avatar according to the initial relative positions;
and displaying the first interactive animation from the first initial display position through the first avatar, and simultaneously displaying the second interactive animation from the second initial display position through the second avatar.
4. The method of claim 2, wherein the first interactive animation is presented by the first avatar according to a first motion parameter in the first interactive animation; and displaying the second interactive animation through the second virtual image according to a second action parameter in the second interactive animation, wherein the displaying comprises:
acquiring an initial relative parameter between the first virtual image and the second virtual image, wherein the initial relative parameter comprises an initial relative rotation angle;
determining a first initial rotation angle corresponding to the first avatar and a second initial rotation angle corresponding to the second avatar according to the initial relative rotation angle;
and displaying the first interactive animation by starting at the first initial rotation angle through the first avatar, and simultaneously displaying the second interactive animation by starting at the second initial rotation angle through the second avatar.
5. The method of claim 1, wherein the interactive character comprises the first avatar or the second avatar, and wherein the at least one interactive animation to which the interactive message corresponds comprises: a third interactive animation corresponding to the interactive image;
in the virtual social scene, the interactive animation is displayed through the interactive image, and the method comprises the following steps:
and in the virtual social scene, displaying the third interactive animation according to a third action parameter in the third interactive animation through the interactive image.
6. The method of claim 1, wherein after presenting the interactive animation through the interactive avatar in the virtual social scene, further comprising:
displaying a default animation of the interactive image;
or,
displaying a default animation of the interactive image; displaying the next interactive animation through the interactive image;
or,
and displaying the next interactive animation through the interactive image.
7. A message presentation device, the device comprising:
the image display module is used for displaying at least one interactive image in a virtual scene, the identifier corresponding to the at least one interactive image comprises a first identifier and a second identifier, an association relationship exists between the first identifier and the second identifier, the second identifier is used for logging in a second client, and the first identifier is used for logging in a first client;
a scene transformation module, configured to switch the virtual scene to a virtual social scene in response to receiving a selection operation on the at least one interactive image, where the virtual scene and the virtual social scene are not visually displayed in a same user interface; or, in response to receiving the selection operation of the at least one interactive image, controlling the first avatar and the selected second avatar to move to the virtual social scene, the virtual social scene being linked with the virtual scene, the virtual scene and the virtual social scene being visually displayed in the same user interface; the virtual scene and the virtual social scene are three-dimensional scenes;
the message receiving module is used for receiving an interactive message, wherein the interactive message is used for realizing interaction between a first client and a second client, the first client is a client for sending the interactive message, and the second client is a client for receiving the interactive message;
the image determining module is used for determining an interactive image corresponding to the interactive message, the interactive image is generated in advance according to personalized data and a three-dimensional virtual model, and the interactive image comprises a first virtual image corresponding to the first client and/or a second virtual image corresponding to the second client; the interactive message corresponds to at least one interactive animation, and each interactive animation is used for displaying the message content of the interactive message through the interactive image;
and the animation display module is used for displaying the interactive animation through the interactive image in the virtual social scene.
8. The apparatus of claim 7, wherein the interactive character comprises the first avatar and the second avatar, and wherein the at least one interactive animation to which the interactive message corresponds comprises: a first interactive animation corresponding to the first avatar and a second interactive animation corresponding to the second avatar;
the animation display module comprises:
the first display unit is used for displaying the first interactive animation in the virtual social scene through the first virtual image according to a first action parameter in the first interactive animation; and displaying the second interactive animation through the second virtual image according to the second action parameter in the second interactive animation.
9. The apparatus of claim 8, wherein the first display unit is further configured to:
acquiring an initial relative parameter between the first avatar and the second avatar, wherein the initial relative parameter comprises an initial relative position;
determining a first initial display position corresponding to the first avatar and a second initial display position corresponding to the second avatar according to the initial relative positions;
and displaying the first interactive animation from the first initial display position through the first avatar, and simultaneously displaying the second interactive animation from the second initial display position through the second avatar.
10. The apparatus of claim 8, wherein the first display unit is further configured to:
acquiring an initial relative parameter between the first virtual image and the second virtual image, wherein the initial relative parameter comprises an initial relative rotation angle;
determining a first initial rotation angle corresponding to the first avatar and a second initial rotation angle corresponding to the second avatar according to the initial relative rotation angle;
and displaying the first interactive animation by starting at the first initial rotation angle through the first avatar, and simultaneously displaying the second interactive animation by starting at the second initial rotation angle through the second avatar.
11. The apparatus of claim 7, wherein the interactive character comprises the first avatar or the second avatar, and wherein the at least one interactive animation to which the interactive message corresponds comprises: a third interactive animation corresponding to the interactive image;
the animation display module comprises:
and the second display unit is used for displaying the third interactive animation according to a third action parameter in the third interactive animation through the interactive image in the virtual social scene.
12. The apparatus of claim 7, wherein the animation display module is further configured to:
displaying a default animation of the interactive image;
or,
displaying a default animation of the interactive image; displaying the next interactive animation through the interactive image;
or,
and displaying the next interactive animation through the interactive image.
13. A terminal, characterized in that it comprises a processor and a memory, said memory having stored therein at least one instruction, said instruction being loaded by said processor and executing the message presentation method according to any one of claims 1 to 6.
14. A computer-readable storage medium having stored thereon at least one instruction, which is loaded by a processor and executes the message presentation method of any one of claims 1 to 6.
CN201710355687.4A 2017-05-19 2017-05-19 Message display method and device and terminal Active CN108933723B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710355687.4A CN108933723B (en) 2017-05-19 2017-05-19 Message display method and device and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710355687.4A CN108933723B (en) 2017-05-19 2017-05-19 Message display method and device and terminal

Publications (2)

Publication Number Publication Date
CN108933723A CN108933723A (en) 2018-12-04
CN108933723B true CN108933723B (en) 2020-11-06

Family

ID=64450563

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710355687.4A Active CN108933723B (en) 2017-05-19 2017-05-19 Message display method and device and terminal

Country Status (1)

Country Link
CN (1) CN108933723B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110308792B (en) * 2019-07-01 2023-12-12 北京百度网讯科技有限公司 Virtual character control method, device, equipment and readable storage medium
CN110298925B (en) * 2019-07-04 2023-07-25 珠海金山数字网络科技有限公司 Augmented reality image processing method, device, computing equipment and storage medium
CN112306254A (en) * 2019-07-31 2021-02-02 北京搜狗科技发展有限公司 Expression processing method, device and medium
CN111246225B (en) * 2019-12-25 2022-02-08 北京达佳互联信息技术有限公司 Information interaction method and device, electronic equipment and computer readable storage medium
CN111259183B (en) * 2020-02-21 2023-08-01 北京百度网讯科技有限公司 Image recognition method and device, electronic equipment and medium
CN111580661A (en) * 2020-05-09 2020-08-25 维沃移动通信有限公司 Interaction method and augmented reality device
CN114401438B (en) * 2021-12-31 2022-12-09 魔珐(上海)信息科技有限公司 Video generation method and device for virtual digital person, storage medium and terminal
CN114419201B (en) * 2022-01-19 2024-06-18 北京字跳网络技术有限公司 Animation display method and device, electronic equipment and medium
CN115209205A (en) * 2022-07-08 2022-10-18 上海哔哩哔哩科技有限公司 Interactive animation generation method and device, and animation material processing method and device
CN117193541B (en) * 2023-11-08 2024-03-15 安徽淘云科技股份有限公司 Virtual image interaction method, device, terminal and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050153678A1 (en) * 2004-01-14 2005-07-14 Tiberi Todd J. Method and apparatus for interaction over a network
CN105988578B (en) * 2015-03-04 2019-06-21 华为技术有限公司 A kind of method that interactive video is shown, equipment and system
CN106303555B (en) * 2016-08-05 2019-12-03 深圳市摩登世纪科技有限公司 A kind of live broadcasting method based on mixed reality, device and system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101931621A (en) * 2010-06-07 2010-12-29 上海那里网络科技有限公司 Device and method for carrying out emotional communication in virtue of fictional character
CN103116463A (en) * 2013-01-31 2013-05-22 广东欧珀移动通信有限公司 Interface control method of personal digital assistant applications and mobile terminal
CN103797761A (en) * 2013-08-22 2014-05-14 华为技术有限公司 Communication method, client, and terminal
CN105677060A (en) * 2016-02-02 2016-06-15 百度在线网络技术(北京)有限公司 Method and device for inputting according to input method
CN106355629A (en) * 2016-08-19 2017-01-25 腾讯科技(深圳)有限公司 Virtual image configuration method and device
CN106527864A (en) * 2016-11-11 2017-03-22 厦门幻世网络科技有限公司 Interference displaying method and device

Also Published As

Publication number Publication date
CN108933723A (en) 2018-12-04

Similar Documents

Publication Publication Date Title
CN108933723B (en) Message display method and device and terminal
CN109427083B (en) Method, device, terminal and storage medium for displaying three-dimensional virtual image
US11830118B2 (en) Virtual clothing try-on
US11763481B2 (en) Mirror-based augmented reality experience
CN108876878B (en) Head portrait generation method and device
US20220206581A1 (en) Communication interface with haptic feedback response
US11989348B2 (en) Media content items with haptic feedback augmentations
US11531400B2 (en) Electronic communication interface with haptic feedback response
US20220206584A1 (en) Communication interface with haptic feedback response
US20220317774A1 (en) Real-time communication interface with haptic and audio feedback response
US20220317773A1 (en) Real-time communication interface with haptic and audio feedback response
US20240013463A1 (en) Applying animated 3d avatar in ar experiences
US20240096040A1 (en) Real-time upper-body garment exchange
US20230120037A1 (en) True size eyewear in real time
US20220319059A1 (en) User-defined contextual spaces
EP4165859A1 (en) Contextual application menu
WO2023121896A1 (en) Real-time motion and appearance transfer
US20230196602A1 (en) Real-time garment exchange
WO2022212144A1 (en) User-defined contextual spaces
US20240161242A1 (en) Real-time try-on using body landmarks
US20240007585A1 (en) Background replacement using neural radiance field
US20240203072A1 (en) Dynamic augmented reality experience
US20240231500A1 (en) Real-time communication interface with haptic and audio feedback response
US20230343004A1 (en) Augmented reality experiences with dual cameras
US20240139611A1 (en) Augmented reality physical card games

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant