CN110717974A - Control method and device for displaying state information, electronic equipment and storage medium - Google Patents

Control method and device for displaying state information, electronic equipment and storage medium Download PDF

Info

Publication number
CN110717974A
CN110717974A (application CN201910927772.2A)
Authority
CN
China
Prior art keywords
virtual image
user
social
information
avatar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910927772.2A
Other languages
Chinese (zh)
Other versions
CN110717974B (en)
Inventor
毛竹
王珊珊
蒋玮楠
苏智威
马国伟
邱晓磊
王猛
田宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Cyber Tianjin Co Ltd
Original Assignee
Tencent Cyber Tianjin Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Cyber Tianjin Co Ltd filed Critical Tencent Cyber Tianjin Co Ltd
Priority to CN201910927772.2A priority Critical patent/CN110717974B/en
Publication of CN110717974A publication Critical patent/CN110717974A/en
Application granted granted Critical
Publication of CN110717974B publication Critical patent/CN110717974B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/01Social networking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/04Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H04L51/046Interoperability with other network applications or services
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/07User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
    • H04L51/10Multimedia information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/52User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services

Abstract

The application provides a control method and apparatus for displaying state information, an electronic device, and a storage medium, belonging to the technical field of computers. While a social application is running, in response to a mood state selection instruction input through a social information generation interface, the mood state selected by the instruction is determined, and the avatar corresponding to that mood state is obtained and displayed. Because the avatar is obtained by modifying the user's base avatar according to the expression model corresponding to the mood state, the expression of the user's avatar can genuinely change, the user's mood state is genuinely displayed through the user's avatar, and the state-information display effect of the social application is improved.

Description

Control method and device for displaying state information, electronic equipment and storage medium
Technical Field
The application relates to the field of computer technology, in particular to avatar display in social applications, and provides a control method and apparatus for displaying state information, an electronic device, and a storage medium.
Background
With the development of computer technology and communication technology, avatar modeling technology is increasingly applied to social applications. For example, the electronic device may obtain a photo of the user, create an avatar of the user, such as a cartoon avatar, according to the photo of the user, and present the avatar of the user as an avatar of the user on the social platform.
However, in the existing social application, the mood state of the user cannot be shown through the avatar of the user.
Disclosure of Invention
In order to solve the existing technical problem, embodiments of the present application provide a control method and apparatus for displaying state information, an electronic device, and a storage medium, which can display a mood state of a user through an avatar of the user.
In order to achieve the above purpose, the technical solution of the embodiment of the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a control method for displaying status information, where the method includes:
responding to a mood state selection instruction input through a social information generation interface, and determining the mood state selected by the mood state selection instruction;
acquiring an avatar corresponding to the mood state, wherein the avatar is obtained by modifying a basic avatar of a user according to an expression model corresponding to the mood state;
and displaying the obtained virtual image.
In a second aspect, an embodiment of the present application provides a control device for displaying status information, where the device includes:
the mood state determining unit is used for responding to a mood state selection instruction input through a social information generation interface and determining the mood state selected by the mood state selection instruction;
the avatar acquisition unit is used for acquiring an avatar corresponding to the mood state, wherein the avatar is obtained by modifying a basic avatar of a user according to an expression model corresponding to the mood state;
and the virtual image display unit is used for displaying the obtained virtual image.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the method for controlling presentation status information in the first aspect is implemented.
In a fourth aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program executable on the processor, and when the computer program is executed by the processor, the processor is enabled to implement the control method for presenting status information of the first aspect.
According to the control method and apparatus for displaying state information, the electronic device, and the storage medium, while the social application is running, the mood state selection instruction input through the social information generation interface is responded to, the mood state selected by the instruction is determined, and the avatar corresponding to that mood state is obtained and displayed. Because the avatar is obtained by modifying the user's base avatar according to the expression model corresponding to the mood state, the expression of the user's avatar can genuinely change, the user's mood state is genuinely displayed through the user's avatar, and the state-information display effect of the social application is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings based on these drawings without inventive effort.
Fig. 1 is an application scenario diagram of a control method for displaying state information according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a control method for displaying status information according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a mood state selection interface according to an embodiment of the present disclosure;
FIG. 4 is a schematic illustration of the skeletal structure of a base avatar provided in an embodiment of the present application;
fig. 5 is a schematic diagram of a skeleton structure of an avatar corresponding to a mood state according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of an avatar with a smile derived from a base avatar modification according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of an avatar for laughing and crying according to an embodiment of the present disclosure;
fig. 8 is a schematic flowchart of another control method for displaying status information according to an embodiment of the present disclosure;
fig. 9 is a schematic diagram of a social information generating interface provided in an embodiment of the present application;
fig. 10 is a schematic diagram of a state information sharing interface according to an embodiment of the present disclosure;
FIG. 11 is a schematic diagram of a social interaction interface provided by an embodiment of the present application;
fig. 12 is an interaction diagram of a control method for displaying status information according to an embodiment of the present disclosure;
fig. 13 is an interaction diagram of another control method for displaying status information according to an embodiment of the present disclosure;
fig. 14 is a block diagram illustrating a control device for displaying status information according to an embodiment of the present disclosure;
fig. 15 is a block diagram of another control device for displaying status information according to an embodiment of the present disclosure;
fig. 16 is a block diagram of another control device for displaying status information according to an embodiment of the present disclosure;
fig. 17 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Some terms in the embodiments of the present application are explained below to facilitate understanding by those skilled in the art.
(1) Base avatar: the avatar of a user created through avatar reconstruction technology, which can be a cartoon figure, a human figure, an animal figure, or another custom figure. Illustratively, the user's avatar may be created based on a photo of the user, with the head of the avatar resembling the user's face. The base avatar may include only the head, or may include the head, torso, and other body parts. The base avatar may be a neutral figure without expression; in addition to the non-exchangeable base figure, it may include exchangeable decorations such as hairstyles, clothes, and worn weapon props. The base avatar may be static or dynamic.
(2) Sticker material: material for forming an expression sticker that can be attached to the face of an avatar to help the avatar exhibit different expressions, such as a tear sticker.
(3) Face pinching: an operation that modifies the face of an avatar. The face of the avatar can be adjusted through a preset expression template, which also achieves face pinching; the face-pinched avatar can express the user's emotion or behavior, for example, likes and dislikes can be expressed through face pinching. In general, the face-pinched avatar is quite amusing and looks very endearing.
(4) Social application: a network application that connects different users through friend relationships or common interests and enables social interaction between at least two users. A social application can take many forms as long as it provides a social interaction function, such as a chat application for multi-user chat, a game application for game enthusiasts to play together, or a game forum application for game enthusiasts to share game information. A social application generally provides different operation interfaces according to the functions to be implemented, for example, a social information generation interface where the user inputs content to be published, a state information sharing interface for sharing personal state information, and a social interaction interface for chatting with a selected target contact.
(5) The terminal equipment: the electronic device can be used for installing various applications, including social applications, displaying various interfaces provided in the installed applications and various objects in the interfaces, and can be mobile or fixed. For example, a mobile phone, a tablet computer, various wearable devices, a vehicle-mounted device, a Personal Digital Assistant (PDA), or other electronic devices capable of implementing the above functions.
The present application will be described in further detail with reference to the following drawings and specific embodiments.
The mood state display method provided in the embodiment of the present application may be applied to the application scenario shown in fig. 1, and as shown in fig. 1, the service server 100 may be a service server of social application, and is in communication connection with a plurality of terminal devices (such as terminal devices 301, 302, 303, and the like) through a network 200, where the network 200 may be, but is not limited to, a local area network, a metropolitan area network, a wide area network, or the like. A plurality of terminal devices can transmit communication data and messages to each other through the network 200 and the service server 100. The terminal devices 301 to 303 may be portable devices (e.g., mobile phones, tablet computers, and notebook computers), or may be computers, smart screens, and Personal Computers (PCs). The business server 100 may be any device capable of providing internet services, for example, the business server 100 may be a cloud server and may include one or more servers.
The mood state display method provided by the embodiment of the present application may be executed by a social application installed on a terminal device, or jointly by the service server 100 and the social application installed on the terminal device. A social application can be installed on each terminal device, and multiple terminals can establish connections with the service server through the social application.
Taking the terminal device 301 as an example, a user may log in to the social application installed on the terminal device 301, carry out daily communication, and handle daily affairs through the social application. Each user may have an identity that other users on the social application can recognize, that is, a user identifier, such as a user account, a user nickname, or a phone number. In a social application, different users may establish a friend relationship through mutual confirmation, for example by adding each other as friends or following each other. When two users establish a friend relationship, they become each other's social contacts. Each user in the social application has a social contact list through which the user communicates with the social contacts in the list in the form of instant messaging messages and the like. A user's social contacts can see the dynamic information published by the user through the state information sharing interface, where the dynamic information includes the personal state information discussed below. For example, the state information sharing interface may be a friend-circle display interface of the social application; a user may publish dynamic information to the friend circle and display it through the friend-circle display interface, so that friends learn the user's real-time dynamics, and the user likewise learns friends' real-time dynamics through the friend circle. In some social applications, a group of users may form friend relationships with each other, thereby forming a social group in which each member is a social contact of all other members, and users within a social group may communicate with each other through the social application.
Each user has a personalized avatar in the social application, called the base avatar. For example, the social application may take a photograph of the user, create a base avatar from the user's photograph using the avatar modeling tool, store in the terminal device, or upload to the service server 100, the base avatar stored by the service server 100 for the user. The base avatar may embody personalized features of the user. After the user logs in the social application, the basic avatar of the user can be displayed in the interface, or the basic avatar of the user can be used as the head portrait of the user.
In order to enable the user's avatar to genuinely show the user's mood state, the embodiments of the present application provide a control method and apparatus for displaying state information, an electronic device, and a storage medium. The avatar is obtained by modifying the current user's base avatar according to the expression model corresponding to the mood state, so that the expression of the user's avatar can genuinely change and the user's mood state can genuinely be displayed through the avatar.
Fig. 2 is a flowchart illustrating a control method for presenting status information according to an embodiment of the present application, where as shown in fig. 2, the method may include the following steps:
step S201, responding to the mood state selection instruction input through the social information generation interface, and determining the mood state selected by the mood state selection instruction.
As shown in fig. 3, in the social application of the embodiment of the present application, a plurality of mood state flags are displayed on the social information generating interface for the user to select. Such as "surprise", "minor depression", "happy", "angry", etc. The user clicks or double clicks on the mark of any mood state, and the mood state can be considered as selected by the user. And responding to the mood state selection instruction input by the user through the social information generation interface, and determining the mood state selected by the mood state selection instruction. For example, if the user clicks the mood status flag of "minor depression", the mood status selected by the mood status selection instruction of the user is determined to be "minor depression".
Step S202, obtaining the virtual image corresponding to the selected mood state.
The avatar corresponding to the mood state can be obtained by modifying the user's base avatar according to the expression model corresponding to the mood state, and the avatar has an expression that can show the user's mood state.
In some embodiments, the avatar corresponding to each mood state may be generated and stored in advance according to the expression model corresponding to each mood state for the base avatar of the current user. And after receiving a mood state selection instruction input by a user, acquiring the virtual image corresponding to the mood state selected by the mood state selection instruction from the pre-stored virtual images.
In other embodiments, the expression models corresponding to the mood states may be pre-stored, after receiving a mood state selection instruction input by a user, the expression model corresponding to the mood state selected by the mood state selection instruction is obtained, and the base avatar of the current user is modified according to the expression model corresponding to the mood state, so as to obtain the avatar corresponding to the mood state.
And step S203, displaying the obtained virtual image.
For example, the obtained avatar with an expression may be used instead of the base avatar as the avatar of the user, or the avatar may be presented in a set area in the social information generating interface.
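The three steps above (S201–S203) can be sketched roughly as follows; the cache, the dictionary representation of avatars and expression models, and the function names are illustrative assumptions, not part of the patent:

```python
# Sketch of steps S201-S203: look up a pre-generated avatar for the selected
# mood state, or derive one from the base avatar and the expression model.
# (Dictionary layout, cache, and function names are illustrative.)

def apply_expression_model(base_avatar, model):
    """Derive an expressive avatar without mutating the neutral base avatar."""
    avatar = dict(base_avatar)          # shallow copy keeps the base intact
    avatar["expression"] = model["expression"]
    return avatar

def get_avatar_for_mood(mood, base_avatar, expression_models, cache):
    """Step S202: return the avatar corresponding to the selected mood."""
    if mood in cache:                   # pre-generated avatar available
        return cache[mood]
    avatar = apply_expression_model(base_avatar, expression_models[mood])
    cache[mood] = avatar                # store for later selections
    return avatar
```

The cache corresponds to the pre-generated-and-stored variant described above, and the fallback branch corresponds to generating the avatar only after the selection instruction is received.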
According to the control method for displaying the state information, when the social application runs, the mood state selection instruction input through the social information generation interface is responded, the mood state selected by the mood state selection instruction is determined, and the virtual image corresponding to the mood state is obtained and displayed. The virtual image is obtained by modifying the basic virtual image of the user according to the expression model corresponding to the mood state, so that the expression of the virtual image of the user can be really changed, the mood state of the user is really displayed through the virtual image of the user, the emotion of the user is more vividly expressed, and the state information display effect of the social application is improved.
In an alternative embodiment, the base avatar created from the user's photo may be established based on a preset biological skeleton structure. Through image recognition, the user's personalized features are extracted from the user's photo, the weight of each bone in the preset biological skeleton structure is adjusted according to those personalized features to obtain the user's personalized biological skeleton structure, and appearance data such as skin and facial features are overlaid on it, so that the user's base avatar can be generated. The biological skeleton structure may be a human skeleton structure or the skeleton structure of another animal suitable for a cartoon figure; the following takes a human skeleton structure as an example.
In some embodiments, as shown in fig. 4, the biological skeleton structure of the base avatar may include a predetermined number of bones; for example, the base avatar may include 121 head bones, or 192 bones covering both the head and the torso and limbs. Each bone has a unique number. The expression model corresponding to each mood state includes bone variant data set for that mood state; the bone variant data is used to perform variant processing on the bones that need to be modified in each user's base avatar, and those bones may include some or all of the bones in the base avatar. Illustratively, as shown in table 1, the bone variant data may include variant information for each bone that needs to be modified, and the variant information of each bone includes one or any combination of the following: position change information, angle change information, length change information. The "-" in table 1 indicates that the information is absent.
TABLE 1
[Table image not reproduced. Per the description, each row lists a bone number together with its position change, angle change, and length change information, with "-" marking information that is absent.]
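As a rough illustration of how the per-bone variant records of Table 1 might be represented in code; all field names, bone numbers, and values are hypothetical, and `None` plays the role of the "-" entries:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class BoneVariant:
    """One row of Table 1: variant information for a single numbered bone.

    A value of None corresponds to a "-" entry (information absent).
    """
    bone_id: int
    position_delta: Optional[Tuple[float, float, float]] = None
    angle_delta: Optional[float] = None     # e.g. degrees of rotation
    length_scale: Optional[float] = None    # relative length change

# Hypothetical expression model for the mood "happy" (values invented):
happy_model = [
    BoneVariant(bone_id=155, angle_delta=12.0, length_scale=1.1),
    BoneVariant(bone_id=157, length_scale=0.9),
]
```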
In step S202, the following method may be adopted to generate an avatar corresponding to any mood state. The expression model corresponding to the mood state is obtained; the expression models corresponding to the mood states may be stored in the terminal device or the server. The bones of the base avatar are then adjusted according to the bone variant data in the obtained expression model, so that the base avatar is deformed into the avatar corresponding to the mood state. The skeleton of the base avatar may be adjusted in one or any combination of the following ways: adjusting the position of the corresponding bone in the base avatar according to the position change information of any bone in the expression model; adjusting the angle of the corresponding bone in the base avatar according to the angle change information of any bone in the expression model; and adjusting the length of the corresponding bone in the base avatar according to the length change information of any bone in the expression model. Illustratively, for some bones only the length may need to be adjusted; for others, the angle and position may need to change; and for still others, the position, angle, and length may all need to change. By adjusting the skeleton of the base avatar, the resulting avatar can simulate human facial expressions. Illustratively, as shown in fig. 5, by adjusting the angles and lengths of the bones numbered 155 and 156 and the lengths of the bones numbered 157 and 158, the user's base avatar can be changed into the avatar corresponding to the mood state "happy"; the effect is a change from a neutral, expressionless base avatar to an avatar with a smiling face, as shown in fig. 6.
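A minimal sketch of the bone-adjustment step described above, assuming bones and variant records are plain dictionaries (the representation, keys, and numeric values are assumptions for illustration):

```python
def adjust_bone(bone, variant):
    """Apply position/angle/length variant information to one bone in place."""
    if variant.get("position_delta") is not None:
        bone["position"] = [p + d for p, d in
                            zip(bone["position"], variant["position_delta"])]
    if variant.get("angle_delta") is not None:
        bone["angle"] += variant["angle_delta"]
    if variant.get("length_scale") is not None:
        bone["length"] *= variant["length_scale"]

def apply_expression(skeleton, variants):
    """Deform the base skeleton into the mood-specific skeleton."""
    for variant in variants:
        adjust_bone(skeleton[variant["bone_id"]], variant)
```

Each variant record may carry any combination of the three kinds of change information; bones not listed in the variant data are left untouched, matching the description that only some bones may need modification.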
The face pinching and the like can also be realized by adjusting the skeleton of the basic virtual image.
In other embodiments, the biological skeleton structure of the base avatar may include a predetermined number of nodes corresponding to joints of the human body, where the bones are connected to each other, in addition to the predetermined number of bones. Correspondingly, the skeleton variant data of the expression model comprises the variant data of each skeleton and the variant data of each node, and the skeleton and the node of the basic virtual image can be adjusted according to the skeleton variant data in different expression models so as to obtain the virtual image corresponding to each mood state.
In other embodiments, the expression model corresponding to each mood state may include, in addition to the bone variant data, motion state data set for the selected mood state. For example, the motion state data of a bone may include the amplitude of the bone's motion and the number of movements. Optionally, the motion state data may further include motion state data of the nodes, such as the number of pitch-angle changes of a node. For example, by changing the pitch angle of the knee joint, a jumping action can be exhibited. The bones of the base avatar are adjusted according to the bone variant data in the expression model, and after the avatar corresponding to the mood state is obtained, the skeletal movement of that avatar is controlled according to the motion state data. The avatar obtained through these steps can simulate human actions such as jumping or clapping when happy, and dynamic face pinching can also be achieved through these steps. By controlling the skeletal movements of the avatar, a dynamic avatar may also be obtained.
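The motion state data described above (an amplitude plus a number of movements) could, for example, drive a simple keyframe generator like the following; the sinusoidal oscillation model and the parameter names are illustrative assumptions rather than anything specified by the patent:

```python
import math

def keyframes_for_bone(base_angle, amplitude, repetitions, steps_per_cycle=8):
    """Generate angle keyframes oscillating `repetitions` times around
    `base_angle` with the given `amplitude`.

    `amplitude` and `repetitions` mirror the motion state data described
    in the text; the sinusoidal shape itself is an assumption.
    """
    frames = []
    for i in range(repetitions * steps_per_cycle):
        phase = 2 * math.pi * i / steps_per_cycle
        frames.append(base_angle + amplitude * math.sin(phase))
    return frames
```

Feeding such keyframes to the pitch angle of a knee-joint node, say, would produce the repeated bending that reads as a jumping action.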
In other embodiments, the virtual image corresponding to the partial mood state can be more real and more obvious by adding the paster. Sticker patterns can be displayed on the social information generating interface, and a user can select corresponding stickers by clicking the sticker patterns. And responding to a sticker selection instruction input through the social information generation interface, determining a sticker material selected by the sticker selection instruction and an adding position of the sticker material, wherein the adding position is set based on the position of some bones in the biological bone structure in the virtual image and can embody the relative position of the sticker material and the virtual image. For example, the expression model corresponding to the mood state "laughing and crying" includes a tear sticker. If the expression model further comprises the sticker material and the adding position of the sticker material, the sticker material can be added to the corresponding position of the virtual image according to the adding position of the sticker material, as shown in fig. 7. Optionally, besides the sticker pattern for expressing mood, other sticker patterns may also be displayed on the social information generating interface, such as a sticker pattern for decoration, and the user may select the sticker pattern according to the needs of the user, which is not limited in the present application.
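The idea that a sticker's adding position is set relative to bone positions in the biological skeleton structure can be sketched as follows (the sticker record layout and the 2D coordinates are a hypothetical illustration):

```python
def place_sticker(skeleton, sticker):
    """Compute a sticker's position from its anchor bone plus an offset.

    The sticker record layout ({"anchor_bone", "offset"}) is hypothetical;
    it models an adding position defined relative to a bone's position, so
    the sticker follows the avatar if the skeleton is deformed.
    """
    bone = skeleton[sticker["anchor_bone"]]
    x, y = bone["position"]
    dx, dy = sticker["offset"]
    return (x + dx, y + dy)
```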
The step of generating the avatar corresponding to the mood state may be performed by the terminal device or a social application installed on the terminal device, or may be performed by the service server.
Specifically, if the step of generating the avatar corresponding to the mood state is performed by the terminal device, the terminal device may obtain the expression model corresponding to the mood state from the local, or obtain the expression model corresponding to the mood state from the service server. For example, after determining the mood state selected by the mood state selection instruction input by the user, the terminal device may send an expression model acquisition request to the service server, where the expression model acquisition request carries the mood state selected by the mood state selection instruction. And the service server sends the expression model corresponding to the mood state to the terminal equipment, and the terminal equipment carries out modification on the basic virtual image of the current user according to the expression model corresponding to the mood state to obtain the virtual image corresponding to the mood state. It should be noted that the terminal device may generate and store the avatar corresponding to each mood state when the user logs in the social application or the user registers, or may regenerate the avatar corresponding to the mood state after receiving the mood state selection instruction input by the user.
If the step of generating the avatar corresponding to the mood state is performed by the service server, the terminal device receives the mood state selection instruction input by the user and sends an avatar acquisition request to the service server, where the request carries the mood state selected by the instruction. The service server obtains the basic avatar of the user either locally or from the terminal device. For example, the avatar acquisition request sent by the terminal device may include user identification information such as a user name, and the service server may look up the user's basic avatar locally according to that identification information. If the service server does not find the user's basic avatar, it can notify the terminal device to send the basic avatar to the service server. The service server determines the expression model corresponding to the selected mood state, modifies the basic avatar of the current user according to that expression model to obtain the avatar corresponding to the mood state, and sends the obtained avatar to the terminal device. It should be noted that the service server may also generate and store the avatar corresponding to each of the user's mood states when the user logs in to the social application or registers; upon receiving an avatar acquisition request from the terminal device, it can then directly send the corresponding avatar to the terminal device according to the user identification information and the selected mood state carried in the request.
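The server-side branch described above can be sketched as a simple request handler: look up the basic avatar by user id, fall back to asking the terminal when it is missing, and apply the expression model. Storage, the modification step, and all identifiers here are hypothetical stand-ins, not the actual implementation.

```python
# Hypothetical server-side handler for an avatar acquisition request.
BASE_AVATARS = {"alice": {"mouth": (0, 0)}}          # user id -> basic avatar bones
EXPRESSION_MODELS = {"laughing": {"mouth": (0, 5)}}  # mood -> per-bone deltas

def apply_expression(base: dict, model: dict) -> dict:
    # Shift each bone by the delta the expression model defines for it.
    return {bone: (x + dx, y + dy)
            for bone, (x, y) in base.items()
            for dx, dy in [model.get(bone, (0, 0))]}

def handle_avatar_request(user_id: str, mood: str) -> dict:
    base = BASE_AVATARS.get(user_id)
    if base is None:
        # Basic avatar not found locally: notify the terminal to send it.
        return {"error": "base avatar not found"}
    return {"avatar": apply_expression(base, EXPRESSION_MODELS[mood])}

print(handle_avatar_request("alice", "laughing"))  # {'avatar': {'mouth': (0, 5)}}
```

Pre-generating and caching the avatar for each mood state at login or registration, as the text notes, would replace the `apply_expression` call with a cache lookup keyed by `(user_id, mood)`.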
To let the user's social friends know the user's mood state and to enhance the communication effect of this social interaction mode, the social application may also share the user's personal status information with social friends. For example, the user can publish personal status information to the circle of friends in real time.
In some embodiments, as shown in fig. 8, the control method for presenting the state information may include the following steps:
step S801, determining a mood state selected by the mood state selection instruction in response to the mood state selection instruction input through the social information generation interface.
Step S802, obtaining the virtual image corresponding to the mood state.
And the virtual image is obtained by modifying the basic virtual image of the current user according to the expression model corresponding to the selected mood state.
And step S803, displaying the obtained virtual image.
As shown in fig. 9, the acquired avatar may be presented in the avatar display area of the current user.
Step S804, responding to a publication instruction input by the user through the social information generating interface, and acquiring content input through the social information generating interface.
As shown in fig. 9, a content editing area is provided on the social information generation interface, in which the user can input the content to be published. A publish button is also provided on the interface; after editing the content to be published, the user can click the publish button. A publication instruction input by the user through the social information generation interface is received, and the content input by the user through the interface is acquired; the content may include picture content or text content.
In step S805, personal state information including the input content and the avatar of the user is generated.
Step S806, displaying the personal status information on the status information sharing interface, and sharing the personal status information to the social contact of the user.
The personal status information containing the avatar can be displayed on a status information sharing interface. The status information sharing interface may be a friend-circle display interface, on which both the personal status information published by the user and the personal status information published by the user's social friends can be displayed. The personal status information containing the avatar may also be sent to the service server, so that the service server shares the user's personal status information with the user's social contacts, who can then view the personal status information published by the user through a status information sharing interface, as shown in fig. 10.
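Steps S804 through S806 can be summarized as assembling one record from the edited content and the mood-state avatar. The field names below are hypothetical; the patent only requires that the record contain the input content and the user's avatar.

```python
import json
import time

# Hypothetical structure of the personal status information generated in
# step S805: the edited content plus the user's current mood-state avatar.
def build_personal_status(user_id, text, avatar, pictures=None):
    return {
        "user_id": user_id,
        "content": {"text": text, "pictures": pictures or []},
        "avatar": avatar,                  # avatar generated for the mood state
        "published_at": int(time.time()),  # used later to order the timeline
    }

status = build_personal_status("alice", "What a day!", {"mood": "laughing"})
print(json.dumps(status["content"]))  # {"text": "What a day!", "pictures": []}
```

The same record can serve both local display on the status information sharing interface and the upload to the service server for sharing with social contacts.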
In some embodiments, different background patterns may be set for different mood states: a background pattern corresponding to the mood state selected by the mood state selection instruction is obtained and displayed on the social information generation interface. Specifically, the background pattern corresponding to the mood state can be used as the background of the content editing area, with the text or picture content input by the user displayed over it; alternatively, the background pattern can be used as the background of the avatar, displayed in the area where the avatar is shown, with the avatar displayed over it. The personal status information shared in step S806 may also include the background pattern.
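A per-mood background lookup with a neutral fallback is one minimal way to realize this. The mood names and pattern file names here are invented for illustration.

```python
# Hypothetical mapping from mood states to background patterns, used either
# behind the content editing area or behind the avatar display area.
MOOD_BACKGROUNDS = {
    "laughing": "bg_sunny.png",
    "sad": "bg_rain.png",
}

def background_for(mood: str, default: str = "bg_plain.png") -> str:
    # Moods without a dedicated pattern fall back to a neutral background.
    return MOOD_BACKGROUNDS.get(mood, default)

print(background_for("sad"))      # bg_rain.png
print(background_for("pensive"))  # bg_plain.png
```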
In some embodiments, the user may also send personal status information to a social chat group. For example, in an instant messaging application, a social chat group also corresponds to a social information generation interface. The content input by the user through that interface is obtained and, in response to a publication instruction input through the interface, the content containing the avatar is displayed on a status information sharing interface; here the status information sharing interface is the chat interface of the social chat group. The content containing the avatar is sent to the service server as personal status information, so that the service server can share the user's personal status information with each member of the social chat group.
In some embodiments, the user may also see the personal status information of social contacts in the circle of friends. For example, the user may open a status information sharing interface through a friend-circle entry, such as the exemplary interface shown in fig. 10. In response to the operation of opening the status information sharing interface, the personal status information of each social contact whose personal status information needs to be displayed is obtained from the service server; these social contacts are the social friends the user has chosen to follow, or the social friends who allow the user to see their posts in the circle of friends. When the obtained personal status information of a social contact includes the contact's avatar, the contact's avatar is displayed in a set display area on the status information sharing interface, where each social contact's avatar is obtained by modifying that contact's basic avatar according to the expression model corresponding to the mood state the contact selected. The set display area may be the area on the status information sharing interface used to display that contact's personal status information.
In other embodiments, when the user chats with a social friend through the social application, the chat interface may also display the personal image of the user or of the other party according to the settings; the other party may be referred to as the target contact. Specifically, during the chat, the user's avatar may be included in the user's personal image display information and sent to the service server, so that the service server sends the current user's personal image display information to the target contact. As shown in fig. 11, when the user opens a chat interface (social interaction interface) with a social friend, that friend being the target contact, then in response to the operation of opening the social interaction interface with the target contact, the target contact's personal image display information is obtained from the service server. When that information includes the target contact's avatar, the avatar is displayed in a set area of the social interaction interface; in fig. 11 it is displayed on the right side of the interface. The target contact's avatar is obtained by modifying the target contact's basic avatar according to the expression model corresponding to the mood state selected by the target contact.
For easier understanding, the following description of the embodiments of the present application provides a control method for presenting status information from the perspective of interaction between a terminal device and a service server. The terminal device is provided with a social application, and the action performed by the terminal device can also be considered to be performed by the social application. In one embodiment, as shown in fig. 12, the control method for displaying status information includes the following steps:
step S1201, the terminal device responds to the mood state selection instruction input through the social information generation interface, and the mood state selected by the mood state selection instruction is determined.
Step S1202, the terminal device sends an avatar acquisition request to the service server, wherein the avatar acquisition request carries the mood state selected by the mood state selection instruction and the user identifier of the current user.
Step S1203, the service server determines an expression model corresponding to the mood state, and obtains a basic avatar of the current user according to the user identifier of the current user.
Step S1204, the service server modifies the basic avatar of the current user according to the expression model corresponding to the mood state, generating the avatar corresponding to the mood state selected by the current user.
The generation of the avatar corresponding to the mood state selected by the current user by the service server can be performed by referring to the method described above, which is not described herein again.
Step S1205, the service server sends the generated avatar to the terminal device.
In step S1206, the terminal device displays the received avatar.
In another embodiment, as shown in FIG. 13, social contacts refer to social friends that the user has confirmed by both parties in a certain social application. The control method for displaying the state information comprises the following steps:
step S1301, the terminal device of the user responds to the mood state selection instruction input through the social information generation interface, and determines the mood state selected by the mood state selection instruction.
Step S1302, the terminal device of the user acquires and displays the avatar corresponding to the mood state.
The user's terminal device can obtain the expression model corresponding to the mood state locally and modify the basic avatar of the current user according to that expression model to obtain the avatar corresponding to the mood state. The terminal device may generate and store the avatar corresponding to each mood state when the user logs in to the social application or registers, or may regenerate the avatar corresponding to the mood state after receiving the mood state selection instruction input by the user.
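The modification step, as the specification later details, applies the expression model's bone variant data to each bone, covering the three kinds of change enumerated there: position, angle, and length. The sketch below is a hypothetical rendering of that step; the field names and example values are illustrative only.

```python
from dataclasses import dataclass

# Hypothetical sketch of modifying the basic avatar with an expression
# model's bone variant data: position, angle, and length changes.
@dataclass
class Bone:
    x: float
    y: float
    angle: float   # degrees
    length: float

def apply_variant(bones: dict, variants: dict) -> dict:
    result = {}
    for name, bone in bones.items():
        v = variants.get(name, {})          # bones without variants are kept
        result[name] = Bone(
            x=bone.x + v.get("dx", 0.0),            # position change
            y=bone.y + v.get("dy", 0.0),
            angle=bone.angle + v.get("dangle", 0.0),  # angle change
            length=bone.length * v.get("scale", 1.0), # length change
        )
    return result

base = {"mouth_corner": Bone(10, 20, 0, 5)}
laugh = {"mouth_corner": {"dy": -2, "dangle": 15, "scale": 1.2}}
b = apply_variant(base, laugh)["mouth_corner"]
print(b.y, b.angle, b.length)  # 18 15 6.0
```

If the expression model also carries motion state data for the bones, the same per-bone variants could be interpolated over time to animate the avatar rather than deform it once.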
Step S1303, the terminal device of the user responds to the publication instruction input through the social information generating interface to obtain the content input through the social information generating interface.
In step S1304, the terminal device of the user generates personal status information including the input content and the avatar of the user, and displays the personal status information on the status information sharing interface.
Step S1305, the user' S terminal device transmits contents containing the avatar as personal status information to the service server.
Step S1306, the service server sends the personal status information of the user to the terminal devices of the social contacts of the user respectively.
When a social contact opens the status information sharing interface through a terminal device, the user's personal status information, including the avatar corresponding to the mood state the user selected, is displayed on that interface; the social contact can thus directly see the user's expression and quickly understand and empathize with the user's mood state.
Optionally, the method shown in fig. 13 may further include the following steps:
in step S1307, the terminal device of the user receives an operation of opening the state information sharing interface by the user.
Step S1308, the terminal device of the user sends a status information obtaining request to the service server, where the status information obtaining request carries the identification information of the user.
Step S1309, the service server obtains the personal status information of each social contact whose personal status information needs to be displayed.
The service server determines the social contacts contained in the user's contact list according to the user's identification information, obtains, in order of publication time, those contacts who have published personal status information, takes those contacts as the target objects whose personal status information needs to be displayed, and obtains the personal status information of each target object.
In some embodiments, the social contacts that need to show the personal status information may be social contacts that have published the personal status information in the user's communication list and allow the user to see the personal status information.
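The selection rule just described, contacts who have published and who allow this user to view, combined with ordering by publication time, can be sketched as a filter on the server. The data shapes and names here are hypothetical.

```python
# Hypothetical server-side selection for step S1309: return the statuses of
# contacts who have published and who permit this user to view them,
# ordered by publication time (newest first).
def statuses_for(user_id, contact_list, statuses, permissions):
    """statuses: contact_id -> list of status dicts with 'published_at'."""
    visible = []
    for contact in contact_list.get(user_id, []):
        if user_id not in permissions.get(contact, set()):
            continue  # this contact does not allow user_id to see posts
        visible.extend(statuses.get(contact, []))
    return sorted(visible, key=lambda s: s["published_at"], reverse=True)

contacts = {"alice": ["bob", "carol"]}
posts = {"bob": [{"user": "bob", "published_at": 2}],
         "carol": [{"user": "carol", "published_at": 5}]}
perms = {"bob": {"alice"}, "carol": set()}  # carol blocks alice
print([s["user"] for s in statuses_for("alice", contacts, posts, perms)])  # ['bob']
```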
Step S1310, the service server sends the individual status information of each social contact needing to display the individual status information to the terminal device of the user.
Step S1311, the terminal device of the user displays the personal state information of each social contact on the state information sharing interface.
And if the obtained personal state information of the social contact person comprises the avatar of the social contact person, displaying the avatar of the social contact person in a display area for displaying the personal state information of the social contact person in a state information sharing interface. And the virtual image of each target object is obtained by modifying the basic virtual image of the target object according to the expression model corresponding to the mood state selected by the target object.
In this method, each user has a basic avatar in the social application. When the user selects a mood state, the selected mood state is expressed on the user's own basic avatar, which is different from simply attaching an ordinary sticker emoticon to the face of the avatar.
Corresponding to the embodiments of the control method for displaying status information, the embodiments of the present application also provide a control device for displaying status information. Fig. 14 is a schematic structural diagram of a control device for displaying status information according to an embodiment of the present application; as shown in fig. 14, the control device includes a mood state determining unit 141, an avatar acquisition unit 142, and an avatar presenting unit 143; wherein:
the mood state determining unit 141 is configured to determine a mood state selected by the mood state selection instruction in response to the mood state selection instruction input through the social information generation interface;
an avatar obtaining unit 142, configured to obtain an avatar corresponding to the mood status, where the avatar is obtained by modifying a basic avatar of the user according to an expression model corresponding to the mood status;
an avatar presenting unit 143 for presenting the acquired avatar.
In one possible implementation, the basic avatar is established based on a preset biological skeletal structure; the avatar acquisition unit 142 may be further configured to: obtain an expression model corresponding to the mood state, where the expression model includes bone variant data set for the selected mood state; and adjust the skeleton of the basic avatar according to the bone variant data in the expression model, so that the basic avatar is deformed into the avatar corresponding to the mood state.
In a possible implementation manner, the avatar obtaining unit 142 may further be configured to: and if the expression model further comprises the motion state data of the skeleton set aiming at the selected mood state, controlling the skeleton motion of the virtual image corresponding to the mood state according to the motion state data.
In a possible implementation manner, the avatar obtaining unit 142 may further be configured to: the bone variant data comprises variant information of each bone, and the variant information of each bone comprises one or any combination of the following information: position change information, angle change information, length change information; the adjustment of the skeleton of the basic virtual image according to the skeleton variant data in the expression model comprises one or any combination of the following modes: adjusting the position of the corresponding skeleton in the basic virtual image according to the position change information of any skeleton in the expression model; adjusting the angle of the corresponding skeleton in the basic virtual image according to the angle change information of any skeleton in the expression model; and adjusting the length of the corresponding skeleton in the basic virtual image according to the length change information of any skeleton in the expression model.
In a possible implementation manner, the avatar obtaining unit 142 may further be configured to: responding to a sticker selection instruction input through a social information generation interface, and determining a sticker material selected by the sticker selection instruction and an adding position of the sticker material; and adding the sticker materials to the corresponding positions of the virtual image according to the adding positions of the sticker materials.
In a possible implementation manner, as shown in fig. 15, the apparatus may further include a state information sharing unit 151, configured to: responding to a publication instruction input through a social information generation interface, and acquiring content input through the social information generation interface; generating personal state information including the input contents and an avatar of the user; and displaying the personal state information to a state information sharing interface, and sharing the personal state information to the social contact of the user.
In a possible implementation manner, as shown in fig. 15, the apparatus may further include a status information presentation unit 152, configured to: responding to the operation of opening the state information sharing interface, and acquiring the personal state information of each social contact needing to display the personal state information; and when the obtained personal state information of the social contact person comprises the virtual image of the social contact person, displaying the virtual image of the social contact person in a set display area on the state information sharing interface, wherein the virtual image of each social contact person is obtained by modifying the basic virtual image of the social contact person according to an expression model corresponding to the mood state selected by the social contact person.
In a possible implementation manner, as shown in fig. 16, the apparatus may further include a social interaction unit 161, configured to: responding to the operation of opening a social interaction interface with the target contact, and including the virtual image of the user in the personal image display information of the user to share the virtual image of the user with the target contact; acquiring personal image display information of the target contact person; when the personal image display information of the target contact person comprises the virtual image of the target contact person, the virtual image of the target contact person is displayed in a set area of the social interaction interface, and the virtual image of the target contact person is obtained by changing the basic virtual image of the target contact person according to the expression model corresponding to the mood state selected by the target contact person.
The control device for displaying status information provided by the embodiments of the present application, when the social application runs, responds to a mood state selection instruction input through the social information generation interface, determines the mood state selected by the instruction, and acquires and displays the avatar corresponding to that mood state. Because the avatar is obtained by modifying the user's basic avatar according to the expression model corresponding to the mood state, the expression of the user's avatar genuinely changes, so that the user's mood state is truly conveyed through the avatar and the status information display effect of the social application is improved.
Corresponding to the embodiments of the control method for displaying status information, the embodiments of the present application also provide an electronic device. The electronic device may be the terminal device shown in fig. 1, such as a smart phone, a tablet computer, a laptop computer or a PC, and includes at least a memory for storing data and a processor for data processing. The processor for data processing may be implemented by a microprocessor, a CPU, a DSP or an FPGA; the memory contains operation instructions, which may be computer-executable code, through which each step in the flow of the control method for displaying status information according to the embodiments of the present application is implemented.
Fig. 17 is a schematic structural diagram of an electronic device according to an embodiment of the present application; as shown in fig. 17, the electronic device 170 in the embodiment of the present application includes: a processor 171, a display 172, a memory 173, a communication device 174, a bus 175, and an input device 176. The processor 171, the memory 173, the input device 176, the display 172 and the communication device 174 are all connected by a bus 175, and the bus 175 is used for data transmission among the processor 171, the memory 173, the display 172 and the communication device 174.
The memory 173 contains a computer storage medium storing computer-executable instructions, which are used to implement the control method for displaying status information according to the embodiments of the present application. The processor 171 is configured to perform the above-described control method for displaying status information and to present the user's avatar and personal status information on the display 172. The processor 171 is connected to the service server through the communication device 174 to realize data transmission.
The input device 176 is mainly used for acquiring input operations of a user, and when the electronic devices are different, the input device 176 may also be different. For example, when the electronic device is a PC, the input device 176 may be a mouse, a keyboard, or other input device; when the electronic device is a portable device such as a smart phone or a tablet computer, the input device 176 may be a touch screen.
The embodiment of the application also provides a computer storage medium, wherein computer-executable instructions are stored in the computer storage medium and used for realizing the control method for displaying the state information in any embodiment of the application.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application.

Claims (11)

1. A control method for presenting status information, the method comprising:
responding to a mood state selection instruction input through a social information generation interface, and determining the mood state selected by the mood state selection instruction;
acquiring an avatar corresponding to the mood state, wherein the avatar is obtained by modifying a basic avatar of a user according to an expression model corresponding to the mood state;
and displaying the obtained virtual image.
2. The method according to claim 1, wherein the base avatar is established based on a preset bio-skeletal structure; the obtaining of the virtual image corresponding to the mood state includes:
obtaining an expression model corresponding to the mood state, wherein the expression model comprises skeleton variant data set aiming at the selected mood state;
and adjusting the skeleton of the basic avatar according to the skeleton variant data in the expression model, so that the basic avatar is deformed into the avatar corresponding to the mood state.
3. The method according to claim 2, wherein said obtaining the avatar corresponding to the mood status further comprises:
and if the expression model further comprises the motion state data of the skeleton set aiming at the selected mood state, controlling the skeleton motion of the virtual image corresponding to the mood state according to the motion state data.
4. A method according to claim 2 or 3, wherein the bone variant data comprises variant information of each bone, the variant information of each bone comprising one or any combination of the following information: position change information, angle change information, length change information; the adjustment of the skeleton of the basic virtual image according to the skeleton variant data in the expression model comprises one or any combination of the following modes:
adjusting the position of the corresponding skeleton in the basic virtual image according to the position change information of any skeleton in the expression model;
adjusting the angle of the corresponding skeleton in the basic virtual image according to the angle change information of any skeleton in the expression model;
and adjusting the length of the corresponding skeleton in the basic virtual image according to the length change information of any skeleton in the expression model.
5. The method according to claim 2 or 3, wherein before presenting the obtained avatar, the method further comprises:
responding to a sticker selection instruction input through a social information generation interface, and determining a sticker material selected by the sticker selection instruction and an adding position of the sticker material;
and adding the sticker materials to the corresponding positions of the virtual image according to the adding positions of the sticker materials.
6. The method according to any one of claims 1-3, wherein after presenting the obtained avatar, the method further comprises:
responding to a publication instruction input through a social information generation interface, and acquiring content input through the social information generation interface;
generating personal state information including the input contents and an avatar of the user;
and displaying the personal state information to a state information sharing interface, and sharing the personal state information to the social contact of the user.
7. The method of claim 6, further comprising:
responding to the operation of opening the state information sharing interface, and acquiring the personal state information of each social contact needing to display the personal state information;
and when the obtained personal state information of the social contact person comprises the virtual image of the social contact person, displaying the virtual image of the social contact person in a set display area on the state information sharing interface, wherein the virtual image of each social contact person is obtained by modifying the basic virtual image of the social contact person according to an expression model corresponding to the mood state selected by the social contact person.
8. The method according to any one of claims 1-3, wherein after presenting the obtained avatar, the method further comprises:
in response to an operation of opening a social interaction interface with a target contact, including the avatar of the user in personal image display information of the user so as to share the avatar of the user with the target contact;
acquiring personal image display information of the target contact; and
when the personal image display information of the target contact comprises the avatar of the target contact, displaying the avatar of the target contact in a set area of the social interaction interface, wherein the avatar of the target contact is obtained by modifying a basic avatar of the target contact according to an expression model corresponding to a mood state selected by the target contact.
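The two-way exchange of claim 8 (publish the user's avatar, fetch and show the target's) can be sketched as one function; `fetch_display_info` and the dict shapes are assumptions for illustration:

```python
def open_social_interaction(user_info, user_avatar, fetch_display_info, target):
    # Include the user's avatar in their personal image display information
    # so it is shared with the target contact ...
    user_info["avatar"] = user_avatar
    # ... then acquire the target's display information and return the
    # avatar to show in the set area of the interaction interface, if any.
    target_info = fetch_display_info(target)
    return target_info.get("avatar")

display_infos = {"bob": {"avatar": "bob-sad"}}
me = {}
shown = open_social_interaction(me, "me-happy", lambda c: display_infos.get(c, {}), "bob")
print(shown, me["avatar"])  # → bob-sad me-happy
```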
9. A control device for presenting status information, the device comprising:
a mood state determining unit, configured to determine, in response to a mood state selection instruction input through a social information generation interface, the mood state selected by the mood state selection instruction;
an avatar acquisition unit, configured to acquire an avatar corresponding to the mood state, wherein the avatar is obtained by modifying a basic avatar of the user according to an expression model corresponding to the mood state; and
an avatar presentation unit, configured to present the obtained avatar.
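One possible decomposition of the three units of claim 9 into a single controller class is sketched below; the class and method names are illustrative assumptions, not from the patent:

```python
class StateInfoController:
    """Groups the mood-state determining, avatar acquisition, and
    avatar presentation units of the device claim."""

    def __init__(self, basic_avatar, expression_models):
        self.basic_avatar = basic_avatar
        self.expression_models = expression_models  # mood state -> model

    def determine_mood_state(self, selection_instruction):
        # mood state determining unit
        return selection_instruction["mood"]

    def acquire_avatar(self, mood_state):
        # avatar acquisition unit: modify the basic avatar with the
        # expression model corresponding to the mood state
        return {"base": self.basic_avatar,
                "expression": self.expression_models[mood_state]}

    def present_avatar(self, avatar):
        # avatar presentation unit
        return f"avatar({avatar['base']}, {avatar['expression']})"

ctrl = StateInfoController("base-face", {"happy": "smile-model"})
mood = ctrl.determine_mood_state({"mood": "happy"})
print(ctrl.present_avatar(ctrl.acquire_avatar(mood)))  # → avatar(base-face, smile-model)
```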
10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method of any one of claims 1 to 8.
11. An electronic device comprising a memory and a processor, the memory storing a computer program operable on the processor, wherein the computer program, when executed by the processor, causes the processor to carry out the method of any one of claims 1 to 8.
CN201910927772.2A 2019-09-27 2019-09-27 Control method and device for displaying state information, electronic equipment and storage medium Active CN110717974B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910927772.2A CN110717974B (en) 2019-09-27 2019-09-27 Control method and device for displaying state information, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN110717974A (en) 2020-01-21
CN110717974B (en) 2023-06-09

Family

ID=69211098

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910927772.2A Active CN110717974B (en) 2019-09-27 2019-09-27 Control method and device for displaying state information, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110717974B (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201845196U (en) * 2010-08-24 2011-05-25 北京水晶石数字科技有限公司 Performance control system utilizing three-dimensional software
CN102571633A (en) * 2012-01-09 2012-07-11 华为技术有限公司 Method for demonstrating user state, demonstration terminal and server
WO2014201838A1 (en) * 2013-06-19 2014-12-24 腾讯科技(深圳)有限公司 Social information push method and system, push server and storage medium
CN105578110A (en) * 2015-11-19 2016-05-11 掌赢信息科技(上海)有限公司 Video call method, device and system
CN106204698A (en) * 2015-05-06 2016-12-07 北京蓝犀时空科技有限公司 Virtual image for independent assortment creation generates and uses the method and system of expression
CN107657651A (en) * 2017-08-28 2018-02-02 腾讯科技(上海)有限公司 Expression animation generation method and device, storage medium and electronic installation
CN107845126A (en) * 2017-11-21 2018-03-27 江西服装学院 A kind of three-dimensional animation manufacturing method and device
CN108874114A (en) * 2017-05-08 2018-11-23 腾讯科技(深圳)有限公司 Realize method, apparatus, computer equipment and the storage medium of virtual objects emotion expression service
CN108984087A (en) * 2017-06-02 2018-12-11 腾讯科技(深圳)有限公司 Social interaction method and device based on three-dimensional avatars
CN110135226A (en) * 2018-02-09 2019-08-16 腾讯科技(深圳)有限公司 Expression animation data processing method, device, computer equipment and storage medium


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112749357A (en) * 2020-09-15 2021-05-04 腾讯科技(深圳)有限公司 Interaction method and device based on shared content and computer equipment
CN112749357B (en) * 2020-09-15 2024-02-06 腾讯科技(深圳)有限公司 Interaction method and device based on shared content and computer equipment
CN114338577A (en) * 2020-10-12 2022-04-12 腾讯科技(深圳)有限公司 Information processing method and device, electronic equipment and storage medium
WO2023246852A1 (en) * 2022-06-22 2023-12-28 北京字跳网络技术有限公司 Virtual image publishing method and apparatus, electronic device, and storage medium
WO2024027285A1 (en) * 2022-08-04 2024-02-08 腾讯科技(深圳)有限公司 Facial expression processing method and apparatus, computer device and storage medium
CN115665507A (en) * 2022-12-26 2023-01-31 海马云(天津)信息技术有限公司 Method, apparatus, medium, and device for generating video stream data including avatar
CN115665507B (en) * 2022-12-26 2023-03-21 海马云(天津)信息技术有限公司 Method, apparatus, medium, and device for generating video stream data including avatar

Also Published As

Publication number Publication date
CN110717974B (en) 2023-06-09

Similar Documents

Publication Publication Date Title
CN110717974B (en) Control method and device for displaying state information, electronic equipment and storage medium
CN109885367B (en) Interactive chat implementation method, device, terminal and storage medium
US20210146255A1 (en) Emoji-based communications derived from facial features during game play
CN108874114B (en) Method and device for realizing emotion expression of virtual object, computer equipment and storage medium
TW201423419A (en) System and method for touch-based communications
WO2013120851A1 (en) Method for sharing emotions through the creation of three-dimensional avatars and their interaction through a cloud-based platform
JP2013156986A (en) Avatar service system and method provided through wired or wireless web
US11960792B2 (en) Communication assistance program, communication assistance method, communication assistance system, terminal device, and non-verbal expression program
US11238667B2 (en) Modification of animated characters
US11960784B2 (en) Shared augmented reality unboxing experience
CN111899319A (en) Expression generation method and device of animation object, storage medium and electronic equipment
KR20190071241A (en) Method and System for Providing Virtual Blind Date Service
Vilhjálmsson Avatar augmented online conversation
JP2009223419A (en) Creation editing method for avatar in network chat service, chat service system and creation editing method for image data
KR20200085029A (en) Avatar virtual pitting system
KR20090058760A (en) Avatar presenting method and computer readable medium processing the method
Weerasinghe et al. Emotion expression for affective social communication
Prendinger The Global Lab: Towards a virtual mobility platform for an eco-friendly society
US20240144569A1 (en) Danceability score generator
CN114430506B (en) Virtual action processing method and device, storage medium and electronic equipment
Perl Distributed Multi-User VR With Full-Body Avatars
JP2009037336A (en) Communication server device and method
Carretero et al. Preserving avatar genuineness in different display media
KR20170048863A (en) Method for providing storybook animation using ar
WO2024097553A1 (en) Danceability score generator

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code
Ref country code: HK; Ref legal event code: DE; Ref document number: 40021449; Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant