CN110717974B - Control method and device for displaying state information, electronic equipment and storage medium - Google Patents


Info

Publication number
CN110717974B
Authority
CN
China
Prior art keywords
user
virtual image
avatar
social
state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910927772.2A
Other languages
Chinese (zh)
Other versions
CN110717974A (en)
Inventor
毛竹
王珊珊
蒋玮楠
苏智威
马国伟
邱晓磊
王猛
田宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Cyber Tianjin Co Ltd
Original Assignee
Tencent Cyber Tianjin Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Cyber Tianjin Co Ltd filed Critical Tencent Cyber Tianjin Co Ltd
Priority to CN201910927772.2A priority Critical patent/CN110717974B/en
Publication of CN110717974A publication Critical patent/CN110717974A/en
Application granted granted Critical
Publication of CN110717974B publication Critical patent/CN110717974B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/01 - Social networking
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/04 - Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H04L51/046 - Interoperability with other network applications or services
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/07 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
    • H04L51/10 - Multimedia information
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Business, Economics & Management (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Architecture (AREA)
  • Geometry (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This application provides a control method and apparatus for displaying state information, an electronic device, and a storage medium, and belongs to the field of computer technology. While a social application is running, in response to a mood state selection instruction input through a social information generation interface, the mood state selected by the instruction is determined, and the avatar corresponding to that mood state is obtained and displayed. Because the avatar is obtained by modifying the user's base avatar according to the expression model corresponding to the mood state, the expression of the user's avatar can genuinely change, the user's mood state is truly displayed through the avatar, and the state-information display effect of the social application is improved.

Description

Control method and device for displaying state information, electronic equipment and storage medium
Technical Field
This application relates to the field of computers, in particular to voice control technology, and provides a control method and apparatus for displaying state information, an electronic device, and a storage medium.
Background
With the development of computer technology and communication technology, avatar modeling technology is increasingly applied in social applications. For example, the electronic device may obtain a photograph of the user, establish an avatar, such as a cartoon, of the user based on the photograph of the user, and present the avatar of the user as an avatar of the user on the social platform.
However, in the existing social application, the mood state of the user cannot be shown through the avatar of the user.
Disclosure of Invention
In order to solve the existing technical problems, embodiments of this application provide a control method and apparatus for displaying state information, an electronic device, and a storage medium, so that the mood state of a user can be displayed through the user's avatar.
In order to achieve the above purpose, the technical solution of the embodiments of the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a control method for displaying status information, where the method includes:
responding to a mood state selection instruction input through a social information generation interface, and determining a mood state selected by the mood state selection instruction;
acquiring an avatar corresponding to the mood state, where the avatar is obtained by modifying a base avatar of the user according to an expression model corresponding to the mood state;
and displaying the obtained avatar.
In a second aspect, an embodiment of the present application provides a control device for displaying status information, where the device includes:
a mood state determining unit, configured to determine, in response to a mood state selection instruction input through the social information generation interface, the mood state selected by the instruction;
an avatar acquisition unit, configured to acquire the avatar corresponding to the mood state, where the avatar is obtained by modifying the user's base avatar according to the expression model corresponding to the mood state;
and an avatar display unit, configured to display the obtained avatar.
In a third aspect, embodiments of this application provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the control method for displaying state information according to the first aspect.
In a fourth aspect, embodiments of this application provide an electronic device including a memory and a processor, where the memory stores a computer program executable on the processor; when the processor executes the program, the processor implements the control method for displaying state information according to the first aspect.
According to the control method and apparatus for displaying state information, the electronic device, and the storage medium provided above, when the social application is running, the mood state selected by a mood state selection instruction input through the social information generation interface is determined in response to that instruction, and the avatar corresponding to the mood state is obtained and displayed. Because the avatar is obtained by modifying the user's base avatar according to the expression model corresponding to the mood state, the expression of the user's avatar can genuinely change, the user's mood state is truly displayed through the avatar, and the state-information display effect of the social application is improved.
Drawings
To describe the technical solutions of the embodiments of this application more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of this application; a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a diagram of an application scenario of a control method for displaying state information according to an embodiment of this application;
Fig. 2 is a flowchart of a control method for displaying state information according to an embodiment of this application;
Fig. 3 is a schematic diagram of a mood state selection interface according to an embodiment of this application;
Fig. 4 is a schematic diagram of the skeleton structure of a base avatar according to an embodiment of this application;
Fig. 5 is a schematic diagram of the skeleton structure of an avatar corresponding to a mood state according to an embodiment of this application;
Fig. 6 is a schematic diagram of a smiling avatar obtained by modifying the base avatar according to an embodiment of this application;
Fig. 7 is a schematic diagram of an avatar for the mood state "laugh cry" according to an embodiment of this application;
Fig. 8 is a flowchart of another control method for displaying state information according to an embodiment of this application;
Fig. 9 is a schematic diagram of a social information generation interface according to an embodiment of this application;
Fig. 10 is a schematic diagram of a status information sharing interface according to an embodiment of this application;
Fig. 11 is a schematic diagram of a social interaction interface according to an embodiment of this application;
Fig. 12 is an interaction diagram of a control method for displaying state information according to an embodiment of this application;
Fig. 13 is an interaction diagram of another control method for displaying state information according to an embodiment of this application;
Fig. 14 is a structural block diagram of a control apparatus for displaying state information according to an embodiment of this application;
Fig. 15 is a structural block diagram of another control apparatus for displaying state information according to an embodiment of this application;
Fig. 16 is a structural block diagram of yet another control apparatus for displaying state information according to an embodiment of this application;
Fig. 17 is a structural block diagram of an electronic device according to an embodiment of this application.
Detailed Description
Some of the terms in the embodiments of the present application are explained below to facilitate understanding by those skilled in the art.
(1) Base avatar: the user's avatar created using avatar reconstruction technology; it may be a cartoon figure, a human figure, an animal figure, or another custom figure. For example, the avatar may be created from a photograph of the user so that the head of the avatar closely resembles the user's face. The base avatar may include only the head, or may include the head, torso, and other body parts. It may be a neutral figure without expression, and besides the figure itself it may include replaceable models for decorating it, such as hairstyles, clothing, and worn weapon props. The base avatar may be static or dynamic.
(2) Sticker material: material used to form an expression sticker. It can be attached to the face of an avatar to help the avatar exhibit different expressions, for example a tear sticker.
(3) Face pinching: the face of the avatar can be adjusted through preset expression templates to achieve "face pinching". The pinched avatar can express the user's emotion or behavior, such as liking or disliking someone. In general, an avatar with a pinched face has a comical quality and looks very lovable.
(4) Social application: a network application that links different users through friend relationships or common interests and enables social interaction between at least two users. Any application that implements social interaction functions qualifies, for example a chat application for multi-person chat, a game application through which game fans play together, or a game forum application through which game fans share game information. A social application typically provides different operation interfaces according to the functions to be implemented, for example a social information generation interface where the user enters content to publish, a status information sharing interface for sharing personal status information, and a social interaction interface for chatting with a selected target contact.
(5) Terminal device: an electronic device on which various applications, including social applications, can be installed and which can display the interfaces, and the objects within them, provided by the installed applications; it may be mobile or fixed. Examples include mobile phones, tablet computers, various wearable devices, vehicle-mounted devices, personal digital assistants (PDAs), and other electronic devices capable of the above functions.
The present application will now be described in further detail with reference to the accompanying drawings and specific examples.
The control method for displaying state information provided in the embodiments of this application may be applied to the application scenario shown in Fig. 1. As shown in Fig. 1, the service server 100 may be the service server of a social application and is communicatively connected through the network 200 to a plurality of terminal devices (such as terminal devices 301, 302, and 303). The network 200 may be, but is not limited to, a local area network, a metropolitan area network, or a wide area network. The terminal devices may exchange communication data and messages with one another through the network 200 and the service server 100. The terminal devices 301 to 303 may be portable devices (for example, mobile phones, tablet computers, or notebook computers), personal computers (PCs), smart screens, and the like. The service server 100 may be any device capable of providing Internet services; for example, it may be a cloud server and may comprise one or more servers.
The mood state display method provided by the embodiment of the application can be executed by the social application installed on the terminal equipment, and can also be executed by the service server 100 and the social application installed on the terminal equipment together. Each terminal device can be provided with a social application, and a plurality of terminals can be connected with the service server through the social application.
Taking terminal device 301 as an example, a user may log in to the social application installed on terminal device 301 and use it for daily communication and routine tasks. Each user has an identity on the social application that other users can recognize, i.e., a user identifier, such as a user account, a user nickname, or a phone number. In the social application, different users can establish friend relationships through mutual confirmation, for example by adding each other as friends or following each other. When two users establish a friend relationship, they become each other's social contacts. Each user has a social contact list and can communicate with the contacts in it through instant messaging messages and the like. A user's social contacts can see the dynamic information the user publishes through the status information sharing interface; this dynamic information includes the personal status information described below. For example, the status information sharing interface may be the friend circle display interface of the social application: the user publishes dynamic information to the friend circle, where it is displayed so that friends learn the user's real-time updates, and the user likewise learns friends' real-time updates through the friend circle. In some social applications, a group of users may form friend relationships with one another, forming a social group in which every member is a social contact of all other members, and the members of a social group can communicate with one another through the social application.
Each user has a personalized avatar in the social application, called the base avatar. For example, the social application may obtain a photograph of the user, create a base avatar from it using an avatar modeling tool, and save it on the terminal device or upload it to the service server 100, which stores it for the user. The base avatar can embody the user's personalized features. After the user logs in to the social application, the base avatar may be displayed in the interface or used as the user's profile image.
To enable a user's avatar to truly display the user's mood state, the embodiments of this application provide a control method and apparatus for displaying state information, an electronic device, and a storage medium. Because the avatar is obtained by modifying the current user's base avatar according to the expression model corresponding to the mood state, the expression of the user's avatar can genuinely change, and the user's mood state is truly displayed through the avatar.
Fig. 2 shows a flowchart of a control method for displaying status information according to an embodiment of the present application, where, as shown in fig. 2, the method may include the following steps:
step S201, in response to the mood state selection instruction input through the social information generation interface, determining the mood state selected by the mood state selection instruction.
As shown in fig. 3, in the social application of the embodiments of this application, the social information generation interface displays marks of a plurality of mood states for the user to select, such as "surprised", "depressed", "happy", and "angry". A single or double click by the user on the mark of any mood state is treated as the user selecting that mood state. In response to the mood state selection instruction input by the user through the social information generation interface, the mood state selected by the instruction is determined. For example, when the user clicks the mark of the mood state "a little depressed", it is determined that the mood state selected by the user's selection instruction is "a little depressed".
Step S202, obtaining the avatar corresponding to the selected mood state.
The avatar corresponding to the mood state is obtained by modifying the user's base avatar according to the expression model corresponding to that mood state, and it has an expression and demeanor that can show the user's mood state.
In some embodiments, for the current user's base avatar, the avatar corresponding to each mood state may be generated and stored in advance according to the expression model corresponding to that mood state. After the mood state selection instruction input by the user is received, the avatar corresponding to the selected mood state is retrieved from the pre-stored avatars.
In other embodiments, the expression model corresponding to each mood state may be pre-stored. After the mood state selection instruction input by the user is received, the expression model corresponding to the selected mood state is obtained, and the current user's base avatar is modified according to it to obtain the avatar corresponding to the mood state.
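The two retrieval strategies above, returning a pre-generated avatar versus modifying the base avatar on demand, can be sketched together as a cache-backed lookup. This is a hypothetical illustration, not the patent's implementation: names such as `AvatarStore` and `apply_expression_model` are assumptions, and `apply_expression_model` is a stub standing in for the bone-modification step detailed later in the description.

```python
class AvatarStore:
    """Hypothetical per-user store of mood-state avatars (illustrative only)."""

    def __init__(self, base_avatar, expression_models):
        self.base_avatar = base_avatar              # the user's neutral base avatar
        self.expression_models = expression_models  # mood state -> expression model
        self._cache = {}                            # mood state -> generated avatar

    def get_avatar(self, mood):
        # Strategy 1: return the avatar if it was generated and stored in advance.
        if mood in self._cache:
            return self._cache[mood]
        # Strategy 2: modify the base avatar on demand using the expression
        # model for the selected mood state, then cache the result.
        model = self.expression_models[mood]
        avatar = apply_expression_model(self.base_avatar, model)
        self._cache[mood] = avatar
        return avatar


def apply_expression_model(base_avatar, model):
    # Placeholder for the skeleton-modification procedure described later.
    return {"base": base_avatar, "expression": model}
```

Either strategy yields the same avatar; pre-generation trades storage for lower latency at selection time.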
And step S203, displaying the obtained avatar.
For example, the obtained avatar with an expression may be used in place of the base avatar as the user's avatar, and displayed in a set area of the social information generation interface.
According to the control method for displaying state information provided above, when the social application is running, the mood state selected by a mood state selection instruction input through the social information generation interface is determined in response to that instruction, and the avatar corresponding to the mood state is obtained and displayed. Because the avatar is obtained by modifying the user's base avatar according to the expression model corresponding to the mood state, the expression of the user's avatar can genuinely change, the user's mood state is truly displayed through the avatar, the user's emotion is expressed more vividly, and the state-information display effect of the social application is improved.
In an alternative embodiment, the base avatar created from the user's photograph may be built on a preset biological skeleton structure. Personalized features of the user are extracted from the photograph through image recognition; the weights of the bones in the preset biological skeleton structure are adjusted according to these features to obtain the user's personalized skeleton; and skin, facial features, and other appearance data are then applied over the personalized skeleton to generate the user's base avatar. The biological skeleton structure may be a human skeleton, or the skeleton of another small animal suited to a cartoon figure; the description below uses a human skeleton.
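As a hedged sketch of the weight-adjustment idea just described: measurements extracted from the photograph reweight the bones of the preset skeleton before appearance data are applied. The bone names, the scalar-weight representation, and the function name are all illustrative assumptions, not from the patent.

```python
# Preset skeleton weights before personalization (bone names are hypothetical).
PRESET_WEIGHTS = {"jaw": 1.0, "cheek": 1.0, "brow": 1.0}


def personalize_skeleton(preset_weights, features):
    """Scale each preset bone weight by the user's relative measurement.

    features: bone name -> relative measurement from the photo (1.0 = average);
    bones without a measurement keep their preset weight.
    """
    return {bone: w * features.get(bone, 1.0) for bone, w in preset_weights.items()}


# e.g. a photo suggesting a slightly wider jaw and narrower brow:
weights = personalize_skeleton(PRESET_WEIGHTS, {"jaw": 1.15, "brow": 0.9})
```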
In some embodiments, as shown in fig. 4, the biological skeleton structure of the base avatar may include a predetermined number of bones; for example, the base avatar may include 121 head bones, or 192 bones covering the head, trunk, and limbs. Each bone has a unique number. The expression model corresponding to each mood state includes bone modification data set for that mood state; the bone modification data are used to modify the bones that need modification in each user's base avatar, which may be some or all of the bones. Illustratively, as shown in Table 1, the bone modification data may include, for each bone to be modified, one or any combination of the following: position change information, angle change information, and length change information. A "-" in Table 1 indicates that the corresponding information is absent.
TABLE 1
(Table 1 appears as an image in the original publication; it lists, for each bone number, the position change, angle change, and length change information, with "-" marking absent entries.)
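A minimal, hypothetical encoding of the bone modification data described for Table 1: per bone number, optional position, angle, and length entries, with `None` standing in for the "-" (absent) cells. The field names and example values are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class BoneModification:
    """One row of bone modification data; None corresponds to "-" in Table 1."""
    bone_id: int
    position_delta: Optional[Tuple[float, float, float]] = None  # position change
    angle_delta: Optional[float] = None    # angle change, in degrees
    length_scale: Optional[float] = None   # multiplicative length change


# Example rows in the spirit of Table 1 for a "happy" expression model
# (bone numbers follow the fig. 5 example; values are illustrative):
happy_model = [
    BoneModification(155, angle_delta=12.0, length_scale=1.1),
    BoneModification(156, angle_delta=-12.0, length_scale=1.1),
    BoneModification(157, length_scale=0.9),
    BoneModification(158, length_scale=0.9),
]
```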
In step S202 above, the avatar corresponding to any mood state may be generated as follows. The expression model corresponding to the mood state is obtained; expression models may be stored on the terminal device or on the server. The bones of the base avatar are then adjusted according to the bone modification data in the obtained expression model, so that the base avatar is modified into the avatar corresponding to the mood state. The bones of the base avatar may be adjusted by one or any combination of the following means: adjusting the position of the corresponding bone in the base avatar according to the position change information for that bone in the expression model; adjusting the angle of the corresponding bone according to the angle change information; and adjusting the length of the corresponding bone according to the length change information. For example, some bones may need only a length adjustment, others a change of angle and position, and still others changes of position, angle, and length. Through these adjustments, the resulting avatar can simulate a human facial expression. Illustratively, as shown in fig. 5, by adjusting the angles and lengths of bones numbered 155 and 156 and the lengths of bones numbered 157 and 158, the user's base avatar can be changed into the avatar corresponding to the mood state "happy": the neutral, expressionless base avatar becomes the smiling avatar shown in fig. 6. Face pinching and the like can also be achieved by adjusting the bones of the base avatar.
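The three adjustment operations above can be sketched as a single pass over the bone modification data: for each modified bone, whichever of the position, angle, and length entries is present is applied. This is a hedged illustration; the dictionary-based bone representation and field names are assumptions, not the patent's data format.

```python
def adjust_bones(bones, modifications):
    """Apply expression-model bone modifications to a base-avatar skeleton.

    bones: dict bone_id -> {"pos": [x, y, z], "angle": degrees, "length": l}
    modifications: list of dicts with optional position/angle/length entries.
    """
    for mod in modifications:
        bone = bones[mod["bone_id"]]
        if mod.get("position_delta") is not None:
            bone["pos"] = [p + d for p, d in zip(bone["pos"], mod["position_delta"])]
        if mod.get("angle_delta") is not None:
            bone["angle"] += mod["angle_delta"]
        if mod.get("length_scale") is not None:
            bone["length"] *= mod["length_scale"]
    return bones


# e.g. tilting a neutral bone (illustratively, no. 155 from fig. 5) for "happy":
bones = {155: {"pos": [0.0, 0.0, 0.0], "angle": 0.0, "length": 1.0}}
adjust_bones(bones, [{"bone_id": 155, "angle_delta": 12.0, "length_scale": 1.1}])
```

Only the entries present in the modification are applied, matching the "one or any combination" behavior described above.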
In other embodiments, the biological skeleton structure of the base avatar may include, in addition to a predetermined number of bones, a predetermined number of nodes, where a node is a connection position between bones, corresponding to a joint of the human body. Correspondingly, the bone modification data of the expression model include modification data for each bone and for each node, and the bones and nodes of the base avatar can be adjusted according to the bone modification data of different expression models to obtain the avatar corresponding to each mood state.
In other embodiments, the expression model corresponding to each mood state may include, in addition to bone modification data, motion state data set for that mood state. For example, the motion state data of a bone may include the amplitude and the number of repetitions of the bone's motion. Optionally, the motion state data may also include motion state data of nodes, such as the number of pitch-angle changes of a node; for example, changing the pitch angle of a knee node can produce a jumping motion. The bones of the base avatar are adjusted according to the bone modification data in the expression model to obtain the avatar corresponding to the mood state, and the bone motion of that avatar is then controlled according to the motion state data. An avatar obtained through these steps can simulate human actions such as jumping for joy or applauding, and dynamic face pinching and the like can be achieved in the same way. By controlling the avatar's bone motion, a dynamic avatar can also be obtained.
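One hypothetical way to drive such a dynamic avatar from motion state data: expand an (amplitude, repetition-count) pair into per-frame angle offsets as a simple oscillation. The keyframe representation and the sine-based interpolation are illustrative assumptions; the patent only specifies that amplitude and repetition count are stored.

```python
import math


def keyframes(amplitude_deg, repetitions, frames_per_cycle=8):
    """Expand motion state data into per-frame angle offsets.

    Returns `repetitions` full oscillations of the given amplitude,
    sampled at `frames_per_cycle` frames per oscillation.
    """
    frames = []
    for i in range(repetitions * frames_per_cycle):
        phase = 2 * math.pi * i / frames_per_cycle
        frames.append(amplitude_deg * math.sin(phase))
    return frames


# e.g. two pitch oscillations of a (hypothetical) knee node to sketch a jump:
jump = keyframes(amplitude_deg=30.0, repetitions=2)
```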
In other embodiments, the avatars corresponding to some mood states can be made more realistic and more expressive by adding stickers. Sticker patterns can be displayed on the social information generation interface, and the user selects the corresponding sticker by clicking its pattern. In response to the sticker selection instruction input through the social information generation interface, the sticker material selected by the instruction and the add position of the sticker material are determined; the add position is defined relative to the positions of certain bones in the avatar's biological skeleton structure, so that it captures the sticker's position relative to the avatar. For example, the expression model corresponding to the mood state "laugh cry" includes a tear sticker. If the expression model further includes a sticker material and its add position, the sticker material is added at the corresponding position of the avatar according to that add position, as shown in fig. 7. Optionally, besides sticker patterns expressing mood, other sticker patterns may be displayed on the social information generation interface, for example decorative stickers that the user selects according to personal preference; this application does not limit this.
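The bone-relative add position described above can be sketched as an offset from a reference bone, so that the sticker stays attached to the avatar's face wherever that bone ends up. The field names, the 2D representation, and the bone number are illustrative assumptions.

```python
def sticker_world_position(bone_positions, sticker):
    """Resolve a sticker's bone-relative add position to avatar coordinates.

    bone_positions: dict bone_id -> (x, y)
    sticker: {"material": str, "ref_bone": bone_id, "offset": (dx, dy)}
    """
    bx, by = bone_positions[sticker["ref_bone"]]
    dx, dy = sticker["offset"]
    return (bx + dx, by + dy)


# e.g. a tear sticker anchored just below a hypothetical lower-eyelid bone
# (no. 142 is an invented number), landing at roughly (0.30, 1.15):
tear = {"material": "tear", "ref_bone": 142, "offset": (0.0, -0.05)}
pos = sticker_world_position({142: (0.30, 1.20)}, tear)
```

Because the position is stored relative to a bone rather than in screen coordinates, the same sticker definition works for avatars whose skeletons differ after personalization.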
The step of generating the avatar corresponding to the mood state may be performed by the terminal device or a social application installed on the terminal device, or may be performed by a service server.
Specifically, if the terminal device performs the step of generating the avatar corresponding to the mood state, the terminal device may obtain the expression model corresponding to the mood state locally or from the service server. For example, after determining the mood state selected by the user's selection instruction, the terminal device may send the service server an expression model acquisition request carrying the selected mood state. The service server sends the corresponding expression model to the terminal device, and the terminal device modifies the current user's base avatar according to that model to obtain the avatar corresponding to the mood state. The terminal device may generate and store the avatars corresponding to all mood states when the user logs in to or registers with the social application, or it may generate the avatar for a mood state anew after receiving the user's selection instruction.
If the service server performs the step of generating the avatar corresponding to the mood state, the terminal device, upon receiving the mood state selection instruction input by the user, sends an avatar acquisition request to the service server, where the request carries the mood state selected by the instruction. The service server obtains the user's basic avatar locally or from the terminal device. For example, the avatar acquisition request sent by the terminal device may include user identification information such as a user name, and the service server may search locally for the user's basic avatar according to that information; if the server does not find it, it may notify the terminal device to transmit the user's basic avatar to the server. The service server then determines the expression model corresponding to the mood state selected by the instruction, modifies the current user's basic avatar according to that expression model to obtain the avatar corresponding to the mood state, and sends the obtained avatar to the terminal device. It should be noted that the service server may also generate and store the avatars corresponding to all of the user's mood states when the user logs in to the social application or registers; upon receiving an avatar acquisition request from the terminal device, it can then directly send the corresponding avatar according to the user identification information carried in the request and the mood state selected by the user.
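The server-side path just described can be pictured with dictionary-based stand-ins for the stores and the expression models (every identifier here is a hypothetical illustration, not the patented implementation):

```python
# Hypothetical in-memory stores on the service server.
BASE_AVATARS = {"user_42": {"mouth": {"angle": 0}, "brow_left": {"angle": 0}}}
EXPRESSION_MODELS = {"laugh_cry": {"mouth": {"angle": 15}, "brow_left": {"angle": -10}}}
AVATAR_CACHE = {}  # (user_id, mood) -> avatar; may be pre-filled at login or registration

def handle_avatar_request(user_id, mood):
    """Serve an avatar acquisition request carrying the user identifier and mood state."""
    key = (user_id, mood)
    if key not in AVATAR_CACHE:
        base = BASE_AVATARS.get(user_id)
        if base is None:
            # Here the server would notify the terminal to upload the base avatar.
            raise LookupError("base avatar not found for " + user_id)
        avatar = {bone: dict(params) for bone, params in base.items()}  # copy the base
        for bone, changes in EXPRESSION_MODELS[mood].items():
            avatar[bone].update(changes)  # modify only the bones the model names
        AVATAR_CACHE[key] = avatar
    return AVATAR_CACHE[key]

print(handle_avatar_request("user_42", "laugh_cry")["mouth"]["angle"])  # 15
```

The cache models the alternative where avatars for every mood state are generated once at login or registration; a cold cache models on-demand generation.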
To let the user's social friends know the user's mood state and thus improve the communication effect of this social interaction mode, the social application may also share the user's personal status information with those friends. For example, a user may publish his or her personal status information to a circle of friends in real time.
In some embodiments, as shown in fig. 8, the control method for presenting the status information may include the following steps:
Step S801, in response to a mood state selection instruction input through the social information generation interface, the mood state selected by the instruction is determined.
Step S802, obtaining the virtual image corresponding to the mood state.
The virtual image is obtained by modifying the basic virtual image of the current user according to the expression model corresponding to the selected mood state.
Step S803, the obtained avatar is displayed.
As shown in fig. 9, the acquired avatar may be presented in the avatar display area of the current user.
Step S804, responding to the posting instruction input by the user through the social information generating interface, and acquiring the content input through the social information generating interface.
As shown in fig. 9, a content editing area is provided on the social information generation interface, and the user can input the desired content there. The interface also provides a posting button; after editing the content to be posted, the user can click the posting button to post it. Upon receiving the posting instruction input by the user through the social information generation interface, the content input by the user through the interface is acquired; the content may include picture content or text content.
Step S805 generates personal status information including the input content and the avatar of the user.
Step S806, the personal status information is displayed on the status information sharing interface, and the personal status information is shared with the social contacts of the user.
The personal status information containing the avatar can be displayed on a status information sharing interface. The status information sharing interface may be a friend-circle display interface, which can show both the personal status information published by the user and that published by the user's social friends. The personal status information containing the avatar may also be sent to the service server so that the server shares it with the user's social contacts, who can then see the information published by the user through the status information sharing interface, as shown in fig. 10.
In some embodiments, different background patterns may be set for different mood states: a background pattern corresponding to the mood state selected by the state selection instruction may be obtained and displayed on the social information generation interface. Specifically, the background pattern may serve as the background of the content editing area, with the text or picture content input by the user displayed over it; alternatively, it may serve as the background of the avatar, displayed in the region where the avatar is shown, with the avatar drawn above it. The background pattern may also be included in the personal status information shared in step S806.
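Steps S805-S806 together with the per-mood background pattern can be pictured as assembling a simple record; the field names and mood-to-pattern mapping below are illustrative assumptions only:

```python
MOOD_BACKGROUNDS = {"laugh_cry": "bg_confetti", "sad": "bg_rain"}  # hypothetical mapping

def build_personal_status(user_id, content, avatar_id, mood=None):
    """Assemble the personal status record shared to the user's social contacts."""
    status = {"user": user_id, "content": content, "avatar": avatar_id}
    background = MOOD_BACKGROUNDS.get(mood)
    if background is not None:
        status["background"] = background  # different mood states map to different patterns
    return status

feed = []  # server-side list backing the status information sharing interface
feed.append(build_personal_status("user_42", "Passed the exam!",
                                  "avatar_laugh_cry", "laugh_cry"))
print(feed[0]["background"])  # bg_confetti
```

Keeping the avatar and background as identifiers rather than inline image data would let the sharing interface fetch and render them separately.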
In some embodiments, the user may also send personal status information to a social chat group. For example, in an instant messaging application, the social chat group also corresponds to a social information generation interface. The content input by the user through this interface is acquired and, in response to a posting instruction input through the interface, the content containing the avatar is displayed on the status information sharing interface; here the status information sharing interface is the chat interface of the social chat group. The content containing the avatar is also sent to the service server as personal status information, so that the service server shares it with every member of the social chat group.
In some embodiments, the user may also see the personal status information of social contacts in a circle of friends. For example, the user may open the status information sharing interface through the friend-circle entry, such as the exemplary interface shown in fig. 10. In response to the operation of opening the status information sharing interface, the personal status information of each social contact whose information needs to be displayed is obtained from the service server; such a social contact is a social friend whom the user follows, or a social friend who allows the user to view his or her circle of friends. When the obtained personal status information of a social contact contains that contact's avatar, the avatar is displayed in a set display area on the status information sharing interface; the avatar of each social contact is obtained by modifying the contact's basic avatar according to the expression model corresponding to the mood state selected by that contact. The set display area may be the area on the status information sharing interface used for displaying the contact's personal status information.
In other embodiments, while the user chats with a social friend through the social application, the chat interface may also display, according to the settings, the personal image of the user or of the other party; the other party is referred to below as the target contact. Specifically, during the chat, the avatar may be included in the current user's personal image display information and sent to the service server, so that the server forwards it to the target contact. As shown in fig. 11, when the user opens a chat interface (social interaction interface) with a certain social friend, that friend is the target contact. In response to the operation of opening the social interaction interface with the target contact, the target contact's personal image display information is obtained from the service server; when it contains the target contact's avatar, the avatar is displayed in a set area of the social interaction interface (in fig. 11, on the right side of the interface). The target contact's avatar is obtained by modifying the contact's basic avatar according to the expression model corresponding to the mood state selected by that contact.
For easier understanding, the following description describes a control method for displaying status information from the point of view of interaction between a terminal device and a service server. The terminal device has a social application installed thereon, and actions performed by the terminal device may also be considered as performed by the social application. In one embodiment, as shown in fig. 12, the control method for displaying status information includes the following steps:
in step S1201, the terminal device determines the mood state selected by the mood state selection instruction in response to the mood state selection instruction input through the social information generation interface.
In step S1202, the terminal device sends an avatar acquisition request to the service server, where the avatar acquisition request carries the mood state selected by the mood state selection instruction and the user identifier of the current user.
In step S1203, the service server determines an expression model corresponding to the mood state, and obtains a basic avatar of the current user according to the user identifier of the current user.
In step S1204, the business server modifies the basic avatar of the current user according to the expression model corresponding to the mood state, and generates the avatar corresponding to the mood state selected by the current user.
The generating of the avatar corresponding to the mood state selected by the current user by the service server may be performed with reference to the method described above, and will not be described herein.
In step S1205, the service server transmits the generated avatar to the terminal device.
In step S1206, the terminal device displays the received avatar.
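The S1201-S1206 exchange can be mimicked with two small classes standing in for the terminal device and the service server; this is a hedged sketch with invented names and data, not the patented implementation:

```python
class ServiceServer:
    """Holds expression models and base avatars; answers avatar acquisition requests."""
    def __init__(self, models, base_avatars):
        self.models = models
        self.base_avatars = base_avatars

    def get_avatar(self, user_id, mood):  # steps S1203-S1205
        avatar = dict(self.base_avatars[user_id])  # copy the base avatar
        avatar.update(self.models[mood])           # apply the mood's expression model
        return avatar

class Terminal:
    def __init__(self, server, user_id):
        self.server = server
        self.user_id = user_id
        self.displayed = None

    def select_mood(self, mood):  # steps S1201-S1202 and S1206
        self.displayed = self.server.get_avatar(self.user_id, mood)
        return self.displayed

server = ServiceServer({"sad": {"mouth_angle": -20}},
                       {"u1": {"mouth_angle": 0, "eye_scale": 1.0}})
terminal = Terminal(server, "u1")
print(terminal.select_mood("sad"))  # {'mouth_angle': -20, 'eye_scale': 1.0}
```

In a real deployment the `get_avatar` call would of course be a network request carrying the mood state and the user identifier, as the flow above describes.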
In another embodiment, as shown in fig. 13, a social contact refers to a social friend that a user confirms with both parties in a certain social application. The control method for displaying the state information comprises the following steps:
in step S1301, the terminal device of the user determines the mood state selected by the mood state selection instruction in response to the mood state selection instruction input through the social information generation interface.
In step S1302, the user' S terminal device acquires and displays an avatar corresponding to the mood state.
The terminal equipment of the user can locally acquire an expression model corresponding to the mood state, and the basic virtual image of the current user is modified according to the expression model corresponding to the mood state to obtain the virtual image corresponding to the mood state. The terminal device of the user can generate and store the virtual image corresponding to each mood state when the user logs in the social application or the user registers, and can also regenerate the virtual image corresponding to the mood state after receiving a mood state selection instruction input by the user.
In step S1303, the terminal device of the user obtains the content input through the social information generating interface in response to the posting instruction input through the social information generating interface.
In step S1304, the user 'S terminal device generates personal status information including the input content and the user' S avatar, and displays the personal status information on the status information sharing interface.
In step S1305, the user' S terminal device transmits the contents including the avatar as personal status information to the service server.
In step S1306, the service server sends the personal status information of the user to the terminal devices of the social contacts of the user, respectively.
When a social contact opens the status information sharing interface through his or her terminal device, the user's personal status information is displayed on that interface, including the avatar corresponding to the mood state selected by the user, so the social contact can intuitively see the user's expression and quickly grasp the user's mood state.
Optionally, the method shown in fig. 13 may further include the steps of:
in step S1307, the terminal device of the user receives the operation of opening the status information sharing interface by the user.
In step S1308, the terminal device of the user sends a status information acquisition request to the service server, where the status information acquisition request carries the identification information of the user.
In step S1309, the service server obtains personal status information of each social contact who needs to display the personal status information.
The service server determines the social contacts contained in the user's contact list according to the user's identification information, obtains, in order of publishing time, the social contacts who have published personal status information, treats them as the target objects whose personal status information needs to be displayed, and obtains the personal status information of each target object.
In some embodiments, the social contact who needs to display personal status information may be a social contact who publishes personal status information in the communication list of the user and allows the user to see the personal status information.
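Selecting which contacts' statuses to return in steps S1309-S1310 then amounts to a visibility filter ordered by publishing time; a hypothetical sketch (data shapes invented for illustration):

```python
def statuses_for_user(viewer, contact_list, published, visibility):
    """
    published:  contact -> (publish_time, status) for contacts who published status info
    visibility: contact -> set of users allowed to see that contact's status
    Returns (contact, status) pairs newest-first, restricted to what the viewer may see.
    """
    visible = [(published[c][0], c, published[c][1])
               for c in contact_list
               if c in published and viewer in visibility.get(c, set())]
    visible.sort(reverse=True)  # order by publishing time, newest first
    return [(c, s) for _, c, s in visible]

published = {"bob": (10, "bob's day"), "carol": (20, "carol's trip"), "dave": (5, "hidden")}
visibility = {"bob": {"alice"}, "carol": {"alice"}, "dave": set()}
print(statuses_for_user("alice", ["bob", "carol", "dave"], published, visibility))
# [('carol', "carol's trip"), ('bob', "bob's day")]
```

Contacts who published nothing, or who do not allow this viewer to see their circle of friends, are simply filtered out before sorting.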
In step S1310, the service server sends the personal status information of each social contact person that needs to display the personal status information to the terminal device of the user.
In step S1311, the terminal device of the user displays the personal status information of each social contact on the status information sharing interface.
If the obtained personal status information of a social contact contains that contact's avatar, the avatar is displayed in the area of the status information sharing interface used for displaying the contact's personal status information. The avatar of each target object is obtained by modifying the target object's basic avatar according to the expression model corresponding to the mood state selected by that target object.
In this method, each user has a basic avatar in the social application. When the user selects a mood state, that state can be reflected on the user's basic avatar: by adjusting the basic avatar according to the corresponding expression model, other users see an avatar whose expression truly conveys the selected mood state.
Corresponding to the embodiments of the control method for displaying status information, the embodiments of the present application also provide a control device for displaying status information. Fig. 14 is a schematic structural diagram of a control device for displaying status information according to an embodiment of the present disclosure; as shown in fig. 14, the device includes a mood state determining unit 141, an avatar obtaining unit 142, and an avatar display unit 143, wherein:
a mood state determining unit 141, configured to determine a mood state selected by a mood state selection instruction in response to the mood state selection instruction input through a social information generating interface;
An avatar obtaining unit 142, configured to obtain an avatar corresponding to the mood state, where the avatar is obtained by modifying a basic avatar of the user according to an expression model corresponding to the mood state;
and an avatar display unit 143 for displaying the acquired avatar.
In one possible implementation, the base avatar is established based on a preset biological bone structure, and the avatar acquisition unit 142 may be further configured to: acquire the expression model corresponding to the mood state, where the expression model includes bone modification data set for the selected mood state; and adjust the bones of the basic avatar according to the bone modification data in the expression model, so as to modify the basic avatar into the avatar corresponding to the mood state.
In one possible implementation, the avatar acquisition unit 142 may be further configured to: and if the expression model further comprises skeleton motion state data set for the selected mood state, controlling the skeleton motion of the avatar corresponding to the mood state according to the motion state data.
In one possible implementation, the avatar acquisition unit 142 may be further configured to: the bone modification data comprises modification information of each bone, and the modification information of each bone comprises one or any combination of the following information: position change information, angle change information, and length change information; the adjusting the skeleton of the basic avatar according to the skeleton variation data in the expression model comprises one or any combination of the following modes: adjusting the position of the corresponding skeleton in the basic virtual image according to the position change information of any skeleton in the expression model; adjusting the angle of the corresponding skeleton in the basic virtual image according to the angle change information of any skeleton in the expression model; and adjusting the length of the corresponding skeleton in the basic virtual image according to the length change information of any skeleton in the expression model.
In one possible implementation, the avatar acquisition unit 142 may be further configured to: responding to a sticker selection instruction input through a social information generation interface, and determining a sticker material selected by the sticker selection instruction and an adding position of the sticker material; and adding the sticker material to the corresponding position of the avatar according to the adding position of the sticker material.
In a possible implementation manner, as shown in fig. 15, the apparatus may further include a status information sharing unit 151, configured to: responding to a posting instruction input through the social information generating interface, and acquiring content input through the social information generating interface; generating personal status information including the input content and the user's avatar; and displaying the personal state information on a state information sharing interface, and sharing the personal state information to social contacts of the user.
In one possible implementation manner, as shown in fig. 15, the apparatus may further include a status information display unit 152, configured to: responding to the operation of opening the status information sharing interface, and acquiring the personal status information of each social contact person needing to display the personal status information; when the obtained personal state information of the social contact contains the virtual image of the social contact, the virtual image of the social contact is displayed in a set display area on the state information sharing interface, and the virtual image of each social contact is obtained by modifying the basic virtual image of the social contact according to the expression model corresponding to the mood state selected by the social contact.
In a possible implementation manner, as shown in fig. 16, the apparatus may further include a social interaction unit 161, configured to: responding to the operation of opening a social interaction interface with the target contact person, and enabling the virtual image of the user to be contained in personal image display information of the user and shared with the target contact person; acquiring personal image display information of the target contact person; when the personal image display information of the target contact person comprises the virtual image of the target contact person, the virtual image of the target contact person is displayed in a set area of the social interaction interface, and the virtual image of the target contact person is obtained by modifying the basic virtual image of the target contact person according to an expression model corresponding to the mood state selected by the target contact person.
When the social application runs, the control device for displaying the state information responds to the mood state selection instruction input through the social information generation interface, determines the mood state selected by the mood state selection instruction, and acquires and displays the virtual image corresponding to the mood state. The virtual image is obtained by modifying the basic virtual image of the user according to the expression model corresponding to the mood state, so that the expression of the virtual image of the user can be truly changed, the mood state of the user is truly displayed through the virtual image of the user, and the state information display effect of the social application is improved.
Corresponding to the embodiments of the control method for displaying status information, the embodiments of the present application also provide an electronic device. The electronic device may be the terminal device shown in fig. 1, for example a smart phone, tablet computer, portable computer, or PC, and includes at least a memory for storing data and a processor for data processing. The processor used for data processing may be implemented by a microprocessor, CPU, DSP, or FPGA when executing the processing; the memory contains operation instructions, which may be computer-executable code, and these instructions implement each step of the flow of the control method for displaying status information in the embodiments of the present application.
Fig. 17 is a schematic structural diagram of an electronic device according to an embodiment of the present application; as shown in fig. 17, the electronic device 170 in the embodiment of the present application includes: a processor 171, a display 172, a memory 173, a communication device 174, a bus 175, and an input device 176. The processor 171, the memory 173, the input device 176, the display 172 and the communication device 174 are all connected through a bus 175, and the bus 175 is used to transmit data between the processor 171, the memory 173, the display 172 and the communication device 174.
The memory 173 stores a computer storage medium in which computer-executable instructions are stored for implementing the control method for displaying status information according to the embodiments of the present application. The processor 171 executes the control method for displaying status information and displays the user's avatar and personal status information on the display 172; it is connected to the service server through the communication device 174 for data transmission.
The input device 176 is mainly used for acquiring input operations of a user, and when the electronic devices are different, the input device 176 may be different. For example, when the electronic device is a PC, the input device 176 may be a mouse, keyboard, or other input device; when the electronic device is a portable device such as a smart phone, tablet computer, etc., the input device 176 may be a touch screen.
The embodiment of the application also provides a computer storage medium, wherein the computer storage medium stores computer executable instructions for realizing the control method for displaying the state information in any embodiment of the application.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of the units is only a division by logical function, and other divisions are possible in practice, such as combining multiple units or components, integrating them into another system, or omitting or not performing some features. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described as separate units may or may not be physically separate, and units displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated in one unit; the integrated units may be implemented in hardware or in hardware plus software functional units.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application.

Claims (10)

1. A control method for displaying status information, the method comprising:
acquiring a photo of a user, and extracting personalized features of the user from the photo of the user through image recognition;
According to the personalized features of the user, the weight of each bone in a preset biological bone structure is adjusted to obtain the personalized biological bone structure of the user;
overlaying appearance data on the personalized biological bone structure of the user to generate a basic avatar of the user; the basic avatar is a neutral, expressionless image;
responding to a mood state selection instruction input through a social information generation interface, and determining the mood state selected by the mood state selection instruction; the social information generation interface displays marks of a plurality of selectable mood states; each mood state corresponds to an expression model, and each expression model corresponding to a mood state comprises bone modification data set for that mood state, wherein the bone modification data are used for modifying the bones needing modification in the basic avatar of the user;
acquiring the avatar corresponding to the mood state, wherein the avatar is obtained by modifying the basic avatar of the user according to the expression model corresponding to the mood state; the avatar has an expression and look for exhibiting the selected mood state;
Responding to a sticker selection instruction input through a social information generation interface, and determining a sticker material selected by the sticker selection instruction and an adding position of the sticker material; the sticker material comprises a sticker pattern for representing mood, or a sticker pattern for decoration, or other sticker patterns;
adding the sticker material to a corresponding position of the avatar according to the addition position of the sticker material;
displaying the virtual image; acquiring a background pattern corresponding to the mood state, and displaying the background pattern in a region for displaying the virtual image so as to take the background pattern as the background of the virtual image; wherein, different mood states correspond to different background patterns.
2. The method of claim 1, wherein the base avatar is established based on a preset biological bone structure; the obtaining the avatar corresponding to the mood state comprises:
acquiring an expression model corresponding to the mood state, wherein the expression model comprises bone modification data set for the selected mood state;
and adjusting bones of the basic avatar according to the bone modification data in the expression model, so as to modify the basic avatar into the avatar corresponding to the mood state.
3. The method of claim 2, wherein the acquiring the avatar corresponding to the mood state further comprises:
and if the expression model further comprises skeleton motion state data set for the selected mood state, controlling the skeleton motion of the avatar corresponding to the mood state according to the motion state data.
4. A method according to claim 2 or 3, characterized in that the bone modification data comprise modification information of the respective bones, the modification information of each bone comprising one or any combination of the following information: position change information, angle change information, and length change information; the adjusting the bones of the basic avatar according to the bone modification data in the expression model comprises one or any combination of the following ways:
adjusting the position of the corresponding skeleton in the basic virtual image according to the position change information of any skeleton in the expression model;
adjusting the angle of the corresponding skeleton in the basic virtual image according to the angle change information of any skeleton in the expression model;
and adjusting the length of the corresponding skeleton in the basic virtual image according to the length change information of any skeleton in the expression model.
5. A method according to any one of claims 1 to 3, wherein after displaying the obtained avatar, the method further comprises:
responding to a posting instruction input through the social information generating interface, and acquiring content input through the social information generating interface;
generating personal status information including the input content and the user's avatar;
and displaying the personal state information on a state information sharing interface, and sharing the personal state information to social contacts of the user.
6. The method of claim 5, wherein the method further comprises:
responding to the operation of opening the status information sharing interface, and acquiring the personal status information of each social contact person needing to display the personal status information;
when the obtained personal state information of the social contact contains the virtual image of the social contact, the virtual image of the social contact is displayed in a set display area on the state information sharing interface, and the virtual image of each social contact is obtained by modifying the basic virtual image of the social contact according to the expression model corresponding to the mood state selected by the social contact.
7. The method according to any one of claims 1 to 3, wherein after displaying the obtained avatar, the method further comprises:
in response to an operation of opening a social interaction interface with a target contact, including the user's avatar in the user's personal image display information and sharing it with the target contact;
acquiring the personal image display information of the target contact;
when the personal image display information of the target contact contains the target contact's avatar, displaying that avatar in a set area of the social interaction interface, the target contact's avatar being obtained by modifying the target contact's base avatar according to the expression model corresponding to the mood state selected by the target contact.
8. A control device for displaying status information, the device comprising:
an avatar obtaining unit, configured to obtain a photograph of a user and extract the user's personalized features from the photograph through image recognition; adjust the weight of each bone in a preset biological bone structure according to the user's personalized features to obtain the user's personalized biological bone structure; and overlay appearance data on the user's personalized biological bone structure to generate the user's base avatar, the base avatar being a neutral image without expression;
a mood state determining unit, configured to determine, in response to a mood state selection instruction input through the social information generation interface, the mood state selected by the instruction; the social information generation interface displays identifiers of a plurality of selectable mood states; each mood state corresponds to an expression model, and each expression model comprises bone modification data set for the corresponding mood state, the bone modification data being used to modify the bones that need modification in the user's base avatar;
the avatar obtaining unit is further configured to obtain the avatar corresponding to the mood state, the avatar being obtained by modifying the user's base avatar according to the expression model corresponding to the mood state and having an expression and appearance that exhibit the selected mood state;
the avatar obtaining unit is further configured to determine, in response to a sticker selection instruction input through the social information generation interface, the sticker material selected by the instruction and the adding position of the sticker material, the sticker material comprising a sticker pattern for expressing mood, a sticker pattern for decoration, or another sticker pattern; and to add the sticker material to the corresponding position of the avatar according to its adding position;
and an avatar display unit, configured to display the obtained avatar, and to obtain a background pattern corresponding to the mood state and display it in the area where the avatar is displayed so that it serves as the avatar's background, wherein different mood states correspond to different background patterns.
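The device claim's display pipeline — apply the selected mood's expression model to the neutral base avatar, attach sticker materials at their adding positions, and pick the mood-specific background — can be sketched end to end. The mood-to-background mapping and all field names below are illustrative; the patent only requires that different mood states map to different background patterns:

```python
# Illustrative mapping; the patent specifies only that distinct mood
# states use distinct background patterns.
MOOD_BACKGROUNDS = {"happy": "sunny.png", "sad": "rain.png"}


def compose_status_avatar(base_avatar, mood, expression_models, stickers=()):
    """Assemble the displayed avatar for a selected mood state: apply the
    mood's expression model to a copy of the neutral base avatar, attach
    chosen sticker materials at their adding positions, and select the
    mood-specific background pattern."""
    model = expression_models[mood]
    avatar = dict(base_avatar)                  # keep the neutral base intact
    avatar["expression"] = model["expression"]  # stands in for bone modification
    avatar["stickers"] = [
        {"material": material, "position": pos} for material, pos in stickers
    ]
    avatar["background"] = MOOD_BACKGROUNDS.get(mood, "plain.png")
    return avatar
```

Copying the base avatar before modification mirrors the claims' distinction between the persistent neutral base avatar and the per-status, mood-modified avatar.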
9. A computer-readable storage medium having a computer program stored therein, characterized in that the computer program, when executed by a processor, implements the method of any one of claims 1 to 7.
10. An electronic device comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the computer program, when executed by the processor, causes the processor to implement the method of any one of claims 1 to 7.
CN201910927772.2A 2019-09-27 2019-09-27 Control method and device for displaying state information, electronic equipment and storage medium Active CN110717974B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910927772.2A CN110717974B (en) 2019-09-27 2019-09-27 Control method and device for displaying state information, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN110717974A CN110717974A (en) 2020-01-21
CN110717974B true CN110717974B (en) 2023-06-09

Family

ID=69211098

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910927772.2A Active CN110717974B (en) 2019-09-27 2019-09-27 Control method and device for displaying state information, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110717974B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112749357B (en) * 2020-09-15 2024-02-06 腾讯科技(深圳)有限公司 Interaction method and device based on shared content and computer equipment
CN114338577B (en) * 2020-10-12 2023-05-23 腾讯科技(深圳)有限公司 Information processing method and device, electronic equipment and storage medium
CN115098817A (en) * 2022-06-22 2022-09-23 北京字跳网络技术有限公司 Method and device for publishing virtual image, electronic equipment and storage medium
CN117576270A (en) * 2022-08-04 2024-02-20 深圳市腾讯网域计算机网络有限公司 Facial expression processing method, device, computer equipment and storage medium
CN115665507B (en) * 2022-12-26 2023-03-21 海马云(天津)信息技术有限公司 Method, apparatus, medium, and device for generating video stream data including avatar

Citations (2)

Publication number Priority date Publication date Assignee Title
CN201845196U (en) * 2010-08-24 2011-05-25 北京水晶石数字科技有限公司 Performance control system utilizing three-dimensional software
CN107845126A (en) * 2017-11-21 2018-03-27 江西服装学院 A kind of three-dimensional animation manufacturing method and device

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
CN102571633B (en) * 2012-01-09 2016-03-30 华为技术有限公司 Show the method for User Status, displaying terminal and server
CN104243275B (en) * 2013-06-19 2016-04-27 腾讯科技(深圳)有限公司 Social information method for pushing and system
CN106204698A (en) * 2015-05-06 2016-12-07 北京蓝犀时空科技有限公司 Virtual image for independent assortment creation generates and uses the method and system of expression
CN105578110B (en) * 2015-11-19 2019-03-19 掌赢信息科技(上海)有限公司 A kind of video call method
CN108874114B (en) * 2017-05-08 2021-08-03 腾讯科技(深圳)有限公司 Method and device for realizing emotion expression of virtual object, computer equipment and storage medium
CN108984087B (en) * 2017-06-02 2021-09-14 腾讯科技(深圳)有限公司 Social interaction method and device based on three-dimensional virtual image
CN107657651B (en) * 2017-08-28 2019-06-07 腾讯科技(上海)有限公司 Expression animation generation method and device, storage medium and electronic device
CN110135226B (en) * 2018-02-09 2023-04-07 腾讯科技(深圳)有限公司 Expression animation data processing method and device, computer equipment and storage medium



Similar Documents

Publication Publication Date Title
CN110717974B (en) Control method and device for displaying state information, electronic equipment and storage medium
US11348301B2 (en) Avatar style transformation using neural networks
US11615592B2 (en) Side-by-side character animation from realtime 3D body motion capture
US11450051B2 (en) Personalized avatar real-time motion capture
US11763481B2 (en) Mirror-based augmented reality experience
US12002175B2 (en) Real-time motion transfer for prosthetic limbs
CN108874114B (en) Method and device for realizing emotion expression of virtual object, computer equipment and storage medium
US20220319075A1 (en) Customizable avatar modification system
KR20230107655A (en) Body animation sharing and remixing
US11960784B2 (en) Shared augmented reality unboxing experience
US11960792B2 (en) Communication assistance program, communication assistance method, communication assistance system, terminal device, and non-verbal expression program
US20220319078A1 (en) Customizable avatar generation system
JP2023524119A (en) Facial image generation method, device, electronic device and readable storage medium
WO2022056118A1 (en) Augmented reality messenger system
JP2009223419A (en) Creation editing method for avatar in network chat service, chat service system and creation editing method for image data
WO2023129391A1 (en) Protecting image features in stylized representations of a source image
WO2023121896A1 (en) Real-time motion and appearance transfer
US20240144569A1 (en) Danceability score generator
US20230215062A1 (en) Protecting image features in stylized representations of a source image
WO2024097553A1 (en) Danceability score generator

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40021449

Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant