CN112489174A - Action display method and apparatus, electronic device, and storage medium for an avatar model - Google Patents


Info

Publication number
CN112489174A
Authority
CN
China
Prior art keywords
target · three-dimensional model · avatar · dynamic effect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011563647.7A
Other languages
Chinese (zh)
Inventor
王众怡
孙佳佳
刘晓强
李秋帆
马里千
张国鑫
王可欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Amusement Starcraft Beijing Technology Co ltd
Original Assignee
Amusement Starcraft Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Amusement Starcraft Beijing Technology Co ltd filed Critical Amusement Starcraft Beijing Technology Co ltd
Priority to CN202011563647.7A
Publication of CN112489174A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 — Animation
    • G06T13/20 — 3D [Three Dimensional] animation
    • G06T13/40 — 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 — Arrangements for software engineering
    • G06F8/30 — Creation or generation of source code
    • G06F8/35 — Creation or generation of source code, model driven
    • G06F8/38 — Creation or generation of source code for implementing user interfaces
    • G06T2213/00 — Indexing scheme for animation
    • G06T2213/08 — Animation software package
    • G06T2219/00 — Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 — Indexing scheme for editing of 3D models

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides an action display method and apparatus, an electronic device, and a storage medium for an avatar model, belonging to the field of computer technology.

Description

Action display method and apparatus, electronic device, and storage medium for an avatar model
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for displaying an action of an avatar model, an electronic device, and a storage medium.
Background
With the development of multimedia technology and users' growing expectations for engaging entertainment products, three-dimensional avatar models are being applied more and more widely.
At present, three-dimensional models of avatars of different types and appearance styles present the same dynamic effect when performing the same action. For example, shaking the head drives the bones corresponding to the hair to move, and when the three-dimensional models of various avatars shake their heads, the dynamic effects presented by the hair are identical; that is, the attribute parameters of the bones in the hair region are the same whether the avatar has long hair, short hair, or another hairstyle. This results in a uniform dynamic effect across different three-dimensional models and a poor display effect for the avatar model.
Disclosure of Invention
The present disclosure provides an action display method for an avatar model, an apparatus, an electronic device, and a storage medium, which can flexibly adjust the dynamic effect of a three-dimensional model of an avatar and optimize the display effect of the three-dimensional model. The technical scheme of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a method for displaying an action of an avatar model, including:
displaying a three-dimensional model of a target avatar and at least one editing control on a model editing interface, wherein the editing control is used for adjusting an attribute parameter of a bone of the three-dimensional model, and the attribute parameter of the bone is used for determining a dynamic effect of the three-dimensional model;
for any editing control, adjusting the attribute parameter of the bone corresponding to that editing control based on a trigger operation on that editing control;
determining action parameters corresponding to the bones of the three-dimensional model based on the adjusted attribute parameters;
and displaying the dynamic effect of the three-dimensional model based on the action parameters.
Because the editing controls are provided on the model editing interface, a user can adjust the attribute parameters of the bones of the three-dimensional model, and because those attribute parameters influence the dynamic effect the model presents, the dynamic effect can be flexibly controlled through parameter adjustment. This makes the dynamic effects presented by the three-dimensional model more diverse and makes the model more engaging.
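The four steps above can be sketched as a minimal editing loop. This is an illustrative sketch under assumptions, not the patent's implementation: the names (`BoneAttributes`, `compute_motion_params`) and the formulas mapping attribute parameters to action parameters are invented.

```python
from dataclasses import dataclass

# Hypothetical bone attribute parameters named in the description:
# softness, rebound speed, and resistance (all names are assumptions).
@dataclass
class BoneAttributes:
    softness: float = 0.5       # how far the bone may deform
    rebound_speed: float = 0.5  # how fast the bone recovers its shape
    resistance: float = 0.5     # how long deformation takes

def adjust_attribute(bones, bone_id, field, value):
    """Step 2: a control callback writes the new value into the bone's attributes."""
    setattr(bones[bone_id], field, value)

def compute_motion_params(attrs: BoneAttributes) -> dict:
    """Step 3: derive per-frame action parameters from the adjusted attributes.
    The formulas here are placeholders; the patent does not specify them."""
    return {
        "max_deform": attrs.softness,
        "recovery_rate": attrs.rebound_speed / (1.0 + attrs.resistance),
    }

bones = {"hair_root": BoneAttributes()}
adjust_attribute(bones, "hair_root", "softness", 0.9)  # user drags a slider
params = compute_motion_params(bones["hair_root"])
print(params)  # step 4 would feed these parameters into the renderer
```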
In one possible implementation, the displaying of a three-dimensional model of a target avatar and at least one editing control on a model editing interface includes:
determining a target avatar type of the target avatar;
acquiring a first attribute parameter corresponding to the target avatar type;
displaying the three-dimensional model of the target avatar and an initial dynamic effect of the three-dimensional model on the model editing interface, wherein the initial dynamic effect is determined based on the first attribute parameter;
and displaying the at least one editing control on the model editing interface.
Endowing the three-dimensional model with an initial dynamic effect makes the model display more engaging, and because the initial dynamic effect is displayed based on the avatar type of the target avatar, the initial dynamic effect is guaranteed to be consistent with the character setting or appearance style of the target avatar.
In one possible implementation, the determining of the target avatar type of the target avatar includes any one of:
performing image recognition on a two-dimensional image corresponding to the target avatar, and determining the target avatar type based on the image recognition result;
displaying at least two avatar type options on the model editing interface, and determining the avatar type indicated by the selected option as the target avatar type.
By recognizing the image corresponding to the target avatar, or based on the user's selection operation, the avatar type can be determined quickly and accurately, so that an initial dynamic effect consistent with the target avatar can subsequently be presented.
In one possible implementation, the bone is a bone of a hair region of the three-dimensional model;
the acquiring of the first attribute parameter corresponding to the target avatar type includes:
obtaining at least one candidate parameter group corresponding to the target avatar type, wherein each candidate parameter group includes at least one attribute parameter and corresponds to one hairstyle;
determining the candidate parameter group whose hairstyle matches the hairstyle of the target avatar as the target parameter group;
and determining the attribute parameters in the target parameter group as the first attribute parameters corresponding to the target avatar type.
By matching different first attribute parameters, and therefore different initial dynamic effects, to different hairstyles, the initial dynamic effect presented by the three-dimensional model fits the actual movement of hair more closely, making it more realistic and vivid.
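As a sketch of this hairstyle-matching step, the candidate parameter groups can be modeled as a per-avatar-type lookup keyed by hairstyle. All names and numeric values here are assumptions for illustration; the patent does not specify concrete values.

```python
# Candidate parameter groups per avatar type, keyed by hairstyle.
# Each group is one set of attribute parameters (values are invented).
CANDIDATE_GROUPS = {
    "child": {
        "long_hair":  {"softness": 0.9, "rebound_speed": 0.8, "resistance": 0.2},
        "short_hair": {"softness": 0.8, "rebound_speed": 0.9, "resistance": 0.3},
    },
    "adult_male": {
        "short_hair": {"softness": 0.3, "rebound_speed": 0.4, "resistance": 0.7},
    },
}

def first_attribute_params(avatar_type: str, hairstyle: str) -> dict:
    """Return the candidate group whose hairstyle matches the target avatar's;
    the matched group becomes the target parameter group."""
    return CANDIDATE_GROUPS[avatar_type][hairstyle]

print(first_attribute_params("child", "long_hair"))
```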
In one possible implementation, the displaying of the dynamic effect of the three-dimensional model based on the action parameters includes:
obtaining a fixed symbol associated with the bone, the fixed symbol being used to distinguish fixed regions from non-fixed regions in the bone;
and controlling the bones of the non-fixed regions in the three-dimensional model to present the dynamic effect based on the action parameters corresponding to the bones and the associated fixed symbols.
By distinguishing fixed regions from non-fixed regions in the bone, the dynamic effect presented by the three-dimensional model fits the actual movement of the body part more closely, making it more realistic and vivid.
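A minimal sketch of how a fixed symbol might gate the dynamic effect: bones flagged as fixed keep zero motion while non-fixed bones receive their action parameters. The flag names and the scalar amplitude representation are assumptions.

```python
def apply_motion(motion_params: dict, fixed_flags: dict) -> dict:
    """Zero out motion for bones whose fixed symbol marks them as fixed;
    non-fixed bones keep their action-parameter amplitude."""
    return {
        bone_id: 0.0 if fixed_flags.get(bone_id, False) else amplitude
        for bone_id, amplitude in motion_params.items()
    }

# Hair roots are anchored to the scalp; hair tips swing freely.
motion = {"hair_root": 0.8, "hair_tip": 0.8}
fixed = {"hair_root": True, "hair_tip": False}
print(apply_motion(motion, fixed))  # {'hair_root': 0.0, 'hair_tip': 0.8}
```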
In one possible implementation, the symbol identifier of the fixed symbol matches the bone identifier of the associated bone.
In one possible implementation, the adjusting of the attribute parameter of the bone corresponding to any editing control, based on the trigger operation on that editing control, includes:
detecting a trigger operation on the editing control;
in response to the trigger operation on the editing control stopping, determining an adjustment parameter corresponding to the editing control;
and adjusting the attribute parameter of the bone corresponding to the editing control based on the adjustment parameter.
By converting back-end parameter adjustment into visual manipulation of editing controls, the user can conveniently adjust the various attributes of the bones and flexibly control the dynamic effect presented by the three-dimensional model.
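The commit-on-release behavior described above (the adjustment parameter is determined when the trigger operation stops) can be sketched as follows; the class and method names are assumptions, not the patent's API.

```python
from types import SimpleNamespace

class SliderControl:
    """A minimal editing control: intermediate drag values are held pending,
    and the adjustment parameter is committed only when the drag stops."""
    def __init__(self, bone_attrs, field):
        self.bone_attrs = bone_attrs
        self.field = field
        self._pending = None

    def on_drag(self, value):
        # Trigger operation in progress: remember but do not commit.
        self._pending = value

    def on_release(self):
        # Trigger operation stops: commit the adjustment parameter.
        if self._pending is not None:
            setattr(self.bone_attrs, self.field, self._pending)
            self._pending = None

attrs = SimpleNamespace(softness=0.5)
s = SliderControl(attrs, "softness")
s.on_drag(0.7)
s.on_drag(0.8)   # intermediate values are not committed
s.on_release()
print(attrs.softness)  # 0.8
```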
In one possible implementation, after the model editing interface displays the three-dimensional model of the target avatar and the at least one editing control, the method further comprises:
determining a target bone whose dynamic effect needs to be adjusted;
and displaying at least one editing control corresponding to the target bone on the model editing interface.
By displaying editing controls according to the user's adjustment needs for specific bones, the interface avoids showing too many redundant controls, which reduces the effort of locating an editing control and improves the user experience.
In one possible implementation, the determining of a target bone whose dynamic effect needs to be adjusted includes any one of:
detecting a selection operation on any region of the three-dimensional model, and determining the bone corresponding to the selected region as the target bone;
and displaying bone selection controls corresponding to at least two bones on the model editing interface, and determining the bone corresponding to the selected bone selection control as the target bone.
By providing multiple ways to select the target bone, the user can quickly and accurately select the target bone that needs to be adjusted.
According to a second aspect of the embodiments of the present disclosure, there is provided an action display apparatus of an avatar model, including:
the display unit is configured to display a three-dimensional model of a target avatar and at least one editing control on a model editing interface, wherein the editing control is used for adjusting an attribute parameter of a bone of the three-dimensional model, and the attribute parameter of the bone is used for determining a dynamic effect of the three-dimensional model;
the adjusting unit is configured to adjust, for any editing control, the attribute parameter of the bone corresponding to that editing control based on a trigger operation on that editing control;
a parameter determining unit configured to determine an action parameter corresponding to a bone of the three-dimensional model based on the adjusted attribute parameter;
the display unit is configured to display a dynamic effect of the three-dimensional model based on the motion parameter.
In one possible implementation, the display unit includes:
a type determining subunit configured to determine a target avatar type of the target avatar;
an obtaining subunit configured to obtain a first attribute parameter corresponding to the target avatar type;
the display unit is configured to display a three-dimensional model of the target avatar and an initial dynamic effect of the three-dimensional model on the model editing interface, wherein the initial dynamic effect is determined based on the first attribute parameter; and displaying the at least one editing control on the model editing interface.
In one possible implementation, the type determining subunit is configured to perform any one of:
performing image recognition on a two-dimensional image corresponding to the target avatar, and determining the target avatar type based on the image recognition result;
displaying at least two avatar type options on the model editing interface, and determining the avatar type indicated by the selected option as the target avatar type.
In one possible implementation, the bone is a bone of a hair region of the three-dimensional model;
the acquisition subunit configured to:
obtaining at least one candidate parameter group corresponding to the target image type, wherein the candidate parameter group comprises at least one attribute parameter, and one candidate parameter group corresponds to a hair style;
determining a candidate parameter group of which the corresponding hair style is matched with the hair style of the target virtual image as a target parameter group;
and determining the attribute parameters in the target parameter group as the first attribute parameters corresponding to the target image type.
In one possible implementation, the display unit is configured to:
obtaining a fixed symbol associated with the bone, the fixed symbol being used to distinguish fixed regions from non-fixed regions in the bone;
and controlling the bones of the non-fixed regions in the three-dimensional model to present the dynamic effect based on the action parameters corresponding to the bones and the associated fixed symbols.
In one possible implementation, the symbol identifier of the fixed symbol matches the bone identifier of the associated bone.
In one possible implementation, the adjusting unit is configured to:
detect a trigger operation on any editing control;
in response to the trigger operation on the editing control stopping, determine an adjustment parameter corresponding to the editing control;
and adjust the attribute parameter of the bone corresponding to the editing control based on the adjustment parameter.
In one possible implementation, the apparatus further comprises a bone determination unit configured to determine a target bone for which the dynamic effect needs to be adjusted;
the display unit is configured to display at least one editing control corresponding to the target bone on the model editing interface.
In one possible implementation, the bone determination unit is configured to perform any one of:
detecting a selection operation on any region of the three-dimensional model, and determining the bone corresponding to the selected region as the target bone;
and displaying bone selection controls corresponding to at least two bones on the model editing interface, and determining the bone corresponding to the selected bone selection control as the target bone.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the action display method of the avatar model.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium. When the instructions in the computer-readable storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the above action display method of the avatar model.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer program product comprising a computer program that, when executed by a processor, implements the above action display method of the avatar model.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a flow diagram illustrating a method for action display of an avatar model in accordance with an exemplary embodiment;
FIG. 2 is a flow diagram illustrating a method for action display of an avatar model in accordance with an exemplary embodiment;
FIG. 3 is a diagram illustrating a model editing interface in accordance with an illustrative embodiment;
FIG. 4 is a diagram illustrating a model editing interface in accordance with an illustrative embodiment;
FIG. 5 is a diagram illustrating a model editing interface in accordance with an illustrative embodiment;
FIG. 6 is a block diagram of an avatar model action display apparatus according to an exemplary embodiment;
FIG. 7 is a block diagram illustrating a computer device in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating an action display method of an avatar model according to an exemplary embodiment. As shown in Fig. 1, the method is applied in a computer device and, in one possible implementation, includes the following steps.
In step 101, a three-dimensional model of a target avatar and at least one editing control are displayed on a model editing interface, the editing control is used for adjusting the attribute parameters of a skeleton of the three-dimensional model, and the attribute parameters of the skeleton are used for determining the dynamic effect of the three-dimensional model.
In step 102, for any editing control, the attribute parameter of the bone corresponding to that editing control is adjusted based on a trigger operation on that editing control.
In step 103, based on the adjusted attribute parameters, motion parameters corresponding to the bones of the three-dimensional model are determined.
In step 104, a dynamic effect of the three-dimensional model is displayed based on the motion parameter.
According to the technical scheme provided by the embodiments of the present disclosure, the editing controls provided on the model editing interface allow a user to adjust the attribute parameters of the bones of the three-dimensional model. Because those attribute parameters influence the dynamic effect the model presents, the dynamic effect can be flexibly controlled through parameter adjustment, which makes the dynamic effects presented by the three-dimensional model more diverse and makes the model more engaging.
In one possible implementation, the displaying of a three-dimensional model of a target avatar and at least one editing control on a model editing interface includes:
determining a target avatar type of the target avatar;
acquiring a first attribute parameter corresponding to the target avatar type;
displaying the three-dimensional model of the target avatar and an initial dynamic effect of the three-dimensional model on the model editing interface, wherein the initial dynamic effect is determined based on the first attribute parameter;
and displaying the at least one editing control on the model editing interface.
In one possible implementation, the determining of the target avatar type of the target avatar includes any one of:
performing image recognition on a two-dimensional image corresponding to the target avatar, and determining the target avatar type based on the image recognition result;
displaying at least two avatar type options on the model editing interface, and determining the avatar type indicated by the selected option as the target avatar type.
In one possible implementation, the bone is a bone of a hair region of the three-dimensional model;
the acquiring of the first attribute parameter corresponding to the target avatar type includes:
obtaining at least one candidate parameter group corresponding to the target avatar type, wherein each candidate parameter group includes at least one attribute parameter and corresponds to one hairstyle;
determining the candidate parameter group whose hairstyle matches the hairstyle of the target avatar as the target parameter group;
and determining the attribute parameters in the target parameter group as the first attribute parameters corresponding to the target avatar type.
In one possible implementation, the displaying of the dynamic effect of the three-dimensional model based on the action parameters includes:
obtaining a fixed symbol associated with the bone, the fixed symbol being used to distinguish fixed regions from non-fixed regions in the bone;
and controlling the bones of the non-fixed regions in the three-dimensional model to present the dynamic effect based on the action parameters corresponding to the bones and the associated fixed symbols.
In one possible implementation, the symbol identifier of the fixed symbol matches the bone identifier of the associated bone.
In one possible implementation, the adjusting of the attribute parameter of the bone corresponding to any editing control, based on the trigger operation on that editing control, includes:
detecting a trigger operation on the editing control;
in response to the trigger operation on the editing control stopping, determining an adjustment parameter corresponding to the editing control;
and adjusting the attribute parameter of the bone corresponding to the editing control based on the adjustment parameter.
In one possible implementation, after the model editing interface displays the three-dimensional model of the target avatar and the at least one editing control, the method further comprises:
determining a target bone whose dynamic effect needs to be adjusted;
and displaying at least one editing control corresponding to the target bone on the model editing interface.
In one possible implementation, the determining of a target bone whose dynamic effect needs to be adjusted includes any one of:
detecting a selection operation on any region of the three-dimensional model, and determining the bone corresponding to the selected region as the target bone;
and displaying bone selection controls corresponding to at least two bones on the model editing interface, and determining the bone corresponding to the selected bone selection control as the target bone.
The above embodiment is only a brief introduction to the present solution. Fig. 2 is a flowchart illustrating an action display method of an avatar model according to an exemplary embodiment; the method is described below with reference to Fig. 2. As shown in Fig. 2, in one possible implementation, the method includes the following steps.
In step 201, the terminal displays a model editing interface, and obtains a three-dimensional model of a target avatar to be edited.
In one possible implementation, the terminal may be a terminal used by any user; a target application supporting editing of the three-dimensional model is installed and runs on the terminal, and, optionally, a user account is logged into the target application. In the embodiment of the present disclosure, the target application includes a model editing interface. For example, the model editing interface includes a model display area and a model editing area: the model display area is used for displaying the three-dimensional model to be edited, and the model editing area may include multiple types of controls so that a user can adjust multiple properties of the three-dimensional model.
In a possible implementation manner, the three-dimensional model of the target avatar may be a three-dimensional model generated by a user through a modeling application, or a three-dimensional model obtained from a network, and the user may import the three-dimensional model into the target application and further edit the three-dimensional model of the target avatar by using the target application. In one possible implementation, the target application has a model generation function, for example, the target application can generate the three-dimensional model based on a two-dimensional image, and the user can upload the two-dimensional image of the target avatar to the target application, and the target application generates the three-dimensional model of the target avatar based on the two-dimensional image. In one possible implementation, at least one three-dimensional model is preset in the target application program, and a user can select a three-dimensional model to be edited from the at least one three-dimensional model. It should be noted that, the embodiment of the present disclosure does not limit the method for acquiring the three-dimensional model to be edited.
In the embodiment of the disclosure, the three-dimensional model of the target avatar includes a plurality of bones, each of which is connected to a virtual skeleton constituting the three-dimensional model, wherein the bones are formed by key points in each body part of the target avatar, and each of the bones can rotate in different degrees of freedom, so that the three-dimensional model can present various actions.
In step 202, the terminal displays the three-dimensional model of the target avatar and at least one editing control in the model editing interface.
In one possible implementation, after the terminal imports the three-dimensional model of the target avatar into the target application, or after the target application generates it, the terminal displays the three-dimensional model in the model display area of the model editing interface and displays the editing controls in the model editing area. In the embodiment of the present disclosure, the attribute parameters of a bone determine a dynamic effect of the three-dimensional model, and adjusting them changes how flexibly and fluidly the model presents an action. Exemplary attribute parameters of a bone include the bone's softness, its rebound speed when in motion, its resistance value, and the like. The softness indicates the degree to which the bone can deform: the greater the softness, the more the bone deforms during movement and the more supple the dynamic effect presented by the three-dimensional model; the smaller the softness, the less the bone deforms during movement and the stiffer the dynamic effect. The rebound speed indicates how quickly the bone returns to its initial state after deforming: the higher the rebound speed, the faster the bone recovers from deformation and the more lively and smooth the dynamic effect; the lower the rebound speed, the slower the bone recovers and the more sluggish the dynamic effect.
The resistance value controls how long deformation of the bone takes: for the same deformation effect, the larger the resistance value, the longer the bone takes to reach the deformation and to recover from it, and the slower the dynamic effect presented by the three-dimensional model; the smaller the resistance value, the shorter that time, and the more agile the dynamic effect.
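One common way to realize softness, rebound speed, and resistance is a damped spring on each bone. The patent does not prescribe this model, so the mapping below (rebound speed to stiffness, resistance to damping, softness to a deformation cap) and all constants are assumptions.

```python
from types import SimpleNamespace

def spring_step(offset, velocity, attrs, dt=1 / 60):
    """One integration step of a damped spring acting on a bone's deformation.
    Higher rebound_speed -> stiffer spring -> faster recovery;
    higher resistance -> more damping -> slower, heavier motion;
    softness caps how far the bone may deform (all mappings are assumptions)."""
    stiffness = attrs.rebound_speed * 10.0
    damping = attrs.resistance * 5.0
    accel = -stiffness * offset - damping * velocity
    velocity += accel * dt
    offset += velocity * dt
    max_off = attrs.softness  # clamp deformation to the softness limit
    offset = max(-max_off, min(max_off, offset))
    return offset, velocity

attrs = SimpleNamespace(softness=0.9, rebound_speed=0.5, resistance=0.5)
o, v = spring_step(0.5, 0.0, attrs)  # a deformed bone starts springing back
```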
The influence of the attribute parameters on the dynamic effect of the three-dimensional model is explained below for bones of different body parts and different types of avatars. Taking the bones of the hair region as an example: an infant's hair is soft in texture, so for a three-dimensional model of an infant the softness and rebound speed of the bones in the hair region can be set to larger values and the resistance value to a smaller value, making the hair's movement more supple, conveying the soft texture, and keeping the motion consistent with the model's appearance. For the avatar of an adult male, whose hair texture is stiffer, the softness and rebound speed of the bones can be set to smaller values and the resistance value to a larger value, giving the hair a stiffer dynamic effect. Similarly, for three-dimensional models of the elderly and of teenagers, the softness and rebound speed of the elderly avatar's bones can be set to smaller values and its resistance value to a larger value, while the teenager avatar's bones receive larger softness and rebound speed values and a smaller resistance value.
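The per-avatar-type defaults described above can be stored as a simple preset table. The numeric values below are invented for illustration and only preserve the orderings the text describes (soft and quick for infants and teenagers, stiff and slow for adult males and the elderly).

```python
# Default (first) attribute parameters per avatar type; values are illustrative.
TYPE_PRESETS = {
    "infant":     {"softness": 0.9, "rebound_speed": 0.9, "resistance": 0.1},
    "teenager":   {"softness": 0.8, "rebound_speed": 0.8, "resistance": 0.2},
    "adult_male": {"softness": 0.3, "rebound_speed": 0.3, "resistance": 0.8},
    "elderly":    {"softness": 0.2, "rebound_speed": 0.2, "resistance": 0.9},
}

def initial_hair_params(avatar_type: str) -> dict:
    """Return the first attribute parameters for the hair-region bones,
    falling back to neutral values for an unrecognized avatar type."""
    neutral = {"softness": 0.5, "rebound_speed": 0.5, "resistance": 0.5}
    return TYPE_PRESETS.get(avatar_type, neutral)
```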
In the embodiment of the present disclosure, adjusting the attribute parameters of a bone changes its softness and elasticity, and thereby the dynamic effect the bone presents during movement. The dynamic effect of the three-dimensional model during movement can therefore be changed, so that three-dimensional models of different avatars present different dynamic effects, enriching the display effect of the three-dimensional model.
Fig. 3 is a schematic diagram of a model editing interface according to an exemplary embodiment. Referring to fig. 3, the model editing interface includes a model display area 301 and a model editing area 302. The model editing area 302 displays a plurality of editing controls 303, which may be presented as slider controls or in other styles; the embodiment of the present disclosure does not limit this. As shown in fig. 3, the editing controls may be displayed in groups by body part, for example, the hair region corresponds to a plurality of editing controls and the eyes correspond to a plurality of editing controls; of course, the editing controls may also be grouped along other dimensions, which is not limited in this disclosure. In the embodiment of the present disclosure, back-end parameter adjustment is converted into adjustment of visual editing controls, so that the user can conveniently adjust the various attributes of a bone and flexibly control the dynamic effect presented by the three-dimensional model.
In fig. 3, the model editing interface displays editing controls corresponding to bones of a plurality of body parts. In a possible implementation, the terminal may first determine a target bone whose dynamic effect needs to be adjusted, and then display at least one editing control corresponding to the target bone on the model editing interface. In one possible implementation, the terminal may determine the target bone based on the user's selection operation on a bone. For example, the terminal detects a selection operation on any region of the three-dimensional model and determines the bone corresponding to the selected region as the target bone. Fig. 4 is a schematic diagram of a model editing interface according to an exemplary embodiment; in response to the user clicking the hair region 401 of the three-dimensional model, the terminal displays the editing controls 402 corresponding to the hair region in the model editing area. Alternatively, the terminal displays bone selection controls corresponding to at least two bones on the model editing interface and determines the bone corresponding to the selected bone selection control as the target bone. Fig. 5 is a schematic diagram of a model editing interface according to an exemplary embodiment; the model editing area of the model editing interface displays a plurality of bone selection controls 501, as shown in fig. 5 (a), and in response to a click operation on the bone selection control of the hair region, the editing controls 502 corresponding to the hair region are displayed, as shown in fig. 5 (b). It should be noted that the above description of the display modes of the editing controls is only an exemplary description of possible implementations, and the embodiment of the present disclosure does not limit this.
In the embodiment of the present disclosure, by providing multiple ways of selecting the target bone, the user can quickly and accurately select the target bone to be adjusted, and editing controls are displayed flexibly based on the user's adjustment needs, which avoids displaying too many redundant controls in the interface, reduces the effort of locating an editing control, and improves the user experience.
In step 203, the terminal displays an initial dynamic effect of the three-dimensional model of the target avatar.
In one possible implementation, the three-dimensional model has an initial dynamic effect, which can be set by a developer, and the terminal can display this initial dynamic effect when it displays the three-dimensional model of the target avatar on the model editing interface. In one possible implementation, different types of avatars may correspond to different initial dynamic effects; for example, the avatar of an elderly person and the avatar of a teenager may correspond to different initial dynamic effects. Illustratively, the terminal determines the target avatar type of the target avatar and then obtains a first attribute parameter corresponding to the target avatar type. The first attribute parameter comprises the attribute parameters corresponding to each bone in the three-dimensional model and indicates the initial dynamic effect of the three-dimensional model. The terminal displays the three-dimensional model of the target avatar and its initial dynamic effect on the model editing interface based on the first attribute parameter; that is, the terminal drives each bone in the three-dimensional model to move based on the first attribute parameter, so that the three-dimensional model presents the initial dynamic effect.
In a possible implementation, the target avatar type of the target avatar can be recognized automatically by the terminal or provided by the user. For example, the terminal performs image recognition on the two-dimensional image corresponding to the target avatar and determines the target avatar type based on the recognition result; for instance, the terminal may determine the head-to-body ratio, gender, and so on of the target avatar by image recognition, and thereby determine its avatar type. Illustratively, the model editing interface displays at least two avatar type options, the user can select the avatar type of the target avatar through a selection control, and the avatar type indicated by the selected option is determined as the target avatar type of the target avatar. It should be noted that the above description of determining the target avatar type is only an exemplary description of possible implementations, and the embodiment of the present disclosure does not limit the method of determination. In the embodiment of the present disclosure, the avatar type of the target avatar can be determined quickly and accurately by recognizing the image corresponding to the target avatar or based on the user's selection operation, so that an initial dynamic effect conforming to the target avatar can subsequently be presented accurately.
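As an illustration of the image-recognition branch, a toy classifier keyed on the head-to-body ratio might look as follows. The thresholds and type labels are invented for the example; the disclosure does not specify the recognition algorithm.

```python
def determine_avatar_type(head_height, body_height):
    """Classify the target avatar type from image-recognition measurements.

    Rule-of-thumb sketch: stylized infant ("chibi") characters are drawn
    with oversized heads, so the body-to-head ratio hints at the avatar
    type.  The thresholds and labels are illustrative assumptions only.
    """
    ratio = body_height / head_height  # how many "heads tall" the figure is
    if ratio < 4:
        return "infant"
    if ratio < 6:
        return "teenager"
    return "adult"
```

A real implementation would obtain `head_height` and `body_height` from a detection model run on the two-dimensional image, and could combine further cues such as gender.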
In one possible implementation, for the same type of avatar, the initial dynamic effect exhibited by the three-dimensional model may differ because of differences in the avatar's appearance. Taking the bones corresponding to the hair region of the three-dimensional model as an example, long hair and short hair may correspond to different initial dynamic effects, that is, to different first attribute parameters. For example, one avatar type may correspond to at least one candidate parameter group, where each candidate parameter group corresponds to a hair style and includes at least one attribute parameter. After determining the target avatar type of the target avatar, the terminal may obtain the at least one candidate parameter group corresponding to the target avatar type, determine the hair style of the target avatar, determine the candidate parameter group whose hair style matches that of the target avatar as the target parameter group, determine the attribute parameters in the target parameter group as the first attribute parameter corresponding to the target avatar type, and then display the initial dynamic effect of the three-dimensional model based on the first attribute parameter. It should be noted that the embodiment of the present disclosure takes the hair region only as an example; the terminal may also determine the first attribute parameter based on the features of other body parts of the target avatar, which is not limited by the embodiment of the present disclosure.
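The candidate-parameter-group lookup described above can be sketched as a small table keyed by avatar type, with one group per hair style. All names and numeric values here are hypothetical placeholders, not values from the disclosure.

```python
# Hypothetical candidate parameter groups: keyed by avatar type, one group
# per hair style.  Each group holds the bone attribute parameters used as
# the first attribute parameter when the model is first displayed.
CANDIDATE_PARAMETER_SETS = {
    "infant": [
        {"hair_style": "short", "softness": 0.90, "rebound": 0.80, "resistance": 0.20},
        {"hair_style": "long",  "softness": 0.95, "rebound": 0.70, "resistance": 0.15},
    ],
    "adult_male": [
        {"hair_style": "short", "softness": 0.30, "rebound": 0.30, "resistance": 0.70},
        {"hair_style": "long",  "softness": 0.40, "rebound": 0.35, "resistance": 0.60},
    ],
}


def first_attribute_parameters(avatar_type, hair_style):
    """Pick the candidate group whose hair style matches the target avatar."""
    for group in CANDIDATE_PARAMETER_SETS.get(avatar_type, []):
        if group["hair_style"] == hair_style:
            # Return only the bone attribute parameters themselves.
            return {k: v for k, v in group.items() if k != "hair_style"}
    return None  # no matching group; a fallback default would apply
```

The same table could be extended with groups keyed on features of other body parts, as the paragraph above notes.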
In the embodiment of the present disclosure, different first attribute parameters, that is, different initial dynamic effects, are matched to different hair styles, so that the initial dynamic effect presented by the three-dimensional model better fits the way hair actually moves and appears more real and vivid.
It should be noted that step 202 and step 203 may be executed simultaneously; that is, the terminal displays the three-dimensional model of the target avatar on the model editing interface and displays the initial dynamic effect of the three-dimensional model at the same time. In the embodiment of the present disclosure, giving the three-dimensional model an initial dynamic effect makes the model display more engaging, and displaying the initial dynamic effect based on the avatar type of the target avatar ensures that the initial dynamic effect is consistent with the character setting or appearance style of the target avatar.
It should be noted that step 203 is an optional step, that is, the three-dimensional model may not have the initial dynamic effect, and the terminal may directly execute step 204 described below.
In step 204, for any editing control in the editing area, the terminal adjusts the attribute parameters of the bone corresponding to that editing control based on a trigger operation on the control.
In a possible implementation, the terminal detects a trigger operation on an editing control. For example, if the editing control is presented as a slider control, the trigger operation is a drag operation on the slider; if it is presented as a text input control, the trigger operation is a data input operation. The style of the editing control and of the trigger operation is not limited in the embodiments of the present disclosure. In response to the trigger operation on the editing control stopping, the terminal determines the adjustment parameter corresponding to the control and adjusts the attribute parameters of the corresponding bone based on the adjustment parameter. It should be noted that the above description of adjusting the attribute parameters is only an exemplary description of one possible implementation, and the embodiment of the present disclosure does not limit which method is used.
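As a sketch of the slider case, a drag-end handler might map the normalized slider position into the attribute's value range and write the result back to the bone's attribute parameters. The class name, attribute names, and normalization scheme are assumptions for illustration.

```python
class SliderControl:
    """Hypothetical slider editing control bound to one bone attribute.

    The slider position is normalized to [0, 1]; when the drag operation
    stops, the position is mapped into the attribute's value range and
    written back to the bone's attribute parameters.
    """

    def __init__(self, bone, attribute, lo, hi):
        self.bone, self.attribute = bone, attribute
        self.lo, self.hi = lo, hi

    def on_drag_end(self, position):
        position = min(1.0, max(0.0, position))  # clamp the slider position
        value = self.lo + position * (self.hi - self.lo)
        self.bone[self.attribute] = value        # adjust the bone parameter
        return value
```

A text-input control would differ only in how the adjustment parameter is obtained (parsing the entered value instead of mapping a slider position).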
In step 205, the terminal determines an action parameter corresponding to the skeleton of the three-dimensional model based on the adjusted attribute parameter, and displays a dynamic effect of the three-dimensional model based on the action parameter.
In a possible implementation, the action parameters may include the attribute parameters and displacement parameters of key points in the bone, and the terminal may drive the bone to move based on the action parameters so that the three-dimensional model presents a dynamic effect. It should be noted that the embodiment of the present disclosure does not limit the manner of driving the bone. In one possible implementation, the action parameters indicate a target action. The target action may be a test action set by a developer; for example, after the user finishes adjusting an editing control, the three-dimensional model is triggered to present the target action so that the user can preview the adjusted dynamic effect in real time. Optionally, the target action is specified by the user, for example an action the user selects, or an action captured from the user through motion capture, which is not limited by the embodiment of the present disclosure.
In one possible implementation, fixed and non-fixed regions may be distinguished within any bone. For example, a region of the bone may be marked by a fixation symbol: the marked region is the fixed region and the remaining regions are non-fixed. In one possible implementation, the terminal may obtain the fixation symbol associated with a bone; for example, the symbol identifier of the fixation symbol matches the bone identifier of the associated bone, such as sharing the same identifier or containing the same keyword, and the terminal determines the fixation symbol associated with a bone based on the bone identifier and the symbol identifier. Of course, the terminal may determine the associated fixation symbol in other manners, which is not limited in the embodiment of the present disclosure. The terminal can control the bones of the non-fixed regions of the three-dimensional model to present the dynamic effect based on the action parameters corresponding to the bones and the associated fixation symbols; that is, the non-fixed region of the bone is driven to move based on the action parameters and the positions marked by the fixation symbol, so that the three-dimensional model presents the dynamic effect. In the embodiment of the present disclosure, distinguishing fixed and non-fixed regions within a bone makes the dynamic effect presented by the three-dimensional model fit the actual movement of the body part more closely, so that the effect is more real and vivid.
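The identifier-matching association of fixation symbols with bones, and the restriction of motion to the non-fixed region, can be sketched as follows. The symbol naming convention and the data layout are assumptions for illustration.

```python
def find_fixed_symbols(bone_id, symbols):
    """Return the fixation symbols associated with `bone_id`.

    Association by identifier matching, as described above: a symbol
    belongs to a bone if its identifier equals the bone identifier or
    contains it as a keyword.
    """
    return [s for s in symbols if s["id"] == bone_id or bone_id in s["id"]]


def movable_keypoints(keypoints, fixed_symbols):
    """Keep only keypoints outside the regions marked as fixed; only these
    are driven by the action parameters."""
    fixed = {kp for s in fixed_symbols for kp in s["marked_keypoints"]}
    return [kp for kp in keypoints if kp not in fixed]
```

For a hair bone, for instance, a symbol marking the root keypoints would pin the hair to the scalp while the remaining keypoints swing freely.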
Taking adjustment of the bones of the hair region as an example, the method of adjusting the dynamic effect presented by the three-dimensional model is described. In a possible implementation, hair clusters at different positions on the head have different dynamic effects; for example, the bangs and the hair at the back may move in clearly different ways, and the hair of different characters may move very differently; for instance, an infant's hair is soft, while an adult male's hair of the same length is stiff. With the technical solution provided by the embodiment of the present disclosure, the parameters of the hair are encapsulated, for example its softness, deformability, elasticity, degree of gravity, and gravity direction, visual editing controls are set for the different parameters, and the user can adjust the attribute parameters of the bones through these controls. Illustratively, by adjusting the attribute parameters, the dynamic effect presented by the hair region of the three-dimensional model can be made more flexible or more rigid. In the embodiment of the present disclosure, the user can flexibly adjust the attribute parameters of the hair through the visual editing controls, realizing flexible control over the dynamic effect of the three-dimensional model and enriching its display effect. In addition, initial values can be given to the parameters based on how hair moves in real scenes, so that the initial dynamic effect of the hair is consistent with actual hair movement, optimizing the display effect of the three-dimensional model.
According to the technical solution provided by the embodiment of the present disclosure, editing controls are arranged on the model editing interface so that the user can adjust the attribute parameters of the bones of the three-dimensional model. Because these attribute parameters influence the dynamic effect presented by the three-dimensional model, the dynamic effect can be flexibly controlled by adjusting them, making the dynamic effect more diverse and the three-dimensional model more engaging.
Fig. 6 is a block diagram of an avatar model action display apparatus according to an exemplary embodiment. Referring to fig. 6, the apparatus includes a display unit 601, an adjustment unit 602, and a parameter determination unit 603.
A display unit 601 configured to display a three-dimensional model of a target avatar and at least one editing control on a model editing interface, where the editing control is used to adjust the attribute parameters of a skeleton of the three-dimensional model, and the attribute parameters of the skeleton are used to determine a dynamic effect of the three-dimensional model;
an adjusting unit 602, configured to, for any editing control, adjust the attribute parameters of the skeleton corresponding to the editing control based on a trigger operation on the editing control;
a parameter determining unit 603 configured to determine an action parameter corresponding to a bone of the three-dimensional model based on the adjusted attribute parameter;
the display unit 601 is configured to display a dynamic effect of the three-dimensional model based on the action parameter.
In one possible implementation, the display unit 601 includes:
a type determining subunit configured to determine a target avatar type of the target avatar;
the obtaining subunit is configured to obtain a first attribute parameter corresponding to the target character type;
the display unit is configured to display a three-dimensional model of the target avatar and an initial dynamic effect of the three-dimensional model on the model editing interface, wherein the initial dynamic effect is determined based on the first attribute parameter; and displaying the at least one editing control on the model editing interface.
In one possible implementation, the type determining subunit is configured to perform any one of:
performing image recognition on the two-dimensional image corresponding to the target virtual image, and determining the target image type of the target virtual image based on the image recognition result;
the model editing interface displays at least two image type options, and determines the image type indicated by the selected image type option as the target image type of the target virtual image.
In one possible implementation, the bone is a bone of a hair region of the three-dimensional model;
the acquisition subunit configured to:
obtaining at least one candidate parameter group corresponding to the target image type, wherein the candidate parameter group comprises at least one attribute parameter, and one candidate parameter group corresponds to a hair style;
determining a candidate parameter group of which the corresponding hair style is matched with the hair style of the target virtual image as a target parameter group;
and determining the attribute parameters in the target parameter group as the first attribute parameters corresponding to the target image type.
In one possible implementation, the display unit 601 is configured to:
obtaining a fixation symbol associated with the bone, the fixation symbol for distinguishing between fixed and non-fixed regions in the bone;
and controlling the bones of the non-fixed areas in the three-dimensional model to present the dynamic effect based on the action parameters corresponding to the bones and the associated fixed symbols.
In one possible implementation, the symbol identity of the fixed symbol matches the bone identity of the associated bone.
In one possible implementation, the adjusting unit 602 is configured to:
detecting a trigger operation on any editing control;
responding to the stop of the triggering operation of any editing control, and determining an adjusting parameter corresponding to any editing control;
and adjusting the attribute parameters of the skeleton corresponding to any editing control based on the adjustment parameters.
In one possible implementation, the apparatus further comprises a bone determination unit configured to determine a target bone for which the dynamic effect needs to be adjusted;
the display unit 601 is configured to display at least one editing control corresponding to the target bone on the model editing interface.
In one possible implementation, the bone determination unit is configured to perform any one of:
detecting selection operation of any region of the three-dimensional model, and determining bones corresponding to the selected region as the target bones;
and displaying bone selection controls corresponding to at least two bones on the model editing interface, and determining the bones corresponding to the selected bone selection controls as the target bones.
According to the apparatus provided by the embodiment of the present disclosure, the editing controls arranged on the model editing interface allow the user to adjust the attribute parameters of the bones of the three-dimensional model. Because these attribute parameters influence the dynamic effect presented by the three-dimensional model, the dynamic effect can be flexibly controlled by adjusting them, making the dynamic effect more diverse and the three-dimensional model more engaging.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 7 is a block diagram illustrating a computer device according to an exemplary embodiment. The computer device 700 may vary considerably in configuration and performance, and may include one or more processors (CPUs) 701 and one or more memories 702, where the memory 702 stores at least one instruction that is loaded and executed by the processor 701 to implement the action display method of the avatar model provided by the above method embodiments. Of course, the computer device may further include components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output, and may further include other components for implementing the functions of the device, which are not described here.
In an exemplary embodiment, a computer-readable storage medium comprising instructions, such as a memory comprising instructions, executable by a processor of a computer device to perform the above-described method is also provided. Alternatively, the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, there is also provided a computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the action display method of the avatar model described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method for displaying an action of an avatar model, comprising:
displaying a three-dimensional model of a target virtual image and at least one editing control on a model editing interface, wherein the editing control is used for adjusting the attribute parameters of a skeleton of the three-dimensional model, and the attribute parameters of the skeleton are used for determining the dynamic effect of the three-dimensional model;
for any editing control, adjusting the attribute parameters of the skeleton corresponding to the editing control based on a trigger operation on the editing control;
determining action parameters corresponding to bones of the three-dimensional model based on the adjusted attribute parameters;
and displaying the dynamic effect of the three-dimensional model based on the action parameters.
2. The method of displaying an avatar model according to claim 1, wherein said displaying a three-dimensional model of a target avatar and at least one editing control in a model editing interface comprises:
determining a target avatar type of the target avatar;
acquiring a first attribute parameter corresponding to the target image type;
displaying a three-dimensional model of the target avatar and an initial dynamic effect of the three-dimensional model on the model editing interface, the initial dynamic effect being determined based on the first attribute parameters;
and displaying the at least one editing control on the model editing interface.
3. The action display method of an avatar model according to claim 2, wherein said determining a target avatar type of said target avatar includes any one of:
performing image recognition on the two-dimensional image corresponding to the target virtual image, and determining the target image type of the target virtual image based on the image recognition result;
the model editing interface displays at least two image type options, and determines the image type indicated by the selected image type option as the target image type of the target virtual image.
4. The action display method of an avatar model according to claim 2, wherein said bone is a bone of a hair region of said three-dimensional model;
the obtaining of the first attribute parameter corresponding to the target image type includes:
obtaining at least one candidate parameter group corresponding to the target image type, wherein the candidate parameter group comprises at least one attribute parameter, and one candidate parameter group corresponds to a hair style;
determining a candidate parameter group of which the corresponding hair style is matched with the hair style of the target virtual image as a target parameter group;
and determining the attribute parameters in the target parameter group as first attribute parameters corresponding to the target image type.
5. The action display method of an avatar model according to claim 1, wherein said displaying a dynamic effect of said three-dimensional model based on said action parameters comprises:
obtaining a fixation symbol associated with the bone, the fixation symbol for distinguishing between fixed and non-fixed regions in the bone;
and controlling the bones of the non-fixed areas in the three-dimensional model to present the dynamic effect based on the corresponding action parameters of the bones and the associated fixed symbols.
6. The method of displaying actions of an avatar model according to claim 1, further comprising, after said displaying the three-dimensional model of the target avatar and the at least one editing control in a model editing interface:
determining a target skeleton of which the dynamic effect needs to be adjusted;
and displaying at least one editing control corresponding to the target skeleton on the model editing interface.
7. An action display device of an avatar model, comprising:
the display unit is configured to display a three-dimensional model of a target virtual image and at least one editing control on a model editing interface, wherein the editing control is used for adjusting the attribute parameters of a skeleton of the three-dimensional model, and the attribute parameters of the skeleton are used for determining the dynamic effect of the three-dimensional model;
the adjusting unit is configured to, for any editing control, adjust the attribute parameters of the skeleton corresponding to the editing control based on a trigger operation on the editing control;
a parameter determination unit configured to determine an action parameter corresponding to a bone of the three-dimensional model based on the adjusted attribute parameter;
the display unit is configured to display a dynamic effect of the three-dimensional model based on the action parameter.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the action display method of the avatar model according to any one of claims 1 to 6.
9. A computer-readable storage medium, instructions of which, when executed by a processor of an electronic device, enable the electronic device to perform the action display method of the avatar model according to any one of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements a method of action display of an avatar model according to any of claims 1 to 6.
CN202011563647.7A 2020-12-25 2020-12-25 Action display method, device, electronic equipment and storage medium of virtual image model Pending CN112489174A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011563647.7A CN112489174A (en) 2020-12-25 2020-12-25 Action display method, device electronic equipment and storage medium of virtual image model


Publications (1)

Publication Number Publication Date
CN112489174A true CN112489174A (en) 2021-03-12

Family

ID=74915545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011563647.7A Pending CN112489174A (en) 2020-12-25 2020-12-25 Action display method, device electronic equipment and storage medium of virtual image model

Country Status (1)

Country Link
CN (1) CN112489174A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113362263A (en) * 2021-05-27 2021-09-07 百度在线网络技术(北京)有限公司 Method, apparatus, medium, and program product for changing the image of a virtual idol
CN114092675A (en) * 2021-11-22 2022-02-25 北京百度网讯科技有限公司 Image display method, image display device, electronic apparatus, and storage medium
CN114742984A (en) * 2022-04-14 2022-07-12 北京数字冰雹信息技术有限公司 Editing method and device of dynamic three-dimensional model

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130250050A1 (en) * 2012-03-23 2013-09-26 Objectvideo, Inc. Video surveillance systems, devices and methods with improved 3d human pose and shape modeling
CN108961365A (en) * 2017-05-19 2018-12-07 腾讯科技(深圳)有限公司 Three-dimensional object swinging method, device, storage medium and computer equipment
CN107705365A (en) * 2017-09-08 2018-02-16 郭睿 Editable three-dimensional (3D) human body model creation method, device, electronic equipment and computer program product
CN110111417A (en) * 2019-05-15 2019-08-09 浙江商汤科技开发有限公司 Generation method, device and equipment for a three-dimensional partial body model
CN111324250A (en) * 2020-01-22 2020-06-23 腾讯科技(深圳)有限公司 Three-dimensional image adjusting method, device and equipment and readable storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113362263A (en) * 2021-05-27 2021-09-07 百度在线网络技术(北京)有限公司 Method, apparatus, medium, and program product for changing the image of a virtual idol
CN113362263B (en) * 2021-05-27 2023-09-15 百度在线网络技术(北京)有限公司 Method, apparatus, medium and program product for transforming an image of a virtual idol
CN114092675A (en) * 2021-11-22 2022-02-25 北京百度网讯科技有限公司 Image display method, image display device, electronic apparatus, and storage medium
CN114742984A (en) * 2022-04-14 2022-07-12 北京数字冰雹信息技术有限公司 Editing method and device of dynamic three-dimensional model

Similar Documents

Publication Publication Date Title
KR102296906B1 (en) Virtual character generation from image or video data
CN112489174A (en) Action display method, device, electronic equipment and storage medium of virtual image model
US20230351663A1 (en) System and method for generating an avatar that expresses a state of a user
CN107657651B (en) Expression animation generation method and device, storage medium and electronic device
CN107180446B (en) Method and device for generating expression animation of character face model
CN112396679B (en) Virtual object display method and device, electronic equipment and medium
CN112634466B (en) Expression display method, device, equipment and storage medium of virtual image model
KR102491140B1 (en) Method and apparatus for generating virtual avatar
US9202312B1 (en) Hair simulation method
CN110555507B (en) Interaction method and device for virtual robot, electronic equipment and storage medium
CN109621419B (en) Game character expression generation device and method, and storage medium
KR20180071833A (en) Computer interface management system by 3D digital actor
US11978145B2 (en) Expression generation for animation object
CN110837294A (en) Facial expression control method and system based on eyeball tracking
CN112669422B (en) Simulated 3D digital person generation method and device, electronic equipment and storage medium
CN111968248A (en) Intelligent makeup method and device based on virtual image, electronic equipment and storage medium
CN111768478A (en) Image synthesis method and device, storage medium and electronic equipment
CN113633983A (en) Method, device, electronic equipment and medium for controlling expression of virtual character
CN111383642A (en) Voice response method based on neural network, storage medium and terminal equipment
CN114904268A (en) Virtual image adjusting method and device, electronic equipment and storage medium
CN111078005A (en) Virtual partner creating method and virtual partner system
Deng et al. Perceptually guided expressive facial animation
CN112802162A (en) Face adjustment method and device for virtual character, electronic device and storage medium
Wang et al. Hierarchical facial expression animation by motion capture data
EP4385592A1 (en) Computer-implemented method for controlling a virtual avatar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210312