WO2019041902A1 - Expression animation generation method and apparatus, storage medium, and electronic apparatus - Google Patents

Expression animation generation method and apparatus, storage medium, and electronic apparatus

Info

Publication number
WO2019041902A1
Authority
WO
WIPO (PCT)
Prior art keywords
control
target
model
bone
virtual object
Prior art date
Application number
PCT/CN2018/088150
Other languages
English (en)
French (fr)
Inventor
金英刚
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Priority to KR1020197029031A priority Critical patent/KR102338136B1/ko
Priority to JP2019549402A priority patent/JP7297359B2/ja
Publication of WO2019041902A1 publication Critical patent/WO2019041902A1/zh
Priority to US16/553,005 priority patent/US10872452B2/en
Priority to US17/068,675 priority patent/US11270489B2/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation
    • G06T 13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/60: Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • A63F 2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60: Methods for processing data by generating or executing the game program
    • A63F 2300/66: Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F 2300/6607: Methods for processing data by generating or executing the game program for rendering three dimensional images for animating game characters, e.g. skeleton kinematics
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30196: Human being; Person
    • G06T 2207/30201: Face

Definitions

  • The present invention relates to the field of computers, and in particular to an expression animation generation method and apparatus, a storage medium, and an electronic apparatus.
  • Embodiments of the present invention provide an expression animation generation method and apparatus, a storage medium, and an electronic apparatus, to solve at least the technical problem that generated expression animations are overly uniform because of the constraints of complicated generation operations.
  • According to one aspect, an expression animation generation method includes: acquiring an object model of a target virtual object; adapting the acquired object model to a target bone to obtain an object control model matching the target virtual object, where the object control model includes control vertices and control bones, each control bone controls a partial region of the object control model, and the control vertices are intersections of the control bones; adjusting the control vertices and/or control bones in the object control model according to an acquired adjustment instruction to obtain expression frames of the target virtual object; and generating an expression animation of the target virtual object from the expression frames.
  • Adjusting the control vertices and/or control bones according to the acquired adjustment instruction to obtain an expression frame includes: repeating the following steps until an expression frame of the target virtual object is obtained: determining, according to the acquired adjustment instruction, the target control vertices and/or target control bones to be adjusted in the object control model, and adjusting them.
  • Adjusting the target control vertices and/or target control bones includes: adjusting the control weight corresponding to each target control bone connected to a target control vertex, where the larger the control weight of a target control bone, the larger the region it controls in the object control model; and/or adjusting the display position of the target control bone.
  • Generating the expression animation of the target virtual object from the expression frames includes: acquiring multiple expression frames of the target virtual object, and generating the expression animation from the acquired frames in a predetermined order.
  • Adapting the acquired object model to the target bone to obtain a matching object control model includes: binding key points in the acquired object model to the target bone, and skinning the object model onto the target bone to obtain the object control model matching the target virtual object.
  • The target bone includes a universal facial skeleton, and the object model includes a facial model.
  • According to another aspect, an expression animation generation apparatus includes: a first acquiring unit, configured to acquire an object model of a target virtual object; an adapting unit, configured to adapt the acquired object model to a target bone to obtain an object control model matching the target virtual object, where the object control model includes control vertices and control bones, each control bone controls a partial region of the object control model, and the control vertices are intersections of the control bones; an adjusting unit, configured to adjust the control vertices and/or control bones in the object control model according to an acquired adjustment instruction to obtain expression frames of the target virtual object; and a generating unit, configured to generate an expression animation of the target virtual object from the expression frames.
  • The adjusting unit includes a processing module, configured to repeat the following steps until an expression frame of the target virtual object is obtained: determining, according to the acquired adjustment instruction, the target control vertices and/or target control bones to be adjusted in the object control model, and adjusting them.
  • The processing module includes: a first adjustment submodule, configured to adjust the control weight corresponding to each target control bone connected to a target control vertex, where the larger the control weight of a target control bone, the larger the region it controls in the object control model; and/or a second adjustment submodule, configured to adjust the display position of the target control bone.
  • The generating unit includes: an acquiring module, configured to acquire multiple expression frames of the target virtual object; and a generating module, configured to generate the expression animation of the target virtual object from the acquired frames in a predetermined order.
  • The adapting unit includes an adapting module, configured to bind the key points in the acquired object model to the target bone and skin the object model onto the target bone to obtain an object control model matching the target virtual object.
  • The target bone includes a universal facial skeleton, and the object model includes a facial model.
  • According to yet another aspect, an expression animation generation method applied to a terminal is further provided, in which the terminal performs the above expression animation generation method.
  • A storage medium is further provided, configured to store a program that, when run, performs the above expression animation generation method.
  • An electronic apparatus is further provided, including a memory and a processor, where the memory is configured to store a program and the processor is configured to execute the program stored in the memory, the processor performing the above expression animation generation method through the program.
  • A computer program product including instructions is further provided which, when run on a computer, causes the computer to perform the above expression animation generation method.
  • In the expression animation generation method, adapting the acquired object model of the target virtual object to the universal target bone yields an object control model matching the target virtual object; adjusting the control vertices and/or control bones included in the object control model adjusts and controls the model, producing expression frames of the target virtual object, from which the expression animation is generated.
  • By adapting the object model of the target virtual object to the universal target bone and directly adjusting the control bones and/or control vertices of the resulting object control model, the invention obtains rich and varied expression frames, so that diverse, vivid, and realistic expression animations can be generated for the target virtual object, overcoming the problem that expression animations generated in the related art are overly uniform.
  • The invention adapts a universal target bone to the object models of different target virtual objects, obtains an object control model for each target virtual object, and widens the range of target virtual objects for which expression animations can be generated; rich and varied expression animations can be generated for different target virtual objects, which broadens the applicability of the expression animation generation method and improves its versatility.
  • FIG. 1 is a schematic diagram of an application environment of an optional expression animation generation method according to an embodiment of the present invention;
  • FIG. 2 is a flowchart of an optional expression animation generation method according to an embodiment of the present invention;
  • FIG. 3 is a schematic diagram of an effect of an optional expression animation generation method according to an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of an effect of another optional expression animation generation method according to an embodiment of the present invention;
  • FIG. 5 is a flowchart of another optional expression animation generation method according to an embodiment of the present invention;
  • FIG. 6 is a schematic diagram of an operation interface of an optional expression animation generation method according to an embodiment of the present invention;
  • FIG. 7 is a schematic diagram of an operation interface of another optional expression animation generation method according to an embodiment of the present invention;
  • FIG. 8 is a schematic diagram of an optional expression animation generation apparatus according to an embodiment of the present invention;
  • FIG. 9 is a schematic diagram of an optional electronic apparatus according to an embodiment of the present invention.
  • The expression animation generation method may be, but is not limited to being, applied to the application environment shown in FIG. 1. An expression animation editing application runs on the terminal 102, and the object model of the target virtual object is acquired in this application.
  • The object model is adapted to the target bone to obtain an object control model matching the target virtual object, where the object control model includes control vertices and control bones, the control vertices are intersections of the control bones, and each control bone controls a partial region of the object control model. The control vertices and/or control bones in the object control model are adjusted according to an acquired adjustment instruction to obtain expression frames of the target virtual object, and the expression frames are then used to generate an expression animation of the target virtual object.
  • In this embodiment, the acquired object model of the target virtual object is adapted to the universal target bone to obtain a matching object control model, so that adjusting the control vertices and/or control bones included in that model adjusts and controls it, yielding the expression frames from which the expression animation is generated. That is, by adapting the object model to the universal target bone and directly adjusting the control bones and/or control vertices of the resulting object control model, rich and varied expression frames are obtained, so that diverse, vivid, and realistic expression animations can be generated for the target virtual object, overcoming the problem that expression animations generated in the related art are overly uniform.
  • The universal target bone is adapted to the object models of different target virtual objects, yielding an object control model for each and widening the range of target virtual objects for which expression animations can be generated. By adjusting the control bones and/or control vertices in the corresponding object control model, rich and varied expression animations are generated for any target virtual object, which broadens the applicability of the method and improves its versatility.
  • The terminal 102 may include, but is not limited to, at least one of the following: a mobile phone, a tablet computer, a notebook computer, a desktop PC, and other hardware devices for generating expression animations. The above is only an example, and this embodiment does not impose any limitation thereon.
  • According to an embodiment of the present invention, an expression animation generation method is provided. As shown in FIG. 2, the method includes:
  • S202: Acquire an object model of a target virtual object;
  • S204: Adapt the acquired object model to a target bone to obtain an object control model matching the target virtual object, where the object control model includes control vertices and control bones, each control bone controls a partial region of the object control model, and the control vertices are intersections of the control bones;
  • S206: Adjust the control vertices and/or control bones in the object control model according to an acquired adjustment instruction to obtain an expression frame of the target virtual object;
  • S208: Generate an expression animation of the target virtual object from the expression frames.
  • The expression animation generation method may be applied to, but is not limited to, applications that need to edit expression animations, for example game applications, where the target virtual object is a virtual game character object in the game application and multiple different expression animations are generated for it by the above method.
  • An object model of the game character object (such as its three-dimensional model) is acquired and adapted to the universal target bone to obtain an object control model matching the game character object; the control vertices and/or control bones in the object control model are adjusted to obtain the corresponding expression frames, which are then used to generate the expression animation of the game character object.
  • The control bones and/or control vertices in the object control model obtained by the adaptation are adjusted directly to obtain rich and varied expression frames, achieving the goal of generating diverse, vivid, and realistic expression animations for the target virtual object and overcoming the problem that expression animations generated in the related art are overly uniform.
  • The universal target bone is adapted to the object models of different target virtual objects, yielding an object control model for each and widening the range of target virtual objects for which expression animations can be generated.
  • The target bone may be, but is not limited to, a single universal set of bones.
  • The object models of different target virtual objects can be, but are not limited to being, adapted to this universal target bone, yielding an object control model for each target virtual object, so that adjusting the control vertices and/or control bones of an object control model adjusts and controls the expression frames of the target virtual object, making rich and varied expression animations easy to obtain.
  • For example, facial models of different face shapes may be adapted to a universal facial skeleton, so that the universal facial bones drive the generation of expression animations for the different face shapes, enriching the variety of expression animations and making the generation process universal.
  • The control vertices and/or control bones in the object control model may be, but are not limited to, the key elements, within the object control model obtained by adapting the object model to the target bone, that control the expression frames of the target virtual object. By adjusting the control vertices and/or control bones in the object control model matching the target virtual object, different expression animations can be generated for that target virtual object.
  • The expression animation may be, but is not limited to, a dynamic, continuously changing expression generated from N expression frames in a predetermined order.
  • The target bone may include, but is not limited to, a universal full-body skeleton and a universal facial skeleton. That is, the generated expression animation may consist of whole-body motion or of facial-feature movement, which is not limited in this embodiment.
  • Adjusting the control vertices and/or control bones in the object control model according to the acquired adjustment instruction to obtain an expression frame of the target virtual object includes: repeating the following steps until the expression frame is obtained: determining, according to the adjustment instruction, the target control vertices and/or target control bones to be adjusted in the object control model, and adjusting them.
  • Adjusting the target control vertices and/or target control bones includes, but is not limited to: adjusting the control weight corresponding to each target control bone connected to a target control vertex, where a larger control weight means the bone controls a larger region in the object control model; and/or adjusting the display position of the target control bone.
  • For example, taking a universal facial skeleton as the target bone, the corresponding object control model is a face control model, the control bones are face control bones, and the control vertices are face control vertices.
  • The extent of the region each face control bone controls in the object control model is determined by adjusting the control weights corresponding to the face control bones connected to the face control vertices shown in FIG. 3; the larger the control weight, the larger the controlled region.
  • Adapting the acquired object model of the target virtual object to the target bone to obtain a matching object control model includes: binding key points in the acquired object model to the target bone, and skinning the object model onto the target bone.
  • Each control bone controls a certain partial region of the object control model, forming the different facial features such as nose shape, mouth shape, eyes, and forehead.
  • The same universal set of target bones is adapted to the object models of different target virtual objects, and the control vertices and control bones of the resulting object control models are adjusted to change their facial features, obtaining different expressions of the target virtual objects.
  • The control weights of the control bones connected to each control vertex determine how strongly each control bone controls that vertex. For example, as shown in FIG. 4, around the bones at the eye positions in eye 1 and eye 2, dark control vertices indicate strong control and light control vertices indicate weak control, showing the control range of the bones around the eyes. Suppose eye 1 and eye 2 are matched to the same bones: the initial object model of eye 1 is large, that of eye 2 is small, and the two eyes currently blink with the same amplitude under the same bone movement. To fit both eyes with the same eye bones, the control weights of the control vertices can be adjusted so that the generated expressions generalize.
  • Taking the generation of a facial expression animation for a game character object in a game application as an example, the expression animation generation process may be as follows:
  • S502: Model creation. The two-dimensional model of the game character object is converted into a three-dimensional object model with the editing tools of the 3DSMAX software, which is application software for creating 3D models, animations, special effects, and the like.
  • S504: Model splitting. The created object model is split into two parts: a head object model and a body object model.
  • S506: Bone binding. The split object models are adapted to the target bone.
  • S508: Skinning adaptation for universality. The object model is skinned onto the target bone to obtain the object control model; moving and rotating the control bones of the object control model and adjusting the control weights of the control vertices achieves the adaptation.
  • S510: Animation creation. In 3DSMAX, the multiple expression frames obtained by adjusting the object control model are used to generate a dynamic expression animation.
  • S512: Head-body splitting and FBX output. The finished expression animation is split into a head animation and a body animation, output as two FBX files. The expression animation may include, but is not limited to, facial expression animations and whole-body expression animations; different body poses may also be included in an object's expression animation. The above is only an example, and this embodiment does not impose any limitation thereon.
  • S514: State machine construction and integration. In the Unity engine, an animation state machine with the effect shown in FIG. 6 is built, and the generated expression animations are integrated and output into the corresponding state machine, with the effect shown in FIG. 7.
  • The object model of the target virtual object is adapted to the universal target bone, and the control bones and/or control vertices of the resulting object control model are adjusted directly to obtain rich and varied expression frames, achieving the goal of generating diverse, vivid, and realistic expression animations and overcoming the problem that expression animations generated in the related art are overly uniform. The universal target bone is adapted to the object models of different target virtual objects, yielding an object control model for each and widening the range of target virtual objects for which expression animations can be generated.
  • The control vertices and/or control bones in the object control model are adjusted according to different adjustment instructions to obtain each required expression frame. After multiple expression frames have been acquired, they can be combined into a dynamic expression animation generated in a predetermined order.
  • For example, the target control bones to be adjusted in the object control model matching the game character object may be determined according to the adjustment instruction, and the display positions of those bones on the character's face then adjusted directly to obtain the adjusted expression frame.
  • For another example, the target control vertex to be adjusted in the object control model matching the game character object may be determined according to the adjustment instruction, and the control weights corresponding to the target control bones connected to that vertex then adjusted directly.
  • Suppose the target control vertex is connected to four target control bones, bone A through bone D, with control weights of 0.1, 0.2, 0.3, and 0.4, respectively. Bone D has the largest control weight, controls the largest region, and exerts the strongest control, decreasing in turn from bone C down to bone A. When the display positions of the bones are adjusted, the regions they control change to different degrees, yielding the adjusted expression frame.
  • Adjusting the control weights on the target control bones and/or target control vertices thus makes it possible to acquire rich and varied expression frames, giving the target virtual object all kinds of expression animations and making them more realistic and vivid.
  • Generating the expression animation of the target virtual object from the expression frames includes: acquiring multiple expression frames of the target virtual object, and generating the expression animation from them in a predetermined order.
  • The expression animation may be, but is not limited to, a dynamic, continuously changing expression generated from multiple expression frames in a predetermined order.
  • For example, if the frames are frame A (mouth closed), frame B (mouth corners raised), and frame C (mouth open, teeth showing), a "laughing" expression animation matching the target virtual object (for example, a game character) can be generated in the predetermined order.
  • Generating the expression animation of the target virtual object from the acquired frames in a predetermined order makes it possible to generate rich and varied expression animations for different target virtual objects, making the generation process universal.
  • Adapting the acquired object model of the target virtual object to the target bone to obtain a matching object control model includes:
  • S1: Bind the key points in the acquired object model to the target bone, and skin the object model onto the target bone to obtain an object control model matching the target virtual object.
  • The key points in the object model may be, but are not limited to, key positions in the object model.
  • Taking a facial model as an example, the key positions may include, but are not limited to, the positions of the facial features (eyes, eyebrows, nose, mouth, ears). The key points in the object model are bound to the corresponding positions on the universal target bone, and the model is then skinned onto the target bone to obtain the object control model matching the target virtual object, which includes the control vertices and control bones used to adjust and control the target virtual object.
  • The following operations are then repeated: determining the target control vertices and/or target control bones to be adjusted; through repeated adjustments, multiple expression frames are obtained, so that a continuously changing expression animation matching the target virtual object can be generated.
  • Adapting the universal target bone to the object models of different target virtual objects yields an object control model for each and widens the range of target virtual objects for which expression animations can be generated, so that rich and varied expression animations can be generated for different target virtual objects, improving the versatility of the expression animation generation method.
  • The method according to the above embodiments can be implemented by software plus the necessary universal hardware platform, and certainly also by hardware, but in many cases the former is the better implementation.
  • The technical solution of the present invention, essentially or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc), including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention.
  • An expression animation generation apparatus for implementing the above expression animation generation method is further provided.
  • The apparatus includes:
  • a first acquiring unit 802, configured to acquire an object model of a target virtual object;
  • an adapting unit 804, configured to adapt the acquired object model to a target bone to obtain an object control model matching the target virtual object, where the object control model includes control vertices and control bones, each control bone controls a partial region of the object control model, and the control vertices are intersections of the control bones;
  • an adjusting unit 806, configured to adjust the control vertices and/or control bones in the object control model according to an acquired adjustment instruction to obtain an expression frame of the target virtual object;
  • a generating unit 808, configured to generate an expression animation of the target virtual object from the expression frames.
  • The adjusting unit 806 includes:
  • a processing module, configured to repeat the following steps until an expression frame of the target virtual object is obtained: determining, according to the acquired adjustment instruction, the target control vertices and/or target control bones to be adjusted in the object control model, and adjusting them.
  • The processing module includes:
  • a first adjustment submodule, configured to adjust the control weight corresponding to each target control bone connected to a target control vertex, where a larger control weight means the target control bone controls a larger region in the object control model; and/or
  • a second adjustment submodule, configured to adjust the display position of the target control bone.
  • The generating unit 808 includes:
  • an acquiring module, configured to acquire multiple expression frames of the target virtual object;
  • a generating module, configured to generate the expression animation of the target virtual object from the acquired frames in a predetermined order.
  • The adapting unit 804 includes:
  • an adapting module, configured to bind the key points in the acquired object model to the target bone and skin the object model onto the target bone to obtain an object control model matching the target virtual object.
  • An electronic apparatus for implementing the above expression animation generation is further provided.
  • The electronic apparatus includes a memory 902, a processor 904, a computer program stored in the memory and executable on the processor, and a communication interface 906 for transmission:
  • the communication interface 906, configured to acquire an object model of the target virtual object;
  • the processor 904, connected to the communication interface 906, configured to adapt the acquired object model to the target bone to obtain an object control model matching the target virtual object, where the object control model includes control vertices and control bones, each control bone controls a partial region of the object control model, and the control vertices are intersections of the control bones; further configured to adjust the control vertices and/or control bones in the object control model according to the acquired adjustment instruction to obtain an expression frame of the target virtual object; and further configured to generate an expression animation of the target virtual object from the expression frames;
  • the memory 902, connected to the communication interface 906 and the processor 904, configured to store the expression animation of the target virtual object.
  • Embodiments of the present invention also provide a storage medium, which may be located on at least one of multiple network devices in a network.
  • The storage medium is arranged to store program code for performing the following steps:
  • S1: Acquire an object model of a target virtual object;
  • S2: Adapt the acquired object model to a target bone to obtain an object control model matching the target virtual object, where the object control model includes control vertices and control bones, each control bone controls a partial region of the object control model, and the control vertices are intersections of the control bones;
  • S3: Adjust the control vertices and/or control bones in the object control model according to an acquired adjustment instruction to obtain an expression frame of the target virtual object;
  • S4: Generate an expression animation of the target virtual object from the expression frames.
  • The storage medium is further arranged to store program code for: repeating the following steps until an expression frame of the target virtual object is obtained: determining, according to the acquired adjustment instruction, the target control vertices and/or target control bones to be adjusted, and adjusting them.
  • The storage medium is further arranged to store program code for: adjusting the control weight corresponding to each target control bone connected to the target control vertex, where a larger control weight means a larger controlled region; and/or adjusting the display position of the target control bone.
  • The storage medium is further arranged to store program code for: acquiring multiple expression frames of the target virtual object, and generating the expression animation of the target virtual object from the acquired frames in a predetermined order.
  • The storage medium is further arranged to store program code for: binding the key points in the acquired object model to the target bone, and skinning the object model onto the target bone to obtain an object control model matching the target virtual object.
  • The storage medium may include, but is not limited to, various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
  • Embodiments of the present invention also provide a computer program product including instructions that, when run on a computer, cause the computer to perform the expression animation generation described in any of the above embodiments.
  • If the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in the above computer-readable storage medium.
  • Based on this understanding, the technical solution of the present invention, essentially or in the part contributing to the prior art, or all or part of it, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention.
  • In the several embodiments provided in this application, it should be understood that the disclosed client may be implemented in other manners.
  • The apparatus embodiments described above are merely illustrative: the division of units is only a division by logical function, and other divisions are possible in practice; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • The mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, units, or modules, and may be electrical or in other forms.
  • Units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • The functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention discloses an expression animation generation method and apparatus, a storage medium, and an electronic apparatus. The method includes: acquiring an object model of a target virtual object; adapting the acquired object model to a target bone to obtain an object control model matching the target virtual object, where the object control model includes control vertices and control bones, each control bone controls a partial region of the object control model, and the control vertices are intersections of the control bones; adjusting the control vertices and/or control bones in the object control model according to an acquired adjustment instruction to obtain expression frames of the target virtual object; and generating an expression animation of the target virtual object from the expression frames. The present invention solves the technical problem that generated expression animations are overly uniform because of the constraints of complicated generation operations.

Description

Expression animation generation method and apparatus, storage medium, and electronic apparatus
This application claims priority to Chinese Patent Application No. 201710752994.6, filed with the Chinese Patent Office on August 28, 2017 and entitled "Expression animation generation method and apparatus, storage medium, and electronic apparatus", which is incorporated herein by reference in its entirety.
Technical Field
The present invention relates to the field of computers, and in particular to an expression animation generation method and apparatus, a storage medium, and an electronic apparatus.
Background
Virtual applications often feature many virtual characters with different appearances. To give different virtual characters rich and varied expression animations, the common approach in the related art is to change or adjust the facial features of each virtual character by locally scaling a model or blending multiple models, so as to generate different expression animations for each character.
However, generating expression animations for virtual characters in this way usually requires developers to tune the model manually many times before the desired expression animation is obtained. Constrained by these complicated generation operations, the generated expression animations end up overly uniform.
No effective solution to the above problem has yet been proposed.
Summary
Embodiments of the present invention provide an expression animation generation method and apparatus, a storage medium, and an electronic apparatus, to solve at least the technical problem that generated expression animations are overly uniform because of the constraints of complicated generation operations.
According to one aspect of the embodiments of the present invention, an expression animation generation method is provided, including: acquiring an object model of a target virtual object; adapting the acquired object model to a target bone to obtain an object control model matching the target virtual object, where the object control model includes control vertices and control bones, each control bone controls a partial region of the object control model, and the control vertices are intersections of the control bones; adjusting the control vertices and/or control bones in the object control model according to an acquired adjustment instruction to obtain expression frames of the target virtual object; and generating an expression animation of the target virtual object from the expression frames.
Optionally, adjusting the control vertices and/or the control bones in the object control model according to the acquired adjustment instruction to obtain an expression frame of the target virtual object includes:
repeating the following steps until an expression frame of the target virtual object is obtained:
determining, according to the acquired adjustment instruction, the target control vertices and/or target control bones to be adjusted in the object control model; and
adjusting the target control vertices and/or target control bones.
Optionally, adjusting the target control vertices and/or target control bones includes:
adjusting the control weight corresponding to each target control bone connected to a target control vertex, where the larger the control weight of a target control bone, the larger the region it controls in the object control model; and/or
adjusting the display position of the target control bone.
Optionally, generating the expression animation of the target virtual object from the expression frames includes:
acquiring multiple expression frames of the target virtual object; and
generating the expression animation of the target virtual object from the acquired expression frames in a predetermined order.
Optionally, adapting the acquired object model to the target bone to obtain an object control model matching the target virtual object includes:
binding key points in the acquired object model to the target bone, and skinning the object model onto the target bone to obtain the object control model matching the target virtual object.
Optionally, the target bone includes a universal facial skeleton, and the object model includes a facial model.
According to another aspect of the embodiments of the present invention, an expression animation generation apparatus is further provided, including: a first acquiring unit, configured to acquire an object model of a target virtual object; an adapting unit, configured to adapt the acquired object model to a target bone to obtain an object control model matching the target virtual object, where the object control model includes control vertices and control bones, each control bone controls a partial region of the object control model, and the control vertices are intersections of the control bones; an adjusting unit, configured to adjust the control vertices and/or control bones in the object control model according to an acquired adjustment instruction to obtain expression frames of the target virtual object; and a generating unit, configured to generate an expression animation of the target virtual object from the expression frames.
Optionally, the adjusting unit includes:
a processing module, configured to repeat the following steps until an expression frame of the target virtual object is obtained: determining, according to the acquired adjustment instruction, the target control vertices and/or target control bones to be adjusted in the object control model; and adjusting the target control vertices and/or target control bones.
Optionally, the processing module includes:
a first adjustment submodule, configured to adjust the control weight corresponding to each target control bone connected to a target control vertex, where the larger the control weight of a target control bone, the larger the region it controls in the object control model; and/or
a second adjustment submodule, configured to adjust the display position of the target control bone.
Optionally, the generating unit includes:
an acquiring module, configured to acquire multiple expression frames of the target virtual object; and
a generating module, configured to generate the expression animation of the target virtual object from the acquired expression frames in a predetermined order.
Optionally, the adapting unit includes:
an adapting module, configured to bind key points in the acquired object model to the target bone and skin the object model onto the target bone to obtain an object control model matching the target virtual object.
Optionally, the target bone includes a universal facial skeleton, and the object model includes a facial model.
According to yet another aspect of the embodiments of the present invention, an expression animation generation method applied to a terminal is further provided, the method including: performing, by the terminal, the above expression animation generation method.
According to yet another aspect of the embodiments of the present invention, a storage medium is further provided, configured to store a program, where the program, when run, performs the above expression animation generation method.
According to yet another aspect of the embodiments of the present invention, an electronic apparatus is further provided, including a memory and a processor;
where the memory is configured to store a program and the processor is configured to execute the program stored in the memory;
the processor performing the above expression animation generation method through the program.
According to yet another aspect of the embodiments of the present invention, a computer program product including instructions is further provided which, when run on a computer, causes the computer to perform the above expression animation generation method.
In the expression animation generation method provided by the present invention, the acquired object model of the target virtual object is adapted to the universal target bone to obtain an object control model matching the target virtual object; adjusting the control vertices and/or control bones included in the object control model adjusts and controls the model, yielding expression frames of the target virtual object, from which the expression animation of the target virtual object is generated.
By adapting the object model of the target virtual object to the universal target bone and directly adjusting the control bones and/or control vertices in the resulting object control model, the present invention obtains rich and varied expression frames, achieving the goal of generating diverse, vivid, and realistic expression animations for the target virtual object and thereby overcoming the problem that expression animations generated in the related art are overly uniform.
The present invention adapts the universal target bone to the object models of different target virtual objects, obtains an object control model for each target virtual object, and widens the range of target virtual objects for which expression animations can be generated; for any target virtual object, rich and varied expression animations can be generated by adjusting the control bones and/or control vertices in the corresponding object control model, which broadens the applicability of the expression animation generation method and improves its versatility.
Brief Description of the Drawings
The accompanying drawings described herein are used to provide further understanding of the present invention and form a part of this application. The exemplary embodiments of the present invention and their descriptions are used to explain the present invention and do not constitute an improper limitation on the present invention. In the drawings:
FIG. 1 is a schematic diagram of an application environment of an optional expression animation generation method according to an embodiment of the present invention;
FIG. 2 is a flowchart of an optional expression animation generation method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an effect of an optional expression animation generation method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an effect of another optional expression animation generation method according to an embodiment of the present invention;
FIG. 5 is a flowchart of another optional expression animation generation method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an operation interface of an optional expression animation generation method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an operation interface of another optional expression animation generation method according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of an optional expression animation generation apparatus according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of an optional electronic apparatus according to an embodiment of the present invention.
Detailed Description
To help those skilled in the art better understand the solutions of the present invention, the following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second", and so on in the specification, claims, and accompanying drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of the present invention described herein can be implemented in orders other than those illustrated or described herein. Moreover, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the listed steps or units, but may include other steps or units that are not listed or that are inherent to the process, method, product, or device.
An embodiment of the above expression animation generation method is provided in the embodiments of the present invention. As an optional implementation, the expression animation generation method may be, but is not limited to being, applied to the application environment shown in FIG. 1. An expression animation editing application runs on the terminal 102. In this application, an object model of a target virtual object is acquired and adapted to the target bone to obtain an object control model matching the target virtual object, where the object control model includes control vertices and control bones, the control vertices are intersections of the control bones, and each control bone controls a partial region of the object control model. The control vertices and/or control bones in the object control model are adjusted according to an acquired adjustment instruction to obtain expression frames of the target virtual object, and the expression frames are then used to generate an expression animation of the target virtual object.
In this embodiment, the acquired object model of the target virtual object is adapted to the universal target bone to obtain an object control model matching the target virtual object, so that adjusting the control vertices and/or control bones included in the object control model adjusts and controls the model, yielding expression frames of the target virtual object, from which the expression animation is generated. In other words, by adapting the object model of the target virtual object to the universal target bone and directly adjusting the control bones and/or control vertices in the resulting object control model, rich and varied expression frames are obtained, so that diverse, vivid, and realistic expression animations can be generated for the target virtual object, overcoming the problem in the related art that generated expression animations are overly uniform. Adapting the universal target bone to the object models of different target virtual objects yields an object control model for each target virtual object and widens the range of target virtual objects for which expression animations can be generated: for any target virtual object, rich and varied expression animations can be generated by adjusting the control bones and/or control vertices in the corresponding object control model, which broadens the applicability of the expression animation generation method and improves its versatility.
Optionally, in this embodiment, the terminal 102 may include, but is not limited to, at least one of the following: a mobile phone, a tablet computer, a notebook computer, a desktop PC, and other hardware devices for generating expression animations. The above is only an example, and this embodiment does not impose any limitation thereon.
According to an embodiment of the present invention, an expression animation generation method is provided. As shown in FIG. 2, the method includes:
S202: Acquire an object model of a target virtual object;
S204: Adapt the acquired object model to a target bone to obtain an object control model matching the target virtual object, where the object control model includes control vertices and control bones, each control bone controls a partial region of the object control model, and the control vertices are intersections of the control bones;
S206: Adjust the control vertices and/or control bones in the object control model according to an acquired adjustment instruction to obtain an expression frame of the target virtual object;
S208: Generate an expression animation of the target virtual object from the expression frames.
Optionally, in this embodiment, the expression animation generation method may be, but is not limited to being, applied to applications that need to edit expression animations, for example game applications. Taking a virtual game character object in a game application as the target virtual object, multiple different expression animations are generated for the game character object by the above method: the object model of the game character object (for example, its three-dimensional model) is acquired and adapted to the universal target bone to obtain an object control model matching the game character object; the control vertices and/or control bones in the object control model are then adjusted to obtain the corresponding expression frames, from which the expression animation of the game character object is generated. The above is only an example, and this embodiment does not impose any limitation thereon.
It should be noted that adapting the object model of the target virtual object to the universal target bone and directly adjusting the control bones and/or control vertices in the resulting object control model yields rich and varied expression frames, so that diverse, vivid, and realistic expression animations can be generated for the target virtual object, overcoming the problem in the related art that generated expression animations are overly uniform. Adapting the universal target bone to the object models of different target virtual objects yields an object control model for each target virtual object and widens the range of target virtual objects for which expression animations can be generated; for any target virtual object, adjusting the control bones and/or control vertices in the corresponding object control model generates rich and varied expression animations, which broadens the applicability of the method and improves its versatility.
Optionally, in this embodiment, the target bone may be, but is not limited to, a single universal set of bones. That is, the object models of different target virtual objects can all be adapted to this universal target bone, yielding an object control model for each target virtual object, so that adjusting the control vertices and/or control bones of an object control model adjusts and controls the expression frames of the corresponding target virtual object, making it easy to obtain rich and varied expression animations. For example, taking a universal facial skeleton as the target bone, facial models of different face shapes may be adapted to this universal facial skeleton, so that it can drive the generation of expression animations for the different face shapes, enriching the variety of expression animations and making the generation process universal.
Optionally, in this embodiment, the control vertices and/or control bones in the object control model may be, but are not limited to, the key elements, within the object control model obtained by adapting the object model to the target bone, that control the expression frames of the target virtual object. By adjusting the control vertices and/or control bones in the object control model matching the target virtual object, different expression animations can be generated for that target virtual object.
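The patent does not prescribe a concrete data layout for the object control model. Purely as a non-authoritative sketch of the concepts just described (all class and field names are hypothetical), such a model could be represented as follows:

```python
from dataclasses import dataclass, field

@dataclass
class ControlBone:
    name: str                             # e.g. "brow_L", "mouth_corner_R" (invented names)
    position: tuple[float, float, float]  # the bone's display position on the model

@dataclass
class ControlVertex:
    # A control vertex sits at an intersection of control bones and stores
    # one control weight per connected bone; a larger weight means that bone
    # controls a larger region around this vertex.
    position: tuple[float, float, float]
    weights: dict[str, float] = field(default_factory=dict)  # bone name -> weight

@dataclass
class ObjectControlModel:
    bones: dict[str, ControlBone]
    vertices: list[ControlVertex]
```

Adjustment instructions would then act on this structure either by editing a bone's `position` or by editing the `weights` entries of a vertex.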
In addition, it should be noted that the expression animation may be, but is not limited to, a dynamic, continuously changing expression generated from N expression frames in a predetermined order. The above is only an example, and this embodiment does not impose any limitation thereon.
It should be noted that the target bone may include, but is not limited to, a universal full-body skeleton and a universal facial skeleton. That is, the generated expression animation may consist of whole-body motion or of facial-feature movement, which is not limited in this embodiment.
As an optional solution, adjusting the control vertices and/or control bones in the object control model according to the acquired adjustment instruction to obtain an expression frame of the target virtual object includes:
S1: Repeat the following steps until an expression frame is obtained:
S12: Determine, according to the adjustment instruction, the target control vertices and/or target control bones to be adjusted in the object control model;
S14: Adjust the target control vertices and/or target control bones.
Optionally, in this embodiment, S14, adjusting the target control vertices and/or target control bones, includes but is not limited to:
1) adjusting the control weight corresponding to each target control bone connected to a target control vertex, where the larger the control weight of a target control bone, the larger the region it controls in the object control model; and/or
2) adjusting the display position of the target control bone.
For example, as shown in FIG. 3, taking a universal facial skeleton as the target bone, the corresponding object control model is a face control model, the control bones are face control bones, and the control vertices are face control vertices. The control weights corresponding to the face control bones connected to the face control vertices shown in FIG. 3 are adjusted to determine the extent of the region each face control bone controls in the object control model; the larger the control weight, the larger the controlled region. In this embodiment, the display positions of the face control bones may also be adjusted to change how the facial features appear on the face and obtain the corresponding expression frame. For example, the display position of the brow bone (the control bone of the eyebrow) shown in FIG. 3 is adjusted to change where the eyebrow appears on the face, producing an adjusted expression frame; after multiple expression frames are acquired and combined, the corresponding dynamic expression animation is generated.
Optionally, in this embodiment, adapting the acquired object model of the target virtual object to the target bone to obtain an object control model matching the target virtual object includes: binding key points in the acquired object model to the target bone, and skinning the object model onto the target bone to obtain the object control model matching the target virtual object.
It should be noted that, in this embodiment, each control bone controls a certain partial region of the object control model, forming the different facial features, such as nose shape, mouth shape, eyes, and forehead. In this embodiment, the same universal set of target bones is adapted to the object models of different target virtual objects, and the control vertices and control bones in the resulting object control models are adjusted to change the facial features of each object control model, obtaining different expressions of the target virtual objects.
The control weights of the control bones connected to each control vertex determine how strongly each control bone controls that vertex. For example, as shown in FIG. 4, around the bones at the eye positions in eye 1 and eye 2, dark control vertices indicate strong control and light control vertices indicate weak control, which shows the control range of the bones around the eyes. Suppose eye 1 and eye 2 are matched to the same bones, the initial object model of eye 1 is large, the initial object model of eye 2 is small, and the two eyes currently blink with the same amplitude under the same bone movement. To adapt eye 1 and eye 2 with the same eye bones, the control weights of the control vertices can be adjusted so that the generated expressions generalize.
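A minimal sketch of the idea behind this eye example, assuming a simple linear relationship between bone displacement, vertex weight, and vertex movement (the linear scaling rule and the weight values are assumptions for illustration, not figures given in the patent):

```python
def vertex_offset(bone_delta, weight):
    # A vertex follows a bone's movement in proportion to its control weight.
    return tuple(weight * d for d in bone_delta)

# The same eyelid-bone movement is applied to both eyes.
bone_delta = (0.0, -1.0, 0.0)   # eyelid bone moves down by one unit

# Hypothetical weights: the smaller eye (eye 2) is given lower weights so the
# same bone motion produces a proportionally smaller blink than on eye 1.
eye1_weight, eye2_weight = 1.0, 0.6
print(vertex_offset(bone_delta, eye1_weight))  # (0.0, -1.0, 0.0)
print(vertex_offset(bone_delta, eye2_weight))  # (0.0, -0.6, 0.0)
```

This is how one universal skeleton can drive faces of different proportions: the bones move identically, and per-vertex weights absorb the difference in model size.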
A concrete description follows with reference to the example shown in FIG. 5, again taking the generation of facial expression animations for a game character object in a game application as an example. The expression animation generation process may be as follows:
S502: Model creation. Using the editing tools in the 3DSMAX software, the two-dimensional model of the game character object is converted into a three-dimensional object model. 3DSMAX is application software for creating 3D models, animations, special effects, and the like.
S504: Model splitting. The created object model is split into two parts: a head object model and a body object model.
S506: Bone binding. The split object models are adapted to the target bone.
S508: Skinning adaptation for universality. The object model is skinned onto the target bone to obtain the object control model; moving and rotating the control bones of the object control model and adjusting the control weights of the control vertices achieves the adaptation.
S510: Animation creation. In 3DSMAX, the multiple expression frames obtained by adjusting the object control model are used to generate a dynamic expression animation.
S512: Head-body splitting and FBX output. The finished expression animation is split into a head animation and a body animation, output as two FBX files. It should be noted that the expression animation may include, but is not limited to, facial expression animations and whole-body expression animations; that is, different body poses may also be part of an object's expression animation. The above is only an example, and this embodiment does not impose any limitation thereon.
S514: State machine construction and integration. In the Unity engine, an animation state machine with the effect shown in FIG. 6 is built, and the generated expression animations are integrated and output into the corresponding state machine, with the effect shown in FIG. 7.
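The patent builds the state machine with Unity's own tooling and does not list its states or transitions. Purely as an illustration of the concept in S514 (states, events, and transitions below are invented), an expression state machine can be reduced to a transition table plus a current state:

```python
# Hypothetical expression state machine: states are expression animations,
# transitions are triggered by gameplay events (all names are invented).
TRANSITIONS = {
    ("idle", "greet"): "smile",
    ("smile", "win"): "laugh",
    ("laugh", "timeout"): "idle",
    ("idle", "hit"): "frown",
    ("frown", "timeout"): "idle",
}

class ExpressionStateMachine:
    def __init__(self, initial="idle"):
        self.state = initial

    def fire(self, event):
        # Stay in the current state if no transition matches the event.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

sm = ExpressionStateMachine()
assert sm.fire("greet") == "smile"
assert sm.fire("win") == "laugh"
assert sm.fire("timeout") == "idle"
```

Each state would play the corresponding expression animation exported in S512, and events coming from the game drive the switches between expressions.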
Through the embodiments provided in this application, adapting the object model of the target virtual object to the universal target bone and directly adjusting the control bones and/or control vertices in the resulting object control model yields rich and varied expression frames, achieving the goal of generating diverse, vivid, and realistic expression animations for the target virtual object and overcoming the problem in the related art that generated expression animations are overly uniform. Adapting the universal target bone to the object models of different target virtual objects yields an object control model for each target virtual object and widens the range of target virtual objects for which expression animations can be generated; for any target virtual object, adjusting the control bones and/or control vertices in the corresponding object control model generates rich and varied expression animations, which broadens the applicability of the method and improves its versatility.
It should be noted that, in this embodiment, the control vertices and/or control bones in the object control model are adjusted according to different adjustment instructions to obtain each required expression frame. After multiple expression frames have been acquired, they can be combined into a dynamic expression animation generated in a predetermined order.
For example, again taking the generation of facial expression animations for a game character object in a game application, in the game engine used for editing, the target control bones to be adjusted in the object control model matching the game character object may be determined according to the adjustment instruction, and the display positions of those target control bones on the character's face then adjusted directly to obtain the adjusted expression frame.
For another example, still in the editing game engine, the target control vertex to be adjusted in the object control model matching the game character object may be determined according to the adjustment instruction, and the control weights corresponding to the target control bones connected to that vertex adjusted directly. Suppose the target control vertex is connected to four target control bones, bone A through bone D, with control weights of 0.1, 0.2, 0.3, and 0.4, respectively. Bone D has the largest control weight, controls the largest region, and exerts the strongest control, decreasing in turn from bone C down to bone A. The display positions of these bones are then adjusted according to the adjustment instruction; the regions controlled by the different bones change to different degrees, yielding the adjusted expression frame.
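Using the weights from this example (0.1 to 0.4 for bones A through D), the following is a minimal sketch of how a vertex could blend the displacements of its connected bones. The linear weighted blend is an assumption in the spirit of standard skinning; the patent states only that larger weights mean stronger control, and the bone displacements below are invented:

```python
# Control weights of the four bones connected to one target control vertex.
weights = {"A": 0.1, "B": 0.2, "C": 0.3, "D": 0.4}

# Hypothetical display-position adjustments applied to each bone (x, y, z).
bone_deltas = {
    "A": (1.0, 0.0, 0.0),
    "B": (0.0, 1.0, 0.0),
    "C": (0.0, 0.0, 1.0),
    "D": (1.0, 1.0, 0.0),
}

def blended_offset(weights, deltas):
    # The vertex moves by the weight-blended sum of its bones' movements, so
    # bone D (weight 0.4) influences it the most and bone A the least.
    return tuple(
        sum(weights[b] * deltas[b][axis] for b in weights) for axis in range(3)
    )

print(blended_offset(weights, bone_deltas))  # approximately (0.5, 0.6, 0.3)
```

The same bone motion thus deforms each bone's region to a different degree, which is exactly the behavior the example describes.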
In the embodiments provided in this application, adjusting the target control bones and/or the control weights at the target control vertices makes it possible to acquire rich and varied expression frames, giving the target virtual object all kinds of expression animations and thereby making its expression animations more realistic and vivid.
As an optional solution, generating the expression animation of the target virtual object from the expression frames includes:
S1: Acquire multiple expression frames of the target virtual object;
S2: Generate the expression animation of the target virtual object from the multiple expression frames in a predetermined order.
It should be noted that, in this embodiment, the expression animation may be, but is not limited to, a dynamic, continuously changing expression generated from multiple expression frames in a predetermined order. For example, if the frames are frame A (mouth closed), frame B (corners of the mouth raised), and frame C (mouth open, teeth showing), then a "laughing" expression animation matching the target virtual object (for example, a game character) can be generated in the predetermined order.
In the embodiments provided in this application, generating the expression animation of the target virtual object from the acquired expression frames in a predetermined order makes it possible to generate rich and varied expression animations for different target virtual objects, making the generation process universal.
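As a sketch of this sequencing step (the frame contents match the "laughing" example above, but the timing and frame rate are illustrative assumptions, not specified by the patent):

```python
# Expression frames combined into a "laughing" animation in a predetermined order.
frames = [
    "frame A: mouth closed",
    "frame B: mouth corners raised",
    "frame C: mouth open, teeth showing",
]

def play(frames, fps=24):
    # Yield (timestamp, frame) pairs in the predetermined order.
    for i, frame in enumerate(frames):
        yield i / fps, frame

for t, f in play(frames):
    print(f"{t:.3f}s  {f}")
```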
As an optional solution, adapting the acquired object model of the target virtual object to the target bone to obtain an object control model matching the target virtual object includes:
S1: Bind the key points in the acquired object model to the target bone, and skin the object model onto the target bone to obtain the object control model matching the target virtual object.
Optionally, in this embodiment, the key points in the object model may be, but are not limited to, key positions in the object model. Taking a facial object model as an example, the key positions may include, but are not limited to, the positions of the facial features (eyes, eyebrows, nose, mouth, ears). The key points in the object model are bound to the corresponding positions on the universal target bone, and the model is then skinned onto the target bone to obtain the object control model matching the target virtual object, where the object control model includes the control vertices and control bones used to adjust and control the target virtual object.
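A minimal sketch of binding key points to the universal skeleton by name. The patent only states that key points are bound to corresponding positions on the target bone; the key-point names, bone names, and name-based mapping below are hypothetical:

```python
# Facial key points of a specific face model, and the universal bones they
# are bound to. After binding, the model is skinned onto these bones.
KEY_POINT_TO_BONE = {
    "left_eye": "bone_eye_L",
    "right_eye": "bone_eye_R",
    "left_brow": "bone_brow_L",
    "right_brow": "bone_brow_R",
    "nose_tip": "bone_nose",
    "mouth_center": "bone_jaw",
}

def bind(model_key_points, mapping=KEY_POINT_TO_BONE):
    # Snap each universal bone to the position of its key point on this
    # particular face model, so one skeleton adapts to many face shapes.
    return {mapping[name]: pos for name, pos in model_key_points.items()}

bones = bind({"left_eye": (-1.2, 0.5, 0.0), "right_eye": (1.2, 0.5, 0.0)})
print(bones)  # {'bone_eye_L': (-1.2, 0.5, 0.0), 'bone_eye_R': (1.2, 0.5, 0.0)}
```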
Then the following operations are repeated: determining the target control vertices and/or target control bones to be adjusted; through repeated adjustments, multiple expression frames are acquired, so that a continuously changing expression animation matching the target virtual object can be generated.
In the embodiments provided in this application, adapting the universal target bone to the object models of different target virtual objects yields an object control model for each target virtual object and widens the range of target virtual objects for which expression animations can be generated, so that rich and varied expression animations can be generated for different target virtual objects, improving the versatility of the expression animation generation method.
It should be noted that, for brevity, the foregoing method embodiments are described as a series of action combinations. However, those skilled in the art should understand that the present invention is not limited by the described order of actions, because some steps may be performed in other orders or simultaneously according to the present invention. In addition, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
Through the description of the foregoing implementations, those skilled in the art can clearly understand that the method according to the above embodiments may be implemented by software plus the necessary universal hardware platform, and certainly also by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for instructing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention.
In addition, according to an embodiment of the present invention, an expression animation generation apparatus for implementing the above expression animation generation method is further provided. For examples, refer to the descriptions in the above method embodiments; details are not repeated here.
As shown in FIG. 8, the apparatus includes:
1) a first acquiring unit 802, configured to acquire an object model of a target virtual object;
2) an adapting unit 804, configured to adapt the acquired object model to a target bone to obtain an object control model matching the target virtual object, where the object control model includes control vertices and control bones, each control bone controls a partial region of the object control model, and the control vertices are intersections of the control bones;
3) an adjusting unit 806, configured to adjust the control vertices and/or control bones in the object control model according to an acquired adjustment instruction to obtain an expression frame of the target virtual object;
4) a generating unit 808, configured to generate an expression animation of the target virtual object from the expression frames.
As an optional solution, the adjusting unit 806 includes:
1) a processing module, configured to repeat the following steps until an expression frame of the target virtual object is obtained: determining, according to the acquired adjustment instruction, the target control vertices and/or target control bones to be adjusted in the object control model; and adjusting the target control vertices and/or target control bones.
Optionally, in this embodiment, the processing module includes:
(1) a first adjustment submodule, configured to adjust the control weight corresponding to each target control bone connected to a target control vertex, where the larger the control weight of a target control bone, the larger the region it controls in the object control model; and/or
(2) a second adjustment submodule, configured to adjust the display position of the target control bone.
As an optional solution, the generating unit 808 includes:
1) an acquiring module, configured to acquire multiple expression frames of the target virtual object;
2) a generating module, configured to generate the expression animation of the target virtual object from the acquired expression frames in a predetermined order.
As an optional solution, the adapting unit 804 includes:
1) an adapting module, configured to bind the key points in the acquired object model to the target bone and skin the object model onto the target bone to obtain an object control model matching the target virtual object.
According to an embodiment of the present invention, an electronic apparatus for implementing the above expression animation generation is further provided. As shown in FIG. 9, the electronic apparatus includes a memory 902, a processor 904, a computer program stored in the memory and executable on the processor, and a communication interface 906 for transmission:
1) the communication interface 906, configured to acquire an object model of a target virtual object;
2) the processor 904, connected to the communication interface 906 and configured to adapt the acquired object model to a target bone to obtain an object control model matching the target virtual object, where the object control model includes control vertices and control bones, each control bone controls a partial region of the object control model, and the control vertices are intersections of the control bones; further configured to adjust the control vertices and/or control bones in the object control model according to an acquired adjustment instruction to obtain an expression frame of the target virtual object; and further configured to generate an expression animation of the target virtual object from the expression frames;
3) the memory 902, connected to the communication interface 906 and the processor 904 and configured to store the expression animation of the target virtual object.
For specific examples in this embodiment, refer to the examples described in Embodiment 1 and Embodiment 2 above; details are not repeated here.
An embodiment of the present invention further provides a storage medium. Optionally, in this embodiment, the storage medium may be located on at least one of multiple network devices in a network.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
S1: Acquire an object model of a target virtual object;
S2: Adapt the acquired object model to a target bone to obtain an object control model matching the target virtual object, where the object control model includes control vertices and control bones, each control bone controls a partial region of the object control model, and the control vertices are intersections of the control bones;
S3: Adjust the control vertices and/or control bones in the object control model according to an acquired adjustment instruction to obtain an expression frame of the target virtual object;
S4: Generate an expression animation of the target virtual object from the expression frames.
Optionally, the storage medium is further configured to store program code for performing the following step:
S1: Repeat the following steps until an expression frame of the target virtual object is obtained: determining, according to the acquired adjustment instruction, the target control vertices and/or target control bones to be adjusted in the object control model; and adjusting the target control vertices and/or target control bones.
Optionally, the storage medium is further configured to store program code for performing the following steps:
adjusting the control weight corresponding to each target control bone connected to the target control vertex, where the larger the control weight of a target control bone, the larger the region it controls in the object control model; and/or
adjusting the display position of the target control bone.
Optionally, the storage medium is further configured to store program code for performing the following steps:
acquiring multiple expression frames of the target virtual object;
generating the expression animation of the target virtual object from the acquired expression frames in a predetermined order.
Optionally, the storage medium is further configured to store program code for performing the following step:
binding the key points in the acquired object model to the target bone, and skinning the object model onto the target bone to obtain an object control model matching the target virtual object.
Optionally, in this embodiment, the storage medium may include, but is not limited to, various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
In another aspect, an embodiment of the present invention further provides a computer program product including instructions which, when run on a computer, cause the computer to perform the expression animation generation described in any of the above embodiments.
For specific examples in this embodiment, refer to the examples described in Embodiment 1 and Embodiment 2 above; details are not repeated here.
The sequence numbers of the above embodiments of the present invention are merely for description and do not imply the preference among the embodiments.
When the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in the above computer-readable storage medium. Based on such an understanding, the technical solutions of the present invention essentially, the part contributing to the prior art, or all or part of the technical solutions may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the embodiments have their respective focuses. For a part not described in detail in one embodiment, refer to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed client may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the division of units is merely a division by logical function, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces, and the indirect couplings or communication connections between units or modules may be electrical or in other forms.
The units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The foregoing descriptions are merely preferred implementations of the present invention. It should be noted that a person of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present invention, and these improvements and refinements shall also fall within the protection scope of the present invention.

Claims (16)

  1. An expression animation generation method, comprising:
    acquiring an object model of a target virtual object;
    adapting the acquired object model to a target bone to obtain an object control model matching the target virtual object, wherein the object control model comprises control vertices and control bones, each control bone controls a partial region of the object control model, and the control vertices are intersections of the control bones;
    adjusting the control vertices and/or control bones in the object control model according to an acquired adjustment instruction to obtain an expression frame of the target virtual object; and
    generating an expression animation of the target virtual object from the expression frames.
  2. The method according to claim 1, wherein adjusting the control vertices and/or control bones in the object control model according to the acquired adjustment instruction to obtain an expression frame of the target virtual object comprises:
    repeating the following steps until an expression frame of the target virtual object is obtained:
    determining, according to the acquired adjustment instruction, the target control vertices and/or target control bones to be adjusted in the object control model; and
    adjusting the target control vertices and/or target control bones.
  3. The method according to claim 2, wherein adjusting the target control vertices and/or target control bones comprises:
    adjusting a control weight corresponding to each target control bone connected to a target control vertex, wherein the larger the control weight of a target control bone, the larger the region it controls in the object control model; and/or
    adjusting a display position of the target control bone.
  4. The method according to claim 1, wherein generating the expression animation of the target virtual object from the expression frames comprises:
    acquiring multiple expression frames of the target virtual object; and
    generating the expression animation of the target virtual object from the acquired expression frames in a predetermined order.
  5. The method according to claim 1, wherein adapting the acquired object model to the target bone to obtain an object control model matching the target virtual object comprises:
    binding key points in the acquired object model to the target bone, and skinning the object model onto the target bone to obtain the object control model matching the target virtual object.
  6. The method according to any one of claims 1 to 5, wherein the target bone comprises a universal facial skeleton, and the object model comprises a facial model.
  7. An expression animation generation apparatus, comprising:
    a first acquiring unit, configured to acquire an object model of a target virtual object;
    an adapting unit, configured to adapt the acquired object model to a target bone to obtain an object control model matching the target virtual object, wherein the object control model comprises control vertices and control bones, each control bone controls a partial region of the object control model, and the control vertices are intersections of the control bones;
    an adjusting unit, configured to adjust the control vertices and/or control bones in the object control model according to an acquired adjustment instruction to obtain an expression frame of the target virtual object; and
    a generating unit, configured to generate an expression animation of the target virtual object from the expression frames.
  8. The apparatus according to claim 7, wherein the adjusting unit comprises:
    a processing module, configured to repeat the following steps until an expression frame of the target virtual object is obtained: determining, according to the acquired adjustment instruction, the target control vertices and/or target control bones to be adjusted in the object control model; and adjusting the target control vertices and/or target control bones.
  9. The apparatus according to claim 8, wherein the processing module comprises:
    a first adjustment submodule, configured to adjust a control weight corresponding to each target control bone connected to a target control vertex, wherein the larger the control weight of a target control bone, the larger the region it controls in the object control model; and/or
    a second adjustment submodule, configured to adjust a display position of the target control bone.
  10. The apparatus according to claim 7, wherein the generating unit comprises:
    an acquiring module, configured to acquire multiple expression frames of the target virtual object; and
    a generating module, configured to generate the expression animation of the target virtual object from the acquired expression frames in a predetermined order.
  11. The apparatus according to claim 7, wherein the adapting unit comprises:
    an adapting module, configured to bind key points in the acquired object model to the target bone and skin the object model onto the target bone to obtain an object control model matching the target virtual object.
  12. The apparatus according to any one of claims 7 to 11, wherein the target bone comprises a universal facial skeleton, and the object model comprises a facial model.
  13. An expression animation generation method, applied to a terminal, the method comprising:
    performing, by the terminal, the expression animation generation method according to any one of claims 1 to 6.
  14. An electronic apparatus, comprising a memory and a processor;
    wherein the memory is configured to store a program, and the processor is configured to execute the program stored in the memory;
    the processor performing, through the program, the expression animation generation method according to any one of claims 1 to 6.
  15. A storage medium, configured to store a program, wherein the program, when run, performs the expression animation generation method according to any one of claims 1 to 6.
  16. A computer program product comprising instructions which, when run on a computer, cause the computer to perform the expression animation generation method according to any one of claims 1 to 6.
PCT/CN2018/088150 2017-08-28 2018-05-24 Expression animation generation method and apparatus, storage medium, and electronic apparatus WO2019041902A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
KR1020197029031A KR102338136B1 (ko) 2017-08-28 2018-05-24 이모티콘 애니메이션 생성 방법 및 디바이스, 저장 매체 및 전자 디바이스
JP2019549402A JP7297359B2 (ja) 2017-08-28 2018-05-24 表情アニメーション生成方法及び装置、記憶媒体ならびに電子装置
US16/553,005 US10872452B2 (en) 2017-08-28 2019-08-27 Expression animation generation method and apparatus, storage medium, and electronic apparatus
US17/068,675 US11270489B2 (en) 2017-08-28 2020-10-12 Expression animation generation method and apparatus, storage medium, and electronic apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710752994.6A CN107657651B (zh) 2017-08-28 2017-08-28 表情动画生成方法和装置、存储介质及电子装置
CN201710752994.6 2017-08-28

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/553,005 Continuation US10872452B2 (en) 2017-08-28 2019-08-27 Expression animation generation method and apparatus, storage medium, and electronic apparatus

Publications (1)

Publication Number Publication Date
WO2019041902A1 true WO2019041902A1 (zh) 2019-03-07

Family

ID=61128888

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/088150 WO2019041902A1 (zh) 2017-08-28 2018-05-24 表情动画生成方法和装置、存储介质及电子装置

Country Status (5)

Country Link
US (2) US10872452B2 (zh)
JP (1) JP7297359B2 (zh)
KR (1) KR102338136B1 (zh)
CN (1) CN107657651B (zh)
WO (1) WO2019041902A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110322416A (zh) * 2019-07-09 2019-10-11 腾讯科技(深圳)有限公司 图像数据处理方法、装置以及计算机可读存储介质
CN111739135A (zh) * 2020-07-30 2020-10-02 腾讯科技(深圳)有限公司 虚拟角色的模型处理方法、装置及可读存储介质
CN114596393A (zh) * 2022-01-24 2022-06-07 深圳市大富网络技术有限公司 一种骨骼模型生成方法、装置、系统及存储介质

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107657651B (zh) 2017-08-28 2019-06-07 腾讯科技(上海)有限公司 表情动画生成方法和装置、存储介质及电子装置
CN110135226B (zh) * 2018-02-09 2023-04-07 腾讯科技(深圳)有限公司 表情动画数据处理方法、装置、计算机设备和存储介质
CN108305309B (zh) * 2018-04-13 2021-07-20 腾讯科技(成都)有限公司 基于立体动画的人脸表情生成方法和装置
CN108805963B (zh) * 2018-05-21 2023-03-24 网易(杭州)网络有限公司 三维模型的处理方法和装置、及存储介质和终端
CN108846886B (zh) * 2018-06-19 2023-03-24 北京百度网讯科技有限公司 一种ar表情的生成方法、客户端、终端和存储介质
CN109509242B (zh) * 2018-11-05 2023-12-29 网易(杭州)网络有限公司 虚拟对象面部表情生成方法及装置、存储介质、电子设备
CN109621419B (zh) * 2018-12-12 2022-05-03 网易(杭州)网络有限公司 游戏角色表情的生成装置方法及装置、存储介质
CN109727302B (zh) * 2018-12-28 2023-08-08 网易(杭州)网络有限公司 骨骼创建方法、装置、电子设备及存储介质
CN110490958B (zh) * 2019-08-22 2023-09-01 腾讯科技(深圳)有限公司 动画绘制方法、装置、终端及存储介质
CN110717974B (zh) * 2019-09-27 2023-06-09 腾讯数码(天津)有限公司 展示状态信息的控制方法、装置、电子设备和存储介质
CN110766776B (zh) * 2019-10-29 2024-02-23 网易(杭州)网络有限公司 生成表情动画的方法及装置
CN111068331B (zh) * 2019-11-21 2021-07-27 腾讯科技(深圳)有限公司 虚拟道具的动画配置方法及装置、存储介质及电子装置
CN111210495A (zh) * 2019-12-31 2020-05-29 深圳市商汤科技有限公司 三维模型驱动方法、装置、终端及计算机可读存储介质
CN111899319B (zh) 2020-08-14 2021-05-14 腾讯科技(深圳)有限公司 动画对象的表情生成方法和装置、存储介质及电子设备
CN111899321B (zh) * 2020-08-26 2023-09-26 网易(杭州)网络有限公司 一种虚拟角色表情展现的方法和装置
CN112149599B (zh) * 2020-09-29 2024-03-08 网易(杭州)网络有限公司 表情追踪方法、装置、存储介质和电子设备
CN112862936B (zh) * 2021-03-16 2023-08-08 网易(杭州)网络有限公司 表情模型处理方法及装置、电子设备、存储介质
CN113223126A (zh) * 2021-05-19 2021-08-06 广州虎牙科技有限公司 虚拟对象的表情生成方法、应用程序、设备及存储介质
CN113470148B (zh) * 2021-06-30 2022-09-23 完美世界(北京)软件科技发展有限公司 表情动画制作方法及装置、存储介质、计算机设备
CN113485596B (zh) * 2021-07-07 2023-12-22 游艺星际(北京)科技有限公司 虚拟模型的处理方法、装置、电子设备及存储介质
CN113781611B (zh) * 2021-08-25 2024-06-25 北京壳木软件有限责任公司 一种动画制作方法、装置、电子设备及存储介质
CN115797523B (zh) * 2023-01-05 2023-04-18 武汉创研时代科技有限公司 一种基于人脸动作捕捉技术的虚拟角色处理系统及方法

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120007859A1 (en) * 2010-07-09 2012-01-12 Industry-Academic Cooperation Foundation, Yonsei University Method and apparatus for generating face animation in computer system
CN105069830A (zh) * 2015-08-14 2015-11-18 广州市百果园网络科技有限公司 表情动画生成方法及装置
CN107657651A (zh) * 2017-08-28 2018-02-02 腾讯科技(上海)有限公司 表情动画生成方法和装置、存储介质及电子装置

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3530095B2 (ja) * 2000-01-27 2004-05-24 株式会社スクウェア・エニックス ビデオゲームにおける三次元オブジェクト変形方法及びビデオゲーム装置、並びにビデオゲーム用のプログラムを記録したコンピュータ読み取り可能な記録媒体
JP5055223B2 (ja) * 2008-08-11 2012-10-24 Kddi株式会社 映像コンテンツ生成装置及びコンピュータプログラム
KR101671900B1 (ko) * 2009-05-08 2016-11-03 삼성전자주식회사 가상 세계에서의 객체를 제어하는 시스템, 방법 및 기록 매체
JP5620743B2 (ja) * 2010-08-16 2014-11-05 株式会社カプコン 顔画像編集用プログラム、その顔画像編集用プログラムを記録した記録媒体及び顔画像編集システム
US10748325B2 (en) * 2011-11-17 2020-08-18 Adobe Inc. System and method for automatic rigging of three dimensional characters for facial animation
US11087517B2 (en) * 2015-06-03 2021-08-10 Disney Enterprises, Inc. Sketch-based abstraction for character posing and synthesis

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120007859A1 (en) * 2010-07-09 2012-01-12 Industry-Academic Cooperation Foundation, Yonsei University Method and apparatus for generating face animation in computer system
CN105069830A (zh) * 2015-08-14 2015-11-18 广州市百果园网络科技有限公司 表情动画生成方法及装置
CN107657651A (zh) * 2017-08-28 2018-02-02 腾讯科技(上海)有限公司 表情动画生成方法和装置、存储介质及电子装置

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110322416A (zh) * 2019-07-09 2019-10-11 腾讯科技(深圳)有限公司 图像数据处理方法、装置以及计算机可读存储介质
CN110322416B (zh) * 2019-07-09 2022-11-18 腾讯科技(深圳)有限公司 图像数据处理方法、装置以及计算机可读存储介质
CN111739135A (zh) * 2020-07-30 2020-10-02 腾讯科技(深圳)有限公司 虚拟角色的模型处理方法、装置及可读存储介质
CN111739135B (zh) * 2020-07-30 2023-03-21 腾讯科技(深圳)有限公司 虚拟角色的模型处理方法、装置及可读存储介质
CN114596393A (zh) * 2022-01-24 2022-06-07 深圳市大富网络技术有限公司 一种骨骼模型生成方法、装置、系统及存储介质
CN114596393B (zh) * 2022-01-24 2024-06-07 深圳市大富网络技术有限公司 一种骨骼模型生成方法、装置、系统及存储介质

Also Published As

Publication number Publication date
US10872452B2 (en) 2020-12-22
CN107657651B (zh) 2019-06-07
JP7297359B2 (ja) 2023-06-26
JP2020510262A (ja) 2020-04-02
US11270489B2 (en) 2022-03-08
KR20190122250A (ko) 2019-10-29
CN107657651A (zh) 2018-02-02
KR102338136B1 (ko) 2021-12-09
US20210027515A1 (en) 2021-01-28
US20190385350A1 (en) 2019-12-19

Similar Documents

Publication Publication Date Title
WO2019041902A1 (zh) 表情动画生成方法和装置、存储介质及电子装置
KR102658960B1 (ko) 얼굴 재연을 위한 시스템 및 방법
CN111417987B (zh) 用于实时复杂角色动画和交互性的系统和方法
US20220157004A1 (en) Generating textured polygon strip hair from strand-based hair for a virtual character
US20180350123A1 (en) Generating a layered animatable puppet using a content stream
KR102491140B1 (ko) 가상 아바타 생성 방법 및 장치
KR101306221B1 (ko) 3차원 사용자 아바타를 이용한 동영상 제작장치 및 방법
WO2022143179A1 (zh) 虚拟角色模型创建方法、装置、电子设备和存储介质
US11978145B2 (en) Expression generation for animation object
CN115049016B (zh) 基于情绪识别的模型驱动方法及设备
JP2018001403A (ja) 音声と仮想動作を同期させる方法、システムとロボット本体
CN114170648A (zh) 视频生成方法、装置、电子设备及存储介质
KR102409103B1 (ko) 이미지 변형 방법
KR20200134623A (ko) 3차원 가상 캐릭터의 표정모사방법 및 표정모사장치
US11954779B2 (en) Animation generation method for tracking facial expression and neural network training method thereof
KR102544261B1 (ko) 끈적임이 반영된 입술 움직임을 나타내는 3d 이미지를 제공하는 전자 장치의 제어 방법
KR102501411B1 (ko) 비대칭 얼굴 형태를 유지하며 대칭 표정을 생성하는 전자 장치의 제어 방법
US20240096033A1 (en) Technology for creating, replicating and/or controlling avatars in extended reality
CN117195563A (zh) 动画生成方法以及装置
CN111009022A (zh) 一种模型动画生成的方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18851528

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019549402

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20197029031

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18851528

Country of ref document: EP

Kind code of ref document: A1