CN110490958B - Animation drawing method, device, terminal and storage medium - Google Patents

Animation drawing method, device, terminal and storage medium

Info

Publication number
CN110490958B
Authority
CN
China
Prior art keywords
model
animation
skin
data
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910780082.9A
Other languages
Chinese (zh)
Other versions
CN110490958A (en)
Inventor
刘凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to CN201910780082.9A
Publication of CN110490958A
Application granted
Publication of CN110490958B
Status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/20 3D [Three Dimensional] animation

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the invention provides an animation drawing method, an animation drawing device, a terminal and a storage medium. The method comprises the following steps: acquiring a target model for drawing an animated character; positioning nodes in the target model to obtain a skin model having the initial facial features of the animated character, the facial nodes of the skin model having a defined range of movement; modifying the initial facial features of the skin model based on the target facial features of the animated character to be drawn, so that the skin model produces an animation; and applying the animation to the target model to obtain the animated character to be drawn. By performing an initial skinning operation on the target model, the target model is given initial facial features, which speeds up skinning, makes facial-feature adjustment reusable across different characters, and saves time when drawing animated characters.

Description

Animation drawing method, device, terminal and storage medium
Technical Field
The present invention relates to three-dimensional animation technology, and in particular, to an animation drawing method, apparatus, terminal, and storage medium.
Background
In the process of making an animation, fusion deformation (blend shapes) involves a large amount of art data, high performance consumption, strong dependence on the initial model, high later adjustment cost, and the like. Therefore, most games are produced in a skeleton-driven mode; however, different games use different specifications for face-pinching development, a high-quality expression animation technology is lacking, and both aspects lack a general production pipeline, so research and development costs are high, the production cycle is long, and mass production is difficult.
Disclosure of Invention
The embodiment of the invention provides an animation drawing method, an animation drawing device, a terminal and a storage medium, which can effectively improve the efficiency of drawing animated characters.
The technical scheme of the embodiment of the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an animation drawing method, including:
obtaining a target model for drawing an animated character;
positioning nodes in the target model to obtain a skin model having the initial facial features of the animated character, wherein the facial nodes of the skin model have a defined range of movement;
modifying the initial facial features of the skin model based on the target facial features of the animated character to be drawn, so as to cause the skin model to produce an animation;
and applying the animation to the target model to obtain the animated character to be drawn.
In a second aspect, an embodiment of the present invention provides an animation drawing device, including:
a first acquisition module, configured to acquire a target model for drawing an animated character;
a first skin module, configured to position nodes in the target model to obtain a skin model having the initial facial features of the animated character, wherein the facial nodes of the skin model have a defined range of movement;
a first modification module, configured to modify the initial facial features of the skin model based on the target facial features of the animated character to be drawn, so as to cause the skin model to produce an animation;
and a first determining module, configured to apply the animation to the target model to obtain the animated character to be drawn.
In a third aspect, an embodiment of the present invention provides a terminal, including:
a memory for storing executable instructions;
and the processor is used for realizing the animation drawing method when executing the executable instructions stored in the memory.
Correspondingly, the embodiment of the invention provides a storage medium which stores executable instructions for realizing the animation drawing method provided by the embodiment of the invention when being executed by a processor.
The embodiment of the invention has the following beneficial effects: a target model for drawing an animated character is first acquired; nodes in the target model are then positioned to obtain a skin model having the initial facial features of the animated character, the facial nodes of the skin model having a defined range of movement; the initial facial features of the skin model are modified based on the target facial features of the animated character to be drawn, so that the skin model produces an animation; finally, the animation is applied to the target model to obtain the animated character to be drawn. By performing an initial skinning operation on the target model, the target model is given some initial facial features, which speeds up skinning of the target model, makes facial-feature adjustment reusable across different characters, and saves time when drawing animated characters.
Drawings
FIG. 1 is a schematic diagram of an alternative architecture of an animation rendering system provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of a system for animation rendering according to an embodiment of the present invention;
FIG. 3A is a schematic diagram of an implementation flow of an animation rendering method according to an embodiment of the present invention;
FIG. 3B is a schematic flow chart of another implementation of the animation rendering method according to the embodiment of the present invention;
FIG. 3C is a schematic flow chart of another implementation of the animation rendering method according to the embodiment of the present invention;
FIG. 3D is a schematic flow chart of another implementation of the animation rendering method according to the embodiment of the present invention;
FIG. 3E is a flowchart illustrating another implementation of the animation rendering method according to an embodiment of the present invention;
FIGS. 4A-4C are pictorial representations of an animated character drawing interface in accordance with an embodiment of the invention;
FIG. 5A is a schematic diagram of the composition of an animation rendering model according to an embodiment of the present invention;
FIG. 5B is a schematic representation of a three-dimensional bone according to an embodiment of the present invention;
FIGS. 6A to 9E are drawing interface diagrams of animated characters according to an embodiment of the invention.
Detailed Description
To make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present invention, and all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of the present invention.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", "third" and the like are merely used to distinguish similar objects and do not denote a particular order; where permitted, the specific order or sequence may be interchanged so that the embodiments of the invention described herein can be practiced in orders other than those illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing embodiments of the invention only and is not intended to be limiting of the invention.
Before describing the embodiments of the present invention in further detail, the terms involved in the embodiments of the present invention are explained; these terms apply to the explanations that follow.
1) Face pinching: in three-dimensional (3D) games, players can modify a character's facial and body features according to their own wishes.
2) Facial Action Coding System (FACS): a system that summarizes the "emotional movements of the face", dividing them into action units according to the anatomical features of the face and analyzing the expressions associated with these units. It is a reference standard in the micro-expression field, is widely applied abroad, and has long been used in psychology and in computer animation (CG) film and television animation.
3) Expression binding: setting up a skeleton system for a 3D character so that it can move according to the animator's intention, laying the groundwork for the next stage of production. This is an important part of animation drawing, and its quality directly affects the final animation effect.
4) Skinning: a three-dimensional animation term and a drawing technique used in three-dimensional games and three-dimensional animation. Bones are added to a model created in three-dimensional software. Because the bones and the model are independent of each other, the model is bound to the bones so that the bones can drive the model to move in a reasonable way.
In the related art, two technologies are used in game production for face pinching and expression: one is fusion deformation, and the other is skeleton driving. Fusion deformation involves a large amount of art data, high performance consumption, strong dependence on the initial model, high later adjustment cost, and the like. Therefore, most games are drawn in a skeleton-driven mode; however, different games use different specifications for face-pinching development, a high-quality expression animation technology is lacking, and both aspects lack a general production pipeline, so research and development costs are high, the production cycle is long, and mass production is difficult.
To address these technical problems, the embodiment of the invention provides an animation drawing method, a terminal and a storage medium in which multiple levels of bones are built; the data is inherited step by step between the levels, and the final data result is transferred to the game skeleton. A multi-layer control system is designed to control the multi-level bones. The first layer is a face-pinching control system that records custom face-pinching data and specifications; it is driven entirely by the first-level bones, and after the face-pinching data of the system is superimposed, the second-level face-pinching bones are driven to generate animation. The second layer is an expression binding system driven by the second-level bones; it integrates the theory and data samples of the facial action coding system into the control system, so that expression animation units can be edited a second time with reference to FACS, after which the third-level expression bones are driven to generate animation. On the one hand, with this plug-in, artists can quickly produce face-pinching, expression-binding, animation and other assets for any three-dimensional character; on the other hand, it provides a standardized solution for game studios and greatly reduces development cost.
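For illustration only, the staged data flow described above (first-level bones position the model, the face-pinching control layer drives the second-level bones, the expression binding layer drives the third-level bones, and the result is handed to the game skeleton) can be sketched roughly as follows. The class names, channel names and the additive superposition are assumptions made for this sketch and are not the plug-in's actual implementation or API.

```python
# Illustrative sketch of the staged data flow; all names are hypothetical.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class BoneLevel:
    name: str
    values: Dict[str, float] = field(default_factory=dict)


def run_pipeline(positioning: Dict[str, float],
                 pinch_offsets: Dict[str, float],
                 expression_offsets: Dict[str, float]) -> BoneLevel:
    """Inherit data step by step: level-1 bones -> face-pinching layer ->
    level-2 bones -> expression-binding layer -> level-3 bones -> game skeleton."""
    level1 = BoneLevel("positioning", dict(positioning))

    # First layer: face-pinching control system, driven by the first-level bones.
    level2 = BoneLevel("pinch", {
        k: level1.values.get(k, 0.0) + pinch_offsets.get(k, 0.0)
        for k in set(level1.values) | set(pinch_offsets)
    })

    # Second layer: expression binding system, driven by the second-level bones.
    level3 = BoneLevel("expression", {
        k: level2.values.get(k, 0.0) + expression_offsets.get(k, 0.0)
        for k in set(level2.values) | set(expression_offsets)
    })

    # The third-level result is finally transferred to the game (skinned) skeleton.
    return BoneLevel("game", dict(level3.values))


if __name__ == "__main__":
    out = run_pipeline({"jaw": 0.0}, {"jaw": 0.2}, {"jaw": 0.3})
    print(out.values)  # {'jaw': 0.5}
```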
The following describes exemplary applications of the terminal for animation drawing provided by the embodiments of the present invention. The terminal provided by the embodiments of the present invention may be implemented as a notebook computer, a tablet computer, a desktop computer, a set-top box, a mobile terminal (for example, a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging terminal or a portable game terminal) or another type of user terminal, and may also be implemented as a server. An exemplary application in which the device is implemented as a terminal or a server is described below.
Referring to fig. 1, fig. 1 is a schematic diagram of an alternative architecture of an animation drawing system according to an embodiment of the present invention, provided to support an exemplary application. An animation drawing model 11 includes a face-pinching control system for controlling face-pinching operations and an expression binding system for controlling expression data, based on first-level, second-level and third-level bones. In the embodiment of the present invention, the animation drawing model 11 may be created on the server 12 side and then embedded as an application program into the terminal 400-1. When an animated character needs to be drawn in the application program, the skeleton of a target model of the animated character to be drawn is first positioned through a proxy model in the animation drawing model 11, and the target model is then given initial face-pinching data and initial expression data. Finally, the artist can modify the initial face-pinching data and the initial expression data as desired, so that the target model with the modified face-pinching data and expression data produces an animation. In this way, after the initial skinning operation is performed on the target model, the target model is given some initial facial features, which speeds up skinning, makes facial-feature adjustment reusable across different characters, and saves the time needed to adjust the face of the animated character. In other embodiments, the animation drawing model 11 may be implemented in a background program, or may be presented on a front-end platform.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an animation rendering system according to an embodiment of the present invention, and a terminal 400 shown in fig. 2 includes: at least one processor 410, a memory 450, at least one network interface 420, and a user interface 430. The various components in terminal 400 are coupled together by a bus system 440. It is understood that the bus system 440 is used to enable connected communication between these components. The bus system 440 includes a power bus, a control bus, and a status signal bus in addition to the data bus. But for clarity of illustration the various buses are labeled in fig. 2 as bus system 440.
The processor 410 may be an integrated circuit chip having signal processing capabilities such as a general purpose processor, a digital signal processor, or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, etc., wherein the general purpose processor may be a microprocessor or any conventional processor, etc.
The user interface 430 includes one or more output devices 431, including one or more speakers and/or one or more visual displays, that enable presentation of the media content. The user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, in some examples a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
Memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid-state memory, hard disk drives, optical disc drives, and the like. Memory 450 optionally includes one or more storage devices physically remote from processor 410.
Memory 450 includes volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (Read Only Memory, ROM), and the volatile memory may be a random access memory (Random Access Memory, RAM). The memory 450 described in embodiments of the present invention is intended to comprise any suitable type of memory.
In some embodiments, memory 450 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 451 including system programs, e.g., framework layer, core library layer, driver layer, etc., for handling various basic system services and performing hardware-related tasks, for implementing various basic services and handling hardware-based tasks;
a network communication module 452 for reaching other computing devices via one or more (wired or wireless) network interfaces 420, the exemplary network interface 420 comprising: Bluetooth, wireless compatibility certification (Wi-Fi), universal serial bus (Universal Serial Bus, USB), etc.;
A presentation module 453 for enabling presentation of information (e.g., a user interface for operating a peripheral terminal and displaying content and information) via one or more output devices 431 (e.g., a display screen, speakers, etc.) associated with the user interface 430;
an input processing module 454 for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.
In some embodiments, the apparatus provided by the embodiments of the present invention may be implemented in software, and fig. 2 shows an animation server 455 stored in a memory 450, which may be software in the form of a program, a plug-in, or the like, including the following software modules: a first acquisition module 4551, a first skin module 4552, a first modification module 4553 and a first determination module 4554; these modules are logical and can thus be arbitrarily combined or further split depending on the functions implemented. The functions of the respective modules will be described hereinafter.
In other embodiments, the apparatus provided by the embodiments of the present invention may be implemented in hardware, and by way of example, the apparatus provided by the embodiments of the present invention may be a processor in the form of a hardware decoding processor that is programmed to perform the animation drawing method provided by the embodiments of the present invention; for example, the processor in the form of a hardware decoding processor may employ one or more application specific integrated circuits (ASIC, Application Specific Integrated Circuit), DSPs, programmable logic devices (PLD, Programmable Logic Device), complex programmable logic devices (CPLD, Complex Programmable Logic Device), field programmable gate arrays (FPGA, Field-Programmable Gate Array), or other electronic components.
The animation drawing method provided by the embodiment of the invention will be described in connection with the exemplary application and implementation of the terminal provided by the embodiment of the invention.
Referring to fig. 3A, fig. 3A is a schematic flowchart of an implementation of the animation drawing method according to an embodiment of the present invention, and is described with reference to the steps shown in fig. 3A.
Step S101, a target model for drawing an animated character is acquired.
In some embodiments, the target model may be an empty model for rendering an animated character, i.e., no facial feature data is contained in the target model.
And step S102, positioning nodes in the target model to obtain a skin model with initial facial features of the animation roles.
In some embodiments, the nodes in the target model are positioned. In a specific example, a skinning operation may be performed on the target model, where the skinning operation can be understood as making the distribution of bones in the proxy model correspond to the distribution of nodes in the target model. The skeleton in the target model is positioned by wrapping the target model with the proxy model in the created animation drawing model. Because the proxy model defines the range within which its bones can drive the nodes of the target model to move, once the proxy model wraps the target model, the distribution of the proxy model's bones corresponds to the distribution of the target model's nodes; at the same time, the initial facial feature data in the created animation drawing model can be given to the target model, so that the skin model with initial facial features is obtained.
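As a minimal, hypothetical sketch of this initial skinning step, each target-model node can be bound to the nearest proxy bone so that the proxy model's bone distribution corresponds to the node distribution, with the initial facial feature data attached. The data structures and the nearest-bone rule are illustrative assumptions, not the method's prescribed implementation.

```python
# Hypothetical initial-skinning sketch: names and distance rule are illustrative.
import math
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]


def bind_nodes_to_proxy(proxy_bones: Dict[str, Vec3],
                        target_nodes: Dict[str, Vec3],
                        initial_features: Dict[str, float]) -> dict:
    """Map each target-model node to the closest proxy bone and attach the
    initial facial-feature data, yielding a simple 'skin model' record."""
    def dist(a: Vec3, b: Vec3) -> float:
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    binding = {
        node: min(proxy_bones, key=lambda bone: dist(pos, proxy_bones[bone]))
        for node, pos in target_nodes.items()
    }
    return {"binding": binding, "features": dict(initial_features)}


skin_model = bind_nodes_to_proxy(
    proxy_bones={"jaw": (0.0, -1.0, 0.0), "brow_l": (-0.5, 1.0, 0.0)},
    target_nodes={"chin_vtx": (0.1, -1.1, 0.0), "brow_vtx": (-0.4, 1.1, 0.0)},
    initial_features={"mouth_open": 50.0, "brow_raise": 50.0},
)
print(skin_model["binding"])  # {'chin_vtx': 'jaw', 'brow_vtx': 'brow_l'}
```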
Step S103, based on the target facial features of the animation character to be drawn, the initial facial features of the skin model are modified so as to enable the skin model to generate animation.
In some embodiments, the initial facial feature data comprises initial face-pinching data for initially setting the facial features of the animated character, and initial expression data for initially setting the facial expression of the animated character. First, the target facial features of the animated character to be drawn are determined; for example, if a mouth expression is to be drawn, the target facial features are the facial features that realize that mouth expression. If the initial facial feature is a closed mouth with a corresponding feature value of 50, the value can be reduced to 10 to realize an open-mouth expression. In this way, the initial facial features are modified according to the artist's intention, so that the skin model with the modified facial features produces an animation that matches that intention.
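The closed-mouth example above (a channel value of 50 lowered to 10 to open the mouth) can be expressed as a simple channel update; the channel name, the 0-100 value range and the clamping are assumptions made for illustration.

```python
def modify_features(initial: dict, targets: dict,
                    lo: float = 0.0, hi: float = 100.0) -> dict:
    """Overwrite initial facial-feature channels with the target values,
    clamped to an assumed 0-100 editor range (e.g. mouth_open: 50 -> 10)."""
    updated = dict(initial)
    for name, value in targets.items():
        updated[name] = max(lo, min(hi, value))
    return updated


features = modify_features({"mouth_open": 50.0}, {"mouth_open": 10.0})
print(features)  # {'mouth_open': 10.0}
```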
And step S104, the animation is acted on the target model, and the animation role to be drawn is obtained.
In some embodiments, the animation generated by the skin model is saved, and the animation is then called by the controller and applied to the target model, so that an animated character capable of performing the animated action is drawn.
In the embodiment of the invention, a pre-built model is first used to position the bones, so that the range within which the bones influence the model is transferred to the unit performing the skinning operation, which speeds up skinning; and the character is given initial facial features during the initial skinning operation, which makes facial-feature adjustment reusable across different characters and saves the time needed to adjust the face of the animated character.
In some embodiments, in order to accelerate the drawing of a new animated character, the method further includes the following steps after step S104; referring to fig. 3B, fig. 3B is a schematic flowchart of another implementation of the animation drawing method according to the embodiment of the present invention, described on the basis of fig. 3A:
step S121, saving the modified data of the initial facial features.
In some embodiments, the modified data of the initial facial features is saved in the created animation drawing model. For example, if the mouth-opening value is adjusted from 50 to 90, the difference of 40 may be saved in the created animation drawing model, or the value 90 may be saved in it directly.
Step S122, when a new animated character different from the animated character is drawn, generating initial facial features of the new animated character based on the modified data of the initial facial features.
In some embodiments, when a new animated character is drawn that is different from the animated character, the modified data of the initial facial features is assigned to the target model used to draw the new animated character so that the skin model of the new animated character has the initial facial features.
In other embodiments, step S122 may be further implemented as follows. First, a first difference between the target face-pinching data and the initial face-pinching data is determined; for example, if the target face-pinching data is 90 and the initial face-pinching data is 30, the first difference is 60. Then, a second difference between the target expression data and the initial expression data is determined; for example, if the target expression data is 80 and the initial expression data is 40, the second difference is 40. The first difference and the second difference are stored in the animation drawing model. Finally, when a new animated character distinct from the animated character is drawn, the initial facial features of the new animated character are generated based on the first difference and the second difference. In one embodiment, the first difference may be used as the initial face-pinching data in the initial facial features of the new animated character, and the second difference as the initial expression data in the initial facial features of the new animated character.
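A hedged sketch of this reuse step, using the example values above (target face-pinching data 90 versus initial 30, target expression data 80 versus initial 40); the dictionary layout and key names are hypothetical.

```python
from typing import Dict


def save_feature_deltas(target_pinch: Dict[str, float], initial_pinch: Dict[str, float],
                        target_expr: Dict[str, float], initial_expr: Dict[str, float]) -> dict:
    """Store the first difference (face pinching) and the second difference
    (expression) so they can seed the initial facial features of a new character."""
    first_diff = {k: target_pinch[k] - initial_pinch.get(k, 0.0) for k in target_pinch}
    second_diff = {k: target_expr[k] - initial_expr.get(k, 0.0) for k in target_expr}
    return {"pinch": first_diff, "expression": second_diff}


deltas = save_feature_deltas({"nose": 90.0}, {"nose": 30.0},
                             {"mouth": 80.0}, {"mouth": 40.0})
print(deltas)  # {'pinch': {'nose': 60.0}, 'expression': {'mouth': 40.0}}

# When a new character is drawn, the stored differences become its
# initial face-pinching and expression data.
new_initial_features = {**deltas["pinch"], **deltas["expression"]}
```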
Step S123, performing skinning operation on a new target model for drawing the new animated character based on the initial facial features of the new animated character, so as to obtain a new skinned model.
In some embodiments, the skinning operation on the new target model of the new animated character is performed by wrapping the new target model with a proxy model containing the initial facial features of the new animated character, so as to position the bones of the new target model; during positioning, the new target model is given the initial facial features of the new animated character, and the new skin model is obtained.
Step S124, based on the target facial features of the new animation character, the initial facial features of the new skin model are modified so as to animate the new skin model.
In some embodiments, the action to be performed by the new animated character is first determined, and the target facial features of the new animated character are determined based on that action; for example, for an eyebrow-raising action, the target facial feature data is the data required to raise the eyebrows. Based on the target facial feature data, the initial facial feature data of the new skin model is adjusted so that the adjusted new skin model can generate the animation to be realized by the new animated character. Finally, the animation generated by the new skin model is applied to the new target model, thereby completing the drawing of the new animated character.
In the embodiment of the invention, the facial features of a created animated character are stored and used as the initial facial features of other new animated characters, so that the data is reused and the process of drawing animated characters is significantly accelerated.
In some embodiments, the initial skinning operation is performed on the target model of the animated character to be drawn using the created animation drawing model, and before step S102, the method further comprises the following steps:
and a first step of acquiring a proxy model in the animation drawing model.
In some embodiments, the skeleton of the proxy model has a first range within which it can drive the nodes of the target model to move. The animation drawing model can be understood as a pre-built framework for drawing animated characters. For example, in the animation drawing model, three levels of bones and a two-layer control system are built: the first-level bones are used for positioning the target model of the animated character to be drawn; after positioning is completed, the positioning data is fed back to the first-layer control system, i.e. the face-pinching control system, which then controls the second-level bones to perform a face-pinching operation on the positioned target model to obtain a skin model. After face pinching is completed, the data is fed back to the expression binding system, which controls the third-level bones to set and record the expression data of the face-pinched skin model, and the expression binding system outputs the final result to the skinned skeleton. In this way, the game skeleton, i.e. the skinned skeleton, is produced.
And a second step of surrounding the proxy model with the target model so that the distribution of bones in the proxy model corresponds to the distribution of nodes in the target model.
In some embodiments, the proxy model is wrapped around the target model as completely as possible. Because the proxy model defines a first range within which its bones can drive the nodes of the target model to move, after wrapping, the distribution of the target model's nodes corresponds to the distribution of bones in the proxy model.
And thirdly, determining a target model surrounding the proxy model as the skin model.
In some embodiments, as shown in fig. 6C, the model 604 is a target model, i.e., a skin model, that encloses the proxy model.
The manner of obtaining the skin model in the first to third steps can be regarded as an initial skinning of the target model, i.e. an initial setting of the range within which the bones in the proxy model can drive the nodes of the target model. In subsequent production, the first range in the skin model may be adjusted according to the action to be realized by the animated character to be drawn, for example by increasing the range of nodes of the target model that the bones of the proxy model can drive.
In some embodiments, to enable the final skin model to produce more realistic character motion, the skin data of the skin model is further adjusted. After step S102, the method further includes the following steps; see fig. 3C, which is a schematic flowchart of still another implementation of the animation drawing method according to the embodiment of the present invention, described on the basis of fig. 3A:
step S131, transmitting the first range set in the proxy model to the skin skeleton, so as to determine a second range in which the skeleton of the skin skeleton can drive the node of the target model to perform the activity.
In some embodiments, the second range is adjusted on the basis of the first range. For example, the first range indicates that a bone can drive 50% of the nodes of the target model; if, within that range, the animation generated by the skin model cannot match the action to be realized by the animated character to be drawn, the range of nodes that the bone can drive is further adjusted from 50% to, say, 70% so that it fully matches the intended action. The skinning skeleton is used to skin the sample model whose face pinching has been completed; that is, the skinning skeleton is used to adjust more precisely the range of nodes in the skin model that each bone can drive, and by using the range within which the bones of the skinning skeleton can drive the nodes of the target model, the skin model can produce more lifelike actions for the animated character to be drawn.
Step S132, adjusting skin data of the skin model based on the second range, to obtain an updated skin model.
In some embodiments, the range of nodes in the skin model that the bones can drive is adjusted using the re-determined second range, so that the updated skin model can produce the animation that the animated character to be drawn needs to realize.
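A hypothetical sketch of the weight transfer and refinement, e.g. widening one bone's influence from 50% to 70% of the target-model nodes; the per-bone ratios are a simplification of real per-vertex skin weights and are not the method's actual data format.

```python
def transfer_and_refine_weights(first_range: dict, adjustments: dict) -> dict:
    """Copy the proxy model's per-bone influence ratios onto the skinning
    skeleton, then apply per-bone corrections to obtain the second range."""
    second_range = dict(first_range)             # transfer the first range as-is
    for bone, new_ratio in adjustments.items():  # refine where the animation needs it
        second_range[bone] = new_ratio
    return second_range


second = transfer_and_refine_weights({"jaw": 0.5, "brow_l": 0.4}, {"jaw": 0.7})
print(second)  # {'jaw': 0.7, 'brow_l': 0.4}
```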
And step S133, modifying the initial facial features of the updated skin model based on the target facial features of the animation character to be drawn so as to enable the updated skin model to generate an animation.
And step S134, the animation is acted on the target model, and the animation role to be drawn is obtained.
In some embodiments, in order to better meet the artist's requirements for diverse facial features and expressions of the animated character to be drawn, step S103 may be implemented as follows; referring to fig. 3D, fig. 3D is a schematic flowchart of another implementation of the animation drawing method provided by the embodiment of the invention, described on the basis of fig. 3A:
step S141, determining initial face pinching data and initial expression data of the skin model.
In some embodiments, the initial face-pinching data and the initial expression data are the initial facial feature data that the animation drawing model gives to the target model of the animated character to be drawn.
Step S142, determining target face pinching data and target expression data required for realizing target actions of the animation roles.
In one specific example, the target action of the animated character may be understood as an action to be performed by the animated character, such as, for example, opening the mouth, blinking, or lifting the eyebrows, etc.
Step S143, adjusting the initial face-pinching data of the skin model by using the target face-pinching data to obtain a skin face-pinching model.
In some embodiments, the target face-pinching data is the facial feature data corresponding to the action that the animated character needs to perform. First, a first difference between the target face-pinching data and the initial face-pinching data is determined; then, the initial face-pinching data is adjusted according to the first difference to obtain the skin face-pinching model. For example, if the bridge of the nose needs to be turned up, the target face-pinching data is the face-pinching data required for the upturned nose bridge: the first difference between that data and the initial face-pinching data is determined, and the initial face-pinching data is then adjusted based on the first difference, so that the upturned nose bridge is realized and the skin face-pinching model is obtained.
Step S144, adjusting the initial expression data of the skin face-pinching model by using the target expression data to obtain a skin adjustment model.
In some embodiments, the target expression data is the expression data required for the action to be performed by the animated character. First, a second difference between the target expression data and the initial expression data is determined; then, the initial expression data is adjusted according to the second difference to obtain the skin adjustment model. For example, if a pucker expression is required, the target expression data is the expression data required for the pucker: the second difference between that data and the initial expression data is determined, and the current initial expression data is then adjusted based on the second difference to realize the pucker, so that the skin adjustment model is obtained.
And step S145, generating the action to be realized of the animation character to be drawn through the skin adjustment model.
In some embodiments, after the initial face-pinching data and the initial expression data are adjusted based on the target face-pinching data and target expression data of the animated character, the resulting skin adjustment model can generate the action to be realized by the animated character in accordance with the artist's requirements.
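One way to picture step S145 is to interpolate each adjusted channel from its initial value towards its target value over several frames, so that the skin adjustment model produces the desired action; the interpolation itself, the channel names and the frame count are illustrative assumptions, since the embodiment does not prescribe how the frames are generated.

```python
def generate_action_frames(initial: dict, target: dict, frames: int = 5):
    """Interpolate each facial channel from its initial value towards the target
    value so the adjusted skin model produces the desired action over time."""
    for i in range(frames + 1):
        t = i / frames
        yield {k: initial.get(k, 0.0) + t * (target[k] - initial.get(k, 0.0))
               for k in target}


# e.g. turning the nose bridge up (face pinching) while forming a pucker (expression)
for frame in generate_action_frames({"nose_bridge": 50.0, "pucker": 20.0},
                                    {"nose_bridge": 80.0, "pucker": 70.0}):
    print(frame)
```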
In some embodiments, the process of creating an animation drawing model for drawing an animated character is described with reference to fig. 3E, which is a schematic flowchart of still another implementation of the animation drawing method according to an embodiment of the present invention:
step S151, positioning the facial skeletal joints of the sample model for drawing the sample character, to obtain a positioned sample model.
In some embodiments, three levels of bones and a two-layer system are set up for the animation drawing model, and the data is inherited step by step between the three levels of bones. For example, the positioning-complete data output by the first-level bones is sent to the face-pinching control system, which controls the second-level bones to perform the face-pinching operation; after the second-level bones finish face pinching, the face-pinching data is output to the expression binding system, which controls the third-level bones to set and record the expression. Step S151 can be regarded as being implemented by the first-level bones of the constructed animation drawing model; for example, the first-level bones are used to position the facial skeletal joints of a sample model for drawing a sample character, so as to obtain a positioned sample model.
And step S152, modifying facial features of the positioned sample model to obtain a sample model with a pinched face.
In some embodiments, step S152 can be regarded as being implemented by the second-level bones in the animation drawing model. In a specific example, modifying the facial features of the positioned sample model may be performed in response to a first control instruction issued by the face-pinching control system: the positioning-complete data output by the first-level bones is sent to the face-pinching control system, and the face-pinching control system controls the second-level bones to modify the facial features of the positioned sample model, so that the face-pinched sample model is obtained.
In step S153, expression data of the sample model with the pinched face is recorded.
In some embodiments, step S153 can be regarded as being implemented by the third-level bones in the animation drawing model: the face-pinching data output when the second-level bones finish face pinching is sent to the expression binding system, and the expression binding system controls the third-level bones to set and record the expression data of the face-pinched sample model. In a specific example, step S153 may be to send the data for modifying the facial features to the expression binding system and, in response to a second control instruction issued by the expression binding system, record the expression data of the face-pinched sample model. In the embodiment of the invention, the expression binding system can also select action nodes from a created action node library and output them to the third-level bones, so that the third-level bones record the actions of those nodes. In a specific example, action nodes for realizing the actions of the sample character are first created based on the expression elements in an expression library, where different action nodes represent different actions of the sample character; the expression library may be FACS, and the expression elements may be the base expressions in FACS, from which a plurality of action nodes are created to realize various actions. Next, an action node library is formed from the plurality of created action nodes; finally, the action node library is stored in the animation drawing model. In this way, when drawing an animated character, the artist can select from the action node library several target action nodes that match the action to be realized by the animated character and combine these target nodes to draw that action, which makes the whole drawing process simpler and faster. It also helps to automate related work such as real-time editing, previewing, and intelligent importing and exporting of specification resources.
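A hypothetical sketch of building the action node library from base expressions; the FACS-style unit names, the weights and the composite eyebrow-raising node are placeholders, not the actual library contents.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class ActionNode:
    name: str
    # Weighted combination of base-expression units that realises the action.
    base_weights: Dict[str, float]


def build_action_node_library(base_expressions: List[str]) -> Dict[str, ActionNode]:
    """Create one action node per base expression; composite nodes combining
    several base units can be added on top."""
    library = {name: ActionNode(name, {name: 1.0}) for name in base_expressions}
    library["brow_raise_both"] = ActionNode(
        "brow_raise_both",
        {"inner_brow_raiser": 0.8, "outer_brow_raiser": 0.6},
    )
    return library


library = build_action_node_library(["inner_brow_raiser", "outer_brow_raiser",
                                     "lip_corner_puller", "jaw_drop"])
print(sorted(library))
```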
Step S154, determining a proxy model based on the set range of motion of each skeleton capable of driving the nodes of the sample model with the pinched face.
In some embodiments, the range within which each bone can drive the nodes of the face-pinched sample model is first set, for example a bone driving 40% of the nodes of the face-pinched sample model. Then, the proxy model is generated based on this range and the data for modifying the facial features (i.e. the face-pinching data). The proxy model therefore carries a first range within which its bones can drive the nodes of a target model to move, so that the initial skinning of the target model is realized once the proxy model is wrapped around the target model.
Step S155, creating the animation drawing model based on the proxy model, the expression data, and the data for modifying the facial features.
In some embodiments, the expression data and the data for modifying the facial features are stored separately in corresponding files; the data files that separately store the expression data and the facial-feature modification data are imported into the proxy model, and the animation drawing model is built by combining the relationship between the two layers of systems corresponding to these two kinds of data.
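This assembly step might look roughly like the following; the file names and the JSON layout are assumptions made for the sketch, not the plug-in's actual export formats.

```python
import json
from pathlib import Path


def load_animation_drawing_model(resource_dir: str) -> dict:
    """Combine the proxy model with the separately stored face-pinching and
    expression data files to form the animation drawing model."""
    root = Path(resource_dir)
    # File names and JSON layout are assumptions made for this sketch.
    proxy_model = json.loads((root / "proxy_model.json").read_text(encoding="utf-8"))
    pinch_data = json.loads((root / "pinch_data.json").read_text(encoding="utf-8"))
    expression_data = json.loads((root / "expression_data.json").read_text(encoding="utf-8"))
    return {"proxy": proxy_model, "pinch": pinch_data, "expression": expression_data}
```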
In the embodiment of the invention, a multi-layer control system is created to separate the facial-feature modification data from the expression data, and the artist can reuse sample data by nesting and exporting it to various 3D characters, which greatly improves data generality. FACS theory and data samples are also integrated into the control system, so that expression animation units can be edited a second time with reference to FACS and the third-level expression bones then driven to generate animation, making the whole drawing process simpler and faster.
In some embodiments, in order to apply the face-pinching data and expression data of a drawn animated character to other new animated characters and to update the files storing the face-pinching data and expression data in time, the method further includes the following steps after step S153:
and the first step is to send the stored face pinching data and expression data to a skin skeleton for skin of the sample model with the face pinching.
In some embodiments, the skinned skeleton may be considered a final built game frame in which any animated character may be drawn.
And secondly, saving the skin skeleton comprising the kneading face data and the expression data in the animation drawing model.
In some embodiments, after the pinching face data and the expression data are sent to the skinned skeleton, the skinned skeleton is a part of the animation drawing model, so as to be used for performing fine skinning on the skinned model.
In some embodiments, after the action node library is created based on the expression elements in the expression library (e.g. the base expressions in FACS), the target facial features of the animated character to be drawn may be determined as follows:
First, a target action node matching the action to be realized by the animated character is searched for in the action node library stored in the animation drawing model.
In a specific example, if the action to be realized by the animated character is opening the mouth, a target action node capable of realizing mouth opening is searched for in the action node library.
The target facial features are then determined based on the action corresponding to the target action node.
In some embodiments, the target facial features are determined by determining the target face-pinching data and target expression data required to realize the action corresponding to the target action node, for example the target face-pinching data and target expression data required to realize the mouth-opening action.
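A minimal sketch of this lookup; the library layout, the node name and the data values are hypothetical.

```python
def find_target_features(action_library: dict, desired_action: str) -> dict:
    """Look up the action node that matches the action to be realised and
    return the face-pinching / expression data it requires."""
    node = action_library.get(desired_action)
    if node is None:
        raise KeyError(f"no action node matches '{desired_action}'")
    return node


library = {"open_mouth": {"pinch": {"jaw_width": 55.0},
                          "expression": {"jaw_drop": 80.0}}}
print(find_target_features(library, "open_mouth"))
```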
In the following, an exemplary application of the embodiment of the present invention to an actual game animation scene is described, taking the creation of a game character as an example.
FIG. 4A is a drawing interface diagram of an animated character according to an embodiment of the present invention. As shown in FIG. 4A, the interface can display the resource results of the face-pinching and expression data of various 3D characters, such as a child character 401 or an adult male character 402; the type of model is not limited, and the method can be applied to both male and female characters in a realistic or cartoon style. The model adaptation button 412, automatic binding button 413, save-skin button 414 and automatic generation button 415 in the toolbar 411 are used to produce the resources of the various characters on the drawing interface, and the resources are then imported into the game to create the game's character interface. As shown in FIG. 4B, FIG. 4B is an interface diagram for drawing a female character, and character 403 is a game character to be drawn according to an embodiment of the present invention. FIG. 4C is an interface diagram of a female character on which a face-pinching operation is performed according to an embodiment of the present invention; as shown in FIG. 4C, the created data for modifying character 403 has been synchronized into the game and is displayed in the toolbar 404, and the player can interact with character 403 through the toolbar 404, for example by pinching the face.
In embodiments of the present invention, the game character may be female or male, and the artist or game player may modify any character via tools such as toolbar 404 of FIG. 4C.
An embodiment of the present invention provides an animation drawing method, fig. 5A is a schematic diagram of a composition structure of an animation drawing model according to an embodiment of the present invention, and the following description is made with reference to fig. 5A:
the skeletal system 51 includes: first level skeleton 501, second level skeleton 502, third level skeleton 503, and game skeleton 504.
In some embodiments, the first-level skeleton 501 is used for initial character positioning, the second-level skeleton 502 is used for face pinching, and the third-level skeleton 503 is used for recording expressions. Data is inherited step by step between these three levels of bones, and the game skeleton 504 is the skinned skeleton. Finally, the third-level skeleton 503 passes the data results to the game skeleton 504.
The control system 52 includes: a pinching face control system 521 and an expression binding system 522.
In some embodiments, the face-pinching control system 521 is configured to record custom face-pinching data and specifications; it is driven entirely by the first-level bones, and after the face-pinching data of the system is superimposed, the second-level bones are driven to generate animation. The expression binding system 522, driven by the second-level bones, integrates FACS theory and FACS data samples into the controller system; the expression data of the character can be edited a second time with reference to the action nodes formed from the base expressions in FACS, and the third-level bones are then driven to generate animation based on the re-edited expression data. In this way, the data-frame designs for face pinching and expression are integrated, yielding the data flow that finally generates the animation.
The data editing module 53 is configured, after drawing of the game skeleton is completed, to create in the data editing module 53 an interface for modifying the parts that need to be modified, and to perform the data modification that completes the character drawing.
In some embodiments, the game skeleton may be skeleton 54 shown in fig. 5B, and skeleton 54 may be understood to be a skeleton presentation in 3D software, a virtual skeleton in a 3D view.
The resource export module 54 is configured to export data after completing the role drawing, and store the exported data in a corresponding folder, which can be used in the drawing of other subsequent roles.
Finally, after the data is exported, engine settings may be made based on the exported data.
In one specific example, the related drawing process of the game character is completed with assistance of the programmed art tool, and the steps are as follows:
First, a model scene file is opened.
In some embodiments, the model scene file may be a test file in which the animated character to be drawn is contained.
Second, in response to a click operation entered at the model adaptation button, the bone of the target model is located.
Here, in response to the input click operation, the proxy model is obtained. As shown in fig. 6A, the model 601 is the target model; after the model adaptation button 602 is clicked, the screen shown in fig. 6B is displayed and the proxy model 603 is obtained.
Third, the proxy model is adjusted until it completely wraps the target model.
In some embodiments, in the third step the artist may adjust the proxy model manually so that it completely surrounds the target model, or the adjustment may be performed automatically by third-party software. The model 601 is an empty model for drawing the animated character to be drawn. When adjusting the proxy model to wrap the character to be drawn, the finer the adjustment the better, so that the distribution of bones contained in the proxy model corresponds to the distribution of bones in the character to be drawn. After the proxy model completely wraps the character to be drawn, the screen shown in fig. 6C is obtained, i.e. the model 604 is the character to be drawn completely wrapped by the proxy model. In this way, wrapping the character to be drawn with the proxy model realizes intelligent matching and intelligent positioning of the bones; in addition, during skinning, the weights of the proxy model are transferred to the skin, which reduces the time required for the skinning operation and speeds up character drawing.
Fourth, in response to a click operation input by the artist at the automatic binding button, the related bones are automatically generated, the face-pinching node is created, the controller system is built, and so on.
In some embodiments, the automatically generated bones are saved in the background, i.e. they are not visible on the character drawing interface, while the created face-pinching node (i.e. the face-pinching control system) and the controller system (i.e. the expression binding system) are visible to the artist on the drawing interface. As shown in fig. 6D, the artist can input face-pinching data for the character by clicking the face-pinching node 605; the face-pinching data is then passed to the controller system 606, and the controller system 606 controls the modification of the character's facial features based on that data. The controller system 606 is also used to control changes in the character's facial expression. In a specific example, the controller system 606 can be regarded as a black box containing a number of functional nodes, animation curves and arithmetic relationships (addition, subtraction, multiplication and division). After the first-level bones position the character, the controller system 606, acting as the face-pinching control system, transfers the output relationships to the second-level bones. Between the second-level and third-level bones there is another black box, the expression binding system, which contains a number of parameters and influence factors; these act on the data in the face-pinching control system, and the final output is passed to the third-level bones so that the output values of the node relationships are given to the expression-driving bones. Finally, the third-level bones transfer the output values to the game skeleton through the set nodes in the skeleton, yielding the skinned skeleton.
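The "black box" of functional nodes, animation curves and arithmetic relationships can be pictured as a tiny evaluation graph that maps input channels to the next bone level; the class, the channel names and the scaling are illustrative assumptions only.

```python
from typing import Callable, Dict


class ControllerSystem:
    """Toy node graph: each output channel is computed from the input channels
    by an arithmetic relationship, then handed to the next bone level."""

    def __init__(self) -> None:
        self.relations: Dict[str, Callable[[Dict[str, float]], float]] = {}

    def connect(self, output: str, fn: Callable[[Dict[str, float]], float]) -> None:
        self.relations[output] = fn

    def evaluate(self, inputs: Dict[str, float]) -> Dict[str, float]:
        return {out: fn(inputs) for out, fn in self.relations.items()}


pinch_controller = ControllerSystem()
# e.g. a second-level jaw bone driven by a scaled, offset face-pinching channel
pinch_controller.connect("jaw_bone_ty", lambda i: i["mouth_open"] * 0.01 - 0.5)

second_level = pinch_controller.evaluate({"mouth_open": 70.0})
print(second_level)  # roughly {'jaw_bone_ty': 0.2}
```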
Fifth, the first range contained in the proxy model is transferred to the skinning skeleton to determine a second range within which the bones of the skinning skeleton can drive the nodes of the target model to move, so as to finish skinning the positioned character.
In some embodiments, an inter-model skin transfer button 701, shown in fig. 7A, is provided on the drawing interface. Clicking the button displays the interface 702. In interface 702, the first range of each part of the proxy model is loaded in window 703 (in a specific example, the first range can be understood as a weight value, i.e. the weight with which a bone influences the nodes of the target model), and the corresponding part of the animated character to be drawn is loaded in window 704; for example, the first range of the head contained in the proxy model is recorded in window 703 and the head of the animated character to be drawn is loaded in window 704. Clicking the "go" button 705 then transfers the first range in the proxy model to the skinning skeleton. After the first range has been transferred, automatic skinning is achieved with an accuracy of about 85%. The weight distribution of each bone of the automatically skinned character to be drawn is then checked and corrected until it is satisfactory, with emphasis on the weight distribution of the upper and lower lips and the eyelids. To establish the relationship between the bones and the model motion of the skinned character 721 to be drawn, as shown in fig. 7B, after weights have been painted for each bone of the skinned character 721, the bones will drive the corresponding nodes of the model to change; how the bones drive the nodes to move, however, requires the first range to be further modified into a second range (in a specific example, the second range can be understood as a weight value different from the first range, i.e. the bones' influence on the nodes of the target model readjusted on the basis of the first range), and the bones then influence the corresponding nodes of the target model according to the second range. After the proxy model and the target model are matched, the first range in the proxy model is transferred to the target model, completing the model weight transfer and speeding up the skinning of the target model. In a specific example, after the first range in the proxy model is transferred to the target model, the range of nodes driven by each bone is further painted based on the relationship between the proxy model's bones and the target model's nodes, for example adjusting one bone's driving range to 30% of the nodes and another bone's driving range to 50%.
In the sixth step, the target model is selected, and a save instruction is issued via the save-skin button to save the skin model.
In some embodiments, as shown in FIG. 7C, the save-skin button 732 is clicked in the interface 731 to save the skin. Then, the automatic generation button 741 shown in FIG. 7D is clicked; after normal completion, the prompt box 742 pops up, and clicking the prompt box 742 to confirm automatically opens the folder and generates the related resource files on the original path of the target model. In a specific example, the face pinching data is stored in a fixed file, which can provide the artist with a large amount of material for drawing characters; the node for modifying the face pinching data can be provided for the engine, and the skeleton model for the engine can be stored in another file. When the skin model is saved, the face pinching node in the file that stores the face pinching data is empty, so template data needs to be added. By clicking the face pinching node 751 as shown in FIG. 7E, a face pinching/body type template is imported, i.e., face pinching data is imported into the fixed file, and face pinching data for the target model can be generated in the interface 761 as shown in FIG. 7F, such as nose front/back, nostril front/back, nostril orientation X, nostril orientation Y, nostril size, bridge slope, wing width, tip slope, tip height, tip width, bridge length, and nose up/down.
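As an assumed illustration of keeping the face pinching data in a fixed, reusable file (the file format actually used by the tool is not described here), a small Python sketch might serialize the 0-100 channels as JSON; the channel names and default values are hypothetical.

import json
from pathlib import Path

DEFAULT_PINCH_TEMPLATE = {
    "nose_front_back": 50, "nostril_front_back": 50, "nostril_orientation_x": 50,
    "nostril_orientation_y": 50, "nostril_size": 50, "bridge_slope": 50,
    "wing_width": 50, "tip_height": 50, "bridge_length": 50,
}

def save_pinch_template(path: Path, data: dict) -> None:
    # export the face pinching data so the engine, or a similar character, can reuse it
    path.write_text(json.dumps(data, indent=2))

def load_pinch_template(path: Path) -> dict:
    # import a template into an empty face pinching node; missing channels fall back to 50
    data = json.loads(path.read_text())
    return {name: data.get(name, 50) for name in DEFAULT_PINCH_TEMPLATE}

save_pinch_template(Path("pinch_template.json"), DEFAULT_PINCH_TEMPLATE)
print(load_pinch_template(Path("pinch_template.json"))["nostril_size"])  # 50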
In the seventh step, the initial face pinching data in the skin model is adjusted.
In some embodiments, as shown in FIG. 8A, entering the face pinching node 801 starts the face pinching data editor 811 shown in FIG. 8B and pops up the face pinching data modification panel 812, where the face pinching data can be modified according to the wishes of the artist. In a specific example, 50 is typically the default value, 0 the minimum and 100 the maximum. Dragging the slider previews the change of the attribute, and unsatisfactory values can be modified at any time. While in the active mode 813, the face pinching data can be modified in the face pinching data modification panel 812; while in the recording state 814, the modified face pinching data is recorded; and while in the state of adding an influence bone 815, bones can be added to the model. As shown in FIG. 8C, the face pinching data of the nostrils 831 is adjusted from 50 to 100 in the face pinching data modification panel, resulting in the adjusted model 832, in which the nostrils are visibly turned upward. After all the face pinching data to be modified has been modified, the window is closed and a prompt asks whether to export the face pinching data; if the button confirming the export of the face pinching data is clicked, the folder pops up automatically, and a data file is generated on the original path of the target model for the engine to call, or for repeated use by similar characters.
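A short Python sketch of such an edit session, under the assumption that each channel is simply a 0-100 value with default 50, could look like the following: previewed values are clamped and kept in a working copy, unsatisfactory values can be reverted at any time, and only recorded values are kept; all names are illustrative.

from typing import Dict

class PinchEditSession:
    def __init__(self, channels: Dict[str, float]):
        self._recorded = dict(channels)   # last recorded values
        self._working = dict(channels)    # values currently previewed with the slider

    def preview(self, channel: str, value: float) -> float:
        # clamp to the editor's valid range (0 minimum, 100 maximum)
        self._working[channel] = max(0.0, min(100.0, value))
        return self._working[channel]

    def revert(self, channel: str) -> None:
        self._working[channel] = self._recorded[channel]

    def record(self) -> Dict[str, float]:
        self._recorded = dict(self._working)
        return dict(self._recorded)

session = PinchEditSession({"nostril_up": 50.0})
session.preview("nostril_up", 100.0)   # drag the slider: the nostrils move upward
print(session.record())                # {'nostril_up': 100.0}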
In the eighth step, the initial expression data in the skin model is adjusted.
In some embodiments, the expression FACS editor is clicked; the first time, the attribute values are shown in yellow, and the interface is similar to the face pinching editor. As shown in FIG. 9A, a click instruction is input to enter the expression FACS editor 901, and the window 902 pops up. Each channel displayed in the window 902 represents a FACS parameter, and each parameter corresponds to a base expression within FACS, i.e., base expressions 1 through 19 are different base expression units in FACS. At this point the window 902 is not editable; as shown in FIG. 9B, a freeze instruction needs to be input via the button 921 for freezing FACS data. After clicking that button, each attribute can be edited just as when pinching a face; as shown in FIG. 9C, the mouth of the model 931 can be edited into an open mouth, and the mouth-opening expression data can be adjusted in the panel 932. In the embodiment of the present invention, after the base expressions are edited based on FACS theory, a plurality of action nodes are obtained, and a specific expression effect (for example, a combination of several base expressions) can be previewed and verified by combining the action nodes. As shown in FIG. 9D, various action nodes are displayed in the panel 941, for example eyebrow lifting, eyebrow lowering, eyebrow squeezing, upper eyelid lifting, lower eyelid lifting, eye closing, eye lifting, cheek lifting, eyelid tightening, nostril lifting, nostril tightening, nostril expanding, lip lifting, mouth corner lowering, lower lip lifting, mouth corner stretching, mouth corner tightening, upper lip lowering, lower lip lifting, lip bending, lip curling inwards, mouth skin flattening, lip lifting or lip opening, and the like. In a specific example, as shown in FIG. 9E, the action node eyebrow lifting 951 includes 8 action nodes related to the eyebrows, such as the 5 action nodes in block 952 in FIG. 9E (for example, left eyebrow medial upward 86, left eyebrow medial upward 84, left eyebrow lateral downward, and left eyebrow lateral upward), right eyebrow medial upward in block 953, right eyebrow medial upward in block 954, and right eyebrow lateral upward 04 in block 955. After all the expression data has been modified, the button for activating FACS data needs to be clicked to ensure that the data in the control system is updated in time. At this point, a realistic expression animation can be drawn by operating the controller.
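As a hedged illustration of previewing a combined expression from FACS-style base expression units (the real action nodes and their data belong to the tool and are not reproduced here), the following Python sketch blends the per-channel offsets of several base units with weights; the unit names and offsets are invented for the example.

from typing import Dict

BASE_UNITS: Dict[str, Dict[str, float]] = {
    "inner_brow_raiser": {"brow_l_in_y": 1.0, "brow_r_in_y": 1.0},
    "lip_corner_puller": {"mouth_l_x": 0.8, "mouth_r_x": 0.8},
    "jaw_drop": {"jaw_y": -1.0},
}

def combine(unit_weights: Dict[str, float]) -> Dict[str, float]:
    # weighted sum of the offsets contributed by each base expression unit
    pose: Dict[str, float] = {}
    for unit, weight in unit_weights.items():
        for channel, offset in BASE_UNITS.get(unit, {}).items():
            pose[channel] = pose.get(channel, 0.0) + weight * offset
    return pose

# preview a "smile with slightly open mouth" by mixing two base expressions
print(combine({"lip_corner_puller": 1.0, "jaw_drop": 0.4}))
# {'mouth_l_x': 0.8, 'mouth_r_x': 0.8, 'jaw_y': -0.4}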
Continuing with the description of an exemplary architecture, provided by embodiments of the present invention and implemented as software modules, of the animation drawing device 455, in some embodiments, as shown in FIG. 2, the software modules of the animation drawing device 455 stored in the memory 440 may comprise:
a first acquisition module 4551 for acquiring a target model for drawing an animated character;
a first skinning module 4552, configured to locate nodes in the target model to obtain a skin model having the initial facial features of the animated character; the facial nodes of the skin model have a certain movement range;
a first modification module 4553, configured to modify, based on target facial features of an animated character to be drawn, initial facial features of the skin model, so as to animate the skin model;
and the first determining module 4554 is configured to apply the animation to the target model to obtain the animated character to be drawn.
In some embodiments, the first modification module 4553 is further configured to:
saving the modified initial facial feature data;
and, in the process of drawing an animated character different from the animated character to be drawn, assigning the initial facial features of that different animated character based on the modified initial facial feature data.
In some embodiments, the initial facial feature data comprises: initial face pinching data and initial expression data, wherein the initial face pinching data are used for initially setting facial features of the animation roles; the initial expression data is used for initially setting the facial expression of the animation character.
In some embodiments, the first skin module 4552 is further configured to:
acquiring a proxy model in an animation production model, wherein a first range in which the skeleton of the proxy model can drive the nodes of the target model to move is set in the proxy model;
wrapping the proxy model around the target model such that the distribution of bones in the proxy model corresponds to the distribution of nodes in the target model;
and determining a target model wrapping the proxy model as the skin model.
In some embodiments, the first skin module 4552 is further configured to:
positioning the face bone joints of a sample model for making a sample character to obtain a positioned sample model;
modifying facial features of the positioned sample model to obtain a sample model with a face pinched;
recording expression data of a sample model with a face pinched;
and creating the animation production model based on the data for modifying the facial features and the expression data.
In some embodiments, the first skin module 4552 is further configured to:
creating action nodes for realizing actions of the sample roles based on expression elements in an expression library; wherein, the actions of the sample roles represented by different action nodes are different;
forming an action node library based on the plurality of created action nodes;
and storing the action node library in the animation production model.
In some embodiments, the first skin module 4552 is further configured to:
modifying facial features of the positioned sample model with the second-level skeleton in response to a control instruction issued by a face pinching control system;
and transmitting the data for modifying the facial features to an expression binding system, so that the expression binding system controls the third-level skeleton to record the expression data of the sample model with the face pinched.
In some embodiments, the first skin module 4552 is further configured to:
transmitting the face pinching data and the expression data stored in the third-level skeleton to a skin skeleton for skinning the sample model with the face pinched;
and saving the skinned skeleton in the animation production model.
In some embodiments, the first skin module 4552 is further configured to:
transmitting the first range set in the proxy model to the skin skeleton to determine a second range in which the skeleton of the skin skeleton can drive nodes of the target model;
and adjusting skin data of the skin model based on the second range to obtain an updated skin model.
In some embodiments, the first modification module 4553 is configured to:
determining initial face pinching data and initial expression data of the skin model;
determining target face pinching data and target expression data required by the action to be realized of the animation role;
in the skin model, adjusting the initial face pinching data of the skin model by using the target face pinching data to obtain a skin face pinching model;
and in the skin face pinching model, adjusting the initial expression data of the skin face pinching model by using the target expression data to obtain the skin model, so that the skin model produces the action to be realized of the animated character to be drawn.
In some embodiments, the first modification module 4553 is further configured to:
determining a first difference between the target face pinching data and the initial face pinching data;
and in the skin model, adjusting the initial face pinching data according to the first difference value to obtain the skin face pinching model.
In some embodiments, the first modification module 4553 is further configured to:
determining a second difference between the target expression data and the initial expression data;
and storing the first difference and the second difference in the animation production model, so that, in the process of drawing an animated character different from the animated character to be drawn, the facial features of that different animated character can be modified based on the first difference and the second difference.
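A minimal sketch, assuming the face pinching and expression channels are stored as plain key-value data, of how the first difference and the second difference could be computed for one character and then re-applied to the initial data of a different character; the names and values below are hypothetical (Python).

from typing import Dict

def difference(target: Dict[str, float], initial: Dict[str, float]) -> Dict[str, float]:
    # per-channel difference between the target data and the initial data
    return {name: target[name] - initial.get(name, 0.0) for name in target}

def apply_difference(initial: Dict[str, float], diff: Dict[str, float]) -> Dict[str, float]:
    # modify another character's initial data with a stored difference
    return {name: initial.get(name, 0.0) + diff.get(name, 0.0) for name in set(initial) | set(diff)}

# character A: record how far the target deviates from the initial data
first_diff = difference({"nostril_up": 100.0}, {"nostril_up": 50.0})   # face pinching data
second_diff = difference({"jaw_drop": 30.0}, {"jaw_drop": 0.0})        # expression data

# character B: reuse the stored differences on its own initial data
print(apply_difference({"nostril_up": 40.0}, first_diff))   # {'nostril_up': 90.0}
print(apply_difference({"jaw_drop": 5.0}, second_diff))     # {'jaw_drop': 35.0}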
In some embodiments, the first modification module 4553 is further configured to:
searching a target action node matched with the action to be realized of the animation role from an action node library stored in the animation production model;
and determining the target facial features based on the action corresponding to the target action node.
In some embodiments, the first modification module 4553 is further configured to:
and determining target face pinching data and target expression data required by the action corresponding to the target action node as the target facial features.
Embodiments of the present invention provide a storage medium having stored therein executable instructions which, when executed by a processor, cause the processor to perform the method provided by the embodiments of the present invention.
In some embodiments, the storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, an optical disc, or a CD-ROM; or it may be any of various terminals including one or any combination of the above memories.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to files in a file system, may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a hypertext markup language (Hyper Text Markup Language, HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
As an example, the executable instructions may be deployed to be executed on one computing terminal or on multiple computing terminals located at one site or, alternatively, on multiple computing terminals distributed across multiple sites and interconnected by a communication network.
In summary, according to the embodiment of the present invention, for an animated character to be drawn, a target model for drawing the animated character is first acquired; then, nodes in the target model are positioned to obtain a skin model having the initial facial features of the animated character, where the facial nodes of the skin model have a certain movement range; the initial facial features of the skin model are modified based on the target facial features of the animated character to be drawn, so that the skin model produces an animation; finally, the animation is applied to the target model to obtain the animated character to be drawn. In this way, after the initial skinning operation on the target model, the target model is given some initial facial features, which increases the skinning speed, makes facial feature adjustments reusable across the drawing of different characters, and saves the time spent on facial adjustment of animated characters.
The foregoing is merely exemplary embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and scope of the present invention are included in the protection scope of the present invention.

Claims (14)

1. An animation rendering method, the method comprising:
acquiring a target model for drawing an animation role;
acquiring a proxy model in an animation drawing model, wherein a first range in which a skeleton of the proxy model can drive a node of the target model to move is set in the proxy model;
surrounding the proxy model with the target model such that a distribution of bones in the proxy model corresponds to a distribution of nodes in the target model;
determining a target model surrounding the proxy model as a skin model having initial facial features of the animated character; the facial nodes of the skin model have a certain movement range;
modifying the initial facial features of the skin model based on the target facial features of the animated character to be drawn, so as to cause the skin model to produce animation;
and acting the animation on the target model to obtain the animation role to be drawn.
2. The method of claim 1, wherein after animating the skin model, the method further comprises:
saving the data of the modified initial facial features;
generating initial facial features of the new animated character based on the modified data of the initial facial features when drawing a new animated character that is different from the animated character;
performing skinning operation on a new target model for drawing a new animation role based on initial facial features of the new animation role to obtain a new skinned model;
and modifying the initial facial features of the new skin model based on the target facial features of the new animated character, so as to animate the new skin model.
3. The method according to claim 1 or 2, wherein the initial facial feature data comprises:
initial face pinching data for initially setting facial features of the animated character;
and the initial expression data is used for initially setting the facial expression of the animation character.
4. The method of claim 1, wherein prior to the obtaining a proxy model in an animated rendering model, the method further comprises:
positioning a face bone joint of a sample model for drawing a sample character to obtain a positioned sample model;
modifying facial features of the positioned sample model to obtain a sample model with a face pinched;
recording expression data of a sample model with a face pinched;
determining a proxy model based on a range in which each set bone can drive nodes of the sample model with the face pinched to move;
creating the animated rendering model based on the proxy model, the expression data, and the data modifying the facial features.
5. The method according to claim 4, wherein the method further comprises:
creating action nodes for realizing actions of the sample roles based on expression elements in an expression library; wherein, the actions of the sample roles represented by different action nodes are different;
forming an action node library based on the plurality of created action nodes;
and storing the action node library in the animation drawing model.
6. The method of claim 4, wherein modifying facial features of the located sample model comprises:
modifying facial features of the located sample model in response to a first control instruction issued by a face pinching control system;
Correspondingly, the recording of the expression data of the sample model with the face pinching completed comprises:
and transmitting the data for modifying the facial features to an expression binding system, and recording the expression data of the sample model with the pinched face in response to a second control instruction sent by the expression binding system.
7. The method of claim 4, wherein after the recording of expression data of the sample model of the face pinching completion, the method further comprises:
transmitting the stored face pinching data and expression data to a skin skeleton for skinning the sample model with the face pinching completed;
and saving the skin skeleton comprising the face pinching data and the expression data in the animation drawing model.
8. The method of claim 7, wherein the method further comprises:
transmitting the first range set in the proxy model to the skin skeleton to determine a second range in which the skeleton of the skin skeleton can drive nodes of the target model to move;
and adjusting skin data of the skin model based on the second range to obtain an updated skin model.
9. The method of claim 1, wherein modifying the initial facial features of the skin model based on the target facial features of the animated character to be drawn to animate the skin model comprises:
determining initial face pinching data and initial expression data of the skin model;
determining target face pinching data and target expression data required for realizing target actions of the animation roles;
adjusting the initial face pinching data of the skin model by using the target face pinching data to obtain a skin face pinching model;
adjusting the initial expression data of the skin face pinching model by using the target expression data to obtain a skin adjustment model;
and generating the action to be realized of the animation role to be drawn through the skin adjustment model.
10. The method of claim 9, wherein the adjusting the initial face pinching data of the skin model by using the target face pinching data to obtain the skin face pinching model comprises:
determining a first difference between the target face pinching data and the initial face pinching data;
and adjusting the initial face pinching data according to the first difference value to obtain the skin face pinching model.
11. The method according to claim 10, wherein the method further comprises:
determining a second difference between the target expression data and the initial expression data;
Storing the first difference value and the second difference value in the animation drawing model;
and when a new animated character different from the animated character is drawn, generating initial facial features of the new animated character based on the first difference and the second difference.
12. An animation rendering device, the device comprising:
the first acquisition module is used for acquiring a target model for drawing the animation role;
the first skin module is used for acquiring a proxy model in an animation drawing model, wherein a first range in which a skeleton of the proxy model can drive a node of the target model to move is set in the proxy model;
the first skin module is further used for surrounding the proxy model to the target model so that the distribution of bones in the proxy model corresponds to the distribution of nodes in the target model; determining a target model surrounding the proxy model as a skin model having initial facial features of the animated character; the facial nodes of the skin model have a certain movement range;
the first modification module is used for modifying the initial facial features of the skin model based on the target facial features of the animation character to be drawn so as to enable the skin model to generate animation;
And the first determining module is used for acting the animation on the target model to obtain the animation role to be drawn.
13. A terminal, comprising:
a memory for storing executable instructions;
a processor for implementing the method of any one of claims 1 to 11 when executing executable instructions stored in said memory.
14. A storage medium having stored thereon executable instructions for causing a processor to perform the method of any one of claims 1 to 11.
CN201910780082.9A 2019-08-22 2019-08-22 Animation drawing method, device, terminal and storage medium Active CN110490958B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910780082.9A CN110490958B (en) 2019-08-22 2019-08-22 Animation drawing method, device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910780082.9A CN110490958B (en) 2019-08-22 2019-08-22 Animation drawing method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN110490958A CN110490958A (en) 2019-11-22
CN110490958B true CN110490958B (en) 2023-09-01

Family

ID=68553010

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910780082.9A Active CN110490958B (en) 2019-08-22 2019-08-22 Animation drawing method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN110490958B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111161427A (en) * 2019-12-04 2020-05-15 北京代码乾坤科技有限公司 Self-adaptive adjustment method and device of virtual skeleton model and electronic device
CN111729321B (en) * 2020-05-07 2024-03-26 完美世界(重庆)互动科技有限公司 Method, system, storage medium and computing device for constructing personalized roles
CN111768488B (en) * 2020-07-07 2023-12-29 网易(杭州)网络有限公司 Virtual character face model processing method and device
CN111951360B (en) * 2020-08-14 2023-06-23 腾讯科技(深圳)有限公司 Animation model processing method and device, electronic equipment and readable storage medium
CN111899319B (en) * 2020-08-14 2021-05-14 腾讯科技(深圳)有限公司 Expression generation method and device of animation object, storage medium and electronic equipment
CN114913278A (en) * 2021-06-30 2022-08-16 完美世界(北京)软件科技发展有限公司 Expression model generation method and device, storage medium and computer equipment
CN113470148B (en) * 2021-06-30 2022-09-23 完美世界(北京)软件科技发展有限公司 Expression animation production method and device, storage medium and computer equipment

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101833788A (en) * 2010-05-18 2010-09-15 南京大学 Three-dimensional human modeling method by using cartographical sketching
CN103052973A (en) * 2011-07-12 2013-04-17 华为技术有限公司 Method and device for generating body animation
CN103377484A (en) * 2012-04-28 2013-10-30 上海明器多媒体科技有限公司 Method for controlling role expression information for three-dimensional animation production
CN103824316A (en) * 2014-03-26 2014-05-28 广州博冠信息科技有限公司 Method and equipment for generating action pictures for object
CN106097418A (en) * 2016-06-14 2016-11-09 江苏师范大学 Cartoon character face verification method for designing based on Interactive evolutionary algorithm
CN106228592A (en) * 2016-09-12 2016-12-14 武汉布偶猫科技有限公司 A kind of method of clothing threedimensional model automatic Bind Skin information
CN107213638A (en) * 2017-04-06 2017-09-29 珠海金山网络游戏科技有限公司 A kind of 3D game bone processing systems and its processing method
CN107633542A (en) * 2016-07-19 2018-01-26 珠海金山网络游戏科技有限公司 One kind pinches face editor and animation fusion method and system
CN107657651A (en) * 2017-08-28 2018-02-02 腾讯科技(上海)有限公司 Expression animation generation method and device, storage medium and electronic installation
CN109285209A (en) * 2018-09-14 2019-01-29 网易(杭州)网络有限公司 Processing method, device, processor and the terminal of the mask of game role
CN109395390A (en) * 2018-10-26 2019-03-01 网易(杭州)网络有限公司 Processing method, device, processor and the terminal of game role facial model
CN109727302A (en) * 2018-12-28 2019-05-07 网易(杭州)网络有限公司 Bone creation method, device, electronic equipment and storage medium

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101833788A (en) * 2010-05-18 2010-09-15 南京大学 Three-dimensional human modeling method by using cartographical sketching
CN103052973A (en) * 2011-07-12 2013-04-17 华为技术有限公司 Method and device for generating body animation
CN103377484A (en) * 2012-04-28 2013-10-30 上海明器多媒体科技有限公司 Method for controlling role expression information for three-dimensional animation production
CN103824316A (en) * 2014-03-26 2014-05-28 广州博冠信息科技有限公司 Method and equipment for generating action pictures for object
CN106097418A (en) * 2016-06-14 2016-11-09 江苏师范大学 Cartoon character face verification method for designing based on Interactive evolutionary algorithm
CN107633542A (en) * 2016-07-19 2018-01-26 珠海金山网络游戏科技有限公司 One kind pinches face editor and animation fusion method and system
CN106228592A (en) * 2016-09-12 2016-12-14 武汉布偶猫科技有限公司 A kind of method of clothing threedimensional model automatic Bind Skin information
CN107213638A (en) * 2017-04-06 2017-09-29 珠海金山网络游戏科技有限公司 A kind of 3D game bone processing systems and its processing method
CN107657651A (en) * 2017-08-28 2018-02-02 腾讯科技(上海)有限公司 Expression animation generation method and device, storage medium and electronic installation
CN109285209A (en) * 2018-09-14 2019-01-29 网易(杭州)网络有限公司 Processing method, device, processor and the terminal of the mask of game role
CN109395390A (en) * 2018-10-26 2019-03-01 网易(杭州)网络有限公司 Processing method, device, processor and the terminal of game role facial model
CN109727302A (en) * 2018-12-28 2019-05-07 网易(杭州)网络有限公司 Bone creation method, device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"基于OPENGL的人体姿态数据仿真";刘凯、柴毅等;《计算机仿真》;第26卷(第4期);第267-270页 *

Also Published As

Publication number Publication date
CN110490958A (en) 2019-11-22

Similar Documents

Publication Publication Date Title
CN110490958B (en) Animation drawing method, device, terminal and storage medium
US6011562A (en) Method and system employing an NLE to create and modify 3D animations by mixing and compositing animation data
CN110766776B (en) Method and device for generating expression animation
US6718231B2 (en) Authoring system and authoring method, and storage medium
Heloir et al. Real-time animation of interactive agents: Specification and realization
JP2002120174A (en) Authoring system, authoring method and storage medium
CN109621419B (en) Game character expression generation device and method, and storage medium
CN114363712A (en) AI digital person video generation method, device and equipment based on templated editing
US11948240B2 (en) Systems and methods for computer animation using an order of operations deformation engine
Llorach et al. Web-based embodied conversational agents and older people
US20040179043A1 (en) Method and system for animating a figure in three dimensions
US20100013838A1 (en) Computer system and motion control method
US20230071947A1 (en) Information processing system, information processing method, program, and user interface
KR102349530B1 (en) Method, device and system for automatically creating of animation object based on artificial intelligence
Kshirsagar et al. Multimodal animation system based on the MPEG-4 standard
Zhang et al. PoseVEC: Authoring Adaptive Pose-aware Effects using Visual Programming and Demonstrations
KR100817506B1 (en) Method for producing intellectual contents
Bai et al. Bring Your Own Character: A Holistic Solution for Automatic Facial Animation Generation of Customized Characters
Santos Virtual Avatars: creating expressive embodied characters for virtual reality
US20240171782A1 (en) Live streaming method and system based on virtual image
JP2018128543A (en) Sign language cg editing device and program
CN117195563A (en) Animation generation method and device
Beskow et al. Expressive Robot Performance Based on Facial Motion Capture.
CN111127602A (en) Animation production method and device based on NGUI
CN117252961A (en) Face model building method and face model building system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant