CN110517337B - Animation character expression generation method, animation production method and electronic equipment - Google Patents

Animation character expression generation method, animation production method and electronic equipment

Info

Publication number
CN110517337B
Authority
CN
China
Prior art keywords
model
facial
basic
face
face model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910811489.3A
Other languages
Chinese (zh)
Other versions
CN110517337A (en)
Inventor
王立有
刘宝龙
刘宁
覃小春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Digital Sky Technology Co ltd
Original Assignee
Chengdu Digital Sky Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Digital Sky Technology Co ltd filed Critical Chengdu Digital Sky Technology Co ltd
Priority to CN201910811489.3A priority Critical patent/CN110517337B/en
Publication of CN110517337A publication Critical patent/CN110517337A/en
Application granted granted Critical
Publication of CN110517337B publication Critical patent/CN110517337B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/403D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Abstract

The application provides an animation character expression generating method, an animation production method, electronic equipment and a storage medium. The method for generating the animation character expression comprises the following steps: acquiring a facial scanning result of a performer, and generating a basic facial model according to the scanning result; according to a preset migration mode, migrating the basic face model to an animation role, and generating a target face model of the animation role; acquiring the facial expression of the performer, and acquiring model parameters of the basic facial model according to the facial expression; and obtaining the facial expression of the animation role according to the model parameters and the target facial model. Compared with facial expressions obtained in the prior art by only tracking the motion information of key points of the face, the facial expression obtained by the method is more realistic and intuitive.

Description

Animation character expression generation method, animation production method and electronic equipment
Technical Field
The present invention relates to the technical field of animation, and in particular, to an animation character expression generating method, an animation method, an electronic device, and a storage medium.
Background
Most facial animation used in current games is produced as follows: a number of key points are fixed on the face of the performer to obtain the change information of the facial expression, the facial motion information of the performer is then solved, and the facial motion information of the performer is transferred to the face of the animated character, thereby producing the facial expression of the animated character. However, in the specific implementation process, the inventors found that the muscles driving human facial expressions are complex, and the expression of the performer (especially the performer's micro-expressions) cannot be completely reproduced only by tracking the motion information of the key points of the face, so the effect presented after the expression is migrated to the animated character lacks realism.
Disclosure of Invention
In view of the foregoing, an object of the embodiments of the present application is to provide an animation character expression generating method, an animation production method, an electronic device and a storage medium, so as to solve the above-mentioned problem that the effect presented after the expression is migrated to the animated character by tracking key points of the human face lacks realism.
In order to solve the above technical problems, the embodiments of the present application are implemented in the following manner:
in a first aspect, an embodiment of the present application provides a method for generating an animated character expression, including: acquiring a facial scanning result of a performer, and generating a basic facial model according to the scanning result; according to a preset migration mode, migrating the basic face model to an animation role, and generating a target face model of the animation role; acquiring the facial expression of the performer, and acquiring model parameters of the basic facial model according to the facial expression; and obtaining the facial expression of the animation role according to the model parameters and the target facial model.
In the present application, a basic face model is generated according to the acquired face scanning result of the performer, and a target face model of the animated character is generated according to the basic face model of the performer. Then, the model parameters of the basic face model are obtained according to the facial expression of the performer, and the facial expression of the animated character that is the same as the facial expression of the performer is obtained according to the model parameters and the target face model. Compared with facial expressions obtained in the prior art by only tracking the motion information of key points of the face, the facial expression obtained in this way is more realistic and intuitive.
With reference to the foregoing technical solution provided in the first aspect, in some possible implementation manners, the basic face model includes an original face model and a first face model; the preset migration mode comprises the following steps: acquiring an original target face model of the animated character corresponding to the original face model; acquiring a mapping relation between the structure of the original face model and the structure of the first face model; and applying the mapping relation to the original target face model to obtain a target first face model of the animation role.
In the present application, the migration method is to obtain a mapping relationship between a structure of an original face model of a performer and a structure of a first face model, and then apply the mapping relationship to an original target face model of an animated character to obtain a target first face model of the animated character. In this way, a target face model of the animated character corresponding to the basic face model of the actor can be acquired so as to migrate the facial expression of the actor subsequently.
With reference to the foregoing technical solution provided in the first aspect, in some possible implementation manners, the acquiring a facial expression of the performer, and acquiring, according to the facial expression, model parameters of the basic facial model includes: acquiring the facial expression of the performer; substituting the facial expression of the performer into a preset calculation formula to obtain model parameters of the basic facial model.
In the method, a mathematical model is established to calculate model parameters of a basic facial model, and accuracy of obtaining facial expression is improved.
With reference to the foregoing technical solution provided in the first aspect, in some possible implementation manners, the preset calculation formula is: $\hat{\beta} = \arg\min_{\beta} \lVert f(\beta) - M \rVert^2$, s.t. $0 \le \beta_k \le 1$, where $f(\beta)$ represents the constructed mathematical model, $M$ represents the facial expression of the performer, and $\beta$ represents the model parameters; and $f(\beta) = B_0 + \sum_{k=1}^{n} \beta_k (B_k - B_0)$, where $B_0$ represents the original face model, $B_k$ represents the basic face model, and $n$ represents the number of basic face models.
With reference to the foregoing technical solution of the first aspect, in some possible implementation manners, the formula for obtaining the facial expression of the animated character according to the model parameters and the target face model is: $E = C_0 + \sum_{k=1}^{n} \beta_k (C_k - C_0)$, where $E$ represents the facial expression of the animated character, $C_0$ represents the original target face model, $C_k$ represents the target face model, $\beta_k$ represents the model parameters, and $n$ represents the number of the target face models.
With reference to the foregoing technical solution of the first aspect, in some possible implementation manners, the obtaining a facial scan result of a performer, and generating a basic facial model according to the scan result includes: and acquiring a scanning result of scanning the face of the performer through 3D scanning, and generating the basic face model according to the scanning result.
In the method, the base facial model of the performer is generated according to the 3D scanning result, so that the manufactured base facial model is more accurate and real.
In a second aspect, an embodiment of the present application provides an animation method, including: obtaining a result of scanning the performance of the performer through 4D scanning, and obtaining multi-frame grid data with preset duration; acquiring a facial scanning result of the performer, and generating a basic facial model according to the scanning result; according to a preset migration mode, migrating the basic face model to an animation role, and generating a target face model of the animation role; acquiring facial expressions of the performers in the grid data of each frame, and acquiring model parameters of the basic facial model of each frame according to the facial expressions of each frame; and obtaining the facial expression of the animation role of each frame according to the model parameters of the basic facial model and the target facial model of each frame, and further obtaining the animation with preset duration.
In the present application, the facial expressions of the animated character are generated from the basic face models of the performer and the corresponding target face models. Each frame of the animation generates one frame of the animated character's facial expression. In the prior art, when the motion information of key points of a human face is tracked, key frames must be set and an RBF interpolation function must be trained to obtain the blendshape coefficients between the key frames; this approach is cumbersome, requires repeated adjustment, and degrades the animation effect. The animation effect obtained by the scheme of the present application is therefore better, and the transitions between the animated character's expressions are more realistic.
With reference to the foregoing technical solution of the second aspect, in some possible implementation manners, after the obtaining a result of scanning a performance of a performer through 4D scanning, the method further includes: and carrying out re-topology on the multi-frame grid data so as to enable the grid topology among frames to be consistent, and obtaining the grid data.
In the method, the multi-frame grid data are re-topologized so that the grid topology is consistent between frames, which facilitates unified expression animation production later.
In a third aspect, an embodiment of the present application further provides an animated character expression generating device, including: the first acquisition module is used for acquiring the face scanning result of the performer and generating a basic face model according to the scanning result. And the generating module is used for migrating the basic face model to the animation role according to a preset migration mode and generating a target face model of the animation role. And the second acquisition module is used for acquiring the facial expression of the performer and acquiring model parameters of the basic facial model according to the facial expression. And the obtaining module is used for obtaining the facial expression of the animation role according to the model parameters and the target facial model.
In a fourth aspect, embodiments of the present application further provide an animation device, including: and the third acquisition module is used for acquiring a result of scanning the performance of the performer through 4D scanning to obtain multi-frame grid data with preset duration. And the fourth acquisition module is used for acquiring the facial scanning result of the performer and generating a basic facial model according to the scanning result. And the generation module is used for migrating the basic face model to the animation role according to a preset migration mode and generating a target face model of the animation role. And a fifth acquisition module, configured to acquire a facial expression of the performer in the mesh data of each frame, and acquire model parameters of the basic facial model of each frame according to the facial expression of each frame. The obtaining module is used for obtaining the facial expression of the animation role of each frame according to the model parameters of the basic facial model and the target facial model of each frame, and further obtaining the animation with preset duration.
In a fifth aspect, embodiments of the present application provide an electronic device, including: the device comprises a processor and a memory, wherein the processor is connected with the memory; the memory is used for storing programs; the processor is configured to invoke a program stored in the memory, to perform a method as provided by and/or in connection with the embodiments of the first aspect described above and/or to perform a method as provided by and/or in connection with the embodiments of the second aspect described above.
In a sixth aspect, embodiments of the present application provide a storage medium having stored thereon a computer program which, when run by a processor, performs a method as provided by and/or in connection with the first aspect embodiment described above.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of steps of a method for generating an animated character expression according to an embodiment of the present application.
Fig. 2 is a flowchart of a preset migration method provided in an embodiment of the present application.
Fig. 3 is a schematic diagram of a basic face model according to an embodiment of the present application.
Fig. 4 is a flowchart illustrating steps of an animation method according to an embodiment of the present application.
Fig. 5 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
In the prior art, most facial animation used in games is produced by the following steps: a number of key points are fixed on the face of the performer to obtain the change information of the facial expression, the facial motion information of the performer is then resolved, and the facial motion information of the performer is transferred to the face of the animated character, thereby producing the facial expression of the animated character. However, the muscles driving human facial expressions are complex, and the expression of the performer (especially the performer's micro-expressions) cannot be completely reproduced only from the motion information of key points of the face, so the effect presented after the expression is migrated to the animated character lacks realism.
In view of the above problems, the present inventors have conducted research and propose the following embodiments to solve them.
Referring to fig. 1, an embodiment of the present application provides an animated character expression generating method. The method comprises the following steps: steps S101-S104.
Step S101: and obtaining a facial scanning result of the performer, and generating a basic facial model according to the scanning result.
Firstly, a face scanning result of a performer is obtained, that is, the face of the performer needs to be scanned in advance, and the purpose of scanning the face of the performer is to generate a basic face model of the performer. As an alternative embodiment, the face of the performer is scanned and a basic face model of the performer is generated according to the scanning result, which may be that the face of the performer is scanned by 3D (3-dimensional) scanning and the basic face model of the performer is generated according to the scanning result. The 3D scan is also called a three-dimensional scan. I.e. the application is directed to scanning of the face of a performer by means of a three-dimensional scanner. It should be noted that three-dimensional scanning refers to scanning the spatial shape and structure of an object and colors to obtain the spatial coordinates of the surface of the object. In the embodiment of the application, the basic face model of the performer is generated according to the 3D scanning result, so that the manufactured basic face model is more accurate and real. As another embodiment, the face of the performer may be scanned by a device such as a stereoscopic camera. The present application is not limited thereto.
After the scanning is completed, the basic face models of the performer are generated according to the scanning result. Those skilled in the art will understand that generating the basic face models of the performer amounts to making the performer's blendshapes (fusion deformers). Thus, the performer's blendshapes may be made with Maya, 3ds Max, or other three-dimensional modeling software. The present application is not limited thereto. It should be noted that there is more than one basic face model, and each basic face model corresponds to a specific expression of the performer. For example, when there are 5 basic face models, they may include the five expressions of open mouth, closed eyes, crying, smiling, and anger. Each basic face model may also be further refined into more basic face models, such as laugh, smile, open-mouth smile, and so on. Therefore, the type of expression in the basic face models and the number of basic face models are not limited in this application.
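For illustration only, the basic face models can be thought of as a set of vertex-aligned meshes. A minimal sketch of one possible in-memory layout follows; the array names, vertex count, and expression names are assumptions for illustration, not taken from the patent:

```python
import numpy as np

V = 5000  # assumed vertex count shared by all basic face models

# One basic face model per specific expression; all share the same topology.
# The zero arrays are placeholders standing in for scanned/modeled meshes.
basic_face_models = {
    "mouth_open":  np.zeros((V, 3)),
    "eyes_closed": np.zeros((V, 3)),
    "cry":         np.zeros((V, 3)),
    "smile":       np.zeros((V, 3)),
    "angry":       np.zeros((V, 3)),
}

# Stacked form of shape (n, V, 3), convenient for the later calculations.
B = np.stack(list(basic_face_models.values()))
```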
Step S102: and according to a preset migration mode, migrating the basic face model to an animation role, and generating a target face model of the animation role.
After the basic face model of the performer is obtained, the basic face model is migrated to the animation role according to a preset migration mode, and a target face model of the animation role is generated. When this migration method is adopted, the original face model of the actor needs to be included in the basic face model of the actor. Here, the original face model may be understood as a face model when the face of the performer has no expression, that is, a face model in a natural state of the performer.
Referring to fig. 2, the preset migration method includes: steps S201 to S203.
Step S201: an original target face model of the animated character corresponding to the original face model is obtained.
The original target face model of the animated character can be understood as the face model when the face of the animated character has no expression, i.e., the face model in the natural state of the animated character. This step begins creating the target face models of the animated character corresponding to the basic face models of the performer, and the original target face model of the animated character corresponding to the original face model of the performer is created first. The software for creating this model may be the same as or different from the software used to create the performer's basic face models, and will not be described in detail here.
Step S202: and obtaining a mapping relation between the structure of the original face model and the structure of the first face model.
There is more than one basic face model, so obtaining the mapping relationship between the structure of the original face model and the structure of the first face model means finding the point-to-point relationship between the performer's original face model and the first face model, thereby obtaining a point-to-point mapping relationship. For example, suppose the first face model contains a smile: in the smile state, the points on both sides of the mouth corners are higher than the points in the middle of the lips, whereas in the original facial expression the points on the lips lie at approximately the same horizontal level; from this, the mapping relationship of the points on the performer's lips can be obtained. Likewise, if the expression contained in a basic face model is wide-open eyes, the mapping relationship between the performer's original face model and the wide-open-eyes model is obtained according to the same principle. For ease of understanding, referring to FIG. 3, S0 is the original face model of the performer, and S1-S4 are the other basic face models. This step obtains the mapping relationship between S0 and S1, the mapping relationship between S0 and S2, the mapping relationship between S0 and S3, and the mapping relationship between S0 and S4.
Step S203: and applying the mapping relation to the original target face model to obtain a target first face model of the animation role.
Finally, the obtained mapping relationship is applied to the original target face model of the animated character to obtain the target first face model of the animated character. For example, as obtained in step S202, in the original facial expression the points on the lips lie at approximately the same horizontal level, while in the smile contained in the first face model the points on both sides of the mouth corners are higher than the point in the middle of the lips. Applying this mapping relationship to the original target face model of the animated character yields a target first face model of the animated character that contains a smile.
It will be appreciated that after the target first face model of the animated character is obtained, the face model may be fine-tuned in order to make the face model of the animated character more accurate and lifelike. The present application is not limited thereto.
In the embodiment of the application, the migration manner is to obtain the mapping relationship between the structure of the original face model of the performer and the structure of the first face model, and then apply the mapping relationship to the original target face model of the animated character to obtain the target first face model of the animated character. In this way, a target face model of the animated character corresponding to the basic face model of the actor can be acquired so as to migrate the facial expression of the actor subsequently.
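For illustration, the patent leaves the concrete form of the point-to-point mapping open. Under the simplifying assumption that the mapping is stored as per-vertex displacements and that the performer and character meshes are in vertex correspondence, the migration of one basic face model could be sketched as follows (the function and variable names are illustrative):

```python
import numpy as np

def migrate_basic_face_model(S0, S1, C0):
    """Transfer the S0 -> S1 deformation of the performer onto the character.

    S0: (V, 3) performer's original face model (no expression)
    S1: (V, 3) performer's first face model (one specific expression)
    C0: (V, 3) character's original target face model, assumed to be in
        point-to-point correspondence with S0
    Returns C1, the character's target first face model.
    """
    mapping = S1 - S0      # point-to-point mapping as per-vertex displacements
    return C0 + mapping    # apply the mapping to the original target face model
```

In practice the transferred displacements would typically be scaled or smoothed to account for the different proportions of the character's face, and the result can then be fine-tuned by an artist as described above.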
It will be appreciated that in other embodiments, when this migration approach is employed, the previously generated basic face models of the performer need not include the performer's original face model. As yet another embodiment, one target face model corresponding to one of the basic face models may be acquired first. Then, taking that basic facial expression as a reference, the mapping relationships between it and the other basic facial expressions are acquired. Finally, those mapping relationships are applied to the target face model corresponding to that basic facial expression, which achieves the same effect. To illustrate with an example: suppose one basic facial expression is a laugh, and the laugh target face model corresponding to the laugh basic facial expression is obtained. Then, taking the laugh basic facial expression as a reference, the mapping relationship between the laugh basic facial expression and the smile basic facial expression is acquired. Finally, this mapping relationship is applied to the laugh target face model, and the smile target face model can be obtained.
In addition, when the next target face model is acquired, the chosen reference need not be fixed: one basic face model may be selected at random as the reference, or a similar basic face model may be selected as the reference. For example, the mapping relationship between a basic face model and the smile model may be acquired by taking the smile basic facial expression as the reference, or the mapping relationship between a basic face model and the crying model may be acquired by taking the crying basic facial expression as the reference. The present application is not limited thereto.
In other embodiments, instead of using the migration method, target face models corresponding one-to-one to the basic face models of the performer may be created directly.
The specific manner of the method can be determined according to the actual situation. The present application is not limited thereto.
Step S103: and acquiring the facial expression of the performer, and acquiring model parameters of the basic facial model according to the facial expression.
To produce the facial expression of the animated character, the facial expression of the performer is also needed. Through the foregoing steps, the basic face models of the performer have been obtained. This step obtains the model parameters of the basic face model according to the acquired facial expression of the performer. The facial expression of the performer may be acquired as one frame of mesh data of the performer's facial expression obtained through 4D scanning. The face of the performer may also be scanned by another scanning device, which is not limited in this application. It should be noted that the topology of the mesh data capturing the performer's facial expression should be consistent with the topology of the basic face model.
Specifically, the facial expression of the performer is obtained, and the model parameters of the basic facial model are obtained according to the facial expression, which can be the facial expression of the performer is obtained; substituting the facial expression of the performer into a preset calculation formula to obtain model parameters of the basic facial model.
The preset calculation formula is: $\hat{\beta} = \arg\min_{\beta} \lVert f(\beta) - M \rVert^2$, s.t. $0 \le \beta_k \le 1$, where $f(\beta)$ represents the constructed mathematical model, $M$ represents the facial expression of the performer, and $\beta = (\beta_1, \dots, \beta_n)$ represents the model parameters. It should be explained that "s.t." is an abbreviation of "subject to", which mathematically denotes the constraint to be satisfied, i.e., each model parameter $\beta_k$ is greater than or equal to 0 and less than or equal to 1.

The constructed mathematical model is $f(\beta) = B_0 + \sum_{k=1}^{n} \beta_k (B_k - B_0)$, where $B_0$ represents the original face model, $B_k$ represents the k-th basic face model, and $n$ represents the number of basic face models.

The model parameters $\beta$ can be obtained through these two formulas.
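For illustration only, since $f(\beta)$ is linear in the model parameters, the constrained fit above can be solved with bounded linear least squares. A minimal sketch, assuming the meshes are stored as vertex-aligned NumPy arrays (the function and variable names are illustrative, not from the patent):

```python
import numpy as np
from scipy.optimize import lsq_linear

def fit_model_parameters(M, B0, B):
    """Solve min_beta ||B0 + sum_k beta_k (B_k - B0) - M||^2, s.t. 0 <= beta_k <= 1.

    M:  (V, 3) facial expression of the performer (one frame of grid data)
    B0: (V, 3) original face model of the performer
    B:  (n, V, 3) basic face models B_1 ... B_n of the performer
    Returns beta, the (n,) vector of model parameters.
    """
    n = B.shape[0]
    A = (B - B0).reshape(n, -1).T   # columns are the flattened deltas B_k - B0
    b = (M - B0).ravel()            # flattened residual M - B0
    return lsq_linear(A, b, bounds=(0.0, 1.0)).x
```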
Step S104: and obtaining the facial expression of the animation role according to the model parameters and the target facial model.
Since the target face model of the animated character is obtained by migration according to the basic face model, that is, the target face model has a correspondence with the basic face model, the facial expression of the animated character identical to the facial expression of the performer can be obtained through the model parameters. I.e., restore the facial expression of the performer to the face of the animated character.
Specifically, the calculation formula for obtaining the facial expression of the animated character according to the model parameters and the target face model is: $E = C_0 + \sum_{k=1}^{n} \beta_k (C_k - C_0)$, where $E$ represents the facial expression of the animated character, $C_0$ represents the original target face model, $C_k$ represents the k-th target face model, $\beta_k$ represents the model parameters, and $n$ represents the number of target face models. The facial expression of the animated character that is the same as the facial expression of the performer can be obtained through this formula.
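Correspondingly, an illustrative sketch of evaluating this formula with the same assumed array layout as in the fitting sketch above:

```python
import numpy as np

def character_facial_expression(beta, C0, C):
    """E = C0 + sum_k beta_k (C_k - C0).

    beta: (n,) model parameters obtained from the performer's expression
    C0:   (V, 3) original target face model of the animated character
    C:    (n, V, 3) target face models C_1 ... C_n of the animated character
    Returns E, the (V, 3) facial expression mesh of the animated character.
    """
    return C0 + np.tensordot(beta, C - C0, axes=1)
```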
Animation expression production used in current games is driven by the performer, that is, the facial expression of the corresponding animated character is generated by acquiring the facial expression of the performer. The difference is that the present application migrates the expression by means of models, whereas the prior art obtains the correspondence between key-point information and blendshape coefficients by training RBF interpolation functions based on key-point tracking.
In the embodiments of the present application, a basic face model is generated according to the acquired face scanning result of the performer, and a target face model of the animated character is generated according to the basic face model of the performer. Then, the model parameters of the basic face model are obtained according to the facial expression of the performer, and the facial expression of the animated character that is the same as the facial expression of the performer is obtained according to the model parameters and the target face model. Compared with facial expressions obtained in the prior art by only tracking the motion information of key points of the face, the facial expression obtained in this way is more realistic and intuitive.
Based on the same inventive concept, please refer to fig. 4, an animation method is further provided in an embodiment of the present application, which includes: steps S301-S305. It should be noted that the above embodiment provides how to make a facial expression of an animated character. The present embodiment provides how to make a segment of animation that includes facial expressions of multiple frames of animated characters.
Step S301: and acquiring a result of scanning the performance of the performer through 4D scanning, and obtaining multi-frame grid data with preset duration.
First, in order to produce a segment of animation containing the animated character's facial expressions, the performer's facial expressions are still needed as the basis. Thus, the performer is required to give a performance whose content is consistent with the scene content of the animation. During the performance, the performer is scanned in 4D (4-dimensional), three-dimensional reconstruction can be performed simultaneously using three-dimensional reconstruction software (such as PhotoScan), and a multi-frame triangular mesh is output. The 4D scanning of the performer may employ a 4D scanning device. Finally, multi-frame grid data of a preset duration are obtained. It should be noted that if a 5-minute animation is produced, the preset duration here is 5 minutes, and if a 10-minute animation is produced, the preset duration here is 10 minutes; the preset duration is set according to the duration of the animation. The present application is not limited thereto.
In the acquired multi-frame grid data of the preset duration, the grid data may differ between frames, that is, the grid topology may be inconsistent from frame to frame. To make the grid topology consistent between frames and facilitate the later unified expression animation production, after the performance of the performer is scanned through 4D scanning to obtain the multi-frame grid data of the preset duration, the method further comprises: performing re-topology on the multi-frame grid data so that the grid topology is consistent between frames, thereby obtaining the grid data.
Step S302: and acquiring a facial scanning result of the performer, and generating a basic facial model according to the scanning result.
It should be noted that step S302 is the same as step S101 in the above embodiment; to avoid redundancy, specific details are not described further herein, and the same parts are referred to each other.
Step S303: and according to a preset migration mode, migrating the basic face model to an animation role, and generating a target face model of the animation role.
And according to a preset migration mode, migrating the basic face model to the animation role, and manufacturing a target face model of the animation role. When this migration method is adopted, the original face model of the actor needs to be included in the previously created basic face model of the actor. Here, the original face model may be understood as a face model when the face of the performer has no expression, that is, a face model in a natural state of the performer.
The preset migration mode comprises the following steps: acquiring an original target face model of the animated character corresponding to the original face model; acquiring a mapping relation between the structure of the original face model and the structure of the first face model; and applying the mapping relation to the original target face model to obtain a target first face model of the animation role.
It should be noted that, the step S303 is the same as the step S102 in the above embodiment, and for avoiding redundancy, specific details are not described further herein, and the same parts are referred to each other.
Step S304: and acquiring the facial expression of the performer in the grid data of each frame, and acquiring model parameters of the basic facial model of each frame according to the facial expression of each frame.
Step S304 is substantially the same as step S103 in the above embodiment. The difference is that step S103 acquires the model parameters of the basic face model for only one frame, whereas step S304 acquires the model parameters of the basic face model for each frame. The principle is the same; to avoid redundancy, specific details are not described further herein, and the same parts are referred to each other.
Step S305: and obtaining the facial expression of the animation role of each frame according to the model parameters of the basic facial model and the target facial model of each frame, and further obtaining the animation with preset duration.
It should be noted that step S305 is substantially the same as step S104 in the above embodiment. The difference is that step S104 obtains the facial expression of the animated character for only one frame, whereas step S305 obtains the facial expression of the animated character for each frame. The principle is the same, and the calculation formula $E = C_0 + \sum_{k=1}^{n} \beta_k (C_k - C_0)$ is used, where $E$ represents the facial expression of the animated character, $C_0$ represents the original target face model, $C_k$ represents the k-th target face model, $\beta_k$ represents the model parameters, and $n$ represents the number of target face models. The facial expression of the animated character that is the same as the facial expression of the performer can be obtained through this formula. To avoid redundancy, specific details are not described further herein, and the same parts are referred to each other. Finally, the facial expressions of the animated character for each frame are combined to form the animation of the preset duration.
For ease of understanding, the following description will be given by way of a specific example, assuming that an animation of 5 minutes duration is required. Then a 4D scanning device is used to record a 5 minute performance to obtain 9000 frames of grid data. And carrying out re-topology on 9000 frames of grid data to obtain 9000 frames of grid data with consistent topology. And then, carrying out face scanning on the performer performing, and making a basic face model of the performer according to the scanning result. The number of basic face models may be selected according to project requirements, for example, 51. And then according to a preset migration mode, migrating the 51 basic face models to the animation roles, and manufacturing 51 target face models of the animation roles. And then obtaining model parameters corresponding to the grid data of each frame according to 9000 frames of grid data. Because the number of basic face models is 51, the number of model parameters corresponding to each frame of grid data is 51, and finally, the facial expression of each frame of animation role is obtained according to the model parameters of each frame of basic face models and the target face models, and then the facial expressions of each frame of animation role are combined, so that the animation of 5 minutes can be formed.
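For illustration, the per-frame processing of the example above could be sketched as a simple loop that combines the fitting and reconstruction sketches given earlier; all names are assumptions for illustration rather than the patent's own code:

```python
import numpy as np
from scipy.optimize import lsq_linear

def make_animation(frames, B0, B, C0, C):
    """Build the character animation frame by frame.

    frames: iterable of (V, 3) re-topologized performer grid data
            (e.g. 9000 frames for a 5-minute animation at 30 fps)
    B0, B:  performer's original face model (V, 3) and basic face models (n, V, 3)
    C0, C:  character's original target face model (V, 3) and target face models (n, V, 3)
    Returns a list with one (V, 3) character facial expression per frame.
    """
    n = B.shape[0]
    A = (B - B0).reshape(n, -1).T                                     # fixed basis of deltas B_k - B0
    animation = []
    for M in frames:
        beta = lsq_linear(A, (M - B0).ravel(), bounds=(0.0, 1.0)).x   # e.g. 51 parameters per frame
        animation.append(C0 + np.tensordot(beta, C - C0, axes=1))     # character expression for this frame
    return animation
```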
In the embodiments of the present application, the facial expression of the animated character is generated according to the basic face models of the performer and the corresponding target face models. Each frame of the animation generates one frame of the animated character's facial expression. In the prior art, when the motion information of key points of a human face is tracked, key frames must be set and an RBF interpolation function must be trained to obtain the blendshape coefficients between the key frames; this approach is cumbersome, requires repeated adjustment, and degrades the animation effect. The animation effect obtained by the scheme of the present application is therefore better, and the transitions between the animated character's expressions are more realistic.
Based on the same inventive concept, the embodiment of the present application further provides an animated character expression generating device, including: the first acquisition module is used for acquiring the face scanning result of the performer and generating a basic face model according to the scanning result.
And the generating module is used for migrating the basic face model to the animation role according to a preset migration mode and generating a target face model of the animation role.
And the second acquisition module is used for acquiring the facial expression of the performer and acquiring model parameters of the basic facial model according to the facial expression.
And the obtaining module is used for obtaining the facial expression of the animation role according to the model parameters and the target facial model.
Optionally, the generating module is further configured to obtain an original target face model of the animated character corresponding to the original face model; acquiring a mapping relation between the structure of the original face model and the structure of the first face model; and applying the mapping relation to the original target face model to obtain a target first face model of the animation role.
Optionally, the second obtaining module is further configured to obtain a facial expression of the performer; substituting the facial expression of the performer into a preset calculation formula to obtain model parameters of the basic facial model.
Optionally, the preset calculation formula in the second obtaining module is: $\hat{\beta} = \arg\min_{\beta} \lVert f(\beta) - M \rVert^2$, s.t. $0 \le \beta_k \le 1$, where $f(\beta)$ represents the constructed mathematical model, $M$ represents the facial expression of the performer, and $\beta$ represents the model parameters; and $f(\beta) = B_0 + \sum_{k=1}^{n} \beta_k (B_k - B_0)$, where $B_0$ represents the original face model, $B_k$ represents the basic face model, and $n$ represents the number of basic face models.

Optionally, the calculation formula used by the obtaining module to obtain the facial expression of the animated character according to the model parameters and the target face model is: $E = C_0 + \sum_{k=1}^{n} \beta_k (C_k - C_0)$, where $E$ represents the facial expression of the animated character, $C_0$ represents the original target face model, $C_k$ represents the target face model, $\beta_k$ represents the model parameters, and $n$ represents the number of the target face models.
Optionally, the first obtaining module is further configured to obtain a scanning result of scanning the face of the performer through 3D scanning, and generate the basic face model according to the scanning result.
Based on the same inventive concept, an embodiment of the present application further provides an animation device, including:
and the third acquisition module is used for acquiring a result of scanning the performance of the performer through 4D scanning to obtain multi-frame grid data with preset duration.
And the fourth acquisition module is used for acquiring the facial scanning result of the performer and generating a basic facial model according to the scanning result.
And the generation module is used for migrating the basic face model to the animation role according to a preset migration mode and generating a target face model of the animation role.
And a fifth acquisition module, configured to acquire a facial expression of the performer in the mesh data of each frame, and acquire model parameters of the basic facial model of each frame according to the facial expression of each frame.
The obtaining module is used for obtaining the facial expression of the animation role of each frame according to the model parameters of the basic facial model and the target facial model of each frame, and further obtaining the animation with preset duration.
Optionally, the device further comprises a re-topology module, which is used for re-topology the multi-frame grid data after obtaining the multi-frame grid data with preset duration after obtaining the result of scanning the performance of the performer through 4D scanning, so that the grid topologies among the frames are consistent, and the grid data are obtained.
Referring to fig. 5, an electronic device 10 is provided according to an embodiment of the present application, and the electronic device 10 includes: at least one processor 111, at least one memory 112, at least one communication bus 113. Wherein the communication bus 113 is used to enable direct connection communication for the components. The memory 112 may be a high-speed RAM memory or a non-volatile memory (non-volatile memory), such as at least one disk memory. Wherein the memory 112 has stored therein computer readable instructions. The processor 111 is configured to execute executable modules stored in the memory 112. For example, the processor 111 is configured to obtain a facial scan of the performer, and generate a basic facial model according to the scan; according to a preset migration mode, migrating the basic face model to an animation role, and generating a target face model of the animation role; acquiring the facial expression of the performer, and acquiring model parameters of the basic facial model according to the facial expression; and obtaining the facial expression of the animation role according to the model parameters and the target facial model.
The processor 111 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
Among other things, the electronic device 10 in the embodiments of the present application includes, but is not limited to: personal computers, desktop computers, all-in-one machines, tablet computers, and the like.
Based on the same inventive concept, the present application also provides a storage medium having stored thereon a computer program which, when executed, performs the method provided in the above embodiments.
The storage media may be any available media that can be accessed by a computer or a data storage device such as a server, data center, or the like that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid State Disk (SSD)), etc.
In the embodiments provided in the present application, it should be understood that the disclosed method may be implemented in other manners. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some communication interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
Furthermore, functional modules in various embodiments of the present application may be integrated together to form a single portion, or each module may exist alone, or two or more modules may be integrated to form a single portion.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application, and various modifications and variations may be suggested to one skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.

Claims (7)

1. An animated character expression generating method, comprising:
acquiring a facial scanning result of a performer, and generating a basic facial model according to the scanning result; the basic face model includes an original face model and a first face model; the original face model is a face model when the face of the performer has no expression; the first face model is a face model of a specific expression of the performer;
according to a preset migration mode, migrating the basic face model to an animation role, and generating a target face model of the animation role;
acquiring the facial expression of the performer, and acquiring model parameters of the basic facial model according to the facial expression; obtaining the facial expression of the animation role according to the model parameters and the target facial model;
the preset migration mode comprises the following steps:
acquiring an original target face model of the animated character corresponding to the original face model;
obtaining a mapping relationship between the structure of the original face model and the structure of the first face model, including: acquiring a point-to-point relationship between the original face model of the performer and the first face model;
applying the mapping relation to the original target face model to obtain a target first face model of the animation role;
the obtaining the facial expression of the performer and obtaining the model parameters of the basic facial model according to the facial expression includes:
acquiring the facial expression of the performer;
substituting the facial expression of the performer into a preset calculation formula to obtain model parameters of the basic facial model;
the preset calculation formula is: $\hat{\beta} = \arg\min_{\beta} \lVert f(\beta) - M \rVert^2$, s.t. $0 \le \beta_k \le 1$, wherein $f(\beta)$ represents the constructed mathematical model, $M$ represents the facial expression of the performer, and $\beta$ represents the model parameters;

wherein $f(\beta) = B_0 + \sum_{k=1}^{n} \beta_k (B_k - B_0)$, $B_0$ represents the original face model, $B_k$ represents the basic face model, and $n$ represents the number of basic face models.
2. The method of claim 1, wherein the calculation formula for obtaining the facial expression of the animated character from the model parameters and the target face model is: $E = C_0 + \sum_{k=1}^{n} \beta_k (C_k - C_0)$, wherein $E$ represents the facial expression of the animated character, $C_0$ represents the original target face model, $C_k$ represents the target face model, $\beta_k$ represents the model parameters, and $n$ represents the number of the target face models.
3. The method of claim 1, wherein the acquiring a facial scan of the actor and generating a base facial model from the scan comprises:
and acquiring a scanning result of scanning the face of the performer through 3D scanning, and generating the basic face model according to the scanning result.
4. A method of animation comprising:
obtaining a result of scanning the performance of the performer through 4D scanning, and obtaining multi-frame grid data with preset duration;
acquiring a facial scanning result of the performer, and generating a basic facial model according to the scanning result; the basic face model includes an original face model and a first face model; the original face model is a face model when the face of the performer has no expression; the first face model is a face model of a specific expression of the performer;
according to a preset migration mode, migrating the basic face model to an animation role, and generating a target face model of the animation role;
acquiring facial expressions of the performers in the grid data of each frame, and acquiring model parameters of the basic facial model of each frame according to the facial expressions of each frame;
obtaining facial expressions of the animation roles of each frame according to the model parameters of the basic facial model and the target facial model of each frame, and further obtaining animation with preset duration;
the step of obtaining facial expressions of the performers in the grid data of each frame, and obtaining model parameters of the basic facial model of each frame according to the facial expressions of each frame comprises the following steps:
acquiring facial expressions of the performers in the grid data of each frame;
substituting the facial expression of the performer in each frame into a preset calculation formula to obtain model parameters of the basic facial model of each frame;
the preset calculation formula is as follows:wherein (1)>Representing the constructed mathematical model, M representing the facial expression of the actor; />Representing the model parameters;
wherein, the liquid crystal display device comprises a liquid crystal display device,B 0 representing the original face model, B k Representing the basic face model, n representing the number of basic face models;
the preset migration mode comprises the following steps:
acquiring an original target face model of the animated character corresponding to the original face model;
obtaining a mapping relationship between the structure of the original face model and the structure of the first face model, including: acquiring a point-to-point relationship between the original face model of the performer and the first face model;
and applying the mapping relation to the original target face model to obtain a target first face model of the animation role.
5. The method according to claim 4, wherein after obtaining the result of scanning the performance of the performer by the 4D scan, obtaining multi-frame grid data of a preset duration, the method further comprises:
and carrying out re-topology on the multi-frame grid data so as to enable the grid topology among frames to be consistent, and obtaining the grid data.
6. An electronic device, comprising: the device comprises a processor and a memory, wherein the processor is connected with the memory;
the memory is used for storing programs;
the processor being adapted to run a program stored in the memory, to perform the method of any one of claims 1-3 or the method of any one of claims 4-5.
7. A storage medium having stored thereon a computer program which, when run by a computer, performs the method of any of claims 1-3 or the method of any of claims 4-5.
CN201910811489.3A 2019-08-29 2019-08-29 Animation character expression generation method, animation production method and electronic equipment Active CN110517337B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910811489.3A CN110517337B (en) 2019-08-29 2019-08-29 Animation character expression generation method, animation production method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910811489.3A CN110517337B (en) 2019-08-29 2019-08-29 Animation character expression generation method, animation production method and electronic equipment

Publications (2)

Publication Number Publication Date
CN110517337A CN110517337A (en) 2019-11-29
CN110517337B true CN110517337B (en) 2023-07-25

Family

ID=68629287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910811489.3A Active CN110517337B (en) 2019-08-29 2019-08-29 Animation character expression generation method, animation production method and electronic equipment

Country Status (1)

Country Link
CN (1) CN110517337B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021208330A1 (en) * 2020-04-17 2021-10-21 完美世界(重庆)互动科技有限公司 Method and apparatus for generating expression for game character
CN111598979B (en) * 2020-04-30 2023-03-31 腾讯科技(深圳)有限公司 Method, device and equipment for generating facial animation of virtual character and storage medium
CN112150594B (en) * 2020-09-23 2023-07-04 网易(杭州)网络有限公司 Expression making method and device and electronic equipment
CN111968207B (en) 2020-09-25 2021-10-29 魔珐(上海)信息科技有限公司 Animation generation method, device, system and storage medium
CN112149599B (en) * 2020-09-29 2024-03-08 网易(杭州)网络有限公司 Expression tracking method and device, storage medium and electronic equipment
CN112699791A (en) * 2020-12-29 2021-04-23 百果园技术(新加坡)有限公司 Face generation method, device and equipment of virtual object and readable storage medium
CN112581520A (en) * 2021-01-29 2021-03-30 秒影工场(北京)科技有限公司 Facial shape expression model construction method based on frame continuous four-dimensional scanning
CN116485959A (en) * 2023-04-17 2023-07-25 北京优酷科技有限公司 Control method of animation model, and adding method and device of expression

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101944238A (en) * 2010-09-27 2011-01-12 浙江大学 Data driving face expression synthesis method based on Laplace transformation
CN108305309A (en) * 2018-04-13 2018-07-20 腾讯科技(成都)有限公司 Human face expression generation method based on 3-D cartoon and device
CN108520548A (en) * 2018-03-26 2018-09-11 闫明佳 Expression moving method
CN110163939A (en) * 2019-05-28 2019-08-23 上海米哈游网络科技股份有限公司 Three-dimensional animation role's expression generation method, apparatus, equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10013787B2 (en) * 2011-12-12 2018-07-03 Faceshift Ag Method for facial animation
CN108564642A (en) * 2018-03-16 2018-09-21 中国科学院自动化研究所 Unmarked performance based on UE engines captures system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101944238A (en) * 2010-09-27 2011-01-12 浙江大学 Data driving face expression synthesis method based on Laplace transformation
CN108520548A (en) * 2018-03-26 2018-09-11 闫明佳 Expression moving method
CN108305309A (en) * 2018-04-13 2018-07-20 腾讯科技(成都)有限公司 Human face expression generation method based on 3-D cartoon and device
CN110163939A (en) * 2019-05-28 2019-08-23 上海米哈游网络科技股份有限公司 Three-dimensional animation role's expression generation method, apparatus, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Survey of 3D Facial Expression Acquisition and Reconstruction Technology; 王珊 et al.; Journal of System Simulation (系统仿真学报); Vol. 30, No. 7; pp. 2423-2444 *

Also Published As

Publication number Publication date
CN110517337A (en) 2019-11-29

Similar Documents

Publication Publication Date Title
CN110517337B (en) Animation character expression generation method, animation production method and electronic equipment
US20210201552A1 (en) Systems and methods for real-time complex character animations and interactivity
US11908057B2 (en) Image regularization and retargeting system
CN111161395B (en) Facial expression tracking method and device and electronic equipment
WO2013165440A1 (en) 3d reconstruction of human subject using a mobile device
KR20120072128A (en) Apparatus and method for generating digital clone
JP2023519846A (en) Volumetric capture and mesh tracking based machine learning
US20130314405A1 (en) System and method for generating a video
US11893671B2 (en) Image regularization and retargeting system
Park et al. Template‐Based Reconstruction of Surface Mesh Animation from Point Cloud Animation
CN113781611A (en) Animation production method and device, electronic equipment and storage medium
Huynh et al. A framework for cost-effective communication system for 3D data streaming and real-time 3D reconstruction
WO2023184357A1 (en) Expression model making method and apparatus, and electronic device
Furukawa et al. Automatic generation of hair motion of 3D characters following japanese anime style
CN116305994A (en) Simulation data generation method and device of inertial measurement equipment, medium and equipment
Huynh Development of a standardized framework for cost-effective communication system based on 3D data streaming and real-time 3D reconstruction
Takács Animation of Avatar Face Based on Human Face Video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant