CN111292427B - Bone displacement information acquisition method, device, equipment and storage medium - Google Patents

Bone displacement information acquisition method, device, equipment and storage medium

Info

Publication number
CN111292427B
CN111292427B
Authority
CN
China
Prior art keywords
facial
displacement information
target
face
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010151301.XA
Other languages
Chinese (zh)
Other versions
CN111292427A (en)
Inventor
许金坚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010151301.XA
Publication of CN111292427A
Application granted
Publication of CN111292427B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Abstract

The embodiments of this application disclose a bone displacement information acquisition method, device, equipment, and storage medium, belonging to the field of computer technology. The method comprises the following steps: obtaining vertex displacement information corresponding to a target facial expression; binding a plurality of facial vertices of a target face model to a plurality of facial bones according to binding weights corresponding to the vertex displacement information; performing deformation processing on the bound target face model so that the facial bones move and drive the facial vertices to move; and acquiring bone displacement information of the target face model when the displacement information of the facial vertices matches the vertex displacement information. With the acquired bone displacement information, the target face model can subsequently be deformed to produce the target facial expression, so the facial bones need not be adjusted manually multiple times, which saves labor and time and improves efficiency.

Description

Bone displacement information acquisition method, device, equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a bone displacement information acquisition method, a bone displacement information acquisition device, bone displacement information acquisition equipment and a storage medium.
Background
With the development of computer technology, the creation of virtual characters is often involved in fields such as movies and games, for example designing virtual characters with various appearances in games. To make these virtual characters more vivid, it is necessary to generate facial expressions for them.
In the related art, a facial skeleton is created for the target face model of a virtual character, and the target face model is bound to the facial skeleton. To generate a target facial expression, an operator must adjust the facial bones of the target face model repeatedly, by rotating, scaling, repositioning, and so on, while observing whether the facial expression produced by the target face model matches the target facial expression.
However, this approach requires the operator to adjust the facial bones manually many times; the operation is complex, it consumes considerable labor and time, and it is inefficient.
Disclosure of Invention
The embodiments of this application provide a bone displacement information acquisition method, device, equipment, and storage medium, which can improve the efficiency of bone displacement information acquisition. The technical solutions are as follows:
in one aspect, a bone displacement information acquisition method is provided, the method comprising:
acquiring vertex displacement information corresponding to the target facial expression;
binding a plurality of face vertexes and a plurality of face skeletons of a target face model respectively according to binding weights corresponding to the vertex displacement information, wherein the binding weights between the face vertexes and the face skeletons represent the influence degree of the displacement information of the face skeletons on the displacement information of the face vertexes;
and performing deformation processing on the bound target face model to enable the plurality of facial bones to move and drive the plurality of facial vertexes to move, and acquiring bone displacement information of the target face model when the displacement information of the plurality of facial vertexes is matched with the vertex displacement information.
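Under a common linear-blend simplification (an assumption here; the claims do not fix a particular skinning model), each facial vertex's displacement is the binding-weight-blended sum of the facial bones' displacements, so the step of deforming the bound model until the vertex displacements match the target reduces to a least-squares solve for the bone displacement information. A minimal sketch, with illustrative names:

```python
import numpy as np

def solve_bone_displacements(weights, target_vertex_disp):
    """Least-squares solve for per-bone displacements such that the
    weight-blended vertex displacements match the target expression.

    weights            : (n_vertices, n_bones) binding-weight matrix
    target_vertex_disp : (n_vertices, 3) target per-vertex displacements
    returns            : (n_bones, 3) bone displacement information
    """
    bone_disp, *_ = np.linalg.lstsq(weights, target_vertex_disp, rcond=None)
    return bone_disp
```

When the weight matrix has full column rank, the solved bone displacements reproduce the target vertex displacements exactly; otherwise the solve returns the closest match in the least-squares sense, mirroring the "matched" condition in the claim.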
In another aspect, there is provided a bone displacement information acquisition apparatus, the apparatus including:
the first acquisition module is used for acquiring vertex displacement information corresponding to the target facial expression;
the binding module is used for binding a plurality of facial vertexes and a plurality of facial skeletons of a target facial model respectively according to the binding weights corresponding to the vertex displacement information, and the binding weights between the facial vertexes and the facial skeletons represent the influence degree of the displacement information of the facial skeletons on the displacement information of the facial vertexes;
the first deformation processing module is used for carrying out deformation processing on the bound target face model so as to enable the plurality of facial bones to move and drive the plurality of facial vertexes to move;
and the second acquisition module is used for acquiring the bone displacement information of the target face model when the displacement information of the plurality of face vertexes is matched with the vertex displacement information.
Optionally, the binding module includes:
a weight obtaining unit, configured to obtain binding weights between a plurality of facial vertices and a plurality of facial skeletons of the target facial model, respectively, according to the vertex displacement information;
and the binding unit is used for binding a plurality of face vertexes and a plurality of face skeletons of the target face model respectively according to the obtained binding weight.
Optionally, the weight obtaining unit is further configured to process the vertex displacement information through a deformation solver to obtain binding weights between a plurality of facial vertices and a plurality of facial skeletons of the target facial model.
Optionally, the apparatus further comprises:
a model reading module for reading the target face model;
a skeleton creation module to create a plurality of facial bones of the target face model from the plurality of facial vertices, each facial bone corresponding to a facial vertex.
Optionally, the bone creation module comprises:
the file acquisition unit is used for acquiring a bone configuration file, wherein the bone configuration file comprises a plurality of vertex identifications and bone identifications corresponding to each vertex identification;
and the skeleton generating unit is used for generating a facial skeleton at each facial vertex of the target facial model according to the plurality of vertex identifications and the skeleton identification corresponding to each vertex identification.
Optionally, the first obtaining module includes:
and the deformation processing unit is used for carrying out deformation processing on the template surface model according to the deformation parameters corresponding to the target facial expression to obtain the vertex displacement information.
Optionally, the deformation processing unit is further configured to:
building locators on a plurality of face vertices of the template face model;
and according to the deformation parameters corresponding to the target facial expression, after deformation processing is carried out on the template facial model, the established displacement information of the plurality of locators is obtained, and the vertex displacement information is formed.
Optionally, the deformation processing unit is further configured to:
obtaining a hybrid deformer, the hybrid deformer including the deformation parameters;
binding the hybrid deformer to the template face model;
and deforming the template face model through the bound mixed deformer to obtain the vertex displacement information.
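A hybrid (blend-shape) deformer of this kind typically stores, per shape target, the per-vertex offsets from the neutral face, and the deformation parameters weight those offsets. A hedged sketch of how the vertex displacement information could be accumulated (names are illustrative, not from the patent):

```python
import numpy as np

def blend_shape_vertex_displacements(shape_deltas, params):
    """shape_deltas : dict name -> (n_vertices, 3) per-vertex offsets of a
                      blend-shape target relative to the neutral face
       params       : dict name -> deformation parameter (typically 0..1)
       returns      : (n_vertices, 3) vertex displacement information
    """
    first = next(iter(shape_deltas.values()))
    disp = np.zeros_like(np.asarray(first, dtype=float))
    for name, delta in shape_deltas.items():
        # each active shape contributes its offsets scaled by its parameter
        disp += params.get(name, 0.0) * np.asarray(delta, dtype=float)
    return disp
```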
Optionally, the deformation processing unit is further configured to:
carrying out deformation processing on the template face model according to deformation parameters corresponding to the target facial expression;
and processing the deformed template face model through a deformation solver to obtain the vertex displacement information.
Optionally, the apparatus further comprises:
and the second deformation processing module is used for carrying out deformation processing on the target face model according to the bone displacement information to obtain the target face model making the target facial expression.
In another aspect, a computer device is provided, which includes a processor and a memory, the memory having stored therein at least one program code, which is loaded and executed by the processor to implement the operations as performed in the bone displacement information acquisition method.
In still another aspect, a computer-readable storage medium having at least one program code stored therein is provided, the at least one program code being loaded and executed by a processor to implement the operations as performed in the bone displacement information acquisition method.
With the method, device, equipment, and storage medium provided by the embodiments of this application, vertex displacement information corresponding to a target facial expression is obtained, and a plurality of facial vertices of a target face model are bound to a plurality of facial bones according to binding weights corresponding to the vertex displacement information, where the binding weight between a facial vertex and a facial bone represents the degree to which the displacement of the facial bone influences the displacement of the facial vertex. Deformation processing is performed on the bound target face model so that the facial bones move and drive the facial vertices to move, and the bone displacement information of the target face model is acquired when the displacement information of the facial vertices matches the vertex displacement information. The embodiments of this application thus provide a method for acquiring bone displacement information corresponding to a target facial expression; the target face model can subsequently be deformed using this bone displacement information to obtain the target face model making the target facial expression, without repeated manual adjustment of the facial bones, which saves labor and time and improves efficiency.
Furthermore, the computer device obtains the binding weights between the facial vertices and the facial bones of the target face model according to the vertex displacement information, and binds the facial vertices to the facial bones accordingly. Because the deformation parameters are converted into binding weights by deformation solving, the binding weights need not be set manually, which avoids the labor and time that manual weight setting would consume, simplifies the process of binding the facial vertices to the facial bones, and further improves the efficiency of the binding stage for the target face model.
In addition, according to the embodiments of this application, the deformation parameters corresponding to a target facial expression performed by a performer can be acquired through motion capture, and the template face model is deformed according to these parameters to obtain a template face model making the target facial expression. This converts the facial expression of a real person into the facial expression of a virtual character, which can improve both the quality and the efficiency of producing virtual-character facial expressions.
Moreover, when an original face model can make the target facial expression according to a hybrid deformer bound to it, that hybrid deformer can be copied and bound to the template face model. Binding weights are then obtained by the method provided in the embodiments of this application, the facial vertices of the target face model are bound to the facial bones, and the bone displacement information is acquired, so that the bound target face model can make the target facial expression according to the bone displacement information. This effectively copies the target facial expression from the original face model to the target face model, so facial expressions can be produced for different face models more quickly.
Finally, binding the facial vertices of the target face model to the facial bones according to the binding weights, acquiring the bone displacement information corresponding to the target facial expression, and binding that information to the target face model allows the bound target face model to be deformed according to the bone displacement information to produce the target facial expression. This amounts to establishing a rigorous facial-expression binding system, which can improve the quality and efficiency of subsequently producing facial expressions and facial animation from the target face model.
Drawings
To illustrate the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of this application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a bone displacement information obtaining method according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a model display interface according to an embodiment of the present disclosure.
FIG. 3 is a schematic diagram of an attribute delivery control interface provided by an embodiment of the present application.
Fig. 4-1 is a schematic diagram of a face model of a binding hybrid deformer provided in an embodiment of the present application.
Fig. 4-2 is a schematic diagram of a face model of an unbound hybrid deformer provided by an embodiment of the present application.
Fig. 5 is a schematic diagram of another face model provided in the embodiment of the present application.
FIG. 6 is a schematic diagram of a modified solver control interface according to an embodiment of the present disclosure.
Fig. 7-1 is a schematic diagram of a face model without binding facial bones according to an embodiment of the present application.
Fig. 7-2 is a schematic diagram of a corresponding relationship between facial bones and facial vertices according to an embodiment of the present application.
Fig. 7-3 are schematic diagrams of a face model with bound facial bones according to an embodiment of the present application.
Fig. 8-1 is a schematic diagram of another face model without facial bones bound according to an embodiment of the present application.
Fig. 8-2 is a schematic diagram of another corresponding relationship between facial bones and facial vertices provided in the embodiment of the present application.
Fig. 8-3 is a schematic diagram of another face model with bound facial bones provided by an embodiment of the present application.
Fig. 9-1 is a schematic diagram of another face model of a bound hybrid deformer provided in an embodiment of the present application.
Fig. 9-2 is a schematic diagram of a face model of another unbound hybrid deformer provided by embodiments of the present application.
Fig. 10 is a flowchart for generating facial expressions according to an embodiment of the present application.
Fig. 11 is a schematic structural diagram of a bone displacement information acquisition apparatus according to an embodiment of the present application.
Fig. 12 is a schematic structural diagram of another bone displacement information acquisition device according to an embodiment of the present application.
Fig. 13 is a schematic diagram of a terminal according to an embodiment of the present application.
Fig. 14 is a schematic diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application more clear, the embodiments of the present application will be further described in detail with reference to the accompanying drawings.
As used herein, "plurality" means two or more, and "each" refers to every member of the corresponding plurality. For example, if the plurality of vertex identifications includes 3 vertex identifications, "each vertex identification" refers to every one of those 3 vertex identifications.
Artificial Intelligence (AI) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the capabilities of perception, reasoning, and decision-making.
Artificial intelligence is a comprehensive discipline spanning a wide range of fields, involving both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. AI software technologies include natural language processing and machine learning. With continued research and progress, AI technology is being developed and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned and autonomous driving, drones, robots, smart healthcare, smart customer service, face recognition, and three-dimensional face model reconstruction.
Computer Vision (CV) is the science of how to make a machine "see"; more specifically, it uses cameras and computers in place of human eyes to recognize, track, and measure targets, and further performs image processing so that the processed images are more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies theories and techniques for building artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technology generally includes image processing, image recognition, image semantic understanding, image retrieval, OCR (Optical Character Recognition), video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, simultaneous localization and mapping, and common biometric technologies such as face recognition and fingerprint recognition.
Machine Learning (ML) is a multi-disciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and more. It studies how a computer can simulate or realize human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve its performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied across all fields of AI. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from instruction.
The bone displacement information acquisition method provided by the embodiment of the application can be realized through the artificial intelligence technology, the computer vision technology and the machine learning.
The embodiment of the application provides a bone displacement information acquisition method, and an execution main body is computer equipment.
In one possible implementation manner, the computer device may be a terminal, and the terminal may be a mobile phone, a computer, a tablet computer, a smart television, a wearable device, or other various types of devices. Alternatively, the computer device may be a server. The server can be a server, a server cluster consisting of a plurality of servers, or a cloud computing server center.
The computer device obtains vertex displacement information corresponding to the target facial expression and binds a plurality of facial vertices of the target face model to a plurality of facial bones according to the binding weights corresponding to the vertex displacement information. The computer device then performs deformation processing on the bound target face model so that the facial bones move and drive the facial vertices to move with them. When the displacement information of the facial vertices matches the vertex displacement information, the bone displacement information of the target face model is acquired; the target face model can then be deformed according to the acquired bone displacement information to obtain the target face model making the target facial expression.
Fig. 1 is a flowchart of a bone displacement information obtaining method according to an embodiment of the present application. Applied to a computer device, see fig. 1, the method comprises:
101. the computer device reads the target face model.
The computer device stores a target face model created in advance, and the computer device reads the target face model to process the target face model. The target face model is a face model of a target object, and the target object may be a virtual object created by a computer or may also be a real object. The target object may include a human or an animal, etc.
The target face model includes a three-dimensional mesh body of the face of the target object, and the three-dimensional mesh body comprises a plurality of facial polygon meshes with a plurality of facial vertices on them. Facial vertices are the basic elements of the three-dimensional mesh body: facial polygon meshes are drawn from the facial vertices, and the facial polygon meshes together form the three-dimensional mesh body of the face, so changes in the positions of the facial vertices change the shape of the mesh body. A three-dimensional mesh body can thus be regarded as a collection of facial vertices and facial polygons that describes the three-dimensional structure of a face. Each facial polygon is a triangle, quadrilateral, or other simple convex polygon. The target face model may undergo deformation processing such as rotation, translation, and scaling. A facial vertex may carry attribute information such as three-dimensional coordinates, laser reflection intensity, and color, according to which the target face model can later be rendered. The facial vertices may include eyebrow vertices, eye vertices, nose vertices, mouth vertices, and so on, each marking the location of a facial feature.
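The mesh structure described above can be captured by a small container type. This is a minimal sketch with illustrative names (not the patent's data layout): vertices are positions, faces index into them, and moving a vertex reshapes every face that references it.

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class FaceMesh:
    """Minimal stand-in for the three-dimensional mesh body."""
    vertices: np.ndarray   # (n, 3) facial vertex positions
    faces: list            # each face: tuple of vertex indices (tri/quad)

    def move_vertex(self, index, offset):
        # Moving a facial vertex changes the shape of every facial
        # polygon that uses it; the face index lists stay unchanged.
        self.vertices[index] = self.vertices[index] + np.asarray(offset)
```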
The target face model corresponds to a UV coordinate system (texture coordinate system), according to which a two-dimensional texture image can be projected onto the surface of the three-dimensional mesh body; the texture expresses surface details such as grain, color, or pattern. U is the horizontal coordinate axis of the two-dimensional texture image and V the vertical one; together the two axes form a two-dimensional coordinate system that can define any position on the two-dimensional texture image. Because the texture image is projected onto the surface of the three-dimensional mesh body, this two-dimensional coordinate system can also define any position on that surface.
In one possible implementation, if the target object is a real object, image information can be obtained by having an image acquisition device continuously capture images around the face or head of the target object, and the three-dimensional mesh body of the face is built from the acquired image information. Optionally, the acquired image information covers multiple viewing angles, such as front-face, side-face, upward-looking, and downward-looking poses of the target object.
In another possible implementation, if the target object is a virtual object, the computer device creates a base face geometry for the target object and processes it by adding points, edges, and faces to obtain the three-dimensional mesh body of the face, which is the target face model of the target object.
102. The computer device creates a plurality of facial bones of the target face model from the plurality of facial vertices, each facial bone corresponding to one of the facial vertices.
According to the facial vertices of the target face model, the computer device creates a facial bone corresponding to each facial vertex, obtaining a plurality of facial bones of the target face model, each corresponding to one facial vertex. This correspondence means that the position of a facial bone in the target face model can be determined from the position of its facial vertex, and that subsequent position changes of the facial bone drive corresponding position changes of that facial vertex.
The facial bones are used to drive the movement of the target face model. Each facial bone corresponds to one facial vertex and controls the position of that vertex during deformation of the target face model; that is, when the facial bones are displaced, the facial vertices on the three-dimensional mesh body of the target face model are synchronously driven to displace, deforming the target face model. In this way the facial bones drive the target face model to move and make facial expressions. The facial bones may include eyebrow bones, eye bones, nose bones, mouth bones, and so on, each corresponding to the matching type of facial vertex, e.g. an eyebrow bone corresponding to an eyebrow vertex.
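The bone-drives-vertex relationship can be sketched as a forward skinning step. This assumes bones contribute only translations blended by the binding weights (a simplification; real rigs also blend rotations and scales), and the names are illustrative:

```python
import numpy as np

def drive_vertices(rest_vertices, weights, bone_disp):
    """Move each facial vertex by the weight-blended displacement of the
    facial bones that influence it.

    rest_vertices : (n_vertices, 3) positions before deformation
    weights       : (n_vertices, n_bones) binding weights
    bone_disp     : (n_bones, 3) per-bone displacement
    returns       : (n_vertices, 3) deformed vertex positions
    """
    return rest_vertices + weights @ bone_disp
```

With one bone per vertex and identity weights, as in the per-vertex bone setup described here, each bone's displacement passes straight through to its vertex.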
In one possible implementation, a computer device obtains a bone profile that includes a plurality of vertex identifications and a bone identification corresponding to each vertex identification. The computer device generates a facial skeleton at each facial vertex of the target facial model based on the plurality of vertex identifications and the skeleton identification corresponding to each vertex identification.
The vertex identification is used for representing the vertex of the face in the target face model, and can be the number of the vertex of the face and the like; the skeleton identification is used to indicate a facial skeleton, and may be a name of the facial skeleton, a number of the facial skeleton, or the like.
The computer equipment stores a skeleton configuration file in advance, the skeleton configuration file is used for configuring corresponding facial skeletons for a facial model, vertex identifications and skeleton identifications in the skeleton configuration file are in one-to-one correspondence, the correspondence between the vertex identifications and the skeleton identifications is represented at the vertex of a face represented by the vertex identifications, and the facial skeletons represented by the skeleton identifications are generated.
The position in the face model of the facial vertex represented by a vertex identification can be determined according to the UV coordinate system of the face model. Because the facial vertex represented by the same vertex identification occupies the same position in every face model sharing that UV coordinate system, one skeleton configuration file can be applied to a plurality of different face models. Determining the position of the facial vertex according to the UV coordinate system and generating the facial bone represented by the corresponding bone identification at that position makes the generated skeleton conform to the characteristics of a real face. For example, if the bone identification corresponding to a certain vertex identification represents an eyebrow bone, the position determined for that vertex identification according to the UV coordinate system is where the eyebrow bone is to be generated in the face model.
The computer device reads a vertex identification and the corresponding bone identification from the skeleton configuration file, determines the position in the target face model of the facial vertex represented by the vertex identification, determines the facial bone represented by the corresponding bone identification, and generates that facial bone at that position. By performing this operation for every vertex identification and corresponding bone identification in the skeleton configuration file, the computer device obtains the plurality of facial bones of the target face model.
Optionally, a skeleton generation script for generating a facial skeleton of the face model is stored in the computer device in advance. When the computer device determines the position corresponding to the vertex identification and the facial skeleton represented by the skeleton identification, the skeleton generation script is executed to generate the corresponding facial skeleton at the position.
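As a rough illustration of what such a skeleton generation script might do, the sketch below parses a configuration of bone/vertex identification pairs and places one bone record at each identified vertex. All function names and the file format are illustrative assumptions, not taken from the patent or from any Maya API.

```python
# Hypothetical sketch of a skeleton-generation script: parse the skeleton
# configuration file and place one facial bone at each identified vertex.

def parse_bone_config(text):
    """Parse lines of '<bone_id> <vertex_id>' into a vertex-to-bone mapping."""
    mapping = {}
    for line in text.strip().splitlines():
        bone_id, vertex_id = line.split()
        mapping[int(vertex_id)] = bone_id
    return mapping

def generate_bones(config, vertex_positions):
    """Create a bone record at the position of each configured facial vertex."""
    return {bone_id: {"vertex": v, "position": vertex_positions[v]}
            for v, bone_id in config.items()}

config_text = """
Eyebrow-left-001 6903
Eyebrow-left-002 963
"""
# vertex positions as resolved from the model's UV-consistent vertex table
positions = {6903: (-3.1, 8.2, 4.0), 963: (-2.4, 8.3, 4.1)}
bones = generate_bones(parse_bone_config(config_text), positions)
```

Because the configuration refers to vertices only by identification, the same script can be reused for any face model sharing the UV coordinate system, as the passage above notes.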
In the skeleton configuration file shown in Table 1, the vertex identifications are represented by numbers, and the bone identifications are represented by combinations of a bone name, a bone location, and a number. For example, "Eyebrow-left-001" indicates the eyebrow bone numbered 001 on the left side of the face. As Table 1 shows, the bone identifications and the vertex identifications correspond one-to-one; the bone identification "Eyebrow-left-001" corresponds to the vertex identification "6903", indicating that the left eyebrow bone numbered 001 is generated at the facial vertex numbered 6903.
TABLE 1
Bone identification     Vertex identification
Eyebrow-left-001        6903
Eyebrow-left-002        963
Eyebrow-left-003        16164
Eyebrow-right-001       12067
Eyebrow-right-002       3508
Eyebrow-right-003       11979
As shown in fig. 2, a target face model is displayed in the model display interface 201; the regions circled in the model display interface 201 indicate facial bones generated at facial vertices of the target face model, and information of the facial bone "eyebrow-left-001" is displayed in the skeleton information display interface 202, corresponding to the circled eyebrow bone indicated by the arrow in the figure. When the facial bone "eyebrow-left-001" subsequently moves, its displacement information and rotation information can be displayed in the skeleton information display interface 202.
103. The computer device performs deformation processing on the template face model according to the deformation parameters corresponding to the target facial expression to obtain vertex displacement information.
And the computer equipment acquires a deformation parameter corresponding to the target facial expression and a template facial model, and carries out deformation processing on the template facial model according to the deformation parameter so that the template facial model makes the target facial expression. Because the positions of a plurality of face vertexes of the template face model can be changed when the template face model is deformed, the computer equipment can obtain vertex displacement information corresponding to the template face model.
The deformation parameters are used to deform the three-dimensional mesh body of the template face model into a predetermined shape according to a certain rule. The deformation parameters corresponding to the target facial expression deform the three-dimensional mesh body of the template face model into the shape of the target facial expression. The target facial expression may be crying, laughing, smiling, surprise, puzzlement, and the like. A deformation parameter may be a coordinate value of a vertex of the three-dimensional mesh body after deformation; for example, a six-degree-of-freedom coordinate system is established on the three-dimensional mesh body, and the deformation parameter is the coordinate value of the vertex in that coordinate system.
The template face model is a three-dimensional mesh body without binding facial bones, is stored in the computer equipment in advance and can be regarded as an intermediate tool for acquiring vertex displacement information corresponding to the target facial expression.
The vertex displacement information includes displacement information of a plurality of facial vertices of the template face model when the template face model makes the target facial expression, and the displacement information may represent a change in position of the plurality of facial vertices caused by the influence of the deformation parameter.
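Concretely, the vertex displacement information amounts to the per-vertex difference between the deformed and rest positions of the template face model. The sketch below is purely illustrative (positions and names assumed):

```python
# Illustrative sketch: vertex displacement information is the per-vertex
# position change between the template model at rest and after deformation.

def vertex_displacements(rest_positions, deformed_positions):
    """Per-vertex displacement vectors: deformed position minus rest position."""
    return [tuple(d - r for r, d in zip(rv, dv))
            for rv, dv in zip(rest_positions, deformed_positions)]

rest     = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
deformed = [(0.0, 0.25, 0.0), (1.0, 1.5, 0.25)]   # after the target expression
disp = vertex_displacements(rest, deformed)
# → [(0.0, 0.25, 0.0), (0.0, 0.5, 0.25)]
```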
In one possible implementation, the computer device obtains a hybrid deformer, the hybrid deformer including deformation parameters, the computer device binds the hybrid deformer to the template face model, and the template face model is deformed by the bound hybrid deformer to obtain vertex displacement information.
The hybrid deformer (blend shape deformer) can be used for blend deformation, a technology implemented in Maya software (a three-dimensional animation software) that is applied to the production of animations and special effects in fields such as movies and games. The hybrid deformer deforms a three-dimensional mesh body into a predetermined shape according to the deformation parameters; because it yields precise shape changes of the mesh body, the blend deformation technique is widely applied to the production of three-dimensional expression animation. After the hybrid deformer is bound to the template face model, the template face model can be deformed through the hybrid deformer.
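The blend deformation described above can be sketched, under simplifying assumptions (pure linear per-vertex interpolation toward a single target shape, which is not necessarily Maya's exact implementation), as:

```python
# Simplified blend-shape sketch: each vertex is interpolated linearly from its
# rest position toward the target shape by a blend weight.

def blend_deform(rest, target, weight):
    """Blend every vertex from rest toward target by the given weight."""
    return [tuple(r + weight * (t - r) for r, t in zip(rv, tv))
            for rv, tv in zip(rest, target)]

rest_shape   = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
smile_target = [(0.0, 0.5, 0.0), (1.0, 0.5, 0.0)]   # hypothetical target expression

half_smile = blend_deform(rest_shape, smile_target, 0.5)
# weight 0.0 reproduces the rest shape; weight 1.0 reproduces the target shape
```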
The hybrid deformer is pre-stored in the computer device. Optionally, motion capture is performed in real time by a motion capture device worn by an expression performer. The motion capture device collects a plurality of feature points of the performer's face and maps them to an original face model in real time. When the performer makes the target facial expression, the positions of the facial feature points change, and the feature points mapped onto the original face model change position synchronously. The computer device creates a hybrid deformer including deformation parameters based on these position changes; the deformation parameters correspond to the target facial expression made by the performer, that is, the hybrid deformer corresponds to the target facial expression, and a face model processed by the hybrid deformer makes that expression. When the performer makes a plurality of target facial expressions, the computer device obtains a plurality of hybrid deformers, each corresponding to one target facial expression.
Optionally, the hybrid deformer created by the computer device is bound to the original face model. When the computer device needs to deform the template face model through the hybrid deformer, it first copies the hybrid deformer bound to the original face model and then binds the copied hybrid deformer to the template face model.
Optionally, the computer device processes the hybrid deformer bound to the original face model through an attribute transfer (Transfer Attributes) function, copies the hybrid deformer, and binds the copied hybrid deformer to the template face model. The attribute transfer function is a function in three-dimensional animation software: displacement information of facial vertices is obtained by sampling the facial vertices on the three-dimensional mesh body of the original face model, and this displacement information is transferred to the three-dimensional mesh body of the template face model based on a three-dimensional space comparison, thereby modifying the three-dimensional mesh body of the template face model; a corresponding hybrid deformer is then created according to this modification information and bound to the template face model.
Fig. 3 is an attribute transfer control interface according to an embodiment of the present application, and as shown in fig. 3, an attribute transfer control interface 301 includes setting options for attribute transfer, such as setting options for vertex positions and vertex normals of a hybrid deformer, setting options for texture setting and color setting, and setting options for sample space, mirroring, flipping a texture coordinate system, a color border, and a query path.
Optionally, the original face model is bound to a plurality of hybrid deformers, and the computer device obtains a deformer batch script used to copy the plurality of hybrid deformers bound to one face model and bind them to other face models. The computer device processes the plurality of hybrid deformers bound to the original face model through the deformer batch script, copying them and binding them to the template face model. For example, fig. 4-1 shows an original face model with a plurality of hybrid deformers bound, and fig. 4-2 shows a template face model with no hybrid deformer bound; the plurality of hybrid deformers bound to the original face model can then be copied and bound to the template face model by the deformer batch script.
In another possible implementation manner, the computer device establishes locators at a plurality of facial vertices of the template facial model, and obtains displacement information of the plurality of established locators after performing deformation processing on the template facial model according to deformation parameters corresponding to the target facial expression to form vertex displacement information.
The computer device establishes a locator on each facial vertex of the template face model and performs deformation processing on the template face model according to the deformation parameters corresponding to the target facial expression. When the template face model deforms, its facial vertices change position, and the locators attached to the template face model change position synchronously. Because each locator is established on a facial vertex, its position change is identical to that of the vertex; the displacement information of the established locators is therefore acquired, and this displacement information constitutes the vertex displacement information, representing the displacement of the plurality of facial vertices of the template face model.
As shown in fig. 5, a locator 502 is established on each facial vertex 501 of the template face model. When the template face model is deformed and a facial vertex 501 changes position, the attached locator 502 changes position simultaneously; the displacement information of the plurality of established locators 502 is acquired and constitutes the vertex displacement information. In fig. 5, the template face model (1) is a model showing the facial vertices, and the template face model (2) is a model showing the locators.
In another possible implementation manner, the computer device performs deformation processing on the template face model according to deformation parameters corresponding to the target facial expression, and processes the deformed template face model through a deformation solver to obtain vertex displacement information.
A Deformation Solver, also called a Deformation Learning Solver (Deformation Learning Solver), is used to obtain vertex displacement information of the deformed template face model. And the computer equipment carries out deformation processing on the template face model according to the deformation parameters to obtain a deformed template face model, and each face vertex of the template face model synchronously generates position change compared with the template face model before deformation. The computer device processes the deformed template face model through the deformation solver to obtain displacement information of each face vertex in the template face model, namely, vertex displacement information corresponding to the deformation parameters. Therefore, the essential role of the deformation solver is to automatically acquire vertex displacement information of the template face model from the deformation parameters.
It should be noted that the above step 103 describes the process in which the computer device performs deformation processing on the template face model according to the deformation parameters corresponding to the target facial expression to obtain the vertex displacement information. In another embodiment, the vertex displacement information corresponding to the target facial expression may be obtained in other manners.
104. And the computer equipment respectively acquires the binding weights between a plurality of facial vertexes and a plurality of facial skeletons of the target facial model according to the vertex displacement information.
The vertex displacement information obtained in step 103 represents the position change of the facial vertex due to the influence of the deformation parameter when the template facial model makes the target facial expression. And because the UV coordinate system of the target face model is the same as the UV coordinate system of the template face model, the number of the face vertexes of the target face model is the same as the number of the face vertexes of the template face model, and the positions of the face vertexes on the target face model are the same as the positions of the face vertexes on the template face model. Thus, based on the vertex displacement information, the computer device may obtain binding weights (Weight) between the plurality of facial vertices and the plurality of facial bones of the target facial model.
When the position of the facial skeleton in the target facial model changes, the facial skeleton drives the corresponding facial vertex to change, however, the displacement magnitude to be generated by each facial vertex of the target facial model may be different, that is, each facial vertex is influenced by the corresponding facial skeleton differently, and then each facial vertex and the corresponding facial skeleton need to be bound through a binding weight, where the binding weight represents the influence degree of the displacement information of the facial skeleton on the displacement information of the facial vertex.
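One common way to express this influence degree, assuming simple translation blending (an assumption for illustration; the patent does not specify the blend formula), is that a vertex's displacement is the binding-weight-blended sum of the displacements of the bones influencing it:

```python
# Simplified sketch (translations only, formula assumed): the displacement of a
# facial vertex is the binding-weight-blended sum of its bones' displacements.

def vertex_displacement(weights, bone_displacements):
    """weights: {bone_id: w}; bone_displacements: {bone_id: (dx, dy, dz)}."""
    dx = dy = dz = 0.0
    for bone_id, w in weights.items():
        bx, by, bz = bone_displacements[bone_id]
        dx += w * bx
        dy += w * by
        dz += w * bz
    return (dx, dy, dz)

# A vertex influenced 75% by one eyebrow bone and 25% by its neighbour:
disp = vertex_displacement({"brow-l-001": 0.75, "brow-l-002": 0.25},
                           {"brow-l-001": (0.0, 1.0, 0.0),
                            "brow-l-002": (0.0, 0.5, 0.0)})
# → (0.0, 0.875, 0.0): the heavier-weighted bone dominates
```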
In one possible implementation, the computer device processes the vertex displacement information via a deformation solver to obtain binding weights between a plurality of facial vertices and a plurality of facial bones of the target facial model.
The deformation solver is used to obtain the binding weights between the facial vertices and the facial bones. After acquiring the vertex displacement information of the template face model, the computer device inputs it into the deformation solver; the vertex displacement information includes the displacement information of the plurality of vertices of the template face model, and the deformation solver processes it to obtain the binding weights between the plurality of facial vertices and the plurality of facial bones of the target face model. In this use, the essential role of the deformation solver is to automatically obtain the binding weights corresponding to the target face model from the vertex displacement information.
Optionally, the deformation solver may further process the deformed template face model to obtain vertex displacement information. The deformation solver processes the deformed template face model to obtain position change, namely vertex displacement information of the face vertex, of each face vertex of the template face model, wherein the position change is generated when each face vertex is influenced by deformation parameters, and the deformation solver processes the vertex displacement information to obtain binding weight when the face vertex is bound with the face skeleton. The essential function of the deformation solver is to automatically obtain the corresponding binding weight of the target face model according to the deformation parameters.
Optionally, if the template face model is deformed by the hybrid deformer including the deformation parameters, the deformation solver may obtain a binding weight between each face vertex and the face skeleton in the template face model, that is, the deformation solver may convert the hybrid deformer bound to the template face model into the binding weight corresponding to the template face model.
Therefore, the deformation solver can process the deformed template face model to obtain vertex displacement information, and process the vertex displacement information to obtain the binding weight between a plurality of face vertexes and a plurality of face skeletons of the target face model.
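As a rough intuition for how a solver could recover binding weights (a deliberate simplification, not the actual algorithm of the Deformation Learning Solver), one can pose it as a least-squares problem relating known bone displacements across sample poses to the observed vertex displacements:

```python
import numpy as np

# Illustrative only: recover one vertex's binding weights by least squares from
# sample poses, relating known bone displacements to the observed vertex
# displacement. Real solvers use more elaborate, constrained formulations.

def solve_weights(bone_disp_samples, vertex_disp_samples):
    """Least-squares weights w such that bone_disp_samples @ w ≈ vertex_disp."""
    w, *_ = np.linalg.lstsq(bone_disp_samples, vertex_disp_samples, rcond=None)
    return w

# Two candidate bones observed over three poses (y-axis displacement only):
B = np.array([[1.0, 0.0],    # pose 1: only bone 1 moves
              [0.0, 1.0],    # pose 2: only bone 2 moves
              [1.0, 1.0]])   # pose 3: both bones move
d = np.array([0.75, 0.25, 1.0])   # vertex displacement observed in each pose
weights = solve_weights(B, d)     # ≈ [0.75, 0.25]
```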
Fig. 6 is a deformation solver control interface provided by an embodiment of the present application, and as shown in fig. 6, the deformation solver control interface 601 includes setting options such as the number of bones, target skin, and time range.
105. And the computer equipment binds a plurality of face vertexes and a plurality of face skeletons of the target face model respectively according to the obtained binding weights.
When the computer device obtains the binding weights between the plurality of facial vertices and the plurality of facial bones of the target face model, it binds them according to those binding weights. After binding, the displacement information of a facial bone affects the displacement information of its bound facial vertex: when the facial bone changes position, the bound facial vertex changes position synchronously according to the binding weight, and the larger the binding weight, the greater the influence of the displacement information of the facial bone on the displacement information of the facial vertex.
The process of binding the plurality of facial vertices and the plurality of facial bones of the target face model is also called skinning (Skin), a drawing technique of three-dimensional animation. Facial bones are added to a face model created in three-dimensional software; because the facial bones and the face model are independent of each other, the facial vertices on the three-dimensional mesh body of the face model are bound to the facial bones so that the mesh body is attached to the bones and the facial bones can drive the face model to move reasonably. The binding weight is also called a skinning weight, which refers to the degree of influence distributed to each facial vertex by its corresponding facial bone when the three-dimensional mesh body of the face model is skinned to the facial bones.
Fig. 7-1 shows the target face model before binding the facial skeleton, fig. 7-2 shows the correspondence of the facial skeleton to the facial vertex, and fig. 7-3 shows the target face model after binding the facial skeleton. And according to the corresponding relation between the facial bones and the facial vertexes shown in the figure 7-2, respectively binding a plurality of facial vertexes and a plurality of facial bones in the target facial model shown in the figure 7-1 to obtain the bound target facial model shown in the figure 7-3. In the correspondence shown in fig. 7-2, the circular marks represent the vertices of the face, and the linear marks represent the bones of the face.
Through the steps 101-105, it is possible to bind the plurality of face vertices and the plurality of face skeletons of any face model respectively. For example, FIG. 8-1 shows a 6 face model: face model 8011, face model 8021, face model 8031, face model 8041, face model 8051, face model 8061. Fig. 8-2 shows the correspondence between the facial bones and the facial vertices, and according to the correspondence, the plurality of facial vertices and the plurality of facial bones of the 6 facial models in fig. 8-1 are respectively bound, so as to obtain the 6 facial models after binding shown in fig. 8-3: face model 8012, face model 8022, face model 8032, face model 8042, face model 8052, face model 8062. In the correspondence relationship shown in fig. 8-2, the circular marks represent the vertices of the face, and the linear marks represent the bones of the face.
It should be noted that the embodiment of the present application describes, through the above steps 103 to 105, the process of obtaining the binding weights corresponding to the vertex displacement information and binding the plurality of facial vertices and the plurality of facial bones of the target face model according to the obtained binding weights. In another embodiment, the plurality of facial vertices and the plurality of facial bones of the target face model may be bound according to the binding weights corresponding to the vertex displacement information in other manners.
106. And the computer equipment carries out deformation processing on the bound target face model so as to move the plurality of facial bones and drive the plurality of facial vertexes to move.
After the computer device binds the plurality of facial vertices and the plurality of facial bones of the target face model, a position change of a facial bone drives the facial vertex bound to it to change position synchronously; because position changes of its facial vertices deform the target face model, position changes of the facial bones drive the target face model to deform. To make the target face model perform a facial expression, the facial bones are adjusted so that their position changes drive the model into that expression.
The computer device performs a deformation process on the target face model after binding the facial bones so that a plurality of facial bones are moved while the plurality of facial vertices are moved along with the facial bones.
Optionally, the computer device adjusts the facial bones to move them, such as by rotating, scaling, or repositioning them. When the facial bones move, the target face model bound to them deforms, and the plurality of facial vertices of the target face model move along with the plurality of facial bones and are displaced accordingly.
107. When the displacement information of the plurality of facial vertices is matched with the vertex displacement information, the computer device obtains bone displacement information of the target facial model.
In order to obtain the bone displacement information when the target facial model makes the target facial expression, the target facial model needs to make the target facial expression first. And the target facial expression corresponds to the vertex displacement information, when the displacement information of the plurality of facial vertices of the target facial model matches the vertex displacement information, it is indicated that the target facial model has made the target facial expression by morphing.
Accordingly, the computer device acquires the displacement information of the plurality of facial vertices of the target face model and determines whether it matches the vertex displacement information; a match means that the displacement information of each facial vertex is the same as the displacement information of the corresponding facial vertex in the vertex displacement information. When the computer device detects that the displacement information of the plurality of facial vertices matches the vertex displacement information, it determines that the target face model has made the target facial expression through deformation, so the displacement information of the plurality of facial bones of the target face model corresponds to the target facial expression; the computer device then acquires the bone displacement information of the target face model, which includes the displacement information of the plurality of facial bones. When the computer device detects that the displacement information of the plurality of facial vertices does not match the vertex displacement information, the target face model has not made the target facial expression through deformation, and the bone displacement information at that point is of no value.
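The matching check of step 107 can be sketched as a tolerance comparison between the current and target vertex displacements; the tolerance value and all names below are assumptions for illustration:

```python
# Sketch of the step-107 check: the bound model has reproduced the target
# expression when its current vertex displacements match the recorded target
# displacements. The tolerance value is an assumption for illustration.

def displacements_match(current, target, tol=1e-4):
    """True when every component of every vertex displacement agrees within tol."""
    return all(abs(c - t) <= tol
               for cv, tv in zip(current, target)
               for c, t in zip(cv, tv))

target_disp  = [(0.0, 0.25, 0.0), (0.0, 0.5, 0.0)]
current_disp = [(0.0, 0.25, 0.0), (0.0, 0.5, 0.0)]
matched = displacements_match(current_disp, target_disp)
# only when matched is True are the current bone displacements recorded as the
# bone displacement information for this target expression
```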
108. And the computer equipment carries out deformation processing on the target face model according to the skeleton displacement information to obtain the target face model making the target facial expression.
The computer device obtains bone displacement information, which corresponds to the target facial expression. The computer device can adjust the facial skeleton of the target facial model according to the skeleton displacement information, so that the displacement information of the facial skeleton conforms to the skeleton displacement information, the target facial model deforms in the process of facial skeleton movement, and the deformed target facial model is the target facial model with the target facial expression.
In one possible implementation manner, the computer device obtains bone displacement information corresponding to a target facial expression, and binds the bone displacement information with a target face model, so that the target face model can be deformed according to the bone displacement information.
In another possible implementation manner, after the target face model with the target facial expression is obtained, the target face model with the target facial expression may be collapsed, and the collapsed target face model is rendered to obtain the expression image with the target facial expression.
In the three-dimensional animation production, the collapse means that characteristics such as a deformer, a two-dimensional texture image, a facial skeleton, and a shape attached to a target face model are fixed to the target face model, and parameters and the like bound to the target face model cannot be processed on the target face model.
Optionally, the collapsed target face model is imported into a rendering engine, and the rendering engine performs rendering processing on the target face model to obtain an expression image showing the target facial expression. The rendering engine may be a game engine, such as UE4 (Unreal Engine 4).
Alternatively, the computer device performs collapse processing on the target face model that makes the target facial expression using a BakeSimulation command to obtain a collapsed target face model.
In another possible implementation manner, by performing the above steps for a plurality of different target facial expressions, the computer device obtains a plurality of target face models, each making a different target facial expression. The computer device collapses the plurality of target face models respectively and renders the collapsed target face models to obtain a plurality of expression images showing different target facial expressions. The computer device then plays the expression images continuously in sequence to form a facial animation.
The facial animation belongs to skeletal animation, one type of three-dimensional animation, in which a skeletal structure formed by interconnected bones controls the model, and the animation is generated by changing the positions of the bones or rotating them.
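As a minimal sketch of "animation by changing bone positions", in-between frames can be generated by linearly interpolating bone translations between a neutral pose and the captured target-expression pose. All names and values below are illustrative assumptions:

```python
# Sketch: skeletal animation changes bone positions over time. In-between
# frames are generated by linearly interpolating bone translations between a
# neutral pose and the captured target-expression pose (names illustrative).

def lerp_pose(a, b, t):
    """Linearly interpolate two bone poses given as {bone_id: (x, y, z)}."""
    return {bone: tuple(av + t * (bv - av) for av, bv in zip(a[bone], b[bone]))
            for bone in a}

neutral = {"brow-l-001": (0.0, 0.0, 0.0)}
target  = {"brow-l-001": (0.0, 0.25, 0.0)}   # from the bone displacement info

frames = [lerp_pose(neutral, target, i / 4) for i in range(5)]
# frames[0] is the neutral pose; frames[4] is the target expression pose
```

Playing such frames in sequence is what forms the facial animation described above.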
For example, fig. 9-1 shows a face model with a hybrid deformer bound, and fig. 9-2 shows a face model without a hybrid deformer bound; the face model in fig. 9-2 is deformed through the hybrid deformer of the face model in fig. 9-1, resulting in a face model making the facial expression. According to the method provided by the embodiment of the present application, the facial expression of the face model in fig. 9-1 can thus be successfully copied to the face model in fig. 9-2.
Fig. 10 is a flowchart of generating facial expressions according to an embodiment of the present application. Referring to fig. 10, the computer device reads the target face model, copies the deformation parameters bound to the original face model and binds them to the template face model, obtains the binding weights by resolving the displacement information generated by the facial vertices under the influence of the deformation parameters, reads the skeletal animation generated according to the binding, and collapses the skeletal animation.
1001. A target face model is read.
1002. Binding the hybrid deformer to the template face model.
1003. The binding weights are resolved.
The template face model is deformed through the hybrid deformer to obtain vertex displacement information, and the vertex displacement information is resolved into binding weights.
1004. A target facial model making a target facial expression is read.
And respectively binding a plurality of facial vertexes and a plurality of facial skeletons of the target facial model according to the binding weight, acquiring skeleton displacement information corresponding to the target facial expression, and performing deformation processing on the bound target facial model according to the skeleton displacement information to obtain the target facial model with the target facial expression.
1005. Collapsing out the target facial model making the target facial expression.
1006. And obtaining an expression image for making the target facial expression.
And introducing the collapsed target face model into a rendering engine, and rendering the target face model through the rendering engine to obtain an expression image.
In the embodiment of the application, a plurality of facial vertexes of the target facial model and a plurality of facial skeletons are respectively bound according to the binding weight, skeleton displacement information corresponding to the target facial expression is obtained, and the skeleton displacement information and the target facial model are bound, which are all preparation work before the target facial expression is made by the target facial model and can be called as a binding link of the target facial model. The binding (Rig) link is a link in the animation production process, a series of deformers such as facial skeletons, mixed deformers and the like are established on a target face model, and the deformers are driven by certain logic, so that the target face model can conveniently make required actions or forms.
It should be noted that in this embodiment, after the bone displacement information is obtained, the target face model is deformed according to that information; however, the embodiment does not limit when this deformation is performed. That is, the timing of step 108 is not limited, as long as the bone displacement information is obtained before step 108 is executed. In another embodiment, step 108 may be omitted: after the bone displacement information is obtained, the target face model is not deformed according to it. Alternatively, the obtained bone displacement information may be sent to another computer device, which deforms the target face model according to the bone displacement information.
It should also be noted that the target face model, template face model, original face model, hybrid deformer, bone configuration file, bone generation script, locators, deformation solver, and the like referred to in the embodiments may be stored in the computer device in advance, downloaded by the computer device from another computer device, uploaded to the computer device by another computer device, or uploaded by an operator; the embodiments of the present application do not limit this.
According to the method provided by this embodiment, the computer device obtains vertex displacement information corresponding to the target facial expression and binds a plurality of facial vertices and a plurality of facial bones of the target face model according to binding weights corresponding to the vertex displacement information, where the binding weight between a facial vertex and a facial bone indicates how strongly the displacement of that bone influences the displacement of that vertex. The bound target face model is then deformed so that the facial bones move and drive the facial vertices, and the bone displacement information of the target face model is captured when the displacement of the facial vertices matches the vertex displacement information. The embodiment thus provides a method for acquiring bone displacement information corresponding to a target facial expression; the target face model can subsequently be deformed by this bone displacement information to make the target facial expression, so the facial bones need not be adjusted manually many times, which saves labor and time and improves efficiency.
The computer device obtains the binding weights between the plurality of facial vertices and the plurality of facial bones of the target face model from the vertex displacement information, and binds the vertices and bones accordingly. The deformation parameters are converted into binding weights by a deformation-learning solve, so the binding weights need not be set manually; this avoids the labor and time of hand-tuning weights, simplifies the binding of the facial vertices to the facial bones, and further improves the efficiency of the binding stage.
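As an illustration of the deformation-learning idea — recovering binding weights from observed vertex displacements — the following sketch assumes a purely linear, translation-only model with several sample expressions (an assumption of this sketch; real skinning solvers also handle rotations and impose constraints such as non-negativity):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: B facial bones, V facial vertices, E sample expressions.
B, V, E = 4, 6, 10

# Ground-truth binding weights (each row: one vertex's weights over all bones).
W_true = rng.random((V, B))
W_true /= W_true.sum(axis=1, keepdims=True)  # weights per vertex sum to 1

# Per-expression bone displacements, shape (E, B, 3).
bone_disp = rng.normal(size=(E, B, 3))

# Under the linear model, vertex_disp[e] = W @ bone_disp[e]
# (bone motion drives vertex motion in proportion to the weights).
vertex_disp = np.einsum("vb,ebx->evx", W_true, bone_disp)

# "Deformation learning" solve: recover W by least squares. Stack the
# expressions and xyz axes into one linear system A @ W.T = Y.
A = bone_disp.transpose(0, 2, 1).reshape(E * 3, B)    # (E*3, B)
Y = vertex_disp.transpose(0, 2, 1).reshape(E * 3, V)  # (E*3, V)
W_solved = np.linalg.lstsq(A, Y, rcond=None)[0].T     # (V, B)
```

With enough independent expression samples the system is overdetermined and the weights are recovered exactly here, because the toy data follows the linear model by construction.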
In addition, the deformation parameters corresponding to a target facial expression performed by an actor can be acquired by motion capture, and the template face model is deformed according to these parameters to obtain a template face model making the target facial expression. The facial expression of a real person is thereby transferred to a virtual character, improving both the quality and the efficiency of producing the virtual character's facial expressions.
Moreover, the original face model can make the corresponding target facial expression through its bound hybrid deformer. That deformer is copied and bound to the template face model; binding weights are then obtained by the method of this embodiment, the plurality of facial vertices and facial bones of the target face model are bound, and the bone displacement information is obtained, so that the bound target face model can make the target facial expression according to the bone displacement information. In effect, the target facial expression of the original face model is copied onto the target face model, allowing expressions to be produced more quickly for different face models.
Furthermore, binding the facial vertices and facial bones of the target face model according to the binding weights, obtaining the bone displacement information corresponding to the target facial expression, and binding that information to the target face model mean that the bound model can be deformed by the bone displacement information to obtain a target face model with the target facial expression. This amounts to establishing a rigorous facial-expression binding system, which improves the quality and efficiency of subsequently producing facial expressions and facial animation from the target face model.
Fig. 11 is a schematic structural diagram of a bone displacement information acquisition apparatus according to an embodiment of the present application. Referring to fig. 11, the apparatus includes:
a first obtaining module 1101, configured to obtain vertex displacement information corresponding to a target facial expression;
the binding module 1102 is configured to bind a plurality of facial vertices and a plurality of facial skeletons of the target facial model respectively according to binding weights corresponding to the vertex displacement information, where the binding weights between the facial vertices and the facial skeletons indicate degrees of influence of displacement information of the facial skeletons on displacement information of the facial vertices;
a first deformation processing module 1103, configured to perform deformation processing on the bound target face model, so as to move a plurality of facial bones and drive a plurality of facial vertices to move;
and a second obtaining module 1104, configured to obtain skeleton displacement information of the target face model when the displacement information of the multiple face vertices matches the vertex displacement information.
The apparatus provided by this embodiment obtains vertex displacement information corresponding to the target facial expression and binds a plurality of facial vertices and a plurality of facial bones of the target face model according to binding weights corresponding to the vertex displacement information, where the binding weight between a facial vertex and a facial bone indicates how strongly the displacement of that bone influences the displacement of that vertex. The bound target face model is deformed so that the facial bones move and drive the facial vertices, and the bone displacement information of the target face model is acquired when the displacement of the facial vertices matches the vertex displacement information. The embodiment thus provides an apparatus for acquiring bone displacement information corresponding to a target facial expression; the target face model can subsequently be deformed by this bone displacement information to obtain a target face model with the target facial expression, so the facial bones need not be adjusted manually many times, which saves labor and time and improves efficiency.
Optionally, referring to fig. 12, the binding module 1102 includes:
a weight obtaining unit 1112 configured to obtain binding weights between a plurality of facial vertices and a plurality of facial bones of the target facial model, respectively, according to the vertex displacement information;
a binding unit 1122, configured to bind the plurality of face vertices and the plurality of face skeletons of the target face model according to the obtained binding weights, respectively.
Optionally, referring to fig. 12, the weight obtaining unit 1112 is further configured to process the vertex displacement information by a deformation solver to obtain binding weights between a plurality of facial vertices and a plurality of facial bones of the target facial model.
Optionally, referring to fig. 12, the apparatus further comprises:
a model reading module 1105 for reading a target face model;
a skeleton creation module 1106 is configured to create a plurality of facial bones of the target face model based on the plurality of facial vertices, each facial bone corresponding to a facial vertex.
Optionally, referring to fig. 12, a bone creation module 1106 comprises:
a file obtaining unit 1116, configured to obtain a bone configuration file, where the bone configuration file includes a plurality of vertex identifiers and a bone identifier corresponding to each vertex identifier;
and a skeleton generating unit 1126, configured to generate a facial skeleton at each facial vertex of the target facial model according to the plurality of vertex identifications and the skeleton identification corresponding to each vertex identification.
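A minimal sketch of what the skeleton generation step might look like, assuming a hypothetical JSON layout for the bone configuration file — the patent does not specify a file format, and all identifiers below are invented:

```python
import json

# Hypothetical schema: each entry pairs a vertex identifier with the
# facial bone to generate at that vertex.
config_text = json.dumps({
    "bindings": [
        {"vertex_id": "v_brow_L", "bone_id": "bone_brow_L"},
        {"vertex_id": "v_brow_R", "bone_id": "bone_brow_R"},
        {"vertex_id": "v_mouth_corner_L", "bone_id": "bone_mouth_L"},
    ]
})

# Toy stand-in for the target face model: vertex id -> 3D position.
face_model = {
    "v_brow_L": (-1.0, 2.0, 0.5),
    "v_brow_R": (1.0, 2.0, 0.5),
    "v_mouth_corner_L": (-0.8, -1.0, 0.4),
}

def generate_facial_bones(config_text, face_model):
    """Create one facial bone at each configured vertex of the model."""
    config = json.loads(config_text)
    bones = {}
    for entry in config["bindings"]:
        # Place each bone at the position of its corresponding face vertex.
        bones[entry["bone_id"]] = {
            "vertex_id": entry["vertex_id"],
            "position": face_model[entry["vertex_id"]],
        }
    return bones

bones = generate_facial_bones(config_text, face_model)
```

In a real pipeline `generate_facial_bones` would be the bone generation script creating joints in the scene; here it only returns a dictionary so the vertex-to-bone correspondence is visible.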
Optionally, referring to fig. 12, the first obtaining module 1101 includes:
and a deformation processing unit 1111, configured to perform deformation processing on the template facial model according to the deformation parameter corresponding to the target facial expression, so as to obtain vertex displacement information.
Optionally, referring to fig. 12, the deformation processing unit 1111 is further configured to:
establishing a locator on a plurality of face vertices of the template face model;
and, after the template face model is deformed according to the deformation parameters corresponding to the target facial expression, acquiring the displacement information of the established locators to form the vertex displacement information.
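The locator mechanism reduces to: record each tracked vertex position before deformation, deform, then take the difference. A toy sketch (the deformation here is a stand-in offset field, not an actual blend-shape deformer):

```python
import numpy as np

# Toy template face model: rest positions of a few tracked face vertices.
rest_positions = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.5, 0.2],
    [-1.0, 0.5, 0.2],
])

# "Locators" pin the positions before deformation.
locators = rest_positions.copy()

def deform(positions, deformation_params):
    """Stand-in for the deformer: a simple per-vertex offset."""
    return positions + deformation_params

# Hypothetical deformation parameters for the target expression.
deformation_params = np.array([
    [0.0, -0.1, 0.0],
    [0.1, 0.2, 0.0],
    [-0.1, 0.2, 0.0],
])

deformed = deform(rest_positions, deformation_params)

# Each locator's displacement forms the vertex displacement information.
vertex_displacement = deformed - locators
```

Because the stand-in deformer is a pure offset, the recovered displacement equals the offsets exactly; with a real deformer the locators would capture whatever motion the rig produced.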
Optionally, referring to fig. 12, the deformation processing unit 1111 is further configured to:
acquiring a hybrid deformer, wherein the hybrid deformer comprises the deformation parameters;
binding the hybrid deformer to the template face model;
and performing deformation processing on the template face model through the bound hybrid deformer to obtain the vertex displacement information.
Optionally, referring to fig. 12, the deformation processing unit 1111 is further configured to:
performing deformation processing on the template face model according to the deformation parameters corresponding to the target facial expression;
and processing the deformed template face model through a deformation solver to obtain vertex displacement information.
Optionally, referring to fig. 12, the apparatus further comprises:
and a second deformation processing module 1107, configured to perform deformation processing on the target face model according to the bone displacement information, to obtain a target face model with a target facial expression.
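Once bone displacement information is available, the driving relation described above — each vertex moves by the weight-blended displacement of its bones — can be sketched as a translation-only linear blend (a simplification of this sketch; production rigs also blend rotations):

```python
import numpy as np

# Binding weights: how strongly each of 2 bones drives each of 3 vertices.
W = np.array([
    [1.0, 0.0],
    [0.5, 0.5],
    [0.0, 1.0],
])

# Rest positions of the 3 face vertices.
rest = np.array([
    [0.0, 0.0, 0.0],
    [0.5, 0.0, 0.0],
    [1.0, 0.0, 0.0],
])

# Bone displacement information for the target expression (2 bones).
bone_disp = np.array([
    [0.0, 0.2, 0.0],
    [0.0, -0.2, 0.0],
])

# Each vertex moves by the weight-blended displacement of its bones;
# the middle vertex, pulled equally up and down, stays put.
deformed = rest + W @ bone_disp
```

The weight matrix is exactly the "degree of influence" described in the text: a weight of 1.0 means the vertex follows that bone rigidly, and fractional weights blend the motion of several bones.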
It should be noted that when the bone displacement information acquisition apparatus provided by the above embodiment acquires bone displacement information, the division into the above functional modules is merely illustrative; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the computer device may be divided into different functional modules to complete all or part of the functions described above. In addition, the bone displacement information acquisition apparatus provided by the above embodiment and the bone displacement information acquisition method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, which is not repeated here.
Fig. 13 shows a schematic structural diagram of a terminal 1300 according to an exemplary embodiment of the present application. The terminal 1300 can be used to execute the steps performed by the computer device in the bone displacement information acquisition method provided by the above method embodiments.
In general, terminal 1300 includes: a processor 1301 and a memory 1302.
Processor 1301 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 1301 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 1301 may also include a main processor and a coprocessor, where the main processor, also referred to as a CPU (Central Processing Unit), processes data in the awake state, and the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 1301 may be integrated with a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, processor 1301 may further include an AI (Artificial Intelligence) processor for handling computational operations related to machine learning.
Memory 1302 may include one or more computer-readable storage media, which may be non-transitory. The memory 1302 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1302 is used to store at least one program code, which is executed by the processor 1301 to implement the bone displacement information acquisition method provided by the method embodiments of the present application.
In some embodiments, the terminal 1300 may further optionally include a peripheral interface 1303 and at least one peripheral. Processor 1301, memory 1302, and peripheral interface 1303 may be connected by a bus or signal line, and each peripheral may be connected to the peripheral interface 1303 via a bus, signal line, or circuit board. Specifically, the peripherals include at least one of: radio frequency circuitry 1304, touch display 1305, camera 1306, audio circuitry 1307, positioning component 1308, and power supply 1309.
Peripheral interface 1303 may be used to connect at least one peripheral associated with I/O (Input/Output) to processor 1301 and memory 1302. In some embodiments, processor 1301, memory 1302, and peripheral interface 1303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1301, the memory 1302, and the peripheral device interface 1303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1304 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1304 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission and converting received electromagnetic signals back into electrical signals. Optionally, the radio frequency circuit 1304 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1304 may communicate with other devices via at least one wireless communication protocol, including but not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1304 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 1305 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display screen 1305 is a touch display screen, it can also capture touch signals on or above its surface; such a touch signal may be input to the processor 1301 as a control signal for processing, and the display 1305 may then also provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display 1305, set on the front panel of terminal 1300; in other embodiments, there may be at least two displays 1305, set on different surfaces of terminal 1300 or in a folded design; in still other embodiments, display 1305 may be a flexible display disposed on a curved or folded surface of terminal 1300. The display 1305 may even be arranged as a non-rectangular irregular figure, that is, an irregularly-shaped screen. The display 1305 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 1306 is used to capture images or video. Optionally, camera assembly 1306 includes a front camera, typically disposed on the front panel of the terminal 1300, and a rear camera, disposed on the rear side of the terminal 1300. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera can be fused with the depth-of-field camera for background blurring, or with the wide-angle camera for panoramic and VR (Virtual Reality) shooting or other fused shooting functions. In some embodiments, camera assembly 1306 may also include a flash, which may be a single-color-temperature or dual-color-temperature flash; a dual-color-temperature flash combines a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 1307 may include a microphone and a speaker. The microphone collects sound waves from the user and the environment, converts them into electrical signals, and inputs them to the processor 1301 for processing or to the radio frequency circuit 1304 for voice communication. For stereo capture or noise reduction, multiple microphones may be provided at different locations of terminal 1300; the microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker converts electrical signals from the processor 1301 or the radio frequency circuit 1304 into sound waves, and may be a conventional diaphragm speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert an electrical signal not only into sound waves audible to humans, but also into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 1307 may also include a headphone jack.
The positioning component 1308 is used to locate the current geographic position of the terminal 1300 to implement navigation or LBS (Location Based Service). The positioning component 1308 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 1309 is used to provide power to various components in terminal 1300. The power source 1309 may be alternating current, direct current, disposable or rechargeable. When the power source 1309 comprises a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
Those skilled in the art will appreciate that the configuration shown in fig. 13 is not intended to be limiting with respect to terminal 1300 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
Fig. 14 is a schematic structural diagram of a server according to an embodiment of the present application. The server 1400 may vary considerably in configuration and performance, and may include one or more processors (CPUs) 1401 and one or more memories 1402, where the memory 1402 stores at least one program code that is loaded and executed by the processor 1401 to implement the methods provided by the above method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output, and may further include other components for implementing device functions, which are not described here.
The server 1400 may be configured to perform the steps performed by the computer device in the bone displacement information obtaining method.
The embodiment of the present application further provides a computer device for acquiring bone displacement information, where the computer device includes a processor and a memory, and the memory stores at least one program code, and the at least one program code is loaded and executed by the processor, so as to implement the operations of the bone displacement information acquiring method of the above embodiment.
The embodiment of the present application further provides a computer-readable storage medium, where at least one program code is stored in the computer-readable storage medium, and the at least one program code is loaded and executed by a processor to implement the operations of the bone displacement information obtaining method of the above embodiment.
The embodiment of the present application further provides a computer program, where the computer program includes at least one program code, and the at least one program code is loaded and executed by a processor to implement the operations of the bone displacement information obtaining method of the foregoing embodiment.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only an alternative embodiment of the present application and should not be construed as limiting the present application, and any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (13)

1. A bone displacement information acquisition method, characterized in that the method comprises:
acquiring vertex displacement information corresponding to the target facial expression;
binding a plurality of face vertexes and a plurality of face skeletons of a target face model respectively according to binding weights corresponding to the vertex displacement information, wherein the binding weights between the face vertexes and the face skeletons represent the influence degree of the displacement information of the face skeletons on the displacement information of the face vertexes;
performing deformation processing on the bound target face model to enable the plurality of facial bones to move and drive the plurality of facial vertexes to move, and acquiring bone displacement information of the target face model when the displacement information of the plurality of facial vertexes is matched with the vertex displacement information;
the acquiring vertex displacement information corresponding to the target facial expression comprises the following steps:
obtaining a hybrid deformer, the hybrid deformer comprising deformation parameters;
binding the hybrid deformer with a template face model;
and deforming the template face model through the bound hybrid deformer to obtain the vertex displacement information.
2. The method according to claim 1, wherein the binding the plurality of facial vertices and the plurality of facial skeletons of the target facial model according to the binding weights corresponding to the vertex displacement information comprises:
respectively acquiring binding weights between a plurality of facial vertexes and a plurality of facial skeletons of the target facial model according to the vertex displacement information;
and respectively binding a plurality of face vertexes and a plurality of face skeletons of the target face model according to the obtained binding weight.
3. The method according to claim 2, wherein the obtaining binding weights between a plurality of facial vertices and a plurality of facial skeletons of the target facial model according to the vertex displacement information comprises:
and processing the vertex displacement information through a deformation solver to obtain the binding weight between a plurality of facial vertexes and a plurality of facial skeletons of the target facial model.
4. The method according to claim 1, wherein before the binding the plurality of facial vertices and the plurality of facial bones of the target facial model according to the binding weights corresponding to the vertex displacement information, the method further comprises:
reading the target face model;
from the plurality of face vertices, a plurality of face bones of the target face model are created, each face bone corresponding to one face vertex.
5. The method of claim 4, wherein said creating a plurality of facial bones of said target facial model from said plurality of facial vertices comprises:
obtaining a bone configuration file, wherein the bone configuration file comprises a plurality of vertex identifications and bone identifications corresponding to the vertex identifications;
and respectively generating a facial skeleton at each facial vertex of the target facial model according to the plurality of vertex identifications and the skeleton identification corresponding to each vertex identification.
6. The method of claim 1, wherein the obtaining vertex displacement information corresponding to the target facial expression further comprises:
building locators on a plurality of face vertices of the template face model;
and, after deformation processing is performed on the template face model according to the deformation parameters corresponding to the target facial expression, obtaining the displacement information of the plurality of established locators to form the vertex displacement information.
7. The method of claim 1, wherein the obtaining vertex displacement information corresponding to the target facial expression further comprises:
carrying out deformation processing on the template face model according to deformation parameters corresponding to the target facial expression;
and processing the deformed template face model through a deformation solver to obtain the vertex displacement information.
8. The method of claim 1, wherein after obtaining bone displacement information for the target face model, the method further comprises:
and according to the bone displacement information, carrying out deformation processing on the target face model to obtain the target face model with the target facial expression.
9. A bone displacement information acquisition apparatus, characterized in that the apparatus comprises:
the first acquisition module is used for acquiring vertex displacement information corresponding to the target facial expression;
the binding module is used for binding a plurality of facial vertexes and a plurality of facial skeletons of a target facial model respectively according to the binding weights corresponding to the vertex displacement information, and the binding weights between the facial vertexes and the facial skeletons represent the influence degree of the displacement information of the facial skeletons on the displacement information of the facial vertexes;
the first deformation processing module is used for carrying out deformation processing on the bound target face model so as to enable the plurality of facial bones to move and drive the plurality of facial vertexes to move;
a second obtaining module, configured to obtain skeleton displacement information of the target face model when the displacement information of the plurality of face vertices matches the vertex displacement information;
the first obtaining module includes:
the deformation processing unit is used for performing deformation processing on the template face model according to the deformation parameters corresponding to the target facial expression to obtain the vertex displacement information;
the deformation processing unit is further used for acquiring a hybrid deformer, wherein the hybrid deformer comprises the deformation parameters; binding the hybrid deformer to the template face model; and deforming the template face model through the bound hybrid deformer to obtain the vertex displacement information.
10. The apparatus of claim 9, wherein the binding module comprises:
a weight obtaining unit, configured to obtain binding weights between a plurality of facial vertices and a plurality of facial skeletons of the target facial model, respectively, according to the vertex displacement information;
and the binding unit is used for binding a plurality of face vertexes and a plurality of face skeletons of the target face model respectively according to the obtained binding weight.
11. The apparatus of claim 10, wherein the weight obtaining unit is further configured to process the vertex displacement information by a deformation solver to obtain binding weights between a plurality of facial vertices and a plurality of facial bones of the target facial model.
12. A computer device, characterized in that it comprises a processor and a memory, in which at least one program code is stored, which is loaded and executed by the processor, to implement the bone displacement information acquisition method according to any one of claims 1 to 8.
13. A computer-readable storage medium, wherein at least one program code is stored in the computer-readable storage medium, and the at least one program code is loaded and executed by a processor to implement the bone displacement information acquisition method according to any one of claims 1 to 8.
CN202010151301.XA 2020-03-06 2020-03-06 Bone displacement information acquisition method, device, equipment and storage medium Active CN111292427B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010151301.XA CN111292427B (en) 2020-03-06 2020-03-06 Bone displacement information acquisition method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111292427A (en) 2020-06-16
CN111292427B (en) 2021-01-01

Family

ID=71030190

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010151301.XA Active CN111292427B (en) 2020-03-06 2020-03-06 Bone displacement information acquisition method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111292427B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112017295B (en) * 2020-08-28 2024-02-09 重庆灵翎互娱科技有限公司 Adjustable dynamic head model generation method, terminal and computer storage medium
CN112090082A (en) * 2020-09-27 2020-12-18 完美世界(北京)软件科技发展有限公司 Facial skeleton processing method and device, electronic equipment and storage medium
CN112101327B (en) * 2020-11-18 2021-01-29 北京达佳互联信息技术有限公司 Training method of motion correction model, motion correction method and device
CN115529500A (en) * 2022-09-20 2022-12-27 中国电信股份有限公司 Method and device for generating dynamic image
CN116311478B (en) * 2023-05-16 2023-08-29 北京百度网讯科技有限公司 Training method of face binding model, face binding method, device and equipment

Citations (1)

Publication number Priority date Publication date Assignee Title
CN108986189A * 2018-06-21 2018-12-11 珠海金山网络游戏科技有限公司 Method and system for real-time multi-person motion capture in three-dimensional animation and live streaming

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN102509333B * 2011-12-07 2014-05-07 浙江大学 Motion-capture-data-driven two-dimensional cartoon expression animation production method
CN104268921A (en) * 2014-09-12 2015-01-07 上海明穆电子科技有限公司 3D face expression control method and system
CN105654537B * 2015-12-30 2018-09-21 中国科学院自动化研究所 Expression cloning method and device for real-time interaction with a virtual character
CN107657650B (en) * 2017-08-18 2021-12-17 深圳市谜谭动画有限公司 Animation model role binding method and system based on Maya software
CN110766776B (en) * 2019-10-29 2024-02-23 网易(杭州)网络有限公司 Method and device for generating expression animation


Also Published As

Publication number Publication date
CN111292427A (en) 2020-06-16

Similar Documents

Publication Publication Date Title
CN111292427B (en) Bone displacement information acquisition method, device, equipment and storage medium
CN109584151B (en) Face beautifying method, device, terminal and storage medium
US20230351663A1 (en) System and method for generating an avatar that expresses a state of a user
US20200387698A1 (en) Hand key point recognition model training method, hand key point recognition method and device
US8933928B2 (en) Multiview face content creation
CN112634416B (en) Method and device for generating virtual image model, electronic equipment and storage medium
CN113822977A (en) Image rendering method, device, equipment and storage medium
CN111369428B (en) Virtual head portrait generation method and device
WO2022205762A1 (en) Three-dimensional human body reconstruction method and apparatus, device, and storage medium
CN111652123B (en) Image processing and image synthesizing method, device and storage medium
CN110136236B (en) Personalized face display method, device and equipment for three-dimensional character and storage medium
CN112927362A (en) Map reconstruction method and device, computer readable medium and electronic device
CN108961375A Method and device for generating a three-dimensional image from a two-dimensional image
CN114332530A (en) Image classification method and device, computer equipment and storage medium
US20210152751A1 (en) Model training method, media information synthesis method, and related apparatuses
CN113657357B (en) Image processing method, image processing device, electronic equipment and storage medium
CN113706678A (en) Method, device and equipment for acquiring virtual image and computer readable storage medium
CN111680758B (en) Image training sample generation method and device
CN113706440A (en) Image processing method, image processing device, computer equipment and storage medium
CN112598780A (en) Instance object model construction method and device, readable medium and electronic equipment
CN110458924B (en) Three-dimensional face model establishing method and device and electronic equipment
CN113436348B (en) Three-dimensional model processing method and device, electronic equipment and storage medium
CN114219001A (en) Model fusion method and related device
CN114170648A (en) Video generation method and device, electronic equipment and storage medium
CN112037305B (en) Method, device and storage medium for reconstructing tree-like organization in image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40024676

Country of ref document: HK

GR01 Patent grant