CN116977605A - Virtual character image model generation method, device and computer equipment - Google Patents

Virtual character image model generation method, device and computer equipment

Info

Publication number
CN116977605A
CN116977605A
Authority
CN
China
Prior art keywords
version
model
initial
dimensional image
vertex
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310145296.5A
Other languages
Chinese (zh)
Inventor
杨凯
尚鸿
石天阳
陈星翰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202310145296.5A priority Critical patent/CN116977605A/en
Publication of CN116977605A publication Critical patent/CN116977605A/en
Pending legal-status Critical Current

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T13/00: Animation
    • G06T13/20: 3D [Three Dimensional] animation
    • G06T13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to a method, a device and computer equipment for generating a virtual character image model. The method comprises the following steps: acquiring a first-version three-dimensional avatar model matched with custom shaping parameters, and acquiring a second-version initial three-dimensional avatar model; determining the vertex difference between each vertex in the second-version initial model and the first-version model, so as to determine the inter-version model difference between the two models; determining target shaping parameters based on that inter-version model difference; and applying the target shaping parameters to the second-version initial model to generate the second-version customized virtual character model. By combining artificial intelligence technology with 3D technology, the method enables rapid migration of face-customization ("face-pinching") parameters when the three-dimensional avatar model is updated to a new version: the new-version avatar model is generated automatically, without requiring the user to redo the customization.

Description

Virtual character image model generation method, device and computer equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, and a computer device for generating an avatar model.
Background
Currently, customizing avatars in applications has become a mature business application. In avatar customization services, the user can freely modify each part of the avatar according to personal preference. For example, the user can precisely adjust the position and size of a part such as the eyes, mouth, or eyebrows to obtain a satisfactory avatar.
With the update iterations of the application version, a user's customized avatar from a previous version may become invalid in a new version, so the user cannot continue using the previously customized avatar in the new version of the application.
Disclosure of Invention
In view of the foregoing, there is a need for a method, apparatus, computer device, computer-readable storage medium, and computer program product for generating an avatar image model that enables accurate reproduction of customized avatar images between different versions.
In one aspect, the present application provides a method of generating an avatar image model. The method comprises the following steps:
Acquiring a first version of three-dimensional image model matched with the custom shaping parameters, and acquiring a second version of initial three-dimensional image model;
determining vertex differences between each vertex in the second version of the initial three-dimensional avatar model and the first version of the three-dimensional avatar model;
determining inter-version model differences of the first version of the three-dimensional avatar model and the second version of the initial three-dimensional avatar model according to the determined vertex differences;
determining target shaping parameters for the initial three-dimensional image model of the second version based on the inter-version model differences;
the target shaping parameters are applied to the second version of the initial three-dimensional avatar model to generate the second version of the custom avatar model.
On the other hand, the application also provides a device for generating the virtual character image model. The device comprises:
the acquisition module is used for acquiring a first version of three-dimensional image model matched with the custom molding parameters and acquiring a second version of initial three-dimensional image model;
a determining module, configured to determine a vertex difference between each vertex in the second version of the initial three-dimensional avatar model and the first version of the three-dimensional avatar model;
The determining module is further configured to determine a model difference between versions of the first version of the three-dimensional avatar model and the second version of the initial three-dimensional avatar model according to the determined vertex difference;
the determining module is further configured to determine a target shaping parameter for the initial three-dimensional image model of the second version based on the inter-version model difference;
and the generation module is used for applying the target shaping parameters to the initial three-dimensional image model of the second version so as to generate the customized virtual character image model of the second version.
On the other hand, the application also provides computer equipment. The computer device comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps of the method for generating the avatar image model when executing the computer program.
In another aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of the method of generating an avatar model described above.
In another aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of the method for generating an avatar image model described above.
With the method, device, computer equipment, storage medium and computer program product for generating an avatar image model, acquiring the first-version three-dimensional avatar model matched with the custom shaping parameters and the second-version initial three-dimensional avatar model fixes the user-customized avatar as the reference for the subsequent processing of the second-version initial model. The vertex difference between each vertex in the second-version initial model and the first-version model is determined, and the inter-version model difference between the two models is then derived from those vertex differences. Whether or not the second-version initial model has been modified or updated at the modeling level relative to the first-version model, comparing the two models in the vertex dimension measures the differences between versions as finely as possible, so the target shaping parameters determined from the inter-version model difference restore the customized appearance more accurately once applied to the second-version initial model. In this way, in the case of a version update or model modification, the user's customized avatar of the first version can be migrated into the second version automatically, without the user manually redoing the custom settings, and the customized avatar is restored and reproduced automatically and quickly.
Drawings
FIG. 1 is an application environment diagram of a method of generating an avatar image model in one embodiment;
FIG. 2 is a schematic diagram of a three-dimensional avatar model with facial regions as an example in one embodiment;
FIG. 3 is a flow diagram of a method of generating an avatar image model in one embodiment;
FIG. 4 is a schematic diagram of a custom character avatar in one embodiment;
FIG. 5 is a schematic diagram of a four-corner (quad) patch with an index order in one embodiment;
FIG. 6A is an interface diagram of a role customization interface in one embodiment;
FIG. 6B is an interface diagram of a character customization interface in another embodiment;
FIG. 6C is an interface diagram of a character customization interface in yet another embodiment;
FIG. 7 is a schematic diagram of generating a second three-dimensional avatar model by superposition of a plurality of preset model templates in one embodiment;
FIG. 8 is a flow diagram of generating a first version of a three-dimensional avatar model in one embodiment;
FIG. 9 is a schematic illustration of bones in Euclidean space in one embodiment;
FIG. 10 is a block diagram of a system for generating avatar image models in one embodiment;
FIG. 11 is a schematic diagram of a similarity measurement module according to an embodiment;
FIG. 12 is a schematic diagram of a point-to-point error calculation module in one embodiment;
FIG. 13 is a schematic diagram of a point-to-face error calculation module in one embodiment;
FIG. 14 is a block diagram illustrating an apparatus for generating an avatar image model in one embodiment;
fig. 15 is an internal structural view of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The method for generating the virtual character image model provided by the embodiment of the application can be applied to an application environment shown in figure 1. Wherein the terminal 102 is connected to the server 104 for communication. The terminal 102 and the server 104 may be directly or indirectly connected through wired or wireless communication, and the present application is not limited herein. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104 or may be located on the cloud or other servers.
In some embodiments, the terminal 102 sends the custom shaping parameters to the server 104, and the server 104 obtains a first version of the three-dimensional avatar model that matches the custom shaping parameters and obtains a second version of the initial three-dimensional avatar model. For each vertex in the second version of the initial three-dimensional avatar model, server 104 determines a vertex difference from the first version of the three-dimensional avatar model, and may determine an inter-version model difference between the first version of the three-dimensional avatar model and the second version of the initial three-dimensional avatar model based on each determined vertex difference. Thus, server 104 may determine the target shaping parameters for the initial three-dimensional visual model of the second version based on the inter-version model differences.
In some embodiments, the server 104 applies the target shaping parameters to the second version of the initial three-dimensional avatar model to generate a second version of the custom avatar model; the server 104 then transmits the generated second version of the customized avatar image model to the terminal 102 for presentation by the terminal 102.
In other embodiments, the server 104 sends the target shaping parameters to the terminal 102, and the terminal 102 applies the target shaping parameters to the stored second version of the initial three-dimensional avatar model, thereby generating a second version of the customized avatar model.
The three-dimensional avatar model may also be called a three-dimensional mesh model (Mesh), a representation of a three-dimensional model commonly used in game making and animation. Three-dimensional avatar models are commonly used to represent the appearance of a game character or animated character. The model comprises a plurality of vertices, which are joined by specific connection relations into patches; a patch is generally a triangle or a quadrilateral. As shown in fig. 2, taking the three-dimensional face model within a three-dimensional avatar model as an example, the face model includes a plurality of vertices V and patches F composed of those vertices.
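The vertex-list-plus-patch-list representation described above can be sketched minimally as follows. This is an illustration of the general mesh structure, not the patent's implementation; the class and method names are invented for the example.

```python
# Minimal sketch (not the patent's code) of the mesh representation
# described above: a vertex list plus a patch list whose entries
# group vertex indices into triangles or quads.

class Mesh:
    def __init__(self, vertices, patches):
        # vertices: list of (x, y, z) positions in model space
        # patches: list of tuples of vertex indices (3 for triangles,
        #          4 for quads); the index order carries meaning
        self.vertices = vertices
        self.patches = patches

    def patch_vertices(self, patch_id):
        """Resolve a patch's index tuple into actual 3D positions."""
        return [self.vertices[i] for i in self.patches[patch_id]]

# A single quad patch made of four vertices.
mesh = Mesh(
    vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
              (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)],
    patches=[(0, 1, 2, 3)],
)
```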
The terminal 102 may be, but not limited to, one or more of various desktop computers, notebook computers, smart phones, tablet computers, internet of things devices, or portable wearable devices, etc., which may be one or more of smart speakers, smart televisions, smart air conditioners, or smart vehicle devices, etc. The portable wearable device may be one or more of a smart watch, a smart bracelet, or a headset device, etc.
The server 104 may be an independent physical server, a server cluster or distributed system formed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Networks), big data, and artificial intelligence platforms.
Among these, artificial intelligence (AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results. Artificial intelligence software technology mainly comprises computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning, among other directions. Computer Vision (CV) technology includes image processing, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, and related technologies.
With the research and progress of artificial intelligence technology, the method for generating an avatar image model provided by the embodiments of the application can be applied in many fields, such as smart home, smart wearable devices, virtual assistants, smart speakers, smart marketing, autonomous driving, drones, robots, smart medical treatment, and smart customer service, where an avatar can be generated based on the three-dimensional avatar model to provide intelligent services and an immersive experience.
In some embodiments, the terminal may be loaded with applications (APPs) or applications with video playing capabilities to provide application services for users, including one or more of applications that traditionally need to be installed separately and applet applications that can be used without downloading and installing, such as browser clients, web page clients, or game clients. While providing application services for users, the terminal can initiate service calls to the server, and the server runs the corresponding business processes. For example, if the terminal is loaded with a game client and the user performs custom adjustment of the displayed three-dimensional avatar model in the game client, the server runs the relevant game process to display the customized avatar. By providing this function of custom adjustment of the three-dimensional avatar model, a highly personalized role customization service can be offered to every user.
In some embodiments, as shown in fig. 3, a method of generating an avatar image model is provided, which may be applied to a terminal or a server, or may be cooperatively performed by the terminal and the server. The following description will take an example in which the method is applied to a computer device, which may be a terminal or a server. The method comprises the following steps:
step S302, a first version of three-dimensional image model matched with the custom molding parameters is obtained, and a second version of initial three-dimensional image model is obtained.
After the user performs the self-defined adjustment on the initial three-dimensional image model of the first version, the computer equipment obtains the self-defined shaping parameters based on the adjustment operation of the user. For example, the terminal displays a role customization interface through an application program, a plurality of parameter control bars are provided in the role customization interface, a user adjusts the parameter control bars, and the terminal obtains the custom shaping parameters based on the adjustment operation of the user. Or when the computer equipment is a server, the self-defined shaping parameters uploaded by the terminal can be obtained.
The custom shaping parameters are the shaping parameters generated when the user makes shaping adjustments to the appearance of the first version of the initial three-dimensional avatar model. The process of changing the appearance presented by the three-dimensional avatar model may be referred to as shaping adjustment. The shaping parameters characterize how far the adjusted model deviates from the standard appearance, which is the default appearance of the initial model once modeling is complete. For example, fig. 4(a) shows the standard appearance displayed by default for the initial three-dimensional avatar model; after the user's custom adjustment, the computer device performs a shaping adjustment on the model, for example changing the positions and sizes of the eyes and eyebrows, to generate the user-customized avatar shown in fig. 4(b).
The custom shaping parameters include one or more shaping parameters that may correspond to different avatar image areas or regions, such as eye regions corresponding to facial regions, or nail regions corresponding to hand regions, respectively, and the like.
When the custom modeling parameters are applied to the three-dimensional image model, the appearance presented by the three-dimensional image model is correspondingly changed, so that the virtual character image customized by the user is presented. Further, the first version of the three-dimensional image model matched with the custom shaping parameters refers to the customized three-dimensional image model obtained after the initial three-dimensional image model of the first version is shaped and adjusted by the custom shaping parameters.
In an actual scenario, the version update iteration of the application program, or modification, reconstruction, and other changes of the three-dimensional image model may cause version variation of the three-dimensional image model, and the old version may be referred to as a first version, and the new version may be referred to as a second version. Or the second version is an offline version to be tested, and the custom shaping parameters aiming at the three-dimensional image model in the first version are required to be applied to the three-dimensional image model in the second version so as to test the three-dimensional image model in the second version.
In some embodiments, the differences between the first version and the second version may be embodied as differences between the different versions of the initial three-dimensional visual model. In the modeling stage, the appearance of the three-dimensional image model is affected by model parameters, and modification of the model parameters can lead to version variation of the three-dimensional image model. Wherein the model parameters include, but are not limited to, one or more of bone parameters, fusion deformation parameters, or skin weights, etc. The fusion deformation parameter may be, for example, a Blendshape (expression animation) parameter. For example, the modeler adjusts the skeletal parameters and skin weights of the model to cause a discrepancy between the initial three-dimensional avatar model of the first version and the initial three-dimensional avatar model of the second version.
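Among the model parameters mentioned above, fusion deformation (blendshape) parameters follow a standard formula: each shaped vertex is the base vertex plus a weighted sum of per-target offsets. The sketch below illustrates that standard formula only; it is an assumption for illustration, not the patent's implementation, and the names are invented.

```python
# Hedged sketch of the standard blendshape (fusion deformation)
# formula: v = b + sum_k w_k * (t_k - b), where b is the base vertex,
# t_k the k-th target vertex, and w_k the k-th weight.

def apply_blendshapes(base, targets, weights):
    # base: list of (x, y, z); targets: list of vertex lists, each
    # the same length as base; weights: one scalar per target.
    out = []
    for i, b in enumerate(base):
        v = list(b)
        for t, w in zip(targets, weights):
            for axis in range(3):
                v[axis] += w * (t[i][axis] - b[axis])
        out.append(tuple(v))
    return out

base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
smile = [(0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]  # target lifts both vertices
shaped = apply_blendshapes(base, [smile], [0.5])
# shaped == [(0.0, 0.5, 0.0), (1.0, 0.5, 0.0)]
```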
In addition, changes in the topology of the three-dimensional avatar model may also cause version changes. The topology includes, but is not limited to, vertices and patches composed of multiple vertices. Each vertex on the model is recorded in a vertex list and has its own index, recorded in an index list; the vertex indices indicate which vertices make up each patch. As shown in fig. 5, vertices V1, V2, V3, V4 form a four-corner patch A, vertices V1', V2', V3', V4' form a four-corner patch B, and the numerals represent the index order of the vertices. When the vertices constituting patch A and those constituting patch B coincide completely in index order, the two patches are identical; two vertices with the same index order then have the same physical semantics, for example vertex V1 and vertex V1' both correspond to the corner position of the eye. Thus, a change in topology may appear as a change in the number of vertices, the number of patches, or the index order of the vertices.
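The topology-change criterion above can be made concrete with a small check: two meshes share topology only if vertex counts, patch counts, and per-patch index order all match. This is an illustrative assumption, not code from the patent.

```python
# Illustrative topology comparison: index order matters because each
# index position carries physical semantics (e.g. an eye corner), so
# (0, 1, 2, 3) and (1, 0, 2, 3) describe different topologies even
# over the same vertex set.

def same_topology(patches_a, n_verts_a, patches_b, n_verts_b):
    if n_verts_a != n_verts_b or len(patches_a) != len(patches_b):
        return False
    return all(pa == pb for pa, pb in zip(patches_a, patches_b))

quad_a = [(0, 1, 2, 3)]
quad_b = [(0, 1, 2, 3)]
quad_c = [(1, 0, 2, 3)]  # same vertices, different index order
assert same_topology(quad_a, 4, quad_b, 4)
assert not same_topology(quad_a, 4, quad_c, 4)
```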
When the application provides the user with the customization function for the three-dimensional avatar model, in order to ensure that the user's adjustments stay within a reasonable range, the user does not adjust the model parameters of the model directly. Instead, a set of shaping parameters indirectly changes the model parameters, thereby changing the model's appearance and realizing a customized avatar based on the first-version three-dimensional avatar model. To translate a numerical change of a shaping parameter into an effect on the model, a parameter value mapping relation must be established between each value within the numerical range of a parameter control bar and the model parameters of the three-dimensional avatar model. Further, in some embodiments, the difference between the first version and the second version may also result from an update of this parameter value mapping relation.
Specifically, the computer device obtains a first version of the three-dimensional visual model that matches the custom shaping parameters, comprising: the computer equipment obtains the self-defined modeling parameters and applies the self-defined modeling parameters to the three-dimensional image model of the first version to obtain the three-dimensional image model of the first version matched with the self-defined modeling parameters. The customized shaping parameters are obtained by the user performing customized adjustment on the initial three-dimensional image model of the first version.
When the computer device is a terminal, the terminal can acquire the initial three-dimensional image model of the first version from the cloud server, download the self-defined shaping parameters backed up by the cloud server, and then apply the self-defined shaping parameters to the initial three-dimensional image model of the first version locally so as to generate the three-dimensional image model of the first version matched with the self-defined shaping parameters. Or, the terminal may directly obtain the three-dimensional image model of the first version, which is generated by the cloud server and matches with the custom shaping parameters. The computer device obtains a second version of the initial three-dimensional visual model comprising: and the terminal downloads the initial three-dimensional image model of the second version from the cloud server or locally extracts the initial three-dimensional image model of the second version.
When the computer device is a server, the server may obtain the custom shaping parameters uploaded by the terminal and apply the custom shaping parameters to the stored initial three-dimensional image model of the first version, thereby generating a three-dimensional image model of the first version that matches the custom shaping parameters. The computer device obtains a second version of the initial three-dimensional visual model comprising: the server locally extracts a second version of the initial three-dimensional visual model.
Step S304: for each vertex in the second version of the initial three-dimensional avatar model, determine its vertex difference from the first version of the three-dimensional avatar model.
As stated earlier, the differences between the first version and the second version may result from changes in the topology of the three-dimensional avatar model. Therefore, in the embodiments of the application, the similarity measurement is performed on a per-vertex basis, so that the subsequently generated second-version model fits the customized first-version model as closely as possible.
Wherein the vertex differences are used to reflect differences between the second version of the initial three-dimensional avatar model and the first version of the three-dimensional avatar model from a topology level. Vertex differences may be measured by differences in dimensions, for example, spatial location, index order, and the like. The spatial position of the vertex can be represented by the three-dimensional coordinates of the vertex in the world coordinate system or model space.
Because the topology is, for example, vertices or patches, in some embodiments, the computer device determines vertex differences for each vertex in the second version of the initial three-dimensional avatar model from the first version of the three-dimensional avatar model, respectively, including: the computer device determining respective vertices on the second version of the initial three-dimensional visual model; the computer device determining respective vertices on the first version of the three-dimensional visual model; for each vertex in the second version of the initial three-dimensional visual model, the computer device determines a vertex difference for each vertex from each vertex in the first version of the three-dimensional visual model.
Alternatively, for each vertex in the second version of the initial three-dimensional avatar model, the computer device may also find a matching vertex in the first version of the three-dimensional avatar model and then determine the vertex difference between the vertex in the second version of the initial three-dimensional avatar model and the vertex matching the vertex.
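The matched-vertex alternative above can be realized as a nearest-neighbor (point-to-point) search. The sketch below assumes Euclidean distance as the vertex-difference measure, one common choice; the patent does not fix the metric, and the function names are illustrative.

```python
# Illustrative point-to-point vertex difference: for a second-version
# vertex, find the nearest first-version vertex and take the
# Euclidean distance between them as the vertex difference.

import math

def nearest_vertex_difference(v, first_version_vertices):
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(dist(v, u) for u in first_version_vertices)

first = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
d = nearest_vertex_difference((1.0, 1.0, 0.0), first)
# nearest vertex is (1.0, 0.0, 0.0), so the difference is 1.0
```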
In other embodiments, the computer device separately determines vertex differences for each vertex in the second version of the initial three-dimensional avatar model from the first version of the three-dimensional avatar model, comprising: the computer device determining respective vertices on the second version of the initial three-dimensional visual model; the computer device determining each patch on the first version of the three-dimensional visual model; for each vertex in the second version of the initial three-dimensional visual model, the computer device determines a vertex difference for each vertex from each patch in the first version of the three-dimensional visual model.
Alternatively, for each vertex in the second version of the initial three-dimensional avatar model, the computer device may also find a matching patch in the first version of the three-dimensional avatar model and then determine the vertex difference between the vertex in the second version of the initial three-dimensional avatar model and the matching patch.
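One common way to realize the vertex-to-patch comparison above is a point-to-plane distance against the plane of a triangular patch, as echoed by the point-to-face error calculation module in fig. 13. The sketch below is an assumption about that computation, not the patent's code.

```python
# Hedged sketch of a point-to-patch (point-to-plane) difference:
# the distance from vertex p to the plane spanned by the triangular
# patch (a, b, c), computed via the patch normal.

def point_to_plane_distance(p, a, b, c):
    def sub(u, v):
        return tuple(x - y for x, y in zip(u, v))
    def cross(u, v):
        return (u[1] * v[2] - u[2] * v[1],
                u[2] * v[0] - u[0] * v[2],
                u[0] * v[1] - u[1] * v[0])
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    n = cross(sub(b, a), sub(c, a))       # (unnormalized) patch normal
    norm = dot(n, n) ** 0.5
    return abs(dot(sub(p, a), n)) / norm  # |projection onto normal|

# Triangle in the z = 0 plane; a point 2 units above it.
d = point_to_plane_distance((0.2, 0.2, 2.0),
                            (0.0, 0.0, 0.0),
                            (1.0, 0.0, 0.0),
                            (0.0, 1.0, 0.0))
# d == 2.0
```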
Step S306, determining the model difference between the versions of the first version three-dimensional image model and the second version initial three-dimensional image model according to the determined vertex difference.
Inter-version model differences are used to characterize model differences of two different versions of a three-dimensional visual model from a modeling level. In some embodiments, the computer device determines a model difference between versions of the first version of the three-dimensional avatar model and the second version of the initial three-dimensional avatar model based on the determined vertex difference, comprising: the computer device determines inter-version model differences for the first version of the three-dimensional avatar model and the second version of the initial three-dimensional avatar model based on the vertex differences determined for all vertices.
The computer device determines the inter-version model difference based on the vertex differences determined for all vertices in ways including, but not limited to: calculating one or more of a mean, a variance, a mean squared error, a sum of squares, or the like over the vertex differences, and determining the inter-version model difference from the calculated result. For example, the computer device calculates the mean squared error over the vertex differences and takes that mean squared error as the inter-version model difference between the first version of the three-dimensional avatar model and the second version of the initial three-dimensional avatar model.
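The mean-squared-error aggregation named above can be sketched as follows. Treating each vertex difference as a 3-D offset vector and squaring its norm is one reasonable reading; the text does not fix the exact formula.

```python
import numpy as np

def inter_version_model_difference(vertex_diffs):
    """Aggregate per-vertex offset vectors into one scalar inter-version
    model difference: the mean of the squared offset norms."""
    vertex_diffs = np.asarray(vertex_diffs, dtype=float)
    return float(np.mean(np.sum(vertex_diffs ** 2, axis=1)))

diffs = np.array([[0.1, 0.0, 0.0], [0.0, 0.2, 0.0]])
mse = inter_version_model_difference(diffs)
print(mse)  # → 0.025  (mean of 0.01 and 0.04)
```

The mean, variance, or sum of squares mentioned in the text would be one-line variations of the same reduction.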
Step S308, determining target shaping parameters of the initial three-dimensional image model for the second version based on the model differences between the versions.
For the first version of the initial three-dimensional avatar model, applying the custom shaping parameters generates the first version of the three-dimensional avatar model, which, when rendered, presents the user-customized virtual character image. For the second version of the initial three-dimensional avatar model, which differs from the first version, the computer device needs to compute target shaping parameters such that, once applied to the second version of the initial three-dimensional avatar model, they restore or reproduce the user-customized virtual character image.
After deriving the inter-version model differences for the first version of the three-dimensional avatar model and the second version of the initial three-dimensional avatar model, in some embodiments, the computer device determines target shaping parameters for the second version of the initial three-dimensional avatar model based on the inter-version model differences, comprising: based on the model difference between the versions, the computer equipment adjusts the initial shaping parameters of the initial three-dimensional image model of the second version, and the adjusted initial shaping parameters are the target shaping parameters.
The initial shaping parameters are shaping parameters preset after modeling of the three-dimensional avatar model is completed; a three-dimensional avatar model carrying the initial shaping parameters presents the default character appearance.
In other embodiments, based on the inter-version model differences, the computer device determines target shaping parameters for the initial three-dimensional image model of the second version, comprising: based on the model differences between versions, the computer device adjusts the custom shaping parameters so that the adjusted shaping parameters are applicable to the initial three-dimensional character model of the second version, thereby restoring the user-customized virtual character image. The adjusted shaping parameters are then the target shaping parameters.
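One way to "adjust the shaping parameters based on the inter-version model difference" is iterative minimization, sketched here under strong simplifying assumptions: the second-version model's vertices are treated as a linear function of its shaping parameters, and gradient descent drives the mean-squared vertex difference to the first-version customized shape toward zero. The linear model, dimensions, and names are illustrative; the patent does not specify the adjustment algorithm.

```python
import numpy as np

# Toy stand-in for the second-version model: its (flattened) vertices
# are base + basis @ params. The target is the first-version customised
# model's vertices. All of this setup is illustrative.
base = np.zeros(6)
basis = np.array([[1, 0], [0, 1], [1, 1], [1, -1], [2, 0], [0, 2]], float)
target = base + basis @ np.array([0.7, -0.3])   # user-customised shape

params = np.zeros(2)        # start from the initial shaping parameters
lr = 0.1
for _ in range(300):
    residual = base + basis @ params - target      # per-coordinate offsets
    grad = 2 * basis.T @ residual / len(residual)  # d(MSE)/d(params)
    params -= lr * grad     # shrink the inter-version model difference

print(np.round(params, 3))  # recovers the customised parameters
```

After convergence, `params` are the target shaping parameters: applying them to the (toy) second-version model reproduces the customized shape.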
Step S310, applying the target shaping parameters to the second version of the initial three-dimensional avatar model to generate a second version of the customized avatar model.
After obtaining the target shaping parameters for the second version of the initial three-dimensional avatar model, the computer device can apply them to that model to adaptively adjust its appearance, automatically generating the second version of the customized avatar model without requiring the user to make manual adjustments again. The second version of the customized avatar model and the first version of the three-dimensional avatar model matching the custom shaping parameters are identical, or as similar as possible, in appearance, so that the customized avatar is restored.
Wherein the computer device applies the target shaping parameters to the second version of the initial three-dimensional avatar model, comprising: and the computer equipment performs shaping adjustment on the initial three-dimensional image model of the second version according to the target shaping parameters, or the computer equipment sends the target shaping parameters to other equipment so as to enable the other equipment to perform shaping adjustment on the initial three-dimensional image model of the second version according to the target shaping parameters.
For example, when the computer device is a terminal, the terminal may extract the initial three-dimensional image model of the second version from the cloud server or locally, and perform shaping adjustment on the initial three-dimensional image model of the second version according to the determined target shaping parameters, so as to obtain the customized avatar image model of the second version.
For another example, when the computer device is a server, the server may perform shaping adjustment on the second version of the initial three-dimensional avatar model according to the determined target shaping parameter, generate a second version of the customized avatar model, and send the generated second version of the customized avatar model to the terminal; and the terminal receives the customized virtual character image model of the second version and displays the customized virtual character image model.
For another example, the server may send the determined target shaping parameter to the terminal, and after the terminal receives the target shaping parameter, the terminal performs shaping adjustment on the initial three-dimensional character model of the second version according to the target shaping parameter, so as to obtain the customized virtual character image model of the second version. In the process that the terminal performs shaping adjustment on the initial three-dimensional image model of the second version according to the target shaping parameters, the initial three-dimensional image model of the second version can be downloaded from a cloud server in advance or can be extracted from local.
In the above avatar model generation method, by obtaining the first version of the three-dimensional avatar model that matches the custom shaping parameters and obtaining the second version of the initial three-dimensional avatar model, the subsequent processing of the second version of the initial three-dimensional avatar model can be determined with the user-customized avatar fixed. The vertex difference between each vertex in the second version of the initial three-dimensional avatar model and the first version of the three-dimensional avatar model is determined, and the inter-version model difference between the two models is then determined from those vertex differences. Regardless of whether the second version of the initial three-dimensional avatar model has been modified or updated at the modeling level relative to the first version, comparing the two models in the vertex dimension captures the differences between versions as finely as possible, so the target shaping parameters determined from the inter-version model difference restore the customized avatar more accurately when applied to the second version of the initial three-dimensional avatar model. In this way, in the case of a version update or model modification, the customized avatar of the first version is automatically migrated to the second version without the user manually redoing the custom settings, and the user-customized avatar is restored and reproduced automatically and quickly.
In some embodiments, obtaining a first version of a three-dimensional avatar model that matches the custom shaping parameters includes: obtaining a custom shaping parameter; determining target model parameters of the three-dimensional image model of the first version based on a preset parameter value mapping relation between the self-defined shaping parameters and model parameters of the three-dimensional image model of the first version; and performing shaping adjustment on the initial three-dimensional image model of the first version by referring to the target model parameters to obtain the three-dimensional image model of the first version matched with the self-defined shaping parameters.
As described above, when a user is provided with the function of customizing the three-dimensional avatar model, the user cannot directly adjust the model parameters of the three-dimensional avatar model; this ensures that custom adjustments stay within a reasonable range. Taking a game application as an example, a game planner formulates a numerical planning table from the model parameters of the three-dimensional avatar model; the table records the parameter value mapping relationship between each value in the numerical range of a parameter control bar and the model parameters of the three-dimensional avatar model. The change in the mapping relationship between the first version and the second version may then be an update of this numerical planning table.
As shown in fig. 6A to 6B, the shaping adjustment function provided by the computer device through the application program may, for example, be a character customization interface providing three parameter control bars P1, P2 and P3. Each parameter control bar has a preset adjustable numerical range, and each selectable value in a control bar corresponds to a specific value of a model parameter of the three-dimensional avatar model. A parameter control bar may be presented in a threshold-bar configuration as shown in fig. 6A, or in a slider-bar configuration as shown in fig. 6B. When the user moves a parameter control bar, the displayed character image changes accordingly. For example, as shown in fig. 6B and 6C, when the user selects the maximum value of the parameter control bar P1 (fig. 6B), the eyes of the three-dimensional avatar model appear at their largest; when the user selects the minimum value of P1 (fig. 6C), the eyes appear at their smallest, and so on.
It is to be understood that the above-mentioned manner of performing the shaping adjustment through man-machine interaction is merely an example, and the shaping adjustment may be appropriately adjusted according to the actual situation in a specific application scenario, for example, the computer device may further provide a plurality of initial shaping parameters through an application program, and the user may select one or more shaping parameters from the plurality of initial shaping parameters, so that the computer device obtains the customized shaping parameters. It should be clear to those skilled in the art that reasonable modifications and appropriate adjustments to the human-computer interaction described above are within the scope of the present application.
Specifically, after the computer device obtains the custom shaping parameter, according to a parameter value mapping relation preset between the custom shaping parameter and a model parameter of the three-dimensional image model of the first version, the custom shaping parameter is mapped into a specific parameter value of the model parameter, and the specific parameter value is the target model parameter of the three-dimensional image model of the first version.
The model parameters of the three-dimensional avatar model include, but are not limited to, one or more of bone parameters, fusion deformation parameters, or skin weights. After modeling of the three-dimensional avatar model is completed, its model parameters serve as the initial model parameters.
In some embodiments, a parameter value mapping relationship between the plurality of shaping parameters and initial model parameters of the first version of the three-dimensional avatar model is preset. The custom shaping parameter is a selected parameter value of the plurality of shaping parameters. After the computer equipment obtains the self-defined shaping parameters, the target model parameters of the three-dimensional image model of the first version can be determined according to the parameter value mapping relation between the preset shaping parameters and the initial model parameters.
For example, according to the parameter value mapping relationship, the computer device determines the model parameters x1 of the three-dimensional avatar model of the first version corresponding to the custom shaping parameters { "nose", "30" }, determines the model parameters x2 of the three-dimensional avatar model of the first version corresponding to the custom shaping parameters { "eyes", "15" }, and so on.
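The parameter-value mapping can be pictured as a table lookup. The table contents, key structure, and model-parameter names below are entirely hypothetical, standing in for the planner's numerical planning table:

```python
# Hypothetical slice of the planner's value-mapping table: each
# (control name, slider value) pair maps to concrete first-version
# model-parameter values. Real tables would cover whole value ranges.
VALUE_MAP_V1 = {
    ("nose", 30): {"bone_scale_nose": 1.15},
    ("eyes", 15): {"blendshape_eye_big": 0.5},
}

def target_model_params(custom_params):
    """Map custom shaping parameters to first-version model parameters."""
    out = {}
    for name, value in custom_params:
        out.update(VALUE_MAP_V1[(name, value)])
    return out

mapped = target_model_params([("nose", 30), ("eyes", 15)])
print(mapped)
```

A version update that changes the mapping relationship would correspond to shipping a new table (`VALUE_MAP_V2`) while the user's stored custom shaping parameters stay unchanged.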
Therefore, the computer equipment can perform shaping adjustment on the initial three-dimensional image model of the first version according to the target model parameters determined by the self-defined shaping parameters, and the three-dimensional image model of the first version matched with the self-defined shaping parameters is obtained. The computer equipment performs shaping adjustment on the initial three-dimensional image model of the first version according to the target model parameters, and the method comprises the following steps: shaping and adjusting a bone model in the initial three-dimensional image model of the first version according to the bone parameters; performing shaping adjustment on the deformation degree in the initial three-dimensional image model of the first version according to the fusion deformation parameters; or, shaping adjustments are made to the skin in the first version of the initial three-dimensional visual model according to skin weights, and so on.
In the above embodiment, the adjustment of the three-dimensional image model on the appearance by the user is converted into the adjustment of the model parameters of the three-dimensional image model by the preset parameter value mapping relation, so as to present the customized avatar image, which can ensure that the generated avatar image does not have a strange appearance shape on one hand, and save the shaping adjustment operation of the user in the form of numerical values on the other hand, under the condition of version iteration or update, the reduction of the customized avatar image can be realized by the customized shaping parameters with extremely small data quantity without saving large-scale model data.
As stated above, the model parameters of the three-dimensional avatar model include bone parameters and fusion deformation parameters, so in some embodiments the target model parameters include target bone parameters and target fusion deformation parameters. The bone parameters comprise translation, rotation and scaling transform coefficients, in Euclidean space, for each bone constituting the three-dimensional avatar model. The fusion deformation parameters comprise a weight corresponding to each of a plurality of preset model templates.
In some embodiments, shaping the first version of the initial three-dimensional avatar model with reference to the target model parameters to obtain a first version of the three-dimensional avatar model that matches the custom shaping parameters, comprising: according to the target bone parameters, sequentially performing bone adjustment and adjustment based on skin weight on the initial three-dimensional image model of the first version to obtain a first three-dimensional image model; based on the first three-dimensional avatar model, a first version of the three-dimensional avatar model that matches the custom shaping parameters is determined.
Specifically, the computer device performing bone adjustment on the first version of the initial three-dimensional avatar model according to the target bone parameters includes: adjusting one or more of the position or size of each bone constituting the three-dimensional avatar model. When the position or size of a bone changes, the skin surface attached to that bone should change with it. Therefore, after adjusting the bones, the computer device performs an adjustment based on the skin weights to adjust the shape of the skin surface of the three-dimensional avatar model.
In some embodiments, the computer device determining the first version of the three-dimensional avatar model that matches the custom shaping parameters based on the first three-dimensional avatar model includes: the computer device using the first three-dimensional avatar model as the first version of the three-dimensional avatar model that matches the custom shaping parameters. Alternatively, in other embodiments, the computer device may make additional adjustments on top of the generated first three-dimensional avatar model to obtain the first version of the three-dimensional avatar model that matches the custom shaping parameters. The additional adjustment may be a preset adjustment of a preset region or part in a preset manner, such as uniformly enlarging or reducing the face region, or changing the color of the three-dimensional avatar model.
In other embodiments, performing shaping adjustment on the first version of the initial three-dimensional avatar model with reference to the target model parameters to obtain the first version of the three-dimensional avatar model that matches the custom shaping parameters includes: determining a second three-dimensional avatar model obtained by fusion under the influence of the weights, based on the weight of each of the plurality of preset model templates in the target fusion deformation parameters; and determining, based on the second three-dimensional avatar model, the first version of the three-dimensional avatar model that matches the custom shaping parameters.
For the first version of the initial three-dimensional image model, the computer device obtains a plurality of preset model templates which are preset, and each preset model template presents a preset appearance of the first version of the three-dimensional image model. For example, the preset model template a presents a fat face or a fat body, the preset model template B presents a lean face or a lean body, and so on. By overlapping a plurality of preset model templates to different degrees, various appearance shapes can be realized. For this reason, the target fusion deformation parameter includes a weight corresponding to each preset model template, where the weight is used to represent a specific gravity occupied by the corresponding preset model template in the stacking process of the plurality of preset model templates.
Specifically, the computer equipment determines the specific gravity of each preset model template in the superposition process of a plurality of preset model templates based on the weight of each preset model template in the target fusion deformation parameters, and superimposes each preset model template according to the specific gravity of each preset model template, so as to obtain a second three-dimensional image model fused under the influence of each weight. The mode of superposing the preset model templates includes, but is not limited to, linear superposition, nonlinear superposition or the like.
For example, as shown in fig. 7, if the preset model template a corresponds to the weight a, and the preset model template B corresponds to the weight B, the computer device superimposes the two preset model templates based on the weights of the preset model template a and the preset model template B, so as to obtain the second three-dimensional image model.
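The weighted superposition of preset model templates can be sketched as follows, using linear superposition (one of the options the text names) over tiny vertex arrays standing in for the "fat" template A and "thin" template B:

```python
import numpy as np

def blend_templates(templates, weights):
    """Linearly superimpose preset model templates (vertex arrays)
    according to their fusion-deformation weights."""
    weights = np.asarray(weights, dtype=float)
    return np.tensordot(weights, np.asarray(templates, dtype=float), axes=1)

# Template A ("fat") and template B ("thin") as two-vertex toy meshes.
tmpl_a = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
tmpl_b = np.array([[3.0, 0.0, 0.0], [0.0, 3.0, 0.0]])
blended = blend_templates([tmpl_a, tmpl_b], [0.25, 0.75])
print(blended)  # each vertex lands 3/4 of the way toward template B
```

A nonlinear superposition would replace the weighted sum with a different combination rule; the text leaves that choice open.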
In some embodiments, the computer device determining the first version of the three-dimensional avatar model that matches the custom shaping parameters based on the second three-dimensional avatar model includes: the computer device using the second three-dimensional avatar model as the first version of the three-dimensional avatar model that matches the custom shaping parameters. Alternatively, in other embodiments, the computer device may make additional adjustments on top of the generated second three-dimensional avatar model to obtain the first version of the three-dimensional avatar model that matches the custom shaping parameters.
In still other embodiments, performing shaping adjustment on the first version of the initial three-dimensional avatar model with reference to the target model parameters to obtain the first version of the three-dimensional avatar model that matches the custom shaping parameters includes: sequentially performing bone adjustment and skin-weight-based adjustment on the first version of the initial three-dimensional avatar model according to the target bone parameters to obtain a first three-dimensional avatar model; determining a second three-dimensional avatar model obtained by fusion under the influence of the weights, based on the weight of each of the plurality of preset model templates in the target fusion deformation parameters; and superposing the first three-dimensional avatar model and the second three-dimensional avatar model to obtain the first version of the three-dimensional avatar model that matches the custom shaping parameters.
In one aspect, the computer device first performs bone adjustment on the first version of the initial three-dimensional avatar model, adjusting each bone of the model, and then performs a skin-weight-based adjustment on the bone-adjusted model to obtain the first three-dimensional avatar model. In another aspect, the computer device superimposes the preset model templates according to the weights in the target fusion deformation parameters to obtain the second three-dimensional avatar model. The computer device then superimposes the first three-dimensional avatar model and the second three-dimensional avatar model to obtain the first version of the three-dimensional avatar model that matches the custom shaping parameters. The manner in which the computer device superimposes the two models includes, but is not limited to, linear superposition, nonlinear superposition, or the like.
Illustratively, as shown in fig. 8, the computer device obtains the target skeleton parameter and the target fusion deformation parameter according to the custom shaping parameter and the preset parameter mapping relation. In one aspect, a computer device first sequentially performs bone adjustment and skin adjustment on a first version of an initial three-dimensional visual model to obtain a first three-dimensional visual model. On the other hand, the computer equipment superimposes each preset model template according to the weight in the target fusion deformation parameter, so that a second three-dimensional image model is obtained. Finally, the computer equipment superimposes the first three-dimensional image model and the second three-dimensional image model to obtain a three-dimensional image model of the first version.
In the process of sequentially performing bone adjustment and skin adjustment based on the target bone parameters, the position of a bone may be affected by another bone connected to it. For example, as shown in fig. 9, for the arm of a human body, the upper-arm bone is connected to the torso through joint 1, the forearm bone is connected to the upper-arm bone through joint 2, and the hand bone is connected to the forearm bone through joint 3. The position of the forearm bone is thus affected by the position of the upper-arm bone, and the position of the hand bone by that of the forearm bone. Accordingly, each bone corresponds to a bone transformation matrix that expresses which bones affect it, and to what extent. For example, with Mat() denoting the transformation matrix of a bone, Mat(hand) = Mat(hand part) × Mat(forearm part) × Mat(upper-arm part).
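The chained bone transforms can be sketched as a product of 4×4 homogeneous matrices. This sketch uses the column-vector convention, where the world matrix multiplies parent-first (the reverse of the row-vector order written in the text); the joint offsets are made-up numbers:

```python
import numpy as np

def translation(x, y, z):
    """4x4 homogeneous translation matrix (column-vector convention)."""
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

# Each local matrix positions a bone relative to its parent; the world
# matrix of the hand is the product down the chain torso -> hand.
upper_arm = translation(0.0, 1.0, 0.0)   # joint 1 -> joint 2
forearm   = translation(0.0, 0.8, 0.0)   # joint 2 -> joint 3
hand      = translation(0.0, 0.3, 0.0)   # joint 3 -> hand origin

world_hand = upper_arm @ forearm @ hand
print(world_hand[:3, 3])  # hand origin accumulates all parent offsets
```

Moving the upper-arm matrix moves the forearm and hand along with it, which is exactly the dependency the text describes.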
Furthermore, since the bone parameters express the transform coefficients of the bones in Euclidean space, while the vertices are typically placed in the world coordinate system, the computer device needs to convert the positions of the bones from Euclidean space into the world coordinate system when performing the skin-weight-based adjustment. In this case, the computer device also needs to adjust the shape of the skin surface of the three-dimensional avatar model based on the coordinate-system transformation matrix and the bone transformation matrix of each bone, in combination with the skin weights.
In the above embodiment, the shaping adjustment performed on the first version of the initial three-dimensional avatar model through the target bone parameters and the target fusion deformation parameters in the target model parameters accurately presents the virtual character image the user wants. Moreover, in the case of version iteration or update, only the parameter data of the target model parameters obtained by mapping the custom shaping parameters need be stored, not the adjusted model data, which greatly reduces the consumption of storage resources.
In some embodiments, according to the target bone parameters, bone adjustment and adjustment based on skin weights are sequentially performed on the first version of the initial three-dimensional avatar model to obtain a first three-dimensional avatar model, including: obtaining a transformation coefficient of each bone in the target bone parameters; adjusting each bone in the initial three-dimensional image model of the first version based on the transformation coefficient to obtain each bone after transformation; and performing skin adjustment on the initial three-dimensional image model of the first version according to the transformed bones and preset skin weights corresponding to the bones respectively to obtain a first three-dimensional image model.
Specifically, the computer device obtains a transformation coefficient for each bone in the target bone parameters, the transformation coefficient representing a translation amount, a rotation amount, and a scaling amount of the bone in the three-dimensional visual model after the user has undergone the custom shaping adjustment. Based on the transformation coefficients of each bone, the computer device adjusts the corresponding bone in the first version of the initial three-dimensional visual model to obtain transformed bones. The computer device adjusts the bone including, but not limited to, one or more of translation, rotation, or scaling.
When the position or size of a bone changes, the skin surface attached to that bone should change with it. Skin adjustment refers to adjusting the shape of the skin surface of the three-dimensional avatar model. The skin surface of the three-dimensional avatar model is formed by interconnected patches. The shape of the skin surface is associated with one or more bones; for example, in the thigh region, the shape of the skin surface is related to the bone parameters of the femur, while at a joint, the shape of the skin surface is related to the bone parameters of the multiple bones forming that joint. In general, a change in the shape of the skin surface is achieved by shifting the positions of its vertices. Where the skin surface is affected by multiple bones, the influence of each bone on the shape of the skin surface is characterized by the skin weights.
Therefore, after the bones are adjusted, the computer device adjusts the vertices bound to the transformed bones according to the preset skin weight corresponding to each bone, thereby adjusting the skin of the first version of the initial three-dimensional avatar model and obtaining the first three-dimensional avatar model. Adjusting the vertices bound to each transformed bone includes adjusting the coordinates of those vertices. For example, the computer device determines the offset of the coordinates of each vertex of the three-dimensional avatar model in the world coordinate system, and adjusts the vertex coordinates accordingly to achieve the skin adjustment of the first version of the initial three-dimensional avatar model.
Because the coordinates of the vertices of the three-dimensional avatar model are based on the world coordinate system, while the transform coefficients of the bones are in Euclidean space, the computer device also needs to perform a coordinate-system conversion before the skin-weight-based adjustment, so as to determine the position of each bone in the world coordinate system. In this process, the computer device may calculate a transformation matrix representing the change in the coordinates of each bone as it is transformed from Euclidean space into the world coordinate system.
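The skin-weight-based vertex adjustment described above matches the standard linear blend skinning formulation: each vertex moves under the weighted sum of its bound bones' transform matrices. A single-vertex sketch follows (the bone matrices here are assumed to already be in the world coordinate system, i.e. after the coordinate-system conversion; the text does not commit to this exact formula):

```python
import numpy as np

def skin_vertex(v, bone_matrices, weights):
    """Linear blend skinning for one vertex: blend the bones' 4x4
    world-space transforms by their skin weights and apply the result."""
    v_h = np.append(v, 1.0)                        # homogeneous coords
    blended = sum(w * m for w, m in zip(weights, bone_matrices))
    return (blended @ v_h)[:3]

identity = np.eye(4)                               # bone left in place
shift_x = np.eye(4)
shift_x[0, 3] = 1.0                                # bone moved +1 along x
# A joint vertex weighted half to each bone moves half the distance,
# which is the smooth-deformation behaviour the text describes.
moved = skin_vertex(np.array([0.0, 0.0, 0.0]), [identity, shift_x], [0.5, 0.5])
print(moved)
```

Applying this to every bound vertex yields the adjusted skin of the first three-dimensional avatar model.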
In this embodiment, adjusting the bones and skin of the three-dimensional avatar model achieves smooth deformation of its appearance, so that the user-customized virtual character image is presented in fine detail; compared with directly adjusting the model mesh, this approach is computationally simple and more efficient. In addition, when certain special character images (such as a particularly fat or particularly thin one) are difficult to achieve through the custom shaping parameters alone, they can be achieved by superimposing a plurality of preset model templates, which greatly improves the user's freedom of customization and the user experience.
It is to be understood that the above-mentioned manner of adjusting the bones and the skins to achieve the shaping adjustment of the three-dimensional avatar model is merely an example, and any manner of enabling the shaping adjustment of the three-dimensional avatar model may be applied based on the inventive concept of the present application, for example, the computer device may input the custom shaping parameters into a pre-trained neural network model, and the neural network model outputs the adjusted three-dimensional avatar model. It should be clear to those skilled in the art that reasonable variations and appropriate modifications of the above-described shaping adjustment are within the scope of the present application.
As stated previously, the differences between the first version and the second version may result from one or more of a change in the parameter value mapping relationship, the bone model, the skin weights, the preset model templates, or the topology of the three-dimensional avatar model. A change in the bone model, the skin weights or the preset model templates may in turn change the topology of the three-dimensional avatar model.
To this end, in some embodiments, determining vertex differences for each vertex in the second version of the initial three-dimensional avatar model from the first version of the three-dimensional avatar model, respectively, includes: based on at least one topological dimension, carrying out consistency comparison on the respective topological structures of the first version of three-dimensional image model and the second version of initial three-dimensional image model to obtain a topological comparison result; determining a matched target topological structure in the topological structure of the three-dimensional image model of the first version aiming at each vertex in the initial three-dimensional image model of the second version according to the topological comparison result; the vertex differences of each vertex in the second version of the initial three-dimensional avatar model from the first version of the three-dimensional avatar model are determined based on the differences between each vertex in the second version of the initial three-dimensional avatar model and the matched target topology.
The topological dimensions include, but are not limited to, one or more of a vertex number dimension, a patch number dimension, and a vertex index order dimension.
The computer device performs a consistency comparison of the respective topologies of the first version of the three-dimensional avatar model and the second version of the initial three-dimensional avatar model based on the one or more topological dimensions, including: the computer device compares whether the vertex numbers of the first version of the three-dimensional image model and the second version of the initial three-dimensional image model are consistent; the computer device compares whether the patch numbers of the first version of the three-dimensional image model and the second version of the initial three-dimensional image model are consistent; and the computer device compares whether the index order of the vertices constituting each patch on the first version of the three-dimensional image model is consistent with the index order of the vertices constituting the corresponding patch on the second version of the initial three-dimensional image model.
In some embodiments, the computer device performs the consistency comparison based on one of the topological dimensions and determines the topology comparison result from that comparison. The topology comparison result either characterizes the topologies as consistent or characterizes them as inconsistent.
In other embodiments, performing, based on at least one topological dimension, a consistency comparison on the respective topologies of the first version of the three-dimensional avatar model and the second version of the initial three-dimensional avatar model to obtain a topology comparison result includes: for each of the at least one topological dimension, performing a consistency comparison on the topologies of the first version of the three-dimensional image model and the second version of the initial three-dimensional image model to obtain a comparison result for that single topological dimension; and obtaining the topology comparison result between the first version of the three-dimensional image model and the second version of the initial three-dimensional image model based on the comparison results of all the topological dimensions.
That is, among the plurality of topological dimensions, the computer device performs a consistency comparison on the topologies of the first version of the three-dimensional image model and the second version of the initial three-dimensional image model for each topological dimension, obtaining a comparison result for that single dimension. For example, the computer device obtains the comparison result for the vertex number dimension by comparing whether the vertex numbers are consistent; for another example, the computer device obtains the comparison result for the patch number dimension by comparing whether the patch numbers are consistent; for another example, the computer device obtains the comparison result for the index order dimension by comparing whether the index orders of the vertices are consistent.
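As a minimal sketch of the per-dimension comparison described above (the function name and the mesh representation as a vertex list plus face index lists are assumptions for illustration, not from the present application):

```python
def compare_topology(verts_a, faces_a, verts_b, faces_b):
    # Compare two mesh versions along three topological dimensions:
    # vertex count, patch (face) count, and vertex index order of patches.
    per_dim = {
        "vertex_count": len(verts_a) == len(verts_b),
        "patch_count": len(faces_a) == len(faces_b),
        # Index order can only be consistent if the patch lists align.
        "index_order": len(faces_a) == len(faces_b) and all(
            tuple(fa) == tuple(fb) for fa, fb in zip(faces_a, faces_b)
        ),
    }
    # Overall verdict: consistent only if every single dimension agrees.
    per_dim["consistent"] = all(per_dim.values())
    return per_dim
```

The overall verdict follows the rule stated below: a single inconsistent dimension makes the whole comparison inconsistent.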
Thus, the computer device synthesizes the comparison results of all the topological dimensions into the topology comparison result between the first version of the three-dimensional image model and the second version of the initial three-dimensional image model. In some embodiments, when the comparison result of any topological dimension characterizes an inconsistency, the computer device obtains a topology comparison result characterizing the topologies as inconsistent; in other words, the computer device obtains a topology comparison result characterizing the topologies as consistent only if the comparison results of all topological dimensions characterize consistency.
In the above embodiment, the difference between the three-dimensional image models of different versions can be accurately obtained through the multi-dimensional topology consistency judgment, and the target shaping parameters generated based on this difference are more accurate, so that the user-defined virtual character image of the first version can be restored more accurately.
After obtaining the topology comparison result, the computer device determines a matched topology in the topology of the first version of the three-dimensional avatar model for each vertex in the second version of the initial three-dimensional avatar model based on the topology comparison result, the matched topology in the first version being referred to as the target topology. In some embodiments, the computer device determines, for each vertex in the second version of the initial three-dimensional avatar model, a matching target vertex in the topology of the first version of the three-dimensional avatar model. In other embodiments, the computer device determines a matching target patch in the topology of the first version of the three-dimensional avatar model for each vertex in the second version of the initial three-dimensional avatar model.
The computer device then determines vertex differences for each vertex in the second version of the initial three-dimensional avatar model from the first version of the three-dimensional avatar model based on differences between each vertex in the second version of the initial three-dimensional avatar model and the matched target topology. Wherein the difference between the vertex and the matched target topology comprises: the difference between the vertex and the matched target vertex, or the difference between the vertex and the matched target patch. The difference between the vertex and the matched target topology may be the distance between the vertex and the matched target topology, etc.
In the above embodiment, whether the three-dimensional avatar models of different versions differ is accurately determined through multi-dimensional topology consistency judgment, and a target topology for calculating the vertex difference is determined in the first version of the three-dimensional avatar model, so that vertex errors can be accurately measured even between two versions whose topologies differ; in turn, the generated target shaping parameters can more accurately restore the user-defined avatar of the first version.
In some embodiments, determining a matched target topology in the topology of the first version of the three-dimensional avatar model for each vertex in the second version of the initial three-dimensional avatar model based on the topology comparison result includes: in a case where the topology comparison result characterizes the topologies as consistent, acquiring the index order of each vertex in the second version of the initial three-dimensional image model; determining, for each vertex in the second version of the initial three-dimensional image model, the target vertex having the same index order among the plurality of vertices of the first version of the three-dimensional image model; and taking the target vertex in the first version of the three-dimensional image model as the target topology matched with the vertex in the second version of the initial three-dimensional image model.
In a case where the topology comparison result characterizes the topologies as consistent, the topology has not changed between the first version of the three-dimensional image model and the second version of the initial three-dimensional image model, so each vertex on the second version of the initial three-dimensional image model can be assigned a one-to-one corresponding target vertex in the first version of the three-dimensional image model.
Specifically, the computer device obtains the index order of each vertex in the second version of the initial three-dimensional avatar model, and for each such vertex determines the vertex with the same index order among the plurality of vertices of the first version of the three-dimensional avatar model as its target vertex. The computer device can then calculate the vertex difference from the two one-to-one corresponding vertices belonging to the three-dimensional image models of the respective versions. The vertex with the same index order in the first version of the three-dimensional image model is the target topology matched with the vertex in the second version of the initial three-dimensional image model.
In the above embodiment, when the topologies are consistent, the three-dimensional image models of the different versions have not changed in topology; that is, the mesh connectivity is identical, and each vertex in one version of the three-dimensional image model has a one-to-one corresponding vertex in the other version, so the vertex difference can be accurately calculated from the two corresponding vertices.
Vertex differences can generally be represented by coordinate differences. To this end, in some embodiments, in a case where the topology comparison result characterizes the topologies as consistent, determining the vertex differences of each vertex in the second version of the initial three-dimensional avatar model from the first version of the three-dimensional avatar model based on the differences between each vertex in the second version of the initial three-dimensional avatar model and the matched target topology includes: for each vertex in the second version of the initial three-dimensional image model, acquiring the vertex coordinates of that vertex and the vertex coordinates of the matched target vertex; and determining the vertex difference of each vertex in the second version of the initial three-dimensional avatar model from the first version of the three-dimensional avatar model based on the coordinate distance between the vertex coordinates of that vertex and the vertex coordinates of the matched target vertex.
Specifically, for each vertex in the second version of the initial three-dimensional image model, the computer device obtains the vertex coordinates of that vertex; likewise, for each matched target vertex, the computer device obtains the vertex coordinates of the target vertex. The computer device then calculates the distance between the vertex coordinates of each vertex and the vertex coordinates of the matched target vertex, thereby determining the vertex difference of each vertex in the second version of the initial three-dimensional avatar model from the first version of the three-dimensional avatar model.
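A brief sketch of this per-vertex coordinate-distance calculation (assuming, for illustration, vertices stored as coordinate tuples aligned by index order; the function name is hypothetical):

```python
import math

def vertex_differences_consistent(verts_v1, verts_v2):
    # With consistent topology, vertices correspond one-to-one by index
    # order, so each vertex difference is the Euclidean distance between
    # a second-version vertex and its same-index first-version target
    # vertex, measured in the same world coordinate system.
    return [math.dist(p2, p1) for p2, p1 in zip(verts_v2, verts_v1)]
```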
In some embodiments, the computer device determining the vertex difference based on the coordinate distance includes: determining the coordinate distance itself as the vertex difference; alternatively, performing an additional numerical calculation based on the coordinate distance, such as multiplying or dividing by a certain coefficient, squaring, or taking the square root.
In the above embodiment, the vertex difference is determined by calculating the coordinate distance in the same world coordinate system, which directly and accurately reflects the offset of the vertex coordinates and thus the difference between the second version of the initial three-dimensional image model and the first version of the three-dimensional image model; adjusting for this difference allows better target shaping parameters to be generated that fit the user-defined virtual character image as closely as possible.
In other embodiments, in a case where the topology comparison result characterizes the topologies as inconsistent, the number of vertices or the index order of the vertices may have changed, which means that one-to-one corresponding vertices may not be found between the two versions of the three-dimensional image model.
Thus, determining a matched target topology in the topology of the first version of the three-dimensional avatar model for each vertex in the second version of the initial three-dimensional avatar model based on the topology comparison result includes: in a case where the topology comparison result characterizes the topologies as inconsistent, determining, for each vertex in the second version of the initial three-dimensional image model, a matched target patch among the plurality of patches of the first version of the three-dimensional image model; and taking the matched target patch as the target topology matched with the vertex in the second version of the initial three-dimensional image model.
Specifically, in a case where the topology comparison result characterizes the topologies as inconsistent, the computer device determines, for each vertex in the second version of the initial three-dimensional avatar model, a matching patch among the patches of the first version of the three-dimensional avatar model as the target patch. The matched target patch determined in the first version of the three-dimensional image model may be the patch closest to the vertex, i.e., the nearest-neighbor patch. For example, for each vertex in the second version of the initial three-dimensional avatar model, the computer device calculates the distance from that vertex to all patches of the first version of the three-dimensional avatar model and selects the nearest-neighbor patch as the target patch matched with the vertex. The distance between a vertex and a patch can be characterized by the projection distance. Further, the computer device may calculate the vertex difference from the vertex and the matched target patch.
In the above embodiment, when the topologies are inconsistent and the vertices have changed, the patch matched with each vertex is determined as the target patch and the vertex error is calculated from the difference between the vertex and the target patch, reducing the error introduced by version iteration as much as possible, improving the accuracy of the subsequently generated target shaping parameters, and thus fitting the user-defined virtual character image as closely as possible.
The difference between a vertex and the target patch is calculated as a point-to-plane distance. In some embodiments, in a case where the topology comparison result characterizes the topologies as inconsistent, determining the vertex differences of each vertex in the second version of the initial three-dimensional avatar model from the first version of the three-dimensional avatar model based on the differences between each vertex in the second version of the initial three-dimensional avatar model and the matched target topology includes: determining the projection point of each vertex in the second version of the initial three-dimensional image model on its matched target patch; obtaining, for each vertex in the second version of the initial three-dimensional image model, the projection distance between the vertex and its projection point; and determining the vertex difference of each vertex in the second version of the initial three-dimensional image model from the first version of the three-dimensional image model based on the projection distance corresponding to each vertex.
Specifically, the computer device determines a projection point of each vertex in the second version of the initial three-dimensional avatar model in the matched target surface patch, and obtains a projection distance between the vertex and the corresponding projection point for each vertex in the second version of the initial three-dimensional avatar model. The projection distance is the distance from the vertex to the surface patch. Further, the computer device may determine a vertex difference for each vertex in the second version of the initial three-dimensional avatar model from the first version of the three-dimensional avatar model based on the projection distance corresponding to each vertex.
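For a triangular patch, the projection distance above can be sketched as a point-to-plane distance (a simplified illustration that assumes the projection point falls inside the patch; the function names are hypothetical):

```python
def projection_distance(p, tri):
    # Distance from vertex p to the plane of the matched target patch
    # tri = (a, b, c): the length of the component of (p - a) along the
    # patch normal, i.e. |p - projection point|.
    a, b, c = tri
    u = tuple(b[i] - a[i] for i in range(3))  # edge a -> b
    v = tuple(c[i] - a[i] for i in range(3))  # edge a -> c
    # Patch normal via the cross product of the two edges.
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    norm = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    w = tuple(p[i] - a[i] for i in range(3))
    return abs(w[0] * n[0] + w[1] * n[1] + w[2] * n[2]) / norm

def nearest_patch(p, patches):
    # Nearest-neighbor patch selection: the patch with the minimal
    # projection distance is the target patch matched with vertex p.
    return min(patches, key=lambda tri: projection_distance(p, tri))
```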
In some embodiments, the computer device determining the vertex difference based on the projection distance includes: determining the projection distance itself as the vertex difference; alternatively, performing an additional numerical calculation based on the projection distance, such as multiplying or dividing by a certain coefficient, squaring, or taking the square root.
In the above embodiment, when a point-to-point distance cannot be calculated, the vertex difference is determined through a point-to-plane distance calculation, which indirectly yet accurately reflects the offset of the vertex coordinates and thus the difference between the second version of the initial three-dimensional image model and the first version of the three-dimensional image model; adjusting for this difference allows better target shaping parameters to be generated that fit the user-defined virtual character image as closely as possible.
After obtaining the vertex difference of each vertex, the computer device may measure the difference between the two versions as a whole. To this end, in some embodiments, determining the inter-version model difference between the first version of the three-dimensional avatar model and the second version of the initial three-dimensional avatar model based on the determined vertex differences includes: determining an overall vertex error from the vertex difference of each vertex in the second version of the initial three-dimensional image model from the first version of the three-dimensional image model; and determining the inter-version model difference between the first version of the three-dimensional avatar model and the second version of the initial three-dimensional avatar model based on the overall vertex error.
Specifically, the computer device performs a numerical calculation on the vertex differences of each vertex in the second version of the initial three-dimensional image model from the first version of the three-dimensional image model, and determines the overall vertex error from the result of that calculation.
The manner in which the computer device performs the numerical calculation on the vertex differences of each vertex in the second version of the initial three-dimensional image model includes, but is not limited to, calculating one or more of the mean, variance, mean square error, or sum of squares of the vertex differences. For example, the computer device calculates the mean square error of the vertex differences and uses it as the inter-version model difference between the first version of the three-dimensional avatar model and the second version of the initial three-dimensional avatar model.
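As a small sketch of this aggregation, using the mean square error mentioned above (the function name is hypothetical):

```python
def inter_version_model_difference(vertex_diffs):
    # Aggregate the per-vertex differences into a single inter-version
    # model difference using the mean square error, which measures the
    # overall deviation while damping the influence of a few anomalous
    # vertices relative to, say, taking the maximum difference.
    return sum(d * d for d in vertex_diffs) / len(vertex_diffs)
```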
Further, the computer device determines a model difference between the versions of the first version of the three-dimensional avatar model and the second version of the initial three-dimensional avatar model based on the determined global error of the vertices. In some embodiments, the computer device treats the determined global error of vertices as the inter-version model differences.
In the above embodiment, the overall inter-version model difference is determined by synthesizing the vertex errors of all vertices, so the difference is measured as comprehensively as possible while avoiding the influence of a few anomalous vertices or calculation errors; the obtained result is therefore more accurate, making the subsequently generated target shaping parameters more accurate as well.
In some embodiments, determining the target shaping parameters for the second version of the initial three-dimensional image model based on the inter-version model difference includes: determining a parameter adjustment amount for the second version of the initial three-dimensional image model based on the inter-version model difference, the second version of the initial three-dimensional image model having matching initial shaping parameters; and performing parameter adjustment on the initial shaping parameters based on the parameter adjustment amount to obtain the target shaping parameters of the second version of the initial three-dimensional image model.
Specifically, the computer device determines the parameter adjustment amount for the second version of the initial three-dimensional image model based on the inter-version model difference and a preset adjustment coefficient, and then performs parameter adjustment on the initial shaping parameters according to the parameter adjustment amount to obtain the target shaping parameters.
In some embodiments, the computer device determining the parameter adjustment amount for the second version of the initial three-dimensional avatar model based on the inter-version model difference and the preset adjustment coefficient includes: the computer device obtains the parameter adjustment amount from the product or sum of the inter-version model difference and the preset adjustment coefficient.
In some embodiments, the computer device performing parameter adjustment on the initial shaping parameters according to the parameter adjustment amount to obtain the target shaping parameters includes: the computer device obtains the target shaping parameters by accumulating the parameter adjustment amount onto the initial shaping parameters.
Illustratively, as shown in equation (1) below, the second version's target shaping parameter x is obtained from the initial shaping parameter x₀:

x = x₀ − γ·∇F(x₀)    (1)

wherein γ is the preset adjustment coefficient and ∇F(x₀) is the inter-version model difference.
In the above embodiment, the calculated inter-version model difference is used as an adjustment amount to update the initial shaping parameters into the target shaping parameters, so that the virtual character image generated based on the target shaping parameters has a high similarity to the virtual character image of the first version, thereby achieving accurate restoration of the customized character image.
In the above example, the update of the initial shaping parameters may be iterated multiple times so that the difference between the avatar image generated from the target shaping parameters and the avatar image of the first version is as small as possible. To this end, in some embodiments, the target shaping parameters are obtained by performing multiple parameter adjustments on the initial shaping parameters, and determining the parameter adjustment amount for the second version of the initial three-dimensional image model based on the inter-version model difference includes: constructing a loss function based on the inter-version model difference; performing gradient derivation on the loss function to obtain an update gradient for the initial shaping parameters; determining the parameter adjustment amount for the second version of the initial three-dimensional image model based on the update gradient and an update step size; and using the target shaping parameters obtained after adjusting the initial shaping parameters by the parameter adjustment amount as the initial shaping parameters of the next parameter adjustment.
Specifically, the computer device constructs a loss function based on the inter-version model difference and performs several iterations of calculation; in each iteration, the computer device substitutes the current iteration's inter-version model difference into the loss function and performs gradient derivation to obtain the update gradient for the initial shaping parameters.
The computer device then determines the parameter adjustment amount for the second version of the initial three-dimensional image model based on the update gradient and the update step size. The update step size may be calculated from the update gradient, in which case the update step size of each iteration may be the same or different; alternatively, the update step size is a fixed adjustment coefficient that is identical in every iteration.
Illustratively, as shown in equation (2) below, the second version's target shaping parameter xₙ₊₁ is obtained from the initial shaping parameter xₙ through n iterations:

xₙ₊₁ = xₙ − γₙ·∇F(xₙ)    (2)

wherein γₙ is the update step size of the nth iteration and ∇F(xₙ) is the inter-version model difference calculated in the nth iteration.
In this way, the target shaping parameters obtained after each iteration's parameter adjustment of the initial shaping parameters are used as the initial shaping parameters of the next parameter adjustment, realizing the iterative update of the initial shaping parameters. Illustratively, after n iterative updates, the target shaping parameters obtained in the last iteration are taken as the final target shaping parameters.
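The iterative update can be sketched as plain gradient descent, matching the form of equation (2); the loss below is a toy quadratic stand-in for the inter-version model difference (all names and the sample loss are assumptions for illustration, not the application's actual loss):

```python
def iterate_shaping_parameters(x0, grad_f, steps, step_size=0.1):
    # Each iteration adjusts the shaping parameter by the update step
    # size times the update gradient, and the result becomes the
    # initial parameter of the next iteration.
    x = x0
    for _ in range(steps):
        x = x - step_size * grad_f(x)
    return x

# Toy loss F(x) = (x - 2)^2 with gradient 2*(x - 2): the inter-version
# difference vanishes when the shaping parameter reaches 2.
target = iterate_shaping_parameters(0.0, lambda x: 2.0 * (x - 2.0), steps=100)
```

With a fixed step size of 0.1 the iterate converges geometrically toward the minimizer, mirroring the fixed-adjustment-coefficient variant described above.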
In the above embodiment, the initial shaping parameters are updated repeatedly through multiple iterations, and the finally generated target shaping parameters enable the generated virtual character image to have a high similarity to the virtual character image of the first version, thereby accurately restoring the customized character image.
In some scenarios, the second version is a new version and the first version is an old version. For example, taking a game application as an example, the second version may be a new version obtained by updating the game program, and the first version is the game version before the update. In some embodiments, the method further includes: acquiring pre-stored custom shaping parameters, the custom shaping parameters matching the first version of the three-dimensional image model; in response to a trigger event for updating the three-dimensional image, generating a second version of the custom avatar image model based on the custom shaping parameters, the second version being obtained by a version update of the first version; and displaying the customized avatar based on the second version of the customized avatar image model.
The trigger event for updating the three-dimensional image may be generated by the user operating the terminal via one or more interaction options provided by the application program. For example, the terminal provides an "automatic update" interaction option through the application program; when the user performs one or more operations of touching, clicking, sliding, pressing, and the like on the terminal, the terminal generates the trigger event for updating the three-dimensional image in response to the operation.
Specifically, after the pre-stored custom shaping parameters are acquired, the computer device responds to a trigger event for updating the three-dimensional figure, generates a target shaping parameter applied to the initial three-dimensional figure model of the second version based on the custom shaping parameters, and applies the target shaping parameter to the initial three-dimensional figure model of the second version, so that a custom avatar figure model of the second version is generated. Further, the computer device may present the customized avatar based on the second version of the customized avatar model.
In some embodiments, when the computer device is a server, the server may send the generated second version of the customized avatar model to the terminal, which exposes the customized avatar.
In some embodiments, when the computer device is a terminal, the terminal may download the second version of the customized avatar image model from a cloud server for display, or, if the terminal's computing power permits, generate and display the second version of the customized avatar image model locally.
In the above embodiment, the second version of the customized virtual character image model is quickly generated from the pre-stored custom shaping parameters in response to the trigger event for updating the three-dimensional image, and the customized virtual character image is then displayed; the user's customized virtual character image from the old version can thus be restored automatically without the user having to manually customize it again, improving the user experience.
To further enhance the interactive experience, in some embodiments, the method further includes: displaying a custom parameter interface for the first version of the three-dimensional image model, the custom parameter interface including a plurality of parameter options, each parameter option being used to adjust the appearance of a corresponding part of the three-dimensional image model; acquiring, in response to adjustment operations on the parameter options, the custom parameter value of each parameter option; and obtaining the custom shaping parameters matching the first version of the three-dimensional image model based on the custom parameter values of the parameter options.
Specifically, the computer device displays the custom parameter interface to the user through the application program; through this interface the user can adjust parameters, thereby customizing the first version of the three-dimensional image model. The custom parameter interface displays a plurality of parameter options, each used to adjust the appearance of a corresponding part of the three-dimensional image model. The three-dimensional image model includes a plurality of regions (e.g., a face region, a hand region, a torso region, or a foot region), and each region includes a plurality of parts; for example, the face region includes, but is not limited to, one or more of the eyes, mouth, nose, ears, or eyebrows.
The parameter options displayed in the custom parameter interface may be input-box options that obtain specific parameter values through user input, menu options that provide several preset parameter values for the user to select from, slider bars whose specific parameter values are determined by the user dragging a slider, and the like. The embodiments of the present application do not limit the specific display forms of the custom parameter interface and the parameter options.
As the user selects, inputs, or slides the parameter options, the computer device acquires the custom parameter value of each parameter option in response to the adjustment operations. The computer device may then obtain the custom shaping parameters matching the first version of the three-dimensional image model based on the custom parameter values of the respective parameter options.
In some embodiments, the computer device uses the custom parameter values of the individual parameter options directly as the custom shaping parameters matching the first version of the three-dimensional avatar model. Alternatively, the computer device performs parameter mapping on the custom parameter values of the parameter options to obtain the custom shaping parameters matching the first version of the three-dimensional image model.
In some embodiments, the custom parameter interface also displays the three-dimensional image model. When the user selects, inputs, or slides the parameter options, the appearance of the three-dimensional image model displayed in the custom parameter interface changes in real time according to the user's operations, so that the user can view the adjusted appearance immediately, which facilitates fine-grained, personalized modification.
In this embodiment, by providing a custom parameter interface with multiple parameter options, the user can not only customize the overall virtual character image but also make fine-grained adjustments to individual sites, thereby improving the user experience.
The application also provides an application scene that applies the above method for generating a virtual character image model. Specifically, in this application scene, the method is applied as follows: acquiring a first version of a three-dimensional face model matched with custom face-pinching parameters, and acquiring a second version of an initial three-dimensional face model; determining the vertex difference of each vertex in the second version of the initial three-dimensional face model from the first version of the three-dimensional face model; determining an inter-version model difference between the first version of the three-dimensional face model and the second version of the initial three-dimensional face model according to the determined vertex differences; determining target face-pinching parameters for the second version of the initial three-dimensional face model based on the inter-version model difference; and applying the target face-pinching parameters to the second version of the initial three-dimensional face model to generate a second version of the custom avatar image model.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the execution order of the steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily executed at the same time but may be executed at different moments; their execution order is likewise not necessarily sequential, and they may be executed in turn or alternately with at least some of the other steps or sub-steps.
The following description uses a three-dimensional face model (face model for short) as an example of the three-dimensional image model. The corresponding shaping parameters are face-pinching parameters. The first version is the old version, and the second version is the new version.
The method for generating an avatar model provided by the embodiments of the application can be applied to the avatar model generation system shown in fig. 10. The input of the system is the old-version face-pinching parameters (i.e., the custom face-pinching parameters). An old-version three-dimensional mesh model of the custom character is obtained through the old-version face-pinching system, the initial face-pinching parameters are iteratively updated through a similarity measurement module, and finally the new-version face-pinching parameters (i.e., the target face-pinching parameters) output by the new-version face-pinching system are obtained, so that the new-version virtual character image generated by applying the new-version face-pinching parameters is as similar as possible to the old-version virtual character image.
As shown in fig. 11, the similarity measurement module includes a topology consistency determination module, a point-to-point error calculation module, and a point-to-face error calculation module. In fig. 11, the solid black lines indicate the forward direction of inference, and the dashed black lines indicate the direction of gradient propagation. The computer device acquires the old-version face model and the new-version face model and compares their topologies through the topology consistency determination module. When the topology consistency determination module outputs that the new-version face model is topologically consistent with the old-version face model, the computer device calculates, through the point-to-point error calculation module, the squared error between each vertex on the new-version face model and the corresponding vertex on the old-version face model. When the model topology has been modified during game version iteration, the topology consistency determination module outputs that the topologies of the new-version and old-version face models are inconsistent; in this case, the computer device calculates, through the point-to-face error calculation module, the squared distance between each vertex on the new-version face model and its nearest triangular face on the old-version face model. The mean squared error over the vertices of the new-version face model is used as the loss function, the gradient is calculated by gradient descent, and the new-version face-pinching parameters are iteratively optimized. Repeating this process N times (for example, N=500) yields the new-version face-pinching parameters, which are the target face-pinching parameters.
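The iterative optimization described above can be sketched in PyTorch (the framework the application itself mentions for batch computation). This is a minimal sketch under assumptions, not the patented implementation: `pinch_system` is a hypothetical differentiable function mapping face-pinching parameters to an (V, 3) vertex tensor, the loss is the point-to-point mean squared error (topology-consistent case), and the update rule is plain gradient descent via `torch.optim.SGD`.

```python
import torch

def optimize_pinch_params(pinch_system, old_vertices, init_params,
                          n_iters=500, lr=0.01):
    """Iteratively fit new-version face-pinching parameters so that the
    generated new-version mesh approaches the old-version mesh.

    pinch_system: assumed differentiable callable, params -> (V, 3) vertices
    old_vertices: (V, 3) tensor of old-version face model vertices
    init_params:  initial face-pinching parameter tensor
    """
    params = init_params.clone().requires_grad_(True)
    opt = torch.optim.SGD([params], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        new_vertices = pinch_system(params)                   # forward pass
        loss = ((new_vertices - old_vertices) ** 2).sum(dim=1).mean()
        loss.backward()                                       # gradient propagation
        opt.step()                                            # parameter update
    return params.detach()
```

In practice the per-iteration loss would come from either the point-to-point or the point-to-face error module, depending on the topology comparison result.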
The schematic diagram of the point-to-point error calculation module may be as shown in fig. 12. The point-to-point error calculation module traverses the vertices of the new-version face model and, for each vertex, takes the corresponding vertex on the old-version face model and calculates the squared error as the vertex difference. After all vertices of the new-version face model have been traversed, the point-to-point error calculation module outputs the average vertex error over the squared errors of all vertices as the inter-version model difference.
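The point-to-point computation above reduces to a few array operations. A minimal NumPy sketch, assuming both meshes are stored as (V, 3) arrays whose rows correspond by vertex index (the function name and layout are illustrative, not taken from the application):

```python
import numpy as np

def point_to_point_error(new_verts, old_verts):
    """Mean squared vertex error between topologically consistent meshes.

    new_verts, old_verts: (V, 3) arrays whose rows correspond by vertex
    index. Returns the average per-vertex squared error, used as the
    inter-version model difference.
    """
    per_vertex = np.sum((new_verts - old_verts) ** 2, axis=1)  # squared error per vertex
    return per_vertex.mean()                                   # average vertex error
```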
The schematic diagram of the point-to-face error calculation module may be as shown in fig. 13; this module is suitable for the case where the new-version face-pinching system has modified the face model topology relative to the old-version face-pinching system. The point-to-face error calculation module traverses the vertices of the new-version face model. Because the new-version face model is topologically inconsistent with the old-version face model, the corresponding vertex cannot be found directly on the old-version face model. Therefore, the module traverses the triangular or quadrilateral patches of the old-version face model, calculates the point-to-face Euclidean distance from each vertex of the new-version face model to each patch, and takes the patch with the smallest distance as the nearest-neighbor patch; the point-to-face Euclidean distance error is then the vertex error of that vertex. After all vertices of the new-version face model have been traversed, the point-to-face error calculation module outputs the average vertex error as the inter-version model difference.
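For triangular patches, the nearest-patch search can be sketched as follows. The closest-point-on-triangle routine is the standard clamping approach from computational geometry (as in Ericson's *Real-Time Collision Detection*), not a routine named by the application, and the brute-force search over patches stands in for whatever acceleration structure a real implementation might use:

```python
import numpy as np

def closest_point_on_triangle(p, a, b, c):
    # Project p onto triangle abc, clamping to vertices/edges as needed.
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = ab @ ap, ac @ ap
    if d1 <= 0 and d2 <= 0:
        return a                                   # region of vertex a
    bp = p - b
    d3, d4 = ab @ bp, ac @ bp
    if d3 >= 0 and d4 <= d3:
        return b                                   # region of vertex b
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return a + (d1 / (d1 - d3)) * ab           # edge ab
    cp = p - c
    d5, d6 = ab @ cp, ac @ cp
    if d6 >= 0 and d5 <= d6:
        return c                                   # region of vertex c
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return a + (d2 / (d2 - d6)) * ac           # edge ac
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
        w = (d4 - d3) / ((d4 - d3) + (d5 - d6))
        return b + w * (c - b)                     # edge bc
    denom = 1.0 / (va + vb + vc)
    return a + ab * (vb * denom) + ac * (vc * denom)  # interior of triangle

def point_to_face_error(new_verts, old_verts, old_tris):
    # For each new-version vertex, brute-force search the nearest
    # old-version triangle and accumulate the squared point-to-face distance.
    total = 0.0
    for p in new_verts:
        total += min(np.sum((p - closest_point_on_triangle(
            p, *old_verts[tri])) ** 2) for tri in old_tris)
    return total / len(new_verts)
```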
The method for generating a virtual character image model provided by the embodiments of the application requires no secondary development of the character image generation system (such as the face-pinching system) in the original application program, and the face-pinching settings remain consistent with the application program's own system, so the cost is low.
In addition, when many users customize their images at the same time, the application can perform large-scale batch computation by means of a GPU (Graphics Processing Unit) and the deep learning framework PyTorch. It can convert users' face-pinching data between multiple versions in a short time without occupying significant computing power or storage resources, achieves high efficiency, and is suitable for converting players' face-pinching data under rapid version iteration.
In addition, the application only needs to store the custom shaping parameters, which are highly decoupled from the character image generation system of the application program, so the scheme has high extensibility: any number of face-pinching systems can coexist in the system, with no need to optimize for a specific one. Taking a game application as an example, the game production team only needs to update the corresponding game resources (a numerical design table, a skeleton model, a BlendShape model, or the like) to automatically add the conversion model for a new version, making version iteration more efficient. Furthermore, because only the custom shaping parameters, highly decoupled from the character image generation system, need to be stored, the data size is very small, greatly reducing storage resource consumption.
Based on the same inventive concept, an embodiment of the application also provides an apparatus for generating an avatar image model, for implementing the above method for generating an avatar image model. The implementation of the solution provided by the apparatus is similar to that described in the above method; therefore, for the specific limitations in the embodiments of one or more avatar-model generating apparatuses provided below, reference may be made to the limitations of the method for generating an avatar model above, which will not be repeated here.
In some embodiments, as shown in fig. 14, there is provided an avatar image model generating apparatus 1400 including: an acquisition module 1401, a determination module 1402 and a generation module 1403, wherein:
an obtaining module 1401 is configured to obtain a first version of the three-dimensional image model that matches the custom shaping parameter, and obtain a second version of the initial three-dimensional image model.
A determining module 1402 is configured to determine vertex differences between each vertex in the second version of the initial three-dimensional avatar model and the first version of the three-dimensional avatar model, respectively.
The determining module 1402 is further configured to determine a model difference between the versions of the first version of the three-dimensional avatar model and the second version of the initial three-dimensional avatar model based on the determined vertex difference.
The determining module 1402 is further configured to determine target shaping parameters for the second version of the initial three-dimensional avatar model based on the inter-version model differences.
A generation module 1403 is used to apply the target shaping parameters to the second version of the initial three-dimensional avatar model to generate a second version of the custom avatar model.
In some embodiments, the obtaining module is further configured to obtain the custom shaping parameter; determining target model parameters of the three-dimensional image model of the first version based on a preset parameter value mapping relation between the self-defined shaping parameters and model parameters of the three-dimensional image model of the first version; and performing shaping adjustment on the initial three-dimensional image model of the first version by referring to the target model parameters to obtain the three-dimensional image model of the first version matched with the self-defined shaping parameters.
In some embodiments, the target model parameters include target bone parameters and target fusion deformation parameters; the acquisition module is further used for sequentially performing bone adjustment and skin-weight-based adjustment on the initial three-dimensional image model of the first version according to the target bone parameters to obtain a first three-dimensional image model; determining, based on the respective weights of a plurality of preset model templates in the target fusion deformation parameters, a second three-dimensional image model obtained by fusion under the influence of the weights; and superposing the first three-dimensional image model and the second three-dimensional image model to obtain the first version of the three-dimensional image model matched with the custom shaping parameters.
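The fusion under template weights resembles a standard blendshape weighted sum. A minimal sketch, assuming templates are stored as per-vertex offsets from a base mesh (an assumed storage format, not one specified by the application):

```python
import numpy as np

def blend_shapes(base_verts, template_offsets, weights):
    """Fuse preset model templates under the fusion-deformation weights.

    base_verts:       (V, 3) base mesh vertices
    template_offsets: (T, V, 3) per-vertex offsets of each preset template
                      relative to the base mesh (assumed storage format)
    weights:          (T,) fusion weight of each template
    """
    # Weighted sum of template offsets, added onto the base mesh.
    delta = np.tensordot(weights, template_offsets, axes=1)  # (V, 3)
    return base_verts + delta
```

The superposition step described above would then combine this blended result with the bone-adjusted first model.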
In some embodiments, the obtaining module is further configured to: obtain a transform coefficient for each bone in the target bone parameters; adjust each bone in the initial three-dimensional image model of the first version based on the transform coefficients to obtain the transformed bones; and perform skin adjustment on the initial three-dimensional image model of the first version according to the transformed bones and the preset skin weight corresponding to each bone, to obtain the first three-dimensional image model.
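The bone transforms plus per-bone skin weights described here correspond to standard linear blend skinning. A minimal sketch under assumed layouts (4x4 bone matrices and a dense (V, B) weight matrix; these layouts are illustrative, not taken from the application):

```python
import numpy as np

def linear_blend_skinning(rest_verts, bone_transforms, skin_weights):
    """Skin adjustment of the initial model: each vertex is deformed by the
    transformed bones, blended according to preset skin weights.

    rest_verts:      (V, 3) vertices of the initial model
    bone_transforms: (B, 4, 4) bone matrices after applying the transform
                     coefficients of the target bone parameters
    skin_weights:    (V, B) skin weight of each bone on each vertex,
                     rows summing to 1
    """
    # Homogeneous coordinates so the 4x4 bone matrices apply directly.
    hom = np.concatenate([rest_verts, np.ones((len(rest_verts), 1))], axis=1)
    # Per-bone transformed positions: (B, V, 4)
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, hom)
    # Blend per-bone results by skin weights: (V, 4)
    blended = np.einsum('vb,bvi->vi', skin_weights, per_bone)
    return blended[:, :3]
```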
In some embodiments, the determining module is further configured to perform a consistency comparison on the respective topologies of the first version of the three-dimensional avatar model and the second version of the initial three-dimensional avatar model based on at least one topology dimension, to obtain a topology comparison result; determining a matched target topological structure in the topological structure of the three-dimensional image model of the first version aiming at each vertex in the initial three-dimensional image model of the second version according to the topological comparison result; the vertex differences of each vertex in the second version of the initial three-dimensional avatar model from the first version of the three-dimensional avatar model are determined based on the differences between each vertex in the second version of the initial three-dimensional avatar model and the matched target topology.
In some embodiments, the determining module is further configured to compare, for any one of at least one topology dimension, the respective topologies of the first version of the three-dimensional avatar model and the second version of the initial three-dimensional avatar model, to obtain a comparison result of the single topology dimension; based on the comparison result of each topological dimension, a topological comparison result between the three-dimensional image model of the first version and the initial three-dimensional image model of the second version is obtained.
In some embodiments, the determining module is further configured to obtain, if the topology comparison result indicates that the topology structures are consistent, an index order of each vertex in the initial three-dimensional image model of the second version; determining target vertexes with the same index sequence in a plurality of vertexes of the three-dimensional image model of the first version for each vertex in the initial three-dimensional image model of the second version; and taking the target vertex in the first version of three-dimensional image model as a target topological structure matched with the vertex in the second version of initial three-dimensional image model.
In some embodiments, the determining module is further configured to obtain, for each vertex in the second version of the initial three-dimensional avatar model, a vertex coordinate of each vertex, and a vertex coordinate of the matched target vertex; a vertex difference of each vertex in the second version of the initial three-dimensional avatar model from the first version of the three-dimensional avatar model is determined based on a coordinate distance between the vertex coordinates of each vertex and the vertex coordinates of the matched target vertex.
In some embodiments, the determining module is further configured to determine, for each vertex in the second version of the initial three-dimensional avatar model, a matching target patch among the plurality of patches of the first version of the three-dimensional avatar model, in a case where the topology comparison result characterizes topology inconsistency; the matched target patch is taken as the target topology matched with the vertex in the second version of the initial three-dimensional avatar model.
In some embodiments, the determining module is further configured to determine a projection point of each vertex in the second version of the initial three-dimensional avatar model in the matched target patch, respectively; obtaining a projection distance between the vertex and the projection point for each vertex in the initial three-dimensional image model of the second version; and determining vertex differences between each vertex in the initial three-dimensional image model of the second version and the three-dimensional image model of the first version based on the projection distance corresponding to each vertex.
In some embodiments, the determining module is further configured to determine an overall error for the vertices based on vertex differences for each vertex in the second version of the initial three-dimensional avatar model from the first version of the three-dimensional avatar model; model differences between versions of the first version of the three-dimensional avatar model and the second version of the initial three-dimensional avatar model are determined based on the global errors of the vertices.
In some embodiments, the determining module is further for determining a parameter adjustment amount for the initial three-dimensional avatar model of the second version based on the inter-version model differences; the initial three-dimensional image model of the second version is matched with the initial shaping parameters; and carrying out parameter adjustment on the initial shaping parameters based on the parameter adjustment quantity to obtain target shaping parameters of the initial three-dimensional image model of the second version.
In some embodiments, the target shaping parameters are obtained by performing multiple rounds of parameter adjustment on the initial shaping parameters; the determining module is further used for constructing a loss function based on the inter-version model differences; performing gradient derivation according to the loss function to obtain an update gradient for the initial shaping parameters; determining the parameter adjustment amount for the initial three-dimensional image model of the second version based on the update gradient and an update step size; and using the shaping parameters obtained after adjusting the initial shaping parameters by the parameter adjustment amount as the initial shaping parameters in the next round of parameter adjustment.
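A single round of this adjustment (loss from the model difference, gradient derivation, adjustment amount = update gradient times update step size) can be sketched in PyTorch; the function name and the loss callable are illustrative assumptions:

```python
import torch

def adjust_once(initial_params, loss_fn, step_size=0.01):
    """One round of parameter adjustment.

    initial_params: shaping parameter tensor for this round
    loss_fn:        assumed callable building the loss from the
                    inter-version model difference
    """
    params = initial_params.clone().requires_grad_(True)
    loss = loss_fn(params)                 # loss from model difference
    loss.backward()                        # gradient derivation
    adjustment = step_size * params.grad   # parameter adjustment amount
    # The adjusted parameters serve as the initial parameters next round.
    return (params - adjustment).detach()
```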
In some embodiments, the apparatus further includes an updating module, configured to obtain pre-stored custom shaping parameters, the custom shaping parameters being adapted to the first version of the three-dimensional image model; generate a second version of the custom avatar image model based on the custom shaping parameters in response to a trigger event that updates the three-dimensional image, the second version being a version update of the first version; and display the customized avatar based on the second version of the custom avatar model.
In some embodiments, the apparatus further comprises an interaction module for exposing a custom parameter interface for the first version of the three-dimensional avatar model; the user-defined parameter interface comprises a plurality of parameter options, wherein each parameter option is used for adjusting the appearance of a corresponding part in the three-dimensional image model; responding to the adjustment operation of the parameter options, and acquiring the custom parameter value of each parameter option; and obtaining the custom shaping parameters matched with the three-dimensional image model of the first version based on the custom parameter values of the parameter options.
The respective modules in the above-described avatar image model generating apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In some embodiments, a computer device is provided, which may be a server or a terminal. Taking the computer device being a server as an example, its internal structure may be as shown in fig. 15. The computer device includes a processor, a memory, an input/output (I/O) interface, and a communication interface. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store data such as shaping parameters. The input/output interface of the computer device is used to exchange information between the processor and external devices. The communication interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a method of generating an avatar image model.
It will be appreciated by those skilled in the art that the structure shown in fig. 15 is merely a block diagram of a portion of the structure associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements are applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In some embodiments, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In some embodiments, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In some embodiments, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
Those skilled in the art will appreciate that all or part of the above-described methods may be implemented by a computer program stored on a non-transitory computer-readable storage medium; when executed, the program may perform the steps of the above method embodiments. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM may take various forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, or data processing logic units based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered within the scope of this specification.
The foregoing examples illustrate only a few embodiments of the application, and although they are described in relative detail, they should not be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the concept of the application, all of which fall within the protection scope of the application. Accordingly, the protection scope of the application should be determined by the appended claims.

Claims (19)

1. A method of generating a avatar image model, the method comprising:
acquiring a first version of three-dimensional image model matched with the custom shaping parameters, and acquiring a second version of initial three-dimensional image model;
determining vertex differences between each vertex in the second version of the initial three-dimensional avatar model and the first version of the three-dimensional avatar model;
Determining inter-version model differences of the first version of the three-dimensional avatar model and the second version of the initial three-dimensional avatar model according to the determined vertex differences;
determining target shaping parameters for the initial three-dimensional image model of the second version based on the inter-version model differences;
the target shaping parameters are applied to the second version of the initial three-dimensional avatar model to generate the second version of the custom avatar model.
2. The method of claim 1, wherein the obtaining the first version of the three-dimensional avatar model that matches the custom shaping parameters comprises:
obtaining a custom shaping parameter;
determining target model parameters of the three-dimensional image model of the first version based on a preset parameter value mapping relation between the self-defined shaping parameters and model parameters of the three-dimensional image model of the first version;
and performing shaping adjustment on the initial three-dimensional image model of the first version by referring to the target model parameters to obtain the three-dimensional image model of the first version matched with the self-defined shaping parameters.
3. The method of claim 2, wherein the target model parameters include target bone parameters and target fusion deformation parameters; performing shaping adjustment on the initial three-dimensional image model of the first version by referring to the target model parameters to obtain the three-dimensional image model of the first version matched with the self-defined shaping parameters, wherein the method comprises the following steps:
According to the target bone parameters, sequentially performing bone adjustment and adjustment based on skin weight on the initial three-dimensional image model of the first version to obtain a first three-dimensional image model;
determining a second three-dimensional image model obtained by fusion under the influence of each weight on the basis of the weight of each of a plurality of preset model templates in the target fusion deformation parameters;
and superposing the first three-dimensional image model and the second three-dimensional image model to obtain a first version of three-dimensional image model matched with the custom molding parameters.
4. The method of claim 3, wherein the sequentially performing bone adjustment and skin weight-based adjustment on the first version of the initial three-dimensional avatar model according to the target bone parameter to obtain the first three-dimensional avatar model comprises:
obtaining a transformation coefficient of each bone in the target bone parameters;
adjusting each skeleton in the initial three-dimensional image model of the first version based on the transformation coefficient to obtain each skeleton after transformation;
and performing skin adjustment on the initial three-dimensional image model of the first version according to the transformed bones and preset skin weights corresponding to the bones respectively to obtain a first three-dimensional image model.
5. The method of claim 1, wherein the separately determining the vertex differences of each vertex in the second version of the initial three-dimensional avatar model from the first version of the three-dimensional avatar model comprises:
based on at least one topological dimension, consistency comparison is carried out on the topological structures of the first version of three-dimensional image model and the initial three-dimensional image model of the second version of three-dimensional image model, and a topological comparison result is obtained;
determining a matched target topological structure in the topological structure of the three-dimensional image model of the first version aiming at each vertex in the initial three-dimensional image model of the second version according to the topological comparison result;
a vertex difference of each vertex in the second version of the initial three-dimensional avatar model from the first version of the three-dimensional avatar model is determined based on a difference between each vertex in the second version of the initial three-dimensional avatar model and the matched target topology.
6. The method of claim 5, wherein the performing a consistency comparison of the respective topologies of the first version of the three-dimensional avatar model and the second version of the initial three-dimensional avatar model based on the at least one topology dimension to obtain a topology comparison result comprises:
For any topological dimension of at least one topological dimension, carrying out consistency comparison on the topological structures of the first version of three-dimensional image model and the initial three-dimensional image model of the second version of three-dimensional image model to obtain a comparison result of a single topological dimension;
and obtaining a topological comparison result between the three-dimensional image model of the first version and the initial three-dimensional image model of the second version based on the comparison result of each topological dimension.
7. The method of claim 5, wherein the determining, according to the topology comparison result, a matched target topology in the topology of the first version of the three-dimensional avatar model for each vertex in the second version of the initial three-dimensional avatar model comprises:
in a case where the topology comparison result indicates that the topologies are consistent, obtaining an index order of each vertex in the second version of the initial three-dimensional avatar model;
determining, for each vertex in the second version of the initial three-dimensional avatar model, a target vertex having the same index order among the plurality of vertices of the first version of the three-dimensional avatar model;
and taking the target vertex in the first version of the three-dimensional avatar model as the target topology matched with the vertex in the second version of the initial three-dimensional avatar model.
8. The method of claim 6, wherein, in the case where the topology comparison result indicates that the topologies are consistent, the determining the vertex difference between each vertex in the second version of the initial three-dimensional avatar model and the first version of the three-dimensional avatar model based on a difference between each vertex in the second version of the initial three-dimensional avatar model and the matched target topology comprises:
obtaining, for each vertex in the second version of the initial three-dimensional avatar model, vertex coordinates of the vertex and vertex coordinates of the matched target vertex;
and determining the vertex difference between each vertex in the second version of the initial three-dimensional avatar model and the first version of the three-dimensional avatar model based on a coordinate distance between the vertex coordinates of each vertex and the vertex coordinates of the matched target vertex.
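As a minimal sketch of the consistent-topology branch (claims 7 and 8): each second-version vertex is matched to the first-version vertex with the same index order, and the vertex difference is the coordinate distance between the pair. Euclidean distance is an assumption; the claims only require "a coordinate distance".

```python
# Hedged sketch of claims 7-8: index-matched vertex pairs, per-pair
# Euclidean coordinate distance as the vertex difference.
import math

def vertex_differences(vertices_v2, vertices_v1):
    """Per-vertex coordinate distance between index-matched vertices."""
    # Index matching is only valid when the topologies are consistent.
    assert len(vertices_v2) == len(vertices_v1), "consistent topology required"
    return [math.dist(p2, p1) for p2, p1 in zip(vertices_v2, vertices_v1)]
```

For example, a vertex moved from (3, 4, 0) to the origin between versions has a vertex difference of 5.0.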
9. The method of claim 5, wherein the determining, according to the topology comparison result, a matched target topology in the topology of the first version of the three-dimensional avatar model for each vertex in the second version of the initial three-dimensional avatar model comprises:
in a case where the topology comparison result indicates that the topologies are inconsistent, determining, for each vertex in the second version of the initial three-dimensional avatar model, a matched target patch among a plurality of patches of the first version of the three-dimensional avatar model;
and taking the matched target patch as the target topology matched with the vertex in the second version of the initial three-dimensional avatar model.
10. The method of claim 9, wherein, in the case where the topology comparison result indicates that the topologies are inconsistent, the determining the vertex difference between each vertex in the second version of the initial three-dimensional avatar model and the first version of the three-dimensional avatar model based on a difference between each vertex in the second version of the initial three-dimensional avatar model and the matched target topology comprises:
determining, for each vertex in the second version of the initial three-dimensional avatar model, a projection point of the vertex on the matched target patch;
obtaining, for each vertex in the second version of the initial three-dimensional avatar model, a projection distance between the vertex and its projection point;
and determining the vertex difference between each vertex in the second version of the initial three-dimensional avatar model and the first version of the three-dimensional avatar model based on the projection distance corresponding to each vertex.
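The inconsistent-topology branch (claims 9 and 10) can be illustrated with a simplified projection: the vertex is projected onto the plane of its matched target patch (here assumed to be a triangle), and the projection distance is the point-to-plane distance. Clamping the projection point to the patch interior, and the selection of which patch matches, are omitted; both are assumptions outside the claim text.

```python
# Simplified sketch of claim 10: distance from a vertex to the plane of its
# matched triangular patch. Clamping to the triangle interior is omitted.
import math

def _sub(a, b):
    return tuple(ai - bi for ai, bi in zip(a, b))

def _dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def projection_distance(vertex, triangle):
    """Distance from `vertex` to the plane spanned by the triangle patch."""
    p0, p1, p2 = triangle
    normal = _cross(_sub(p1, p0), _sub(p2, p0))
    norm = math.sqrt(_dot(normal, normal))
    # Magnitude of the signed distance along the unit normal.
    return abs(_dot(_sub(vertex, p0), normal)) / norm
```

For example, a vertex at height 5 above a triangle lying in the z = 0 plane has a projection distance of 5.0.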
11. The method of claim 1, wherein the determining the inter-version model difference between the first version of the three-dimensional avatar model and the second version of the initial three-dimensional avatar model according to the determined vertex differences comprises:
determining an overall vertex error according to the vertex difference between each vertex in the second version of the initial three-dimensional avatar model and the first version of the three-dimensional avatar model;
and determining the inter-version model difference between the first version of the three-dimensional avatar model and the second version of the initial three-dimensional avatar model based on the overall vertex error.
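The aggregation step of claim 11 admits many formulas; the mean of the per-vertex differences is one plausible choice and is used here purely as an assumption, since the claim does not fix a particular aggregation.

```python
# Sketch of claim 11: collapse per-vertex differences into an overall
# vertex error, used as the inter-version model difference. The mean is
# an illustrative choice, not mandated by the claim.
def inter_version_model_difference(vertex_diffs):
    """Overall vertex error as the mean of per-vertex differences."""
    return sum(vertex_diffs) / len(vertex_diffs)
```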
12. The method of claim 1, wherein the determining target shaping parameters for the second version of the initial three-dimensional avatar model based on the inter-version model difference comprises:
determining a parameter adjustment amount for the second version of the initial three-dimensional avatar model based on the inter-version model difference, the second version of the initial three-dimensional avatar model being matched with initial shaping parameters;
and performing parameter adjustment on the initial shaping parameters based on the parameter adjustment amount to obtain the target shaping parameters for the second version of the initial three-dimensional avatar model.
13. The method of claim 12, wherein the target shaping parameters are obtained by performing a plurality of parameter adjustments on the initial shaping parameters; and the determining a parameter adjustment amount for the second version of the initial three-dimensional avatar model based on the inter-version model difference comprises:
constructing a loss function based on the inter-version model difference;
performing gradient derivation on the loss function to obtain an update gradient for the initial shaping parameters;
and determining the parameter adjustment amount for the second version of the initial three-dimensional avatar model based on the update gradient and an update step size, wherein the target shaping parameters obtained after performing parameter adjustment on the initial shaping parameters based on the parameter adjustment amount serve as the initial shaping parameters in the next parameter adjustment.
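The iterative scheme of claims 12 and 13 is ordinary gradient descent. The toy quadratic loss below is a stand-in assumption for the real "shaping parameters, mesh, vertex error" pipeline, which the claims do not spell out; only the loop structure (loss, gradient, step-scaled adjustment, next-round initial parameters) tracks the claim text.

```python
# Toy sketch of claims 12-13: repeatedly derive an update gradient from a
# loss built on the inter-version difference, scale it by the update step
# size to get the parameter adjustment amount, and feed the adjusted
# parameters into the next round as its initial shaping parameters.

def optimize_shaping_params(initial_params, target_params, step=0.1, iterations=100):
    params = list(initial_params)
    for _ in range(iterations):
        # Loss: sum of squared per-parameter inter-version differences;
        # its gradient w.r.t. each parameter p is 2 * (p - t).
        grad = [2.0 * (p - t) for p, t in zip(params, target_params)]
        # Parameter adjustment amount = update gradient * update step size.
        params = [p - step * g for p, g in zip(params, grad)]
    return params
```

With a step size of 0.1 each round shrinks the remaining difference by a factor of 0.8, so the parameters converge toward the target.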
14. The method according to any one of claims 1 to 13, further comprising:
acquiring pre-stored custom shaping parameters, the custom shaping parameters being adapted to the first version of the three-dimensional avatar model;
generating, in response to a trigger event for updating the three-dimensional avatar, a second version of the custom virtual character avatar model based on the custom shaping parameters, the second version being obtained by updating the first version;
and displaying the custom avatar based on the second version of the custom avatar model.
15. The method according to any one of claims 1 to 13, further comprising:
displaying a custom parameter interface for the first version of the three-dimensional avatar model, the custom parameter interface comprising a plurality of parameter options, each parameter option being used to adjust the appearance of a corresponding part of the three-dimensional avatar model;
obtaining, in response to an adjustment operation on the parameter options, a custom parameter value of each parameter option;
and obtaining, based on the custom parameter values of the parameter options, custom shaping parameters matched with the first version of the three-dimensional avatar model.
16. An apparatus for generating a virtual character avatar model, the apparatus comprising:
an acquisition module, configured to acquire a first version of a three-dimensional avatar model matched with custom shaping parameters, and to acquire a second version of an initial three-dimensional avatar model;
a determining module, configured to determine a vertex difference between each vertex in the second version of the initial three-dimensional avatar model and the first version of the three-dimensional avatar model;
the determining module being further configured to determine an inter-version model difference between the first version of the three-dimensional avatar model and the second version of the initial three-dimensional avatar model according to the determined vertex differences;
the determining module being further configured to determine target shaping parameters for the second version of the initial three-dimensional avatar model based on the inter-version model difference;
and a generation module, configured to apply the target shaping parameters to the second version of the initial three-dimensional avatar model so as to generate a second version of the customized virtual character avatar model.
17. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 15.
18. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 15.
19. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 15.
CN202310145296.5A 2023-02-08 2023-02-08 Virtual character image model generation method, device and computer equipment Pending CN116977605A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310145296.5A CN116977605A (en) 2023-02-08 2023-02-08 Virtual character image model generation method, device and computer equipment


Publications (1)

Publication Number Publication Date
CN116977605A true CN116977605A (en) 2023-10-31

Family

ID=88475493

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310145296.5A Pending CN116977605A (en) 2023-02-08 2023-02-08 Virtual character image model generation method, device and computer equipment

Country Status (1)

Country Link
CN (1) CN116977605A (en)

Similar Documents

Publication Publication Date Title
JP7299414B2 (en) Image processing method, device, electronic device and computer program
CN111632374B (en) Method and device for processing face of virtual character in game and readable storage medium
JP2018532216A (en) Image regularization and retargeting system
CN115564642B (en) Image conversion method, image conversion device, electronic apparatus, storage medium, and program product
CN116977522A (en) Rendering method and device of three-dimensional model, computer equipment and storage medium
CN112699791A (en) Face generation method, device and equipment of virtual object and readable storage medium
JP2022528999A (en) How to drive video characters and their devices, equipment and computer programs
CN112598773A (en) Method and device for realizing skeleton skin animation
JP4842242B2 (en) Method and apparatus for real-time expression of skin wrinkles during character animation
CN114202615A (en) Facial expression reconstruction method, device, equipment and storage medium
CN115908664B (en) Animation generation method and device for man-machine interaction, computer equipment and storage medium
WO2023077972A1 (en) Image data processing method and apparatus, virtual digital human construction method and apparatus, device, storage medium, and computer program product
WO2023130819A1 (en) Image processing method and apparatus, and device, storage medium and computer program
CN116977605A (en) Virtual character image model generation method, device and computer equipment
US20230079478A1 (en) Face mesh deformation with detailed wrinkles
CN111862330B (en) Model acquisition method and device, storage medium and electronic device
CN116912433B (en) Three-dimensional model skeleton binding method, device, equipment and storage medium
US11957976B2 (en) Predicting the appearance of deformable objects in video games
CN117557699B (en) Animation data generation method, device, computer equipment and storage medium
US20240013500A1 (en) Method and apparatus for generating expression model, device, and medium
CN117132713A (en) Model training method, digital person driving method and related devices
KR20060067242A (en) System and its method of generating face animation using anatomy data
CN117765155A (en) Expression redirection driving method and virtual display device
Rajendran Understanding the Desired Approach for Animating Procedurally
CN117671090A (en) Expression processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40098086

Country of ref document: HK