CN117195565A - Modeling method and system, electronic equipment and storage medium - Google Patents


Info

Publication number
CN117195565A
Authority
CN
China
Prior art keywords
portrait
image data
model
modeling
base model
Prior art date
Legal status
Pending
Application number
CN202311173360.7A
Other languages
Chinese (zh)
Inventor
王智武
唐欣桐
赵紫晗
李立葳
靳俊兆
Current Assignee
Beijing Yuanjing Digital Technology Co ltd
Original Assignee
Beijing Yuanjing Digital Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Yuanjing Digital Technology Co ltd filed Critical Beijing Yuanjing Digital Technology Co ltd
Priority to CN202311173360.7A
Publication of CN117195565A


Abstract

The application provides a modeling method and system, an electronic device and a storage medium. The modeling method includes: when a first portrait base model is obtained, adjusting modeling parameters of the first portrait base model to obtain a second portrait base model; when at least two pieces of first image data are acquired, constructing a rendering model based on the at least two pieces of first image data, where the at least two pieces of first image data contain target portrait features; and fitting the second portrait base model to the rendering model to obtain a target portrait model corresponding to the target portrait features. According to the application, a rendering model is constructed from image data of the target portrait, and a three-dimensional digital human model can then be obtained directly from the rendering model and a 3DMM (3D Morphable Model, a statistical model of face shape and appearance) base model. This organically fuses 2D and 3D data, simplifies the digital human generation process, and reduces the generation cost and generation cycle of digital humans.

Description

Modeling method and system, electronic equipment and storage medium
Technical Field
The present application relates to the field of modeling technologies, and in particular, to a modeling method, a modeling system, an electronic device, and a storage medium.
Background
In the related art, digital human generation mainly relies on the face-pinching (avatar face customization) features of digital human platforms developed abroad, on hand-made deformable models used as in-house face-pinching base models, or on designers authoring and fine-tuning models in other software. However, these approaches suffer from high cost, long production cycles, complex generation processes and high technical thresholds, so a low-cost, short-cycle digital human generation technology is needed.
Disclosure of Invention
The present application aims to solve or at least mitigate one of the technical problems described above.
To this end, a first object of the present application is to provide a modeling method.
A second object of the present application is to provide a modeling system.
A third object of the present application is to provide an electronic device.
A fourth object of the present application is to provide a storage medium.
In order to achieve the first object of the present application, the technical solution of the present application provides a modeling method, including: when a first portrait base model is obtained, adjusting modeling parameters of the first portrait base model to obtain a second portrait base model; when at least two pieces of first image data are acquired, constructing a rendering model based on the at least two pieces of first image data, where the at least two pieces of first image data contain target portrait features; and fitting the second portrait base model to the rendering model to obtain a target portrait model corresponding to the target portrait features.
In the modeling method provided by the application, a first portrait base model is first established using 3DMM (3D Morphable Model, a statistical model of face shape and appearance) technology, and several key model parameters of the first portrait base model are then adjusted to obtain a second portrait base model. The second portrait base model is a low-precision model with a simple structure. Further, at least two pieces of first image data are acquired, where the first image data contain target portrait features, i.e., the portrait appearing in the first image data; in other words, planar image data of the target portrait features are acquired. An automated data training process is then carried out through a neural rendering network on the basis of the at least two pieces of first image data, thereby constructing a rendering model. The rendering model contains the data information of the target portrait features. The rendering model is then fitted to the second portrait base model, realizing the fitting process of the target portrait model, so that the target portrait features appear on the second portrait base model and the target portrait model is completed. According to the application, a rendering model is constructed from image data of the target portrait features, i.e., planar image data, and fitting is then performed between the rendering model and the 3DMM base model to obtain a three-dimensional digital human model. This organically fuses 2D and 3D data, simplifies the digital human generation process, and reduces the generation cost and generation cycle of digital humans.
In addition, the technical solution provided by the application may further have the following additional technical features:
In some technical solutions, optionally, in the case that at least two pieces of first image data are acquired, the step of constructing the rendering model based on the at least two pieces of first image data includes: extracting two-dimensional feature parameters of the target portrait features in each piece of first image data; converting the at least two two-dimensional feature parameters corresponding to each piece of first image data into at least two three-dimensional feature parameters; and performing model training on an initial model with the at least two three-dimensional feature parameters corresponding to each piece of first image data to obtain the rendering model.
In this technical solution, when at least two pieces of first image data are acquired, the step of constructing a rendering model based on the at least two pieces of first image data includes: since the first image data are planar data, the two-dimensional feature parameters of the target portrait features are acquired in each piece of first image data, where the two-dimensional feature parameters refer to position data, color data and the like of the target portrait features in the image plane. Further, since the target portrait model is a three-dimensional model, the two-dimensional feature parameters of the target portrait features in the first image data need to be converted into three-dimensional feature parameters, where the three-dimensional feature parameters refer to position data, color data and the like of the target portrait features in three-dimensional space; the real-world position information of the target portrait features is thereby determined. Model training is then performed on an initial model through a neural rendering network using the three-dimensional feature parameters, so as to obtain a rendering model carrying the position information of the target portrait features.
In some technical solutions, optionally, the step of performing model training on the initial model with the at least two three-dimensional feature parameters corresponding to each piece of first image data to obtain the rendering model includes: inputting the at least two three-dimensional feature parameters into the initial model for training; acquiring the loss value of a loss function during training; and determining the trained initial model as the rendering model when the loss value is less than or equal to a preset loss value.
In this technical solution, an automated data training process is carried out through a neural rendering network using the at least two three-dimensional feature parameters, thereby obtaining the rendering model. Specifically, the at least two three-dimensional feature parameters are input into the initial model for training, and the loss value of the initial model is acquired during training; this loss value evaluates how well the initial model fits the at least two three-dimensional feature parameters. When the loss value is greater than the preset loss value, the fit is poor, that is, the initial model does not yet meet the usage requirements, so the model parameters of the initial model need to be adjusted, specifically according to the loss value. The at least two three-dimensional feature parameters are then input into the adjusted initial model for training, the loss value is acquired again, and the adjustment is repeated until the loss value is less than or equal to the preset loss value. A loss value less than or equal to the preset loss value indicates a good fit, that is, the initial model has been trained successfully and meets the usage requirements, so the successfully trained initial model is taken as the rendering model.
In some technical solutions, optionally, in the case that the first portrait base model is obtained, the step of adjusting the modeling parameters of the first portrait base model to obtain the second portrait base model includes: acquiring feature information of the target portrait features; and adjusting the modeling parameters of the first portrait base model based on the feature information to obtain the second portrait base model.
In this technical solution, in the case that the first portrait base model is obtained, the step of adjusting the modeling parameters of the first portrait base model to obtain the second portrait base model includes: determining the target portrait features and then acquiring feature information of the target portrait features, where the feature information refers to the essential data of the target portrait features, such as the positions of the facial features and the height of the body. The modeling parameters of the first portrait base model are modified using the acquired feature information, thereby obtaining the second portrait base model. That is, in the present application, there is no need to acquire a large amount of data about the target portrait features; only the feature information is acquired. In other words, the second portrait base model carries only a rough portrait shape, and a standard digital portrait model can be generated without excessive fine sculpting.
In some technical solutions, optionally, the target portrait features include at least one of: facial features, body features.
In this technical solution, the target portrait features may include face features, features of the eyes, eyebrows, nose, mouth and ears, body features and the like. When the target portrait features are contained in the first image data and the rendering model is constructed from the first image data, the rendering model carries the target portrait features, which then appear on the target portrait model.
In some technical solutions, optionally, the feature information includes: size information and position information.
In this technical solution, the feature information of the target portrait features may include size information and position information, such as face width, eye positions, nose position and mouth position.
In some technical solutions, optionally, the modeling method further includes rendering the target portrait model.
In this technical solution, after a complete target portrait model is obtained, the finished target portrait model needs to be rendered in order to give it a hyper-realistic effect, that is, realistic reflections, shadows and the like are added. Rendering the target portrait model ensures a high-quality, lifelike presentation.
In order to achieve the second object of the present application, the technical solution of the present application provides a modeling system, including: an adjusting module, configured to adjust modeling parameters of a first portrait base model to obtain a second portrait base model when the first portrait base model is obtained; a construction module, configured to construct a rendering model based on at least two pieces of first image data when the at least two pieces of first image data are acquired, where the at least two pieces of first image data contain target portrait features; and a fitting module, configured to fit the second portrait base model to the rendering model to obtain a target portrait model corresponding to the target portrait features.
The application provides a modeling system including an adjusting module, a construction module and a fitting module. A first portrait base model is first established using 3DMM (3D Morphable Model) technology, and the adjusting module then adjusts several key model parameters of the first portrait base model to obtain a second portrait base model. Further, at least two pieces of first image data containing the target portrait features, i.e., planar data of the target portrait, are acquired, and the construction module carries out an automated data training process through a neural rendering network on the basis of the at least two pieces of first image data, thereby constructing a rendering model. The rendering model contains three-dimensional image data information of the target portrait. The fitting module then fits the rendering model to the second portrait base model, realizing the fitting process of the target portrait model, so that the target portrait features appear on the second portrait base model, that is, the target portrait model is completed. According to the application, a rendering model is constructed from image data of the target portrait features, i.e., planar image data, and fitting is then performed between the rendering model and the 3DMM (3D Morphable Model) base model to obtain a three-dimensional digital human model. This organically fuses 2D and 3D data, simplifies the digital human generation process, and reduces the generation cost and generation cycle of digital humans.
In order to achieve the third object of the present application, the technical solution of the present application provides an electronic device, including: a memory storing a program or instructions and a processor executing the program or instructions; wherein the processor, when executing the program or instructions, implements the steps of the modeling method according to any of the aspects of the present application.
The electronic device provided by this technical solution implements the steps of the modeling method of any technical solution of the present application, and therefore has all the beneficial effects of that modeling method, which are not repeated here.
In order to achieve the fourth object of the present application, the technical solution of the present application provides a storage medium storing a program or an instruction, which when executed, implements the steps of the modeling method of any one of the above technical solutions.
The storage medium provided by this technical solution implements the steps of the modeling method of any technical solution of the present application, and therefore has all the beneficial effects of that modeling method, which are not repeated here.
Additional aspects and advantages of the application will be set forth in part in the description which follows, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the application will become apparent and may be better understood from the following description of embodiments taken in conjunction with the accompanying drawings in which:
FIG. 1 is a first schematic flowchart of a modeling method provided by an embodiment of the present application;
FIG. 2 is a second schematic flowchart of a modeling method provided by an embodiment of the present application;
FIG. 3 is a third schematic flowchart of a modeling method provided by an embodiment of the present application;
FIG. 4 is a fourth schematic flowchart of a modeling method provided by an embodiment of the present application;
FIG. 5 is a fifth schematic flowchart of a modeling method provided by an embodiment of the present application;
FIG. 6 is a block diagram of a modeling system provided by an embodiment of the present application;
FIG. 7 is a block diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will be more clearly understood, a more particular description of the application will be rendered by reference to the appended drawings and appended detailed description. It should be noted that, without conflict, the embodiments of the present application and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, however, the present application may be practiced in other ways than those described herein, and therefore the scope of the present application is not limited to the specific embodiments disclosed below.
FIG. 1 is a first schematic flowchart of a modeling method according to an embodiment of the present application; the method may include the following steps:
S102: when a first portrait base model is obtained, adjusting modeling parameters of the first portrait base model to obtain a second portrait base model;
S104: when at least two pieces of first image data are acquired, constructing a rendering model based on the at least two pieces of first image data, where the at least two pieces of first image data contain target portrait features;
S106: fitting the second portrait base model to the rendering model to obtain a target portrait model corresponding to the target portrait features.
In the modeling method provided by the application, a first portrait base model is first established using 3DMM technology, and several key model parameters of the first portrait base model are then adjusted to obtain a second portrait base model. Further, at least two pieces of first image data containing the target portrait features, i.e., planar data of the target portrait, are acquired, and an automated data training process is carried out through a neural rendering network on the basis of the at least two pieces of first image data, thereby constructing a rendering model. The rendering model contains three-dimensional image data information of the target portrait. The rendering model is then fitted to the second portrait base model, realizing the fitting process of the target portrait model, so that the target portrait features appear on the second portrait base model and the target portrait model is completed. According to the application, a rendering model is constructed from image data of the target portrait features, i.e., planar image data, and fitting is then performed between the rendering model and the 3DMM base model to obtain a three-dimensional digital human model. This organically fuses 2D and 3D data, simplifies the digital human generation process, and reduces the generation cost and generation cycle of digital humans.
FIG. 2 is a second schematic flowchart of a modeling method according to an embodiment of the present application; in the case that at least two pieces of first image data are acquired, the step of constructing a rendering model based on the at least two pieces of first image data includes:
S202: extracting two-dimensional feature parameters of the target portrait features in each piece of first image data;
S204: converting the at least two two-dimensional feature parameters corresponding to each piece of first image data into at least two three-dimensional feature parameters;
S206: performing model training on an initial model with the at least two three-dimensional feature parameters corresponding to each piece of first image data to obtain the rendering model.
In this embodiment, in the case that at least two pieces of first image data are acquired, the step of constructing a rendering model based on the at least two pieces of first image data includes: since the first image data are planar data, the two-dimensional feature parameters of the target portrait features, i.e., the positions of the target portrait features in the planar image, are acquired in each piece of first image data. Further, since the target portrait model is a three-dimensional model, the two-dimensional feature parameters of the target portrait features in the first image data need to be converted into three-dimensional feature parameters, i.e., position information of the target portrait features in three-dimensional space, thereby determining the real-world position information of the target portrait features. Model training is then performed on an initial model through a neural rendering network using the three-dimensional feature parameters, so as to obtain a rendering model carrying the position information of the target portrait features.
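The application does not fix a concrete 2D-to-3D conversion. As a hedged illustration only, the sketch below back-projects 2D feature positions into 3D camera-space points with a standard pinhole camera model; the intrinsic matrix, feature coordinates and depths are invented for the example and are not values from the application.

```python
import numpy as np

def backproject(points_2d, depths, K):
    """Back-project 2D pixel coordinates into 3D camera-space points."""
    ones = np.ones((points_2d.shape[0], 1))
    homogeneous = np.hstack([points_2d, ones])   # (N, 3) homogeneous pixel coords
    rays = homogeneous @ np.linalg.inv(K).T      # normalized viewing rays
    return rays * depths[:, None]                # scale each ray by its depth

# Made-up intrinsics and feature points for illustration.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
features_2d = np.array([[310.0, 225.0],   # e.g. left-eye corner in the image
                        [335.0, 228.0]])  # e.g. right-eye corner
print(backproject(features_2d, np.array([0.6, 0.6]), K))
```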
FIG. 3 is a third schematic flowchart of a modeling method according to an embodiment of the present application; the step of performing model training on an initial model with the at least two three-dimensional feature parameters corresponding to each piece of first image data to obtain the rendering model includes:
S302: inputting the at least two three-dimensional feature parameters into the initial model for training;
S304: acquiring the loss value of a loss function during training;
S306: determining the trained initial model as the rendering model when the loss value is less than or equal to a preset loss value.
In this embodiment, an automated data training process is carried out through a neural rendering network using the at least two three-dimensional feature parameters, thereby obtaining the rendering model. Specifically, the at least two three-dimensional feature parameters are input into the initial model for training, and the loss value of the initial model is acquired during training; this loss value evaluates how well the initial model fits the at least two three-dimensional feature parameters. When the loss value is greater than the preset loss value, the fit is poor, that is, the initial model does not yet meet the usage requirements, so the model parameters of the initial model need to be adjusted, specifically according to the loss value. The at least two three-dimensional feature parameters are then input into the adjusted initial model for training, the loss value is acquired again, and the adjustment is repeated until the loss value is less than or equal to the preset loss value. A loss value less than or equal to the preset loss value indicates a good fit, that is, the initial model has been trained successfully and meets the usage requirements, so the successfully trained initial model is taken as the rendering model. Specifically, the preset loss value may be, for example, 0.3, 0.4 or 0.5.
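As a minimal sketch of this train-until-threshold loop, the toy example below (in PyTorch, an assumption; the application does not name a framework) trains a stand-in network until the loss value drops to the preset 0.3. The architecture, data and optimizer are placeholders for the neural rendering network, not the application's actual design.

```python
import torch
from torch import nn

# Toy stand-in for the neural rendering network: maps a three-dimensional
# feature parameter to an RGB value. Architecture and data are invented.
model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

xyz = torch.rand(256, 3)          # at least two 3D feature parameters (S302)
target_rgb = torch.rand(256, 3)   # values observed in the first image data

PRESET_LOSS = 0.3                 # one of the example thresholds (0.3/0.4/0.5)
loss = float("inf")
while loss > PRESET_LOSS:         # S304/S306: stop once loss <= preset value
    optimizer.zero_grad()
    batch_loss = loss_fn(model(xyz), target_rgb)
    batch_loss.backward()
    optimizer.step()              # model parameters adjusted based on the loss
    loss = batch_loss.item()

rendering_model = model           # trained initial model becomes the rendering model
```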
FIG. 4 is a fourth schematic flowchart of a modeling method according to an embodiment of the present application; in the case that the first portrait base model is obtained, the step of adjusting modeling parameters of the first portrait base model to obtain the second portrait base model includes:
S402: acquiring feature information of the target portrait features;
S404: adjusting the modeling parameters of the first portrait base model based on the feature information to obtain the second portrait base model.
In this embodiment, in the case that the first portrait base model is obtained, the step of adjusting the modeling parameters of the first portrait base model to obtain the second portrait base model includes: determining the target portrait features, acquiring feature information of the target portrait features, and modifying the modeling parameters of the first portrait base model using the acquired feature information, thereby obtaining the second portrait base model. That is, in the present application, there is no need to acquire a large amount of data about the target portrait features; only the feature information is acquired. In other words, the second portrait base model carries only a rough portrait shape, and a standard digital portrait model can be generated without excessive fine sculpting.
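A hedged sketch of S402 to S404: coarse feature information (sizes and positions) drives a small adjustment of a few key modeling parameters, leaving everything else untouched. The parameter names and scale factors below are hypothetical; the application does not disclose concrete parameters.

```python
# Hypothetical first portrait base model: a few key modeling parameters,
# normalized so 1.0 is the 3DMM default. Names and values are illustrative.
first_base_model = {"face_width": 1.0, "eye_distance": 1.0, "body_height": 1.0}

# Coarse feature information of the target portrait (S402), expressed as
# assumed ratios relative to the base model's defaults.
feature_info = {"face_width": 0.92, "eye_distance": 1.05, "body_height": 0.98}

def adjust_modeling_parameters(base_model, info):
    # S404: only the parameters named in the feature information change;
    # everything else, including the texture map, is left untouched.
    return {name: value * info.get(name, 1.0) for name, value in base_model.items()}

second_base_model = adjust_modeling_parameters(first_base_model, feature_info)
print(second_base_model)  # {'face_width': 0.92, 'eye_distance': 1.05, 'body_height': 0.98}
```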
In some embodiments, optionally, the target portrait features include at least one of: facial features, body features.
In this embodiment, the target portrait features may include face features, features of the eyes, eyebrows, nose, mouth and ears, body features and the like.
In some embodiments, optionally, the feature information includes: size information and position information.
In this embodiment, the feature information of the target portrait features may include size information and position information, such as face width, eye positions, nose position and mouth position.
In some embodiments, optionally, the modeling method further includes rendering the target portrait model.
In this embodiment, after a complete target portrait model is obtained, the finished target portrait model needs to be rendered in order to give it a hyper-realistic effect, that is, realistic reflections, shadows and the like are added. Rendering the target portrait model ensures a high-quality, lifelike presentation.
FIG. 5 is a fifth schematic flowchart of a modeling method according to an embodiment of the present application; the modeling method includes:
S502: creating a 3DMM base model;
S504: slightly adjusting the base model parameters, without adjusting the base model texture map;
S506: uploading target pictures;
S508: performing automated data conversion on the pictures;
S510: training the neural rendering network model on the converted data until the loss value is less than 0.3;
S512: fitting the neural rendering network model to the 3DMM base model through a fitting server;
S514: rendering the digital human model in real time.
In this embodiment, a method of creating a digital human model includes: first, a 3DMM base model, i.e., the first portrait base model, is created, and the base model parameters are then slightly adjusted according to the first data information of the target portrait, without touching the base model texture map, thereby obtaining the second portrait base model. Pictures of the target portrait are then uploaded, the target portrait in the pictures is analyzed and deconstructed, the photo data are converted automatically, the two-dimensional feature parameters in the photo data are converted into three-dimensional feature parameters, and the results are stored automatically. The stored three-dimensional feature parameters are used in an automated data training process through the neural rendering network: the neural rendering network model is trained and fine-tuned, the model results are evaluated automatically, and a well-performing neural rendering network model is obtained once the loss value is less than 0.3. The trained neural rendering network model and the parameter-adjusted 3DMM model, i.e., the second portrait base model, are combined through a fitting server, realizing the real-time fitting process of the digital human model and yielding a complete digital human model. Further, to ensure that the digital human model has a high-quality, lifelike presentation, the digital human model is rendered in real time.
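The whole S502 to S514 flow can be summarized in one hedged sketch. Every helper below is a stub standing in for a subsystem the embodiment names (3DMM creation, data conversion, neural-rendering training, fitting server, real-time renderer); none of the function names or values come from the application itself.

```python
# Hedged end-to-end sketch of S502-S514; all names and values are illustrative.

PRESET_LOSS = 0.3

def create_3dmm_base_model():                 # S502
    return {"params": {"face_width": 1.0}, "texture_map": "unchanged"}

def slightly_adjust(base_model):              # S504: small parameter tweaks only
    base_model["params"]["face_width"] *= 0.95
    return base_model

def convert_pictures(pictures):               # S508: 2D -> 3D feature parameters
    return [(u, v, 0.6) for (u, v) in pictures]   # assumed fixed depth

def train_rendering_model(points_3d):         # S510: train until loss < 0.3
    loss = 1.0
    while loss >= PRESET_LOSS:
        loss *= 0.5                           # placeholder for an optimizer step
    return {"points": points_3d, "loss": loss}

def fitting_server(base_model, rendering):    # S512: combine the two models
    return {"base": base_model, "rendering": rendering}

pictures = [(310, 225), (335, 228)]           # S506: uploaded target pictures
digital_human = fitting_server(slightly_adjust(create_3dmm_base_model()),
                               train_rendering_model(convert_pictures(pictures)))
print(digital_human)                          # S514 would render this in real time
```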
FIG. 6 is a block diagram of a modeling system provided by an embodiment of the present application; the modeling system 60 includes:
an adjusting module 602, configured to adjust modeling parameters of a first portrait base model to obtain a second portrait base model when the first portrait base model is obtained;
a construction module 604, configured to construct a rendering model based on at least two pieces of first image data when the at least two pieces of first image data are acquired, where the at least two pieces of first image data each contain target portrait features;
and a fitting module 606, configured to fit the second portrait base model to the rendering model to obtain a target portrait model corresponding to the target portrait features.
The present application provides a modeling system 60, which includes an adjusting module 602, a construction module 604 and a fitting module 606. A first portrait base model is first established using 3DMM (3D Morphable Model) technology, and the adjusting module 602 then adjusts several key model parameters of the first portrait base model to obtain a second portrait base model. Further, at least two pieces of first image data containing the target portrait features, i.e., planar data of the target portrait, are acquired, and the construction module 604 carries out an automated data training process through a neural rendering network on the basis of the at least two pieces of first image data, thereby constructing a rendering model. The rendering model contains three-dimensional image data information of the target portrait. The fitting module 606 then fits the rendering model to the second portrait base model, realizing the fitting process of the target portrait model, so that the target portrait features appear on the second portrait base model, that is, the target portrait model is completed. According to the application, a rendering model is constructed from image data of the target portrait features, i.e., planar image data, and fitting is then performed between the rendering model and the 3DMM base model to obtain a three-dimensional digital human model. This organically fuses 2D and 3D data, simplifies the digital human generation process, and reduces the generation cost and generation cycle of digital humans.
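To make the division of labor concrete, the following is a minimal structural sketch of how modules 602, 604 and 606 could cooperate; the class and method names are assumptions made for illustration, not interfaces disclosed by the application.

```python
# Hypothetical sketch of modeling system 60; every name is illustrative.

class AdjustingModule:                       # module 602
    def adjust(self, first_base_model, feature_info):
        # Tweak a few key parameters to obtain the second base model.
        return {k: v * feature_info.get(k, 1.0) for k, v in first_base_model.items()}

class ConstructionModule:                    # module 604
    def construct(self, image_data):
        # Stand-in for neural-rendering training on at least two images.
        assert len(image_data) >= 2, "at least two pieces of first image data"
        return {"trained_on": len(image_data)}

class FittingModule:                         # module 606
    def fit(self, second_base_model, rendering_model):
        # Combine base model and rendering model into the target portrait model.
        return {"base": second_base_model, "rendering": rendering_model}

adjusting, construction, fitting = AdjustingModule(), ConstructionModule(), FittingModule()
base = adjusting.adjust({"face_width": 1.0}, {"face_width": 0.92})
render = construction.construct(["img_front.png", "img_side.png"])
target_portrait_model = fitting.fit(base, render)
```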
In some embodiments, optionally, the construction module 604 is configured to extract, in each piece of first image data, the two-dimensional feature parameters of the target portrait features; convert the at least two two-dimensional feature parameters corresponding to each piece of first image data into at least two three-dimensional feature parameters; and perform model training on an initial model with the at least two three-dimensional feature parameters corresponding to each piece of first image data to obtain the rendering model.
In this embodiment, since the first image data are planar data, the two-dimensional feature parameters of the target portrait features, i.e., the positions of the target portrait features in the planar image, are acquired in each piece of first image data. Further, since the target portrait model is a three-dimensional model, the two-dimensional feature parameters need to be converted into three-dimensional feature parameters, i.e., position information of the target portrait features in three-dimensional space, thereby determining the real-world position information of the target portrait features. Model training is then performed on an initial model through a neural rendering network using the three-dimensional feature parameters, so as to obtain a rendering model carrying the position information of the target portrait features.
In some embodiments, optionally, the construction module 604 is further configured to perform model training on the initial model with the at least two three-dimensional feature parameters corresponding to each piece of first image data to obtain the rendering model, which includes: inputting the at least two three-dimensional feature parameters into the initial model for training; acquiring the loss value of a loss function during training; and determining the trained initial model as the rendering model when the loss value is less than or equal to a preset loss value.
In this embodiment, the construction module 604 carries out an automated data training process through the neural rendering network using the at least two three-dimensional feature parameters, thereby obtaining the rendering model. Specifically, the at least two three-dimensional feature parameters are input into the initial model for training, and the loss value of the initial model is acquired during training; this loss value evaluates how well the initial model fits the at least two three-dimensional feature parameters. When the loss value is greater than the preset loss value, the fit is poor, that is, the initial model does not yet meet the usage requirements, so the model parameters of the initial model need to be adjusted, specifically according to the loss value. The at least two three-dimensional feature parameters are then input into the adjusted initial model for training, the loss value is acquired again, and the adjustment is repeated until the loss value is less than or equal to the preset loss value. A loss value less than or equal to the preset loss value indicates a good fit, that is, the initial model has been trained successfully and meets the usage requirements, so the successfully trained initial model is taken as the rendering model.
In some embodiments, optionally, the target portrait features include at least one of: facial features, body features.
In this embodiment, the target portrait features may include face features, features of the eyes, eyebrows, nose, mouth and ears, and body features. When the target portrait features are contained in the first image data and the rendering model is constructed from the first image data, the rendering model carries the target portrait features, which then appear on the target portrait model.
In some embodiments, optionally, the feature information includes: size information and position information.
In this embodiment, the feature information of the target portrait features may include size information and position information, such as face width, eye positions, nose position and mouth position.
In some embodiments, optionally, the modeling system 60 further includes a rendering module for rendering the target portrait model.
In this embodiment, the modeling system 60 further includes a rendering module. After a complete target portrait model is obtained, the rendering module renders the finished target portrait model in order to give it a hyper-realistic effect, that is, realistic reflections, shadows and the like are added. Rendering the target portrait model ensures a high-quality, lifelike presentation.
FIG. 7 is a block diagram of an electronic device provided by an embodiment of the present application; the electronic device 70 includes: a memory 702 and a processor 704, the memory 702 storing a program or instructions and the processor 704 executing the program or instructions; wherein the processor 704, when executing the program or instructions, implements the steps of the modeling method according to any embodiment of the present application.
The electronic device 70 provided by this embodiment of the present application implements the steps of the modeling method according to any embodiment of the present application, and therefore has all the advantages of that modeling method, which are not repeated here.
The present application provides a storage medium storing a program or instructions that, when executed, implement the steps of the modeling method of any of the above embodiments.
The storage medium provided by this technical solution implements the steps of the modeling method according to any embodiment of the present application, and therefore has all the beneficial effects of that modeling method, which are not repeated here.
In the present application, the terms "first," "second," "third," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance; the term "plurality" means two or more, unless expressly defined otherwise. The terms "mounted," "connected," "secured," and the like are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; "coupled" may be directly coupled or indirectly coupled through intermediaries. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art according to the specific circumstances.
In the description of the present application, it should be understood that the directions or positional relationships indicated by the terms "upper", "lower", "left", "right", "front", "rear", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present application and simplifying the description, and do not indicate or imply that the devices or units referred to must have a specific direction, be constructed and operated in a specific direction, and thus should not be construed as limiting the present application.
In the description of the present specification, the terms "one embodiment," "some embodiments," "particular embodiments," and the like, mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above is only a preferred embodiment of the present application, and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A modeling method, comprising:
when a first portrait base model is obtained, adjusting modeling parameters of the first portrait base model to obtain a second portrait base model;
when at least two pieces of first image data are acquired, constructing a rendering model based on the at least two pieces of first image data, the at least two pieces of first image data containing target portrait features;
and fitting the second portrait base model to the rendering model to obtain a target portrait model corresponding to the target portrait features.
2. The modeling method according to claim 1, wherein, in the case that at least two pieces of first image data are acquired, the step of constructing a rendering model based on the at least two pieces of first image data comprises:
extracting two-dimensional feature parameters of the target portrait features in each piece of the first image data;
converting the at least two two-dimensional feature parameters corresponding to each piece of first image data into at least two three-dimensional feature parameters;
and performing model training on an initial model with the at least two three-dimensional feature parameters corresponding to each piece of first image data to obtain the rendering model.
3. The modeling method according to claim 2, wherein said performing model training on the initial model with the at least two three-dimensional feature parameters corresponding to each piece of first image data to obtain the rendering model comprises:
inputting the at least two three-dimensional feature parameters into the initial model for training;
acquiring the loss value of a loss function during training;
and determining the trained initial model as the rendering model when the loss value is less than or equal to a preset loss value.
4. The modeling method according to any one of claims 1 to 3, wherein the step of adjusting the modeling parameters of the first portrait base model to obtain the second portrait base model in the case that the first portrait base model is obtained comprises:
acquiring feature information of the target portrait features;
and adjusting the modeling parameters of the first portrait base model based on the feature information to obtain the second portrait base model.
5. The modeling method according to claim 4, wherein the target portrait features include at least one of:
facial features, body features.
6. The modeling method according to claim 4, wherein the feature information includes:
size information, position information.
7. The modeling method according to any one of claims 1 to 3, further comprising:
rendering the target portrait model.
8. A modeling system, comprising:
an adjusting module, configured to adjust modeling parameters of a first portrait base model to obtain a second portrait base model when the first portrait base model is obtained;
a construction module, configured to construct a rendering model based on at least two pieces of first image data when the at least two pieces of first image data are acquired, the at least two pieces of first image data containing target portrait features;
and a fitting module, configured to fit the second portrait base model to the rendering model to obtain a target portrait model corresponding to the target portrait features.
9. An electronic device, comprising:
a memory storing a program or instructions;
a processor executing the program or instructions;
wherein the processor, when executing the program or instructions, implements the steps of the modeling method of any of claims 1 to 7.
10. A storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the modeling method of any of claims 1 to 7.
CN202311173360.7A 2023-09-12 2023-09-12 Modeling method and system, electronic equipment and storage medium Pending CN117195565A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311173360.7A CN117195565A (en) 2023-09-12 2023-09-12 Modeling method and system, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311173360.7A CN117195565A (en) 2023-09-12 2023-09-12 Modeling method and system, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117195565A (en) 2023-12-08

Family

ID=88986566

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311173360.7A Pending CN117195565A (en) 2023-09-12 2023-09-12 Modeling method and system, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117195565A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination