WO2022143354A1 - Face generation method and apparatus for virtual object, and device and readable storage medium - Google Patents


Info

Publication number
WO2022143354A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
face
facial
facial feature
target
Application number
PCT/CN2021/140590
Other languages
French (fr)
Chinese (zh)
Inventor
王宁 (Wang Ning)
刘更代 (Liu Gengdai)
Original Assignee
百果园技术(新加坡)有限公司 (Bigo Technology Pte. Ltd.)
Application filed by 百果园技术(新加坡)有限公司 (Bigo Technology Pte. Ltd., Singapore)
Publication of WO2022143354A1 publication Critical patent/WO2022143354A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • the present disclosure belongs to the field of Internet technologies, and in particular, relates to a face generation method, apparatus, device and readable storage medium for a virtual object.
  • a virtual object similar to a real object can be generated based on the facial features of the real object, so that the virtual object has the facial features of the real object; the real object is, for example, a real person or animal, and the virtual object is, for example, a virtual three-dimensional cartoon character or animal.
  • in the related art, the virtual facial feature model of the virtual object is first established, and the bionic face model of the real object is established according to facial image data of the real object's face; both the virtual facial feature model and the bionic face model are three-dimensional (3D) models.
  • the facial features in the facial bionic model are transferred into the virtual facial feature model to obtain a target facial feature model including the facial features of the real object, so that the virtual object can have the facial features of the real object.
  • the inventor found that the prior art has at least the following problem: in the face generation process, if the facial structure of the virtual object differs greatly from that of the real object, abnormal deformation of the virtual facial feature model occurs during facial feature migration, resulting in an abnormal facial structure of the virtual object.
  • the present disclosure provides a face generation method, apparatus, device and readable storage medium for a virtual object, which to a certain extent solve the problem that abnormal deformation of the virtual facial feature model during face generation leads to an abnormal facial structure of the virtual object.
  • an embodiment of the present disclosure provides a method for generating a face of a virtual object, the method comprising:
  • acquiring face image data of a real object, and obtaining a bionic face model of the real object by three-dimensional reconstruction based on the face image data;
  • optimizing, according to the face bionic model, the initial model parameters of a pre-established reference facial feature model to obtain target model parameters; the target model parameters represent the facial features of the real object included in the face bionic model;
  • the model parameters of the virtual facial feature model are adjusted to migrate the facial features represented by the target model parameters into the virtual facial feature model to obtain a target facial feature model;
  • the virtual facial feature model is a facial feature model that is pre-established according to the reference facial feature model and belongs to a virtual object.
  • an embodiment of the present disclosure provides an apparatus for generating a face of a virtual object, the apparatus comprising:
  • an acquisition module configured to acquire face image data of a real object;
  • a reconstruction module configured to obtain a bionic face model of the real object by three-dimensional reconstruction based on the face image data;
  • an optimization module configured to optimize the initial model parameters of the pre-established reference facial feature model according to the face bionic model to obtain target model parameters; the target model parameters represent the facial features of real objects;
  • an adjustment module configured to adjust the model parameters of the virtual facial feature model according to the target model parameters, so as to migrate the facial features represented by the target model parameters into the virtual facial feature model to obtain a target facial feature model;
  • the virtual facial feature model is a facial feature model that is pre-established according to the reference facial feature model and belongs to a virtual object.
  • embodiments of the present disclosure provide an electronic device, including a processor, a memory, and a program or instruction stored on the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the method according to the first aspect.
  • an embodiment of the present disclosure provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or instruction is executed by a processor, the steps of the method according to the first aspect are implemented.
  • an embodiment of the present disclosure provides a chip, including a processor and a communication interface coupled to the processor, where the processor is configured to run a program or an instruction to implement the method according to the first aspect.
  • in the embodiments of the present disclosure, the electronic device first obtains the face image data of the real object and obtains the bionic face model of the real object through three-dimensional reconstruction based on the face image data; then, according to the bionic face model, the initial model parameters of the pre-established reference facial feature model are optimized to obtain the target model parameters; finally, the model parameters of the virtual facial feature model are adjusted according to the target model parameters, migrating the facial features represented by the target model parameters into the virtual facial feature model to obtain the target facial feature model of the virtual object.
  • in this way, the facial features are first migrated to the reference facial feature model, and then the facial features in the reference facial feature model are migrated to the virtual facial feature model of the virtual object. The reference facial feature model thus serves as a transition for the facial features and avoids migrating them directly from the facial bionic model to the virtual facial feature model, which prevents abnormal deformation of the virtual facial feature model and thus an abnormal facial structure of the virtual object.
  • FIG. 1 is a flowchart of steps of a method for generating a face of a virtual object provided by an embodiment of the present disclosure
  • FIG. 2 is an optimized schematic diagram of a reference facial feature model provided by an embodiment of the present disclosure
  • FIG. 3 is a flowchart of steps of another method for generating a face of a virtual object provided by an embodiment of the present disclosure
  • FIG. 4 is a block diagram of an apparatus for generating a face of a virtual object provided by an embodiment of the present disclosure
  • FIG. 5 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present disclosure.
  • FIG. 1 is a flowchart of steps of a method for generating a face of a virtual object provided by an embodiment of the present disclosure. As shown in FIG. 1 , the method may include:
  • Step 101 Acquire face image data of a real object.
  • Step 102 Based on the face image data, obtain a bionic face model of the real object through three-dimensional reconstruction.
  • the method for generating a face of a virtual object may be executed by electronic devices such as a personal computer, a mobile phone, and a server.
  • the real object may be a real person or animal, or may be a real object with a face such as a real doll or a statue.
  • the face image data is used to build a bionic face model of a real object, and the type and acquisition method of the face image data can be specifically set according to the three-dimensional reconstruction method.
  • an electronic device can obtain a bionic face model through three-dimensional reconstruction of a binocular matching algorithm.
  • the electronic device can first use a binocular camera to photograph the face area of the real object to obtain two two-dimensional face images of the real object, that is, the face image data; it then matches the two face images through the binocular matching algorithm to obtain the depth information of the face area, and obtains the face bionic model of the real object by three-dimensional reconstruction.
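As a hedged illustration of the depth-recovery step (not code from the patent), the standard stereo relation depth = focal length × baseline / disparity can be sketched as follows; the focal length, baseline and disparity values are made-up examples:

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Pinhole stereo triangulation: Z = f * B / d."""
    return focal_px * baseline_m / np.asarray(disparity_px, dtype=float)

# Illustrative values: 800 px focal length, 6 cm camera baseline.
# A point matched 64 px apart between the two views lies 0.75 m away.
depths = disparity_to_depth([64.0, 32.0], focal_px=800.0, baseline_m=0.06)
```

The per-pixel depths obtained this way give the face area a 3D point cloud from which the bionic face model can be reconstructed.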
  • alternatively, the electronic device may use a depth camera to photograph the face area of the real object to obtain face image data including a color image and a depth image. The color image includes the color values of three color channels, red (R), green (G) and blue (B); the depth image includes the depth values of the depth (D) channel, where the depth value of each pixel is the distance between the corresponding point in the face area of the real object and the camera. For example, the depth value of the pixel corresponding to the nose tip in the depth image is the distance between the nose tip of the real object and the camera.
  • the electronic device can then obtain the bionic face model of the real object by three-dimensional reconstruction based on the facial features included in the color image and the depth information included in the depth image.
  • the electronic device can obtain a bionic face model by three-dimensional reconstruction according to a two-dimensional face image.
  • the electronic device can directly photograph the face area of the real object to obtain a two-dimensional face image, that is, the face image data, and then extract key points in the face image; the coordinates of these key points determine the facial features of the real object.
  • the key points include multiple key points in the eye region of the real object, and the multiple key points in the eye region can outline the eye contour of the real object.
  • the electronic device can select an average face model from a 3D Morphable Models (3DMM) library and fit the key points to the average face model to obtain the bionic face model.
  • the bionic face model may be a triangular mesh model composed of a certain number of vertices, triangular patches and associated structures. The bionic face model includes the facial features of the real object, such as the face contour, eyebrow shape, eye shape, nose shape and mouth shape, as well as the layout positions of the eyebrows, eyes, nose and mouth in the face area.
  • the reconstruction method of the bionic face model can be set according to requirements, which is not limited in this embodiment.
  • Step 103 According to the face bionic model, optimize the initial model parameters of the pre-established reference facial feature model to obtain target model parameters.
  • the target model parameters represent the facial features of the real objects included in the facial bionic model.
  • the reference facial feature model is a virtual three-dimensional mesh model designed by the user, and the reference facial feature model is used to establish a virtual facial feature model of the virtual object.
  • Different model parameters correspond to different facial features
  • the initial model parameters are the model parameters in the initial state of the reference facial feature model.
  • a virtual facial feature model of the virtual object can be established according to the reference facial feature model.
  • by adjusting the model parameters of the reference facial feature model, that is, the model parameters of the virtual facial feature model, virtual facial feature models with different facial features can be obtained.
  • after the facial bionic model of the real object is obtained, the reference facial feature model can be fitted to the facial bionic model, so as to optimize the initial model parameters of the reference facial feature model and obtain the target model parameters. That is, the initial parameters of the reference facial feature model are adjusted according to the facial bionic model, so as to migrate the facial features in the facial bionic model into the reference facial feature model and obtain target model parameters that characterize those facial features, so that the reference facial feature model has the facial features of the real object.
  • the reference facial feature model is, for example, an identity blendshape (Identity blendshape) model
  • the Identity blendshape model includes a set of feature bases (blendshape) and a basic grid
  • the basic grid is an average virtual face model
  • each feature base is a basic facial feature model.
  • Each feature base has a corresponding weight
  • the weights of the feature bases of the Identity blendshape model in its initial state are the initial model parameters.
  • the feature bases and the basic mesh together form a facial feature model, and by adjusting the weight of each feature base in the Identity blendshape model, virtual facial feature models with different facial features can be obtained.
  • each feature base in the Identity blendshape model is a three-dimensional mesh model.
  • the three-dimensional mesh model consists of a preset number of vertices, and each vertex has three-dimensional coordinates, that is, coordinates on the x-axis, y-axis and z-axis.
  • each feature base includes n vertices
  • the Identity blendshape model may include m feature bases B_{3×n} and one basic mesh H_{3×n}, where m and n are positive integers
  • the weight of each feature base is w
  • the Identity blendshape model can be represented by formula A:
  • each feature base B_{3×n} is a basic matrix;
  • the subscript 3×n indicates that the basic matrix includes the three-dimensional vector of each of the n vertices in the feature base, that is, the x-axis, y-axis and z-axis coordinates;
  • the m basic matrices form a B_{3×n,m} matrix, and the weights of the m basic matrices form a w_{m,1} matrix.
  • the basic mesh is a neutral face matrix, and its subscript 3×n represents the three-dimensional vector of each vertex in the basic mesh.
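Formula A itself is a figure in the original publication and is not reproduced in this text; from the definitions above (basic mesh H, feature-base matrix B, weight vector w), it presumably has the standard blendshape form:

```latex
% Hedged reconstruction of formula A: the model is the basic mesh
% plus a weighted sum of feature-base offsets.
F_{3 \times n} \;=\; H_{3 \times n} \;+\; B_{3 \times n,\, m}\, w_{m,\, 1}
```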
  • in the optimization process, an optimization function f(w) with respect to the weight w can be set.
  • the optimization objective can be set as formula B:
  • the subscript x represents the vector of the x dimension;
  • P_x is the vector of the face bionic model in the x dimension;
  • formula B represents the error between the reference facial feature model and the face bionic model in the x dimension.
  • S is the coefficient matrix of the x dimension; in the calculation process, the weight of each feature base can be determined according to the coefficient matrix.
  • the elements of S can be defined as follows: i is the i-th row and j is the j-th column of the coefficient matrix; elements off the diagonal are 0, that is, when i is not equal to j, the element is 0.
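Formula B and the element definition of S are likewise figures in the original. A plausible reconstruction consistent with the surrounding text, where H_x and B_x denote the x-dimension rows of the basic mesh and feature-base matrix (this notation is an assumption), would be:

```latex
% Hedged reconstruction of formula B: squared error, in the x dimension,
% between the reference facial feature model (H + Bw) and the bionic model P,
% with S a diagonal coefficient matrix.
f(w) \;=\; \bigl\| S\,(H_x + B_x w) - P_x \bigr\|^2 ,
\qquad
S_{ij} \;=\;
\begin{cases}
s_i, & i = j \\
0,   & i \neq j
\end{cases}
```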
  • optimizing the optimization function f(w) makes the value of f(w) as small as possible, that is, makes the error between the reference facial feature model and the face bionic model as small as possible, so that the reference facial feature model approaches the face bionic model and has the facial features included in the facial bionic model.
  • in the optimization process, the optimization function f(w) can be converted into a quadratic programming (QP) problem, and the QP problem can be solved to obtain the weight w of each feature base, that is, the target model parameters.
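As an illustration (not the patent's own code), this box-constrained weight-fitting step can be sketched with SciPy's bounded least-squares solver, which handles this class of QP; here S is taken as the identity, and all array sizes and data are toy assumptions:

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
n, m = 50, 8                      # toy sizes: n vertices, m feature bases
H = rng.normal(size=3 * n)        # basic mesh, flattened x/y/z coordinates
B = rng.normal(size=(3 * n, m))   # feature-base offset matrix
w_true = rng.uniform(0.1, 0.9, size=m)
P = H + B @ w_true                # stands in for the face bionic model

# min_w || (H + B w) - P ||^2  subject to  0 <= w <= 1
# (a box-constrained QP, solved as bounded linear least squares)
w = lsq_linear(B, P - H, bounds=(0.0, 1.0)).x
```

Because the toy target is exactly realizable within the bounds, the solver recovers the generating weights; with real scan data the residual would instead measure how well the reference model can imitate the bionic model.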
  • the reference facial feature model can also be optimized in other ways, so that the reference facial feature model is close to the face bionic model, and the target model parameters are obtained.
  • the specific optimization process can be set according to requirements, which is not limited in this embodiment. It should be noted that, in order to obtain accurate target model parameters, the facial bionic model and the reference facial feature model may have the same mesh structure.
  • FIG. 2 is an optimized schematic diagram of a reference facial feature model provided by an embodiment of the present disclosure.
  • according to the face bionic model 202, the initial model parameters of the reference facial feature model are optimized to obtain the target model parameters; after the reference facial feature model is adjusted according to the target model parameters, the optimized reference facial feature model 203 shown in FIG. 2 is obtained. As shown in FIG. 2, the optimized reference facial feature model includes the facial features of the person, and those facial features are determined by the target model parameters. In this embodiment, the target model parameters in the optimized model are then used to adjust the model parameters of the virtual facial feature model.
  • Step 104 Adjust the model parameters of the virtual facial feature model according to the target model parameters, so as to migrate the facial features represented by the target model parameters into the virtual facial feature model to obtain the target facial feature model.
  • the virtual facial feature model is a facial feature model that is pre-established according to the reference facial feature model and belongs to a virtual object.
  • the reference facial feature model is the Identity blendshape model
  • the facial feature model of the virtual object can be established according to the Identity blendshape model; the specific process of establishing the virtual facial feature model according to the Identity blendshape model can be set according to requirements, which is not limited in this embodiment.
  • the model parameters of the virtual facial feature model can be adjusted according to the target model parameters to obtain the target facial feature model.
  • the virtual facial feature model is established based on the reference facial feature model.
  • the virtual facial feature model and the reference facial feature model have the same mesh structure and the same model parameters, so the virtual facial feature model can also be represented by formula A.
  • the target model parameters, that is, the weights w of the feature bases in the optimized reference facial feature model, can be substituted into formula A to obtain the target facial feature model. Since the target model parameters are optimized according to the face bionic model, they can represent the facial features of the real object.
  • the facial features of the real object can thus be transferred to the virtual facial feature model of the virtual object, so that the virtual object has the facial features of the real object.
  • the specific process of adjusting the model parameters of the virtual facial feature model can be set as required, which is not limited in this embodiment.
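Assuming, as the text states, that the virtual model shares the reference model's parameterization, the migration step reduces to evaluating formula A with the virtual model's own basis and the optimized weights; all arrays below are illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 8
H_virtual = rng.normal(size=3 * n)        # virtual object's basic mesh
B_virtual = rng.normal(size=(3 * n, m))   # virtual feature bases, same layout as reference
w_target = rng.uniform(0.0, 1.0, size=m)  # target model parameters from the optimization

# Formula A evaluated with the virtual model's mesh and bases:
# the optimized weights carry the real object's features onto the virtual face.
target_model = H_virtual + B_virtual @ w_target
```

Setting all weights to zero would simply return the virtual object's neutral face, which is why the weights alone suffice to carry the facial features across models.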
  • to sum up, in the embodiments of the present disclosure, the electronic device first obtains the face image data of the real object and obtains the bionic face model of the real object through three-dimensional reconstruction based on the face image data; then, according to the face bionic model, the initial model parameters of the reference facial feature model are optimized to obtain the target model parameters; finally, the model parameters of the virtual facial feature model are adjusted according to the target model parameters, migrating the facial features represented by the target model parameters into the virtual facial feature model to obtain the target facial feature model of the virtual object.
  • in this way, the facial features are first migrated to the reference facial feature model, and then from the reference facial feature model to the virtual facial feature model of the virtual object. The reference facial feature model serves as a transition for the facial features and avoids migrating them directly from the facial bionic model to the virtual facial feature model, which prevents abnormal deformation of the virtual facial feature model and thus an abnormal facial structure of the virtual object.
  • FIG. 3 is a flowchart of steps of another method for generating a face of a virtual object provided by an embodiment of the present disclosure. As shown in FIG. 3, the method may include:
  • Step 301 Obtain face image data of a real object.
  • Step 302 Based on the face image data and the average face model, obtain a bionic face model by three-dimensional reconstruction.
  • Step 303 According to the average face model, calibrate the reference facial feature model to obtain a calibrated reference facial feature model.
  • the average face model is a reference face model obtained by processing the face model data set, and the structure of the calibrated reference face feature model matches the structure of the average face model.
  • the reference facial feature model may be calibrated first, so that the reference facial feature model matches the structure of the average facial model.
  • a face model data set of a real object may be established.
  • the real object is a person
  • a face model data set of a human face may be established.
  • the face model dataset may be a 3DMM library
  • the average face model may be an average face model in the 3DMM library.
  • before the reference facial feature model is optimized, the Identity blendshape model can be calibrated against the average face model.
  • during calibration, the size, rotation angle and contour of the Identity blendshape model can be adjusted, so that its size, rotation angle and contour are consistent with those of the average face model.
  • one or more parameters in the reference facial feature model may be calibrated.
  • a corresponding Deformation Transfer algorithm can also be selected to transfer the shape shown by the average face model onto the reference facial feature model, so that the structure of the reference facial feature model matches that of the average face model.
  • the specific method for calibrating the reference facial feature model can be set according to requirements, which is not limited in this embodiment.
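The size/rotation adjustment described above can be sketched as a similarity (Kabsch/Umeyama-style) alignment; this is an illustrative stand-in, not the patent's calibration procedure, and the vertex data below are random:

```python
import numpy as np

def rigid_align(src, dst):
    """Align src vertices onto dst with a similarity transform (scale + rotation + translation)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    scale = np.linalg.norm(dst_c) / np.linalg.norm(src_c)
    # Kabsch: optimal rotation from the SVD of the cross-covariance matrix.
    U, _, Vt = np.linalg.svd(dst_c.T @ src_c)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return scale * (src_c @ R.T) + dst.mean(axis=0)

rng = np.random.default_rng(2)
avg_face = rng.normal(size=(40, 3))             # stands in for average face vertices
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
blendshape = 2.0 * (avg_face @ Rz.T) + 0.5      # a mis-scaled, rotated, shifted copy
calibrated = rigid_align(blendshape, avg_face)  # recovers the average-face pose
```

After such an alignment, only non-rigid contour differences remain, which is where a Deformation Transfer step would take over.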
  • in this embodiment, the reference facial feature model is calibrated first, so that the reference facial feature model and the average face model have the same or similar structure. Because the structure of the calibrated reference facial feature model matches the average face model, more accurate target model parameters can be obtained in the optimization process, so that the facial features represented by the target model parameters are closer to those of the real object.
  • Step 304 Optimize the initial model parameters of the calibrated reference facial feature model according to the face bionic model to obtain target model parameters.
  • the method may also include:
  • the optimized reference facial feature model is controlled to match the structure of the facial bionic model.
  • the method may also include:
  • the parameters of the target model are controlled to be less than or equal to the preset threshold, so that the structure of the optimized reference facial feature model conforms to the set structural conditions.
  • the initial model parameters of the calibrated reference facial feature model may be optimized according to the facial bionic model to obtain target model parameters.
  • during the optimization, constraint items can be added, so that the optimized reference facial feature model matches the structure of the face bionic model and conforms to the set structural conditions.
  • Laplacian constraints can be added to the optimization function f(w), and the deformation of the reference facial feature model in the optimization process can be controlled by the structural constraints.
  • the Laplacian constraint on the x dimension can be set as:
  • L is the n×n Laplacian matrix;
  • δ_x is the n×1 vector of the x-dimension elements of the Laplacian coordinates of the average face model used in the face bionic model;
  • α is the coefficient of the regular term in the Laplacian constraint, and α can be greater than or equal to 0 and less than or equal to 1.
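The Laplacian constraint itself appears only as a figure in the original. A reconstruction consistent with the symbols defined above (with H_x and B_x denoting the x-dimension rows, an assumed notation) would be:

```latex
% Hedged reconstruction: penalize deviation of the model's Laplacian
% coordinates (x dimension) from those of the average face model.
\alpha \,\bigl\| L\,(H_x + B_x w) - \delta_x \bigr\|^2 ,
\qquad 0 \le \alpha \le 1
```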
  • a weight constraint item can be added to the optimization function f(w), and the structure of the optimized reference facial feature model can meet the set structural conditions through the weight constraint item.
  • the weight constraint on the x dimension can be set as:
  • β is the coefficient of the uniform weight regular term, and β can be greater than or equal to 0 and less than or equal to 1.
  • the optimization function f(w) can be set as:
  • the weight w can be restricted to be greater than or equal to 0 and less than or equal to 1 by the constraint s.t. w ∈ [0, 1]; the Laplacian matrix is applied as a 3n×3n block matrix covering the three coordinate dimensions.
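The full optimization function referenced here is likewise a figure. Combining the data term of formula B with the two regular terms described in the text, it is presumably of the form:

```latex
% Hedged reconstruction of the full objective: data term + Laplacian
% structural term + uniform weight term, with box-constrained weights.
f(w) \;=\; \bigl\| S\,(H_x + B_x w) - P_x \bigr\|^2
      \;+\; \alpha \,\bigl\| L\,(H_x + B_x w) - \delta_x \bigr\|^2
      \;+\; \beta \,\| w \|^2 ,
\qquad \text{s.t. } w \in [0, 1]
```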
  • the face bionic model can be obtained by 3D reconstruction based on the average face model and face image data.
  • the Laplacian constraint represents the structural error between the reference facial feature model and the average face model; in the optimization process, making the Laplacian constraint term as small as possible keeps the structure of the optimized reference facial feature model close to that of the bionic face model, so that the two structures match.
  • making the weight constraint term as small as possible keeps the weights in the optimized reference facial feature model (that is, the target model parameters) less than or equal to the preset threshold; avoiding large weights avoids large deformation of the optimized reference facial feature model, and thereby avoids strange shapes of the optimized model.
  • the optimization function f(w) can be converted into a QP equation, and the weight w of each feature base can be obtained by solving it.
  • in this embodiment, the Laplacian constraint makes the structure of the optimized reference facial feature model match the structure of the face bionic model, so the optimized reference facial feature model can carry more of the facial features in the facial bionic model, and the target model parameters can more accurately represent the facial features of the real object.
  • controlling the target model parameters w to be less than or equal to the preset threshold keeps the structure of the optimized reference facial feature model from changing greatly, so that the structure does not deform strangely and the target model parameters can more accurately represent the facial features of the real object.
  • only the optimized reference facial feature model can be controlled to match the structure of the facial bionic model, or only the target model parameters can be controlled to be less than or equal to a preset threshold.
  • the following steps may be included:
  • the target facial features in the optimized reference facial feature model are controlled to meet the set feature conditions.
  • each feature base is a basic facial feature model, and each feature base has a corresponding weight.
  • each facial feature is represented by a certain number of feature bases, for example, the eyebrow feature in the reference facial feature model can be represented by multiple feature bases.
  • corresponding coefficients can be set in the weight constraint item for the multiple feature bases representing the target facial features (for example, the eyebrow features), so that the eyebrow features in the optimized reference facial feature model conform to the set feature conditions.
  • the optimization function f(w) can be expressed as:
  • C is the coefficient matrix corresponding to the β component in the weight constraint; its elements can be defined as follows: i is the i-th row and j is the j-th column of the coefficient matrix, and elements off the diagonal are 0, that is, when i is not equal to j, the element is 0.
  • the weights in the reference face feature model can be adjusted more finely through the coefficient matrix.
  • corresponding coefficients may be set in the coefficient matrix for the weights of the multiple feature bases representing the target facial features, so as to control the deformation of those features and make the target facial features in the optimized reference facial feature model meet the set feature conditions.
  • the target facial features in the optimized reference facial feature model can meet the set feature conditions.
  • for example, a larger coefficient can be set in the coefficient matrix for the weights of the multiple feature bases representing the eyebrow feature; in the optimization process, the larger coefficient restricts those weights, so that the eyebrow features in the optimized reference facial feature model meet the set feature conditions.
  • conversely, smaller coefficients can be set in the coefficient matrix for the weights of the multiple feature bases that make up the eye feature; in the optimization process, the smaller coefficients allow the eye features to deform more freely to fit the face bionic model.
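The per-feature coefficients can be illustrated as a diagonal penalty folded into a stacked least-squares problem; the grouping of bases into "eyebrow" and "eye" sets, and all data, are made-up assumptions for the sketch:

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(3)
n, m = 50, 6
H = rng.normal(size=3 * n)
B = rng.normal(size=(3 * n, m))
P = H + B @ rng.uniform(0.0, 1.0, size=m)   # target face

# Hypothetical grouping: bases 0-2 drive the eyebrows, bases 3-5 the eyes.
# A large coefficient suppresses a group's weights; a small one frees them.
c = np.array([1e6, 1e6, 1e6, 1e-3, 1e-3, 1e-3])

# min_w ||B w - (P - H)||^2 + sum_i c_i * w_i^2,  0 <= w <= 1,
# written as one stacked box-constrained least-squares problem:
A = np.vstack([B, np.diag(np.sqrt(c))])
b = np.concatenate([P - H, np.zeros(m)])
w = lsq_linear(A, b, bounds=(0.0, 1.0)).x
```

With the heavy penalty, the "eyebrow" weights stay near zero (those features barely deform), while the lightly penalized "eye" weights are free to fit the target, which is the behavior the paragraph above describes.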
  • Step 305 Adjust the model parameters of the virtual facial feature model according to the target model parameters, so as to migrate the facial features represented by the target model parameters into the virtual facial feature model to obtain the target facial feature model.
  • the electronic device first obtains the face image data of the real object, obtains the bionic face model of the real object through three-dimensional reconstruction based on the face image data, and then, according to the Refer to the initial model parameters of the facial feature model for optimization to obtain the target model parameters, and adjust the model parameters of the virtual facial feature model according to the target model parameters, and migrate the facial features represented by the target model parameters to the virtual facial feature model. , to obtain the target facial feature model of the virtual object.
  • in the migration of facial features, the facial features are first migrated to the reference facial feature model, and the facial features in the reference facial feature model are then migrated to the virtual facial feature model of the virtual object.
  • the reference facial feature model thus provides a transition for the facial features and avoids migrating them directly from the face bionic model to the virtual facial feature model.
  • this solves the problem of abnormal deformation of the virtual facial feature model when the facial structure of the virtual object differs greatly from that of the real object, thereby avoiding an abnormal facial structure for the virtual object.
  • step 302 can be implemented in the following manner:
  • based on the face image data and the average face model, the face bionic model is obtained by 3D reconstruction.
  • the average face model is a reference face model obtained by processing according to the face model dataset.
  • the face model dataset may be a 3DMM library
  • the average face model may be an average face model in the 3DMM library.
  • the method described in step 102 can be used to establish a bionic face model of the real object based on the collected two-dimensional face image and the average face model.
  • the face bionic model is obtained by 3D reconstruction, and the reference facial feature model is calibrated according to the average face model, so that the structure of the calibrated reference facial feature model matches that of the average face model.
  • this structural matching gives the face bionic model and the calibrated reference facial feature model the same or a similar mesh structure; therefore, when the initial model parameters of the calibrated reference facial feature model are optimized according to the face bionic model, the obtained target model parameters can more accurately represent the facial features of the real object.
  • the method may also include:
  • according to the target facial feature model, the virtual facial expression model of the virtual object is optimized to obtain the target facial expression model;
  • the target facial expression model includes the facial features in the target facial feature model;
  • based on the target facial expression model, the expression of the real object is tracked.
  • the facial features in the target facial feature model can be migrated to the virtual facial expression model of the virtual object, so that the expression of the real object can be tracked according to the virtual facial expression model.
  • the virtual facial expression model is used to construct the facial expression of the real object, and the virtual facial expression model is, for example, an expression blend shape (Expression Blendshape) model.
  • the Expression Blendshape model includes a set of feature bases (blendshapes) and a basic mesh.
  • the basic mesh is an average face model
  • each feature base is a basic facial expression model.
  • Each feature base has a corresponding weight, and by adjusting the weight of each feature base of the Expression Blendshape model, an Expression Blendshape model with different expressions can be obtained.
  • after the target facial feature model is obtained, the Expression Blendshape model can be fitted to the target facial feature model, so as to optimize the Expression Blendshape model and transfer the facial features of the target facial feature model to the Expression Blendshape model.
  • the Expression Blendshape model then has the facial features of the real object.
  • after the Expression Blendshape model with the facial features is obtained as the target facial expression model, the expression of the real object can be tracked based on the target facial expression model.
  • the optimization process of the Expression Blendshape model can be based on the optimization process of the reference facial feature model, and the specific method for tracking the expression of the real object based on the target facial expression model can be set as required, which is not limited in this embodiment.
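One possible shape of such an expression-tracking step (an illustrative sketch, not the patent's prescribed method — the least-squares solver and the [0, 1] weight clamp are common blendshape conventions assumed here): per frame, solve for the expression-base weights that best reproduce the observed face.

```python
import numpy as np

def solve_expression_weights(E, h, target):
    """Least-squares fit of expression weights: minimize ||E @ w + h - target||.

    E: (3n, m) matrix of expression feature bases, stored as per-vertex offsets
       from the base mesh; h: (3n,) flattened base mesh; target: (3n,) flattened
       observed face vertices for the current frame.
    Weights are clamped to [0, 1], a common convention for blendshape weights.
    """
    w, *_ = np.linalg.lstsq(E, target - h, rcond=None)
    return np.clip(w, 0.0, 1.0)

# Toy example: one vertex (3 coords), two expression bases.
h = np.zeros(3)
E = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
target = np.array([0.5, 0.25, 0.0])  # observed per-frame deformation
w = solve_expression_weights(E, h, target)
```

Running this per video frame yields a weight vector per frame, which is one way the tracked expression of the real object can drive the virtual object.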
  • the method for generating a face of a virtual object may be implemented by an electronic device alone, or may be implemented by a plurality of electronic devices in cooperation.
  • the client can obtain the user's face image data and upload the face image data to the server; the server can obtain the face bionic model of the real object by 3D reconstruction based on the face image data, optimize the initial model parameters of the reference facial feature model according to the face bionic model to obtain the target model parameters, and send the target model parameters to the client.
  • the client can adjust the model parameters of the virtual facial feature model according to the target model parameters to obtain the target facial feature model.
  • the specific implementation of the method for generating a face of a virtual object can be set according to requirements, which is not limited in this embodiment.
  • FIG. 4 is a block diagram of an apparatus for generating a face of a virtual object provided by an embodiment of the present disclosure. As shown in FIG. 4 , the apparatus 400 may include:
  • the acquiring module 401 is used for acquiring face image data of a real object.
  • the reconstruction module 402 is used for obtaining a bionic face model of a real object by three-dimensional reconstruction based on the face image data.
  • the optimization module 403 is configured to optimize the initial model parameters of the pre-established reference facial feature model according to the facial bionic model to obtain target model parameters; the target model parameters represent the facial features of the real objects included in the facial bionic model.
  • the adjustment module 404 is used to adjust the model parameters of the virtual facial feature model according to the target model parameters, so as to migrate the facial features represented by the target model parameters into the virtual facial feature model to obtain the target facial feature model;
  • the virtual facial feature model is a facial feature model that is pre-established according to the reference facial feature model and belongs to the virtual object.
  • the optimization module 403 is specifically configured to calibrate the reference facial feature model according to the average face model to obtain a calibrated reference facial feature model; wherein the average face model is a reference face model obtained by processing the face model dataset.
  • the optimization module 403 is further configured to control the structure matching between the optimized reference facial feature model and the facial bionic model.
  • the optimization module 403 is further configured to control the target model parameters to be less than or equal to a preset threshold, so that the structure of the optimized reference facial feature model conforms to the set structure condition.
  • the optimization module 403 is further configured to control the target facial feature in the optimized reference facial feature model to meet the set feature condition.
  • the reconstruction module 402 is specifically configured to obtain a bionic face model by three-dimensional reconstruction based on the face image data and the average face model.
  • the optimization module 403 is further configured to optimize the virtual facial expression model of the virtual object according to the target facial feature model to obtain the target facial expression model; the target facial expression model includes the facial features in the target facial feature model.
  • the expression of the real object is tracked based on the target facial expression model.
  • the electronic device first obtains the face image data of the real object, obtains the face bionic model of the real object through three-dimensional reconstruction based on the face image data, then optimizes the initial model parameters of the pre-established reference facial feature model according to the face bionic model to obtain the target model parameters, and adjusts the model parameters of the virtual facial feature model according to the target model parameters, migrating the facial features represented by the target model parameters into the virtual facial feature model to obtain the target facial feature model of the virtual object.
  • in the migration of facial features, the facial features are first migrated to the reference facial feature model, and the facial features in the reference facial feature model are then migrated to the virtual facial feature model of the virtual object.
  • the reference facial feature model thus provides a transition for the facial features, avoiding direct migration from the face bionic model to the virtual facial feature model, which solves the problem of abnormal deformation of the virtual facial feature model and avoids an abnormal facial structure for the virtual object.
  • the apparatus for generating faces of virtual objects provided by the embodiments of the present disclosure has functional modules corresponding to the method for generating faces of virtual objects, can execute the method for generating faces of virtual objects provided by the embodiments of the present disclosure, and can achieve the same beneficial effects.
  • an electronic device may include: a processor, a memory, and a computer program stored in the memory and executable on the processor; when the processor executes the program, each process of the above embodiment of the method for generating a face of a virtual object is implemented, and the same technical effect can be achieved. To avoid repetition, details are not repeated here.
  • the electronic device may specifically include: a processor 501, a storage device 502, a display screen 503 with a touch function, Input device 504 , output device 505 and communication device 506 .
  • the number of processors 501 in the electronic device may be one or more, and one processor 501 is taken as an example in FIG. 5 .
  • the processor 501 , the storage device 502 , the display screen 503 , the input device 504 , the output device 505 and the communication device 506 of the electronic device may be connected by a bus or in other ways.
  • a computer-readable storage medium is also provided, where instructions are stored in the computer-readable storage medium; when the instructions run on a computer, the computer is caused to execute the method for generating a face of a virtual object described in any one of the foregoing embodiments.
  • a computer program product including instructions is also provided, which, when run on a computer, causes the computer to execute the method for generating a face of a virtual object described in any one of the foregoing embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Disclosed are a face generation method and apparatus for a virtual object, and a device and a readable storage medium, which belong to the technical field of the Internet. The method comprises: performing three-dimensional reconstruction to obtain a facial bionic model of a real object; optimizing initial model parameters of a pre-established reference facial feature model according to the facial bionic model, so as to obtain target model parameters; and adjusting model parameters of a virtual facial feature model according to the target model parameters, so as to transfer facial features, which are represented by the target model parameters, into the virtual facial feature model to obtain a target facial feature model of a virtual object. During a facial feature transfer process, the transition of facial features is realized by means of a reference facial feature model, thereby avoiding the direct transfer of the facial features from a facial bionic model to a virtual facial feature model, such that the problem of abnormal deformation occurring in the virtual facial feature model when the difference between the facial structure of a virtual object and the facial structure of a real object is relatively great can be solved.

Description

Face generation method, apparatus, device and readable storage medium for a virtual object
CROSS-REFERENCE TO RELATED APPLICATIONS
This disclosure claims priority to the Chinese patent application with application number 202011607616.7, titled "Face Generation Method, Apparatus, Device and Readable Storage Medium for Virtual Objects", filed with the China Patent Office on December 29, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure belongs to the field of Internet technologies, and in particular relates to a face generation method, apparatus, device and readable storage medium for a virtual object.
Background
With the development of Internet technology, it has become possible, according to the following method, to establish a virtual object similar to a real object based on the facial features of the real object, so that the virtual object has the facial features of the real object. The real object is, for example, a real person or animal, and the virtual object is, for example, a virtual three-dimensional cartoon character or animal. In the face generation process of the virtual object, a virtual facial feature model of the virtual object is first established, and a face bionic model of the real object is established according to the face image data of the real object's face; the virtual facial feature model and the face bionic model are both three-dimensional (3D) models. The facial features in the face bionic model are then migrated into the virtual facial feature model to obtain a target facial feature model including the facial features of the real object, so that the virtual object can have the facial features of the real object.
In the process of realizing the present disclosure, the inventors found at least the following problems in the prior art: in the face generation process, if the facial structure of the virtual object differs greatly from the facial structure of the real object, the facial feature migration causes abnormal deformation of the virtual facial feature model, resulting in an abnormal facial structure of the virtual object.
Overview
In view of this, the present disclosure provides a face generation method, apparatus, device and readable storage medium for a virtual object, which, to a certain extent, solve the problem that abnormal deformation of the virtual facial feature model during the face generation process results in an abnormal facial structure of the virtual object.
In order to solve the above technical problems, the present disclosure is implemented as follows:
In a first aspect, an embodiment of the present disclosure provides a method for generating a face of a virtual object, the method comprising:
acquiring face image data of a real object;
obtaining a face bionic model of the real object by three-dimensional reconstruction based on the face image data;
optimizing initial model parameters of a pre-established reference facial feature model according to the face bionic model to obtain target model parameters, the target model parameters representing facial features of the real object included in the face bionic model; and
adjusting model parameters of a virtual facial feature model according to the target model parameters, so as to migrate the facial features represented by the target model parameters into the virtual facial feature model to obtain a target facial feature model, the virtual facial feature model being a facial feature model that is pre-established according to the reference facial feature model and belongs to a virtual object.
In a second aspect, an embodiment of the present disclosure provides an apparatus for generating a face of a virtual object, the apparatus comprising:
an acquisition module, configured to acquire face image data of a real object;
a reconstruction module, configured to obtain a face bionic model of the real object by three-dimensional reconstruction based on the face image data;
an optimization module, configured to optimize initial model parameters of a pre-established reference facial feature model according to the face bionic model to obtain target model parameters, the target model parameters representing facial features of the real object included in the face bionic model; and
an adjustment module, configured to adjust model parameters of a virtual facial feature model according to the target model parameters, so as to migrate the facial features represented by the target model parameters into the virtual facial feature model to obtain a target facial feature model, the virtual facial feature model being a facial feature model that is pre-established according to the reference facial feature model and belongs to a virtual object.
In a third aspect, an embodiment of the present disclosure provides an electronic device, the electronic device including a processor, a memory, and a program or instructions stored in the memory and executable on the processor; when the program or instructions are executed by the processor, the steps of the method according to the first aspect are implemented.
In a fourth aspect, an embodiment of the present disclosure provides a readable storage medium on which a program or instructions are stored; when the program or instructions are executed by a processor, the steps of the method according to the first aspect are implemented.
In a fifth aspect, an embodiment of the present disclosure provides a chip, the chip including a processor and a communication interface, the communication interface being coupled to the processor, and the processor being configured to run a program or instructions to implement the method according to the first aspect.
In the embodiments of the present disclosure, the electronic device first acquires face image data of a real object, obtains a face bionic model of the real object by three-dimensional reconstruction based on the face image data, then optimizes initial model parameters of a pre-established reference facial feature model according to the face bionic model to obtain target model parameters, and adjusts model parameters of a virtual facial feature model according to the target model parameters, migrating the facial features represented by the target model parameters into the virtual facial feature model to obtain a target facial feature model of the virtual object. In the migration of facial features, the facial features are first migrated to the reference facial feature model, and the facial features in the reference facial feature model are then migrated to the virtual facial feature model of the virtual object. The reference facial feature model provides a transition for the facial features and avoids migrating them directly from the face bionic model to the virtual facial feature model, which solves the problem of abnormal deformation of the virtual facial feature model when the facial structure of the virtual object differs greatly from that of the real object, thereby avoiding an abnormal facial structure for the virtual object.
The above description is only an overview of the technical solutions of the present disclosure. In order that the technical means of the present disclosure may be more clearly understood and implemented according to the contents of the description, and in order to make the above and other objects, features and advantages of the present disclosure more apparent, specific embodiments of the present disclosure are given below.
Description of Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are for the purpose of illustrating the preferred embodiments only and are not to be considered limiting of the present disclosure. Throughout the drawings, the same components are denoted by the same reference numerals. In the drawings:
FIG. 1 is a flowchart of the steps of a method for generating a face of a virtual object provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of the optimization of a reference facial feature model provided by an embodiment of the present disclosure;
FIG. 3 is a flowchart of the steps of another method for generating a face of a virtual object provided by an embodiment of the present disclosure;
FIG. 4 is a block diagram of an apparatus for generating a face of a virtual object provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present disclosure.
Specific Embodiments
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and its scope will be fully conveyed to those skilled in the art.
FIG. 1 is a flowchart of the steps of a method for generating a face of a virtual object provided by an embodiment of the present disclosure. As shown in FIG. 1, the method may include:
Step 101: Acquire face image data of a real object.
Step 102: Obtain a face bionic model of the real object by three-dimensional reconstruction based on the face image data.
In this embodiment, the method for generating a face of a virtual object may be executed by an electronic device such as a personal computer, a mobile phone or a server. The real object may be a real person or animal, or a real object with a face, such as a doll or a statue. The face image data is used to establish the face bionic model of the real object, and the type and acquisition method of the face image data can be specifically set according to the three-dimensional reconstruction method.
For example, the electronic device may obtain the face bionic model by three-dimensional reconstruction using a binocular matching algorithm. In the three-dimensional reconstruction process, the electronic device may first photograph the face area of the real object with a binocular camera to acquire two two-dimensional face images of the real object, i.e., the face image data, then match the two face images through the binocular matching algorithm to obtain depth information of the face area, and obtain the face bionic model of the real object by three-dimensional reconstruction according to the facial features and the depth information in the face images.
In one embodiment, the electronic device may photograph the face area of the real object with a depth camera to obtain face image data including a color image and a depth image. The color image includes the color values of three color channels: a red (R) channel, a green (G) channel and a blue (B) channel, and the depth image includes the depth values of a depth (D) channel. The depth value of each pixel in the depth image is the distance between the corresponding point in the face area of the real object and the camera; for example, the depth value of the pixel corresponding to the nose tip in the depth image is the distance between the nose tip of the real object and the camera. The electronic device may obtain the face bionic model of the real object by three-dimensional reconstruction based on the facial features included in the color image and the depth information included in the depth image.
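A minimal sketch of how such a depth pixel can be lifted into a 3D vertex under a standard pinhole camera model — one common way RGB-D data feeds a 3D reconstruction; the patent does not prescribe this exact procedure, and the camera intrinsics below are made-up values:

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth value `depth` to camera-space XYZ."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Hypothetical intrinsics for a 640x480 depth camera.
fx = fy = 525.0
cx, cy = 319.5, 239.5

# The nose-tip example from the text: the pixel's depth value is the
# nose-to-camera distance, and back-projection turns it into a 3D vertex.
nose_tip = backproject(320, 240, 0.45, fx, fy, cx, cy)
```

Repeating this over every depth pixel yields a point cloud of the face area, which can then be meshed or fitted during reconstruction.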
In one embodiment, the electronic device may obtain the face bionic model by three-dimensional reconstruction from a single two-dimensional face image. For example, the electronic device may directly photograph the face area of the real object to obtain a two-dimensional face image, i.e., the face image data, and then extract key points from the face image; the coordinates of the key points can determine the facial features of the real object. For example, the key points include multiple key points in the eye region of the real object, and these key points can outline the eye contour of the real object. After extracting the key points that can characterize the facial features, the electronic device may select an average face model from a 3D Morphable Models (3DMM) library and fit the multiple key points to the average face model to obtain the face bionic model.
In this embodiment, the face bionic model may be a triangular mesh model composed of a certain number of vertices, triangular patches and associated structures. The face bionic model includes the facial features of the real object, such as the face contour, eyebrow shape, eye shape, nose shape and mouth shape of the real object, as well as the layout positions of the eyebrows, eyes, nose and mouth in the face area. The reconstruction method of the face bionic model can be set according to requirements, which is not limited in this embodiment.
Step 103: Optimize the initial model parameters of the pre-established reference facial feature model according to the face bionic model to obtain the target model parameters.
The target model parameters represent the facial features of the real object included in the face bionic model. The reference facial feature model is a user-designed, virtual three-dimensional mesh model, and is used to establish the virtual facial feature model of the virtual object. Different model parameters correspond to different facial features, and the initial model parameters are the model parameters of the reference facial feature model in its initial state. In the process of establishing the virtual object, the virtual facial feature model of the virtual object can be established according to the reference facial feature model; by adjusting the model parameters of the reference facial feature model, that is, the model parameters of the virtual facial feature model, the virtual facial feature model can be given different facial features.
In this embodiment, after the face bionic model of the real object is obtained, the reference facial feature model can be fitted to the face bionic model, so as to optimize the initial model parameters of the reference facial feature model and obtain the target model parameters. That is, the initial parameters of the reference facial feature model are adjusted according to the face bionic model, so as to migrate the facial features in the face bionic model to the reference facial feature model and obtain target model parameters that can represent the facial features; adjusting the reference facial feature model according to the target model parameters can give the reference facial feature model the facial features of the real object.
在一种实施例中,参考脸部特征模型例如特征混合形状(Identity blendshape)模型,Identity blendshape模型包括一组特征基(blendshape)和一个基本网格,基本网格为一个平均的虚拟脸部模型,每个特征基为一个基本脸部特征模型。每个特征基具有对应的权重,Identity blendshape模型在初始状态下每个特征基的权重即初始模型参数。特征基与基本网格构成脸部特征模型,调整Identity blendshape模型中每个特征基的权重,可以得到具有不 同脸部特征的虚拟脸部特征模型。In one embodiment, the reference facial feature model is such as a feature blend shape (Identity blendshape) model, the Identity blendshape model includes a set of feature bases (blendshape) and a basic grid, and the basic grid is an average virtual face model , each feature base is a basic facial feature model. Each feature base has a corresponding weight, and the weight of each feature base of the Identity blendshape model in the initial state is the initial model parameter. The feature base and the basic grid form a facial feature model, and by adjusting the weight of each feature base in the Identity blendshape model, a virtual facial feature model with different facial features can be obtained.
Each feature base in the Identity blendshape model is a three-dimensional mesh model composed of a preset number of vertices, and each vertex has coordinates in three dimensions, namely on the x-axis, y-axis and z-axis. For example, if each feature base includes n vertices, the Identity blendshape model may include m feature bases B_{3×n} and one basic mesh H_{3×n}, where m and n are positive integers. Denoting the weight of each feature base by w, the Identity blendshape model can be expressed by formula A:
B_{3×n,m} · w_{m,1} + H_{3×n,1}
Each feature base B_{3×n} is a basic matrix, where the subscript 3×n indicates that the matrix contains the three-dimensional vector of each vertex of the feature base, that is, its x-, y- and z-axis coordinates. The m basic matrices form a matrix B_{3×n,m}, and the weights of the m basic matrices form a matrix w_{m,1}. The basic mesh is a neutral face matrix, whose subscript 3×n likewise indicates the three-dimensional vector of each of its vertices.
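As a minimal sketch of formula A (the function name, data layout and toy values below are illustrative assumptions, not part of the disclosure), the blended face can be computed as a weighted sum of the feature bases plus the neutral mesh:

```python
def evaluate_blendshape_model(bases, weights, base_mesh):
    """Formula A: B_{3xn,m} . w_{m,1} + H_{3xn,1}.

    bases:     list of m feature bases, each a flat list of 3*n coordinates
    weights:   list of m weights, one per feature base
    base_mesh: flat list of 3*n coordinates of the neutral (average) face
    Returns the blended face as a flat list of 3*n coordinates.
    """
    blended = list(base_mesh)            # start from the neutral mesh H
    for base, w in zip(bases, weights):  # accumulate the weighted bases B.w
        for k, coord in enumerate(base):
            blended[k] += w * coord
    return blended

# Toy example: m = 2 feature bases, n = 1 vertex with (x, y, z) coordinates
face = evaluate_blendshape_model(
    bases=[[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]],
    weights=[0.5, 0.25],
    base_mesh=[0.0, 0.0, 1.0])
# face == [0.5, 0.5, 1.0]
```

Setting all weights to zero returns the neutral mesh itself, which is the initial state of the model described above.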
During fitting, an optimization function f(w) over the weights w can be defined. For the optimization of the vertices of the reference facial feature model in the x-axis direction, the optimization objective can be set as formula B:
‖S(B_x w + H_x - P_x)‖²
Here the subscript x denotes the x-dimension component, and P_x is the x-dimension vector of the facial bionic model; formula B thus represents the error between the reference facial feature model and the facial bionic model in the x dimension. S is the coefficient matrix for the x dimension, according to which the value taken by the weight of each feature base during computation can be determined. The elements of S can be defined as:
S_ij = 0 for i ≠ j (only the diagonal entries of S are nonzero; the original equation image gives their values)

where i denotes the i-th row and j the j-th column of the coefficient matrix. During computation, according to the coefficient matrix, all entries of the matrix corresponding to the reference facial feature model other than those on the diagonal are set to 0; that is, as shown above, the entry is 0 whenever i is not equal to j.
By analogy, combining the components of the x, y and z dimensions, the optimization function f(w) is:
f(w) = ‖S̃(B̃w + H̃ - P̃)‖²

where B̃ = [B_x; B_y; B_z] is the vector matrix of the feature bases in the x, y and z dimensions, H̃ = [H_x; H_y; H_z] is the vector matrix of the basic mesh in the x, y and z dimensions, and P̃ = [P_x; P_y; P_z] is the vector matrix of the facial bionic model in the x, y and z dimensions. S̃ is a 3n×3n block matrix with the coefficient matrix S repeated on the block diagonal:

S̃ = diag(S, S, S)
During optimization, the value of f(w) is made as small as possible, that is, the error between the reference facial feature model and the facial bionic model is minimized, so that the reference facial feature model approaches the facial bionic model and acquires the facial features contained in it. Specifically, the optimization function f(w) can be converted into a quadratic programming (QP) problem, and solving the QP problem yields the weight w of each feature base, that is, the target model parameters.
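As a hedged illustration of this fitting step (not the patent's actual solver), the unconstrained, single-feature-base special case of minimizing ‖Bw + H - P‖² has a closed-form solution; the general m-base case with constraints would instead be handed to a QP solver:

```python
def fit_single_weight(base, neutral, target):
    """Least-squares fit of one blendshape weight: minimize
    ||base * w + neutral - target||^2 over the scalar w.
    Setting the derivative to zero gives w = <b, r> / <b, b>
    with r = target - neutral.
    """
    residual = [t - h for t, h in zip(target, neutral)]
    num = sum(b * r for b, r in zip(base, residual))
    den = sum(b * b for b in base)
    return num / den

# One base over three coordinates; the best-fit weight is 0.5
w = fit_single_weight(base=[1.0, 0.0, 2.0],
                      neutral=[0.0, 0.0, 0.0],
                      target=[0.5, 0.0, 1.0])
```

With m bases the same derivation yields an m×m linear system (the normal equations of the QP), which is what a QP solver handles once constraints are added.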
In practical applications, the reference facial feature model may also be optimized in other ways to bring it close to the facial bionic model and obtain the target model parameters. The specific optimization process can be set as required and is not limited in this embodiment. It should be noted that, to facilitate obtaining accurate target model parameters, the facial bionic model and the reference facial feature model may share the same mesh structure.
It should also be noted that optimizing the initial model parameters of the reference facial feature model in this embodiment is essentially optimizing the reference facial feature model itself, so that it acquires the facial features of the facial bionic model. FIG. 2 is a schematic diagram of the optimization of a reference facial feature model provided by an embodiment of the present disclosure. As shown in FIG. 2, after the facial bionic model 202 is reconstructed from the face image 201 of a person (the real object), the initial model parameters of the reference facial feature model can be optimized according to the facial bionic model 202 to obtain the target model parameters. After the reference facial feature model is adjusted according to the target model parameters, the optimized reference facial feature model 203 shown in FIG. 2 is obtained. As FIG. 2 shows, the optimized reference facial feature model includes the facial features of the person, and these features are determined by its target model parameters. This embodiment uses only the target model parameters of the target face model to adjust the model parameters of the virtual facial feature model.
Step 104: Adjust the model parameters of the virtual facial feature model according to the target model parameters, so as to transfer the facial features represented by the target model parameters into the virtual facial feature model and obtain the target facial feature model.
The virtual facial feature model is a facial feature model of the virtual object that is pre-established according to the reference facial feature model. Following the above example, if the reference facial feature model is an Identity blendshape model, the facial feature model of the virtual object can be built from the Identity blendshape model; the specific process of building the virtual facial feature model from the Identity blendshape model can be set as required and is not limited in this embodiment.
In this embodiment, after the target model parameters are obtained, the model parameters of the virtual facial feature model can be adjusted according to them to obtain the target facial feature model. Following the above example, the virtual facial feature model is built from the reference facial feature model; the two models share the same mesh structure and the same model parameters, so the virtual facial feature model can also be expressed by formula A. After the target model parameters, that is, the weight w of each feature base of the optimized reference facial feature model, are obtained, the weights w can be substituted into formula A to obtain the target virtual facial feature model. Because the target model parameters are obtained by optimization against the facial bionic model, they represent the facial features of the real object; adjusting the virtual facial feature model of the virtual object according to the target model parameters therefore transfers the facial features of the real object to the virtual facial feature model, giving the virtual object the facial features of the real object. The specific process of adjusting the model parameters of the virtual facial feature model according to the target model parameters can be set as required and is not limited in this embodiment.
To sum up, in this embodiment the electronic device first acquires face image data of the real object and, based on the face image data, obtains the facial bionic model of the real object by three-dimensional reconstruction. It then optimizes the initial model parameters of the pre-established reference facial feature model according to the facial bionic model to obtain the target model parameters, adjusts the model parameters of the virtual facial feature model according to the target model parameters, and transfers the facial features represented by the target model parameters into the virtual facial feature model to obtain the target facial feature model of the virtual object. During the transfer of facial features, the features are first transferred to the reference facial feature model and then from the reference facial feature model to the virtual facial feature model of the virtual object. The reference facial feature model thus provides a transition for the facial features and avoids transferring them directly from the facial bionic model to the virtual facial feature model, which solves the problem of abnormal deformation of the virtual facial feature model when the facial structure of the virtual object differs greatly from that of the real object, and thereby avoids an abnormal facial structure of the virtual object.
FIG. 3 is a flowchart of the steps of another method for generating a face of a virtual object provided by an embodiment of the present disclosure. As shown in FIG. 3, the method may include:
Step 301: Acquire face image data of the real object.
Step 302: Based on the face image data and the average face model, obtain the facial bionic model by three-dimensional reconstruction.
Step 303: Calibrate the reference facial feature model according to the average face model to obtain a calibrated reference facial feature model.
The average face model is a reference face model obtained by processing a face model data set, and the structure of the calibrated reference facial feature model matches that of the average face model.
In this embodiment, before the initial model parameters are optimized, the reference facial feature model can first be calibrated so that its structure matches that of the average face model.
In one embodiment, a face model data set of the real object can be established; for example, if the real object is a person, a face model data set of human faces can be established. In combination with step 102, the face model data set may be a 3DMM library and the average face model may be the average face model in the 3DMM library. Before the reference facial feature model is optimized, the Identity blendshape model can be calibrated against the average face model. For example, the size, rotation angle and contour of the Identity blendshape model can be adjusted according to the structure of the average face model, so that the size of the Identity blendshape model is consistent with that of the average face model, its rotation angle is consistent with that of the average face model, and its contour is consistent with that of the average face model.
In this embodiment, one or more parameters of the reference facial feature model may be calibrated. During calibration, a corresponding deformation transfer algorithm may also be selected to transfer the shape changes exhibited by the average face model to the reference facial feature model, so that the structure of the reference facial feature model matches that of the average face model. The specific method of calibrating the reference facial feature model can be set as required and is not limited in this embodiment.
In practical applications, the reference facial feature model is calibrated before it is optimized, so that it has the same or a similar structure to the average face model. When the structure of the reference facial feature model matches the average face model, more accurate target model parameters can be obtained during optimization, and the facial feature model represented by the target model parameters is closer to the facial features of the real object.
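A simplified sketch of such a calibration follows (centroid and overall scale only; a full calibration would also solve for rotation, for example by Procrustes analysis, or apply a deformation transfer algorithm; all names below are illustrative):

```python
def similarity_align(src_pts, dst_pts):
    """Rescale and translate the source mesh so that its centroid and
    overall size match the destination mesh (rotation is omitted in
    this sketch).

    src_pts, dst_pts: lists of (x, y, z) vertex tuples.
    Returns the calibrated source vertices.
    """
    def centroid(pts):
        n = len(pts)
        return tuple(sum(p[k] for p in pts) / n for k in range(3))

    def spread(pts, c):
        # root-mean-square distance of the vertices from the centroid
        return (sum(sum((p[k] - c[k]) ** 2 for k in range(3))
                    for p in pts) / len(pts)) ** 0.5

    cs, cd = centroid(src_pts), centroid(dst_pts)
    scale = spread(dst_pts, cd) / spread(src_pts, cs)
    return [tuple(scale * (p[k] - cs[k]) + cd[k] for k in range(3))
            for p in src_pts]

# A mesh half the size of its reference is scaled up to match
aligned = similarity_align([(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)],
                           [(0.0, 0.0, 0.0), (4.0, 0.0, 0.0)])
```

After such an alignment the two meshes occupy the same region of space, which is the precondition for the vertex-wise fitting described above.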
Step 304: Optimize the initial model parameters of the calibrated reference facial feature model according to the facial bionic model to obtain the target model parameters.
Optionally, the method may further include:
during optimization, controlling the structure of the optimized reference facial feature model to match that of the facial bionic model.
Optionally, the method may further include:
during optimization, controlling the target model parameters to be less than or equal to a preset threshold, so that the structure of the optimized reference facial feature model satisfies a set structural condition.
In this embodiment, after the reference facial feature model is calibrated, its initial model parameters can be optimized according to the facial bionic model to obtain the target model parameters. At the same time, constraints can be added during optimization so that the structure of the optimized reference facial feature model matches that of the facial bionic model and satisfies the set structural condition.
For example, a structural constraint term, such as a Laplacian constraint, can be added to the optimization function f(w); the structural constraint controls the deformation of the reference facial feature model during optimization. Specifically, the Laplacian constraint in the x dimension can be set as:
α‖L(B_x w + H_x) - δ_x‖²
Here L is the n×n Laplacian matrix, δ_x is the n×1 vector of the x-dimension elements of the Laplacian coordinates of the average face model used in the facial bionic model, and α is the coefficient of the regularization term in the Laplacian constraint; α can be greater than or equal to 0 and less than or equal to 1.
Meanwhile, a weight constraint term can be added to the optimization function f(w), through which the structure of the optimized reference facial feature model is made to satisfy the set structural condition. Specifically, the weight constraint term can be set as:
β‖w‖²
Here β is the coefficient of the unified weight regularization term; β can be greater than or equal to 0 and less than or equal to 1.
Combining the above examples and the components of the x, y and z dimensions, the optimization function f(w) can be set as:
f(w) = ‖S̃(B̃w + H̃ - P̃)‖² + α‖L̃(B̃w + H̃) - δ̃‖² + β‖w‖²,  s.t. w ∈ [0, 1]

where the constraint s.t. w ∈ [0, 1] in the weight constraint term restricts each weight w to be greater than or equal to 0 and less than or equal to 1, and L̃ is a 3n×3n block matrix with the Laplacian matrix L repeated on the block diagonal:

L̃ = diag(L, L, L)

δ̃ = [δ_x; δ_y; δ_z] is the vector of the x-, y- and z-dimension elements of the Laplacian coordinates of the average face model.
The facial bionic model can be obtained by three-dimensional reconstruction from the average face model and the face image data, and the Laplacian constraint characterizes the structural error between the reference facial feature model and the average face model. Making the Laplacian constraint term as small as possible during optimization brings the structure of the optimized reference facial feature model close to that of the facial bionic model, so that the two structures match. Likewise, making the weight constraint term as small as possible keeps the weights of the optimized reference facial feature model (that is, the target model parameters) less than or equal to the preset threshold. When the weights stay below the preset threshold, excessively large weights are avoided, so the optimized reference facial feature model does not undergo large deformations and does not take on strange shapes.
In the same way, the optimization function f(w) can be converted into a QP problem and solved to obtain the weight w of each feature base.
During optimization, the Laplacian constraint term makes the structure of the optimized reference facial feature model match that of the facial bionic model. As shown in FIG. 2, when the two structures match, the optimized reference facial feature model retains more of the facial features of the facial bionic model, so the target model parameters can represent the facial features of the real object more accurately.
At the same time, controlling the target model parameters w during optimization to be less than or equal to the preset threshold prevents large changes in the structure of the optimized reference facial feature model, so that the structure does not deform strangely and the target model parameters can represent the facial features of the real object more accurately.
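The effect of the weight regularizer can be seen in the single-weight special case (an illustrative sketch; the Laplacian term is omitted here, as it would only add another quadratic term of the same form):

```python
def fit_single_weight_regularized(base, residual, beta):
    """Minimize ||base * w - residual||^2 + beta * w^2, s.t. w in [0, 1],
    for one weight. The closed form of the unconstrained minimizer is
    w = <b, r> / (<b, b> + beta); the result is then projected onto
    the box constraint [0, 1].
    """
    w = sum(b * r for b, r in zip(base, residual)) / (
        sum(b * b for b in base) + beta)
    return min(1.0, max(0.0, w))  # enforce s.t. w in [0, 1]

# Without regularization the fit saturates at the upper bound; a larger
# beta pulls the weight down, i.e. it damps the deformation.
w0 = fit_single_weight_regularized([1.0, 1.0], [2.0, 2.0], beta=0.0)  # 1.0
w1 = fit_single_weight_regularized([1.0, 1.0], [2.0, 2.0], beta=6.0)  # 0.5
```

This is how the β term keeps the optimized weights, and therefore the deformation, bounded.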
In practical applications, during optimization it is also possible to control only the structural match between the optimized reference facial feature model and the facial bionic model, or only the target model parameters being less than or equal to the preset threshold.
Optionally, controlling the target model parameters to be less than or equal to the preset threshold so that the structure of the optimized reference facial feature model satisfies the set structural condition may include:
controlling the target facial features of the optimized reference facial feature model to satisfy set feature conditions.
Following the above examples, in the reference facial feature model each feature base is a basic facial feature model with a corresponding weight. Each facial feature is represented by a certain number of feature bases; for example, the eyebrow feature of the reference facial feature model can be represented by multiple feature bases. During optimization, corresponding coefficients can be set in the weight constraint term for the feature bases that represent a target facial feature (for example, the eyebrow feature), so that this feature of the optimized reference facial feature model satisfies the set feature condition.
In one embodiment, the optimization function f(w) can be expressed as:
f(w) = ‖S̃(B̃w + H̃ - P̃)‖² + α‖L̃(B̃w + H̃) - δ̃‖² + β‖Cw‖²,  s.t. w ∈ [0, 1]
Here C is the coefficient matrix corresponding to the β component of the weight constraint, and its elements can be defined as:
C_ij = 0 for i ≠ j (only the diagonal entries of C are nonzero; the original equation image gives their values)

where i denotes the i-th row and j the j-th column of the coefficient matrix. During computation, according to the coefficient matrix, all entries of the matrix corresponding to the reference facial feature model other than those on the diagonal are set to 0; that is, as shown above, the entry is 0 whenever i is not equal to j. During optimization, the coefficient matrix allows the weights of the reference facial feature model to be adjusted more finely.
In one embodiment, for a target facial feature of the reference facial feature model, corresponding coefficients can be set in the coefficient matrix for the weights of the feature bases representing that feature, so that the deformation of the target facial feature can be controlled during optimization and the feature in the optimized reference facial feature model satisfies the set feature condition. For example, for the eyebrow feature, larger coefficients can be set in the coefficient matrix for the weights of the feature bases representing the eyebrows; during optimization, the larger coefficients limit the eyebrow feature to smaller deformations so that it satisfies the set feature condition. Conversely, for the eye feature of the reference facial feature model, smaller coefficients can be set for the weights of the feature bases composing the eyes; the smaller coefficients allow the eye feature to deform more, so that the eyes of the optimized reference facial feature model satisfy the set feature condition.
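With a diagonal C, the β term penalizes each feature base individually. A sketch of how the per-base coefficients shape the penalty (the coefficient values are illustrative assumptions):

```python
def weight_penalty(weights, coeffs, beta):
    """Evaluate beta * ||C w||^2 for a diagonal coefficient matrix
    C = diag(coeffs). A large coefficient penalizes the weight of its
    feature base strongly (only small deformation allowed); a small
    coefficient leaves the weight nearly free (large deformation allowed).
    """
    return beta * sum((c * w) ** 2 for c, w in zip(coeffs, weights))

# An eyebrow base with a large coefficient and an eye base with a small
# one: at equal weight magnitudes, the eyebrow weight dominates the
# penalty, so the optimizer deforms the eyebrows less than the eyes.
penalty = weight_penalty(weights=[1.0, 1.0], coeffs=[10.0, 0.1], beta=1.0)
```

Minimizing an objective that includes this term therefore suppresses the bases with large coefficients first, which is the per-feature control described above.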
It should be noted that the specific settings of the structural constraint term and the weight constraint term can be chosen as required and are not limited in this embodiment.
Step 305: Adjust the model parameters of the virtual facial feature model according to the target model parameters, so as to transfer the facial features represented by the target model parameters into the virtual facial feature model and obtain the target facial feature model.
To sum up, in this embodiment the electronic device first acquires face image data of the real object and, based on the face image data, obtains the facial bionic model of the real object by three-dimensional reconstruction. It then optimizes the initial model parameters of the pre-established reference facial feature model according to the facial bionic model to obtain the target model parameters, adjusts the model parameters of the virtual facial feature model according to the target model parameters, and transfers the facial features represented by the target model parameters into the virtual facial feature model to obtain the target facial feature model of the virtual object. During the transfer of facial features, the features are first transferred to the reference facial feature model and then from the reference facial feature model to the virtual facial feature model of the virtual object. The reference facial feature model thus provides a transition for the facial features and avoids transferring them directly from the facial bionic model to the virtual facial feature model, which solves the problem of abnormal deformation of the virtual facial feature model when the facial structure of the virtual object differs greatly from that of the real object, and thereby avoids an abnormal facial structure of the virtual object.
Optionally, step 302 can be implemented as follows:
based on the face image data and the average face model, obtain the facial bionic model by three-dimensional reconstruction.
The average face model is a reference face model obtained by processing a face model data set.
In combination with step 102, the face model data set may be a 3DMM library and the average face model may be the average face model in the 3DMM library. In this case, the method described in step 102 can be used to build the facial bionic model of the real object based on the captured two-dimensional face image and the average face model.
In practical applications, the facial bionic model is reconstructed in three dimensions based on the average face model, and the reference facial feature model is calibrated against the average face model so that the structure of the calibrated reference facial feature model matches that of the average face model. The facial bionic model and the calibrated reference facial feature model then have the same or a similar mesh structure. Therefore, when the initial model parameters of the calibrated reference facial feature model are optimized according to the facial bionic model, the resulting target model parameters represent the facial features of the real object more accurately.
Optionally, the method may further include:
optimizing the virtual facial expression model of the virtual object according to the target facial feature model to obtain a target facial expression model, where the target facial expression model includes the facial features of the target facial feature model; and
tracking the expressions of the real object based on the target facial expression model.
In this embodiment, after the target facial feature model is obtained, its facial features can be transferred into the virtual facial expression model of the virtual object, so that the expressions of the real object can be tracked according to the virtual facial expression model.
The virtual facial expression model is used to construct the facial expressions of the real object; it is, for example, an Expression Blendshape model. The Expression Blendshape model includes a set of feature bases (blendshapes) and a basic mesh, where the basic mesh is an average face model and each feature base is a basic facial expression model. Each feature base has a corresponding weight, and adjusting the weight of each feature base of the Expression Blendshape model yields Expression Blendshape models with different expressions.
Following the above example, after the target facial feature model is obtained, the Expression Blendshape model can be fitted to it, thereby optimizing the Expression Blendshape model and transferring the facial features of the target facial feature model into it, so that the Expression Blendshape model has the facial features of the real object. Once an Expression Blendshape model with these facial features is obtained, the expressions of the real object can be tracked based on the target facial expression model. The optimization of the Expression Blendshape model can follow the optimization process of the reference facial feature model, and the specific method of tracking the expressions of the real object based on the target facial expression model can be set as required; neither is limited in this embodiment.
It should be noted that the method for generating a face of a virtual object provided in this embodiment may be implemented by a single electronic device or cooperatively by multiple electronic devices. For example, a client may acquire the user's face image data and upload it to a server; the server may obtain a bionic face model of the real object by three-dimensional reconstruction based on the face image data, optimize the initial model parameters of the reference facial feature model according to the bionic face model to obtain target model parameters, and send the target model parameters to the client; the client may then adjust the model parameters of the virtual facial feature model according to the target model parameters to obtain the target facial feature model. The specific implementation of the method can be set as required; this embodiment places no limitation on it.
FIG. 4 is a block diagram of an apparatus for generating a face of a virtual object provided by an embodiment of the present disclosure. As shown in FIG. 4, the apparatus 400 may include:
The acquiring module 401 is configured to acquire face image data of a real object.
The reconstruction module 402 is configured to obtain a bionic face model of the real object by three-dimensional reconstruction based on the face image data.
The optimization module 403 is configured to optimize initial model parameters of a pre-established reference facial feature model according to the bionic face model to obtain target model parameters; the target model parameters characterize the facial features of the real object included in the bionic face model.
The adjustment module 404 is configured to adjust model parameters of a virtual facial feature model according to the target model parameters, so as to transfer the facial features characterized by the target model parameters into the virtual facial feature model and obtain a target facial feature model; the virtual facial feature model is a facial feature model of the virtual object that is pre-established according to the reference facial feature model.
Optionally, the optimization module 403 is specifically configured to calibrate the reference facial feature model according to an average face model to obtain a calibrated reference facial feature model, where the average face model is a reference face model obtained by processing a face model data set and the structure of the calibrated reference facial feature model matches that of the average face model; and to optimize the initial model parameters of the calibrated reference facial feature model according to the bionic face model to obtain the target model parameters.
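The patent does not specify how the calibration is performed; one common way to match one face mesh's structure to a reference such as an average face model is a rigid Procrustes (Kabsch) alignment, sketched here purely as an assumed illustration:

```python
import numpy as np

def procrustes_align(source, target):
    """Rigidly align source vertices (V, 3) to target vertices (V, 3).

    Computes the translation and rotation that minimize the squared
    distance between corresponding vertices (Kabsch algorithm), a common
    way to calibrate one face mesh against a reference mesh.
    """
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    A, B = source - src_c, target - tgt_c
    U, _, Vt = np.linalg.svd(A.T @ B)
    R = U @ Vt
    if np.linalg.det(R) < 0:        # avoid reflections
        U[:, -1] *= -1
        R = U @ Vt
    return A @ R + tgt_c
```

This only handles rigid structure matching; any non-rigid correspondence step the embodiment may require is outside this sketch.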
Optionally, the optimization module 403 is further configured to control the structure of the optimized reference facial feature model to match that of the bionic face model.
Optionally, the optimization module 403 is further configured to control the target model parameters to be less than or equal to a preset threshold, so that the structure of the optimized reference facial feature model meets a set structural condition.
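A minimal sketch of this thresholded optimization, assuming a linear-basis parameterization and a simple clamp as the constraint (the formulation, names, and default threshold are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def fit_clamped_params(bionic_vertices, mean_vertices, bases, threshold=3.0):
    """Fit reference-model parameters to a bionic face model, then clamp.

    Solves min_w || mean + B w - bionic ||^2 by least squares, then clamps
    each parameter into [-threshold, threshold] so the optimized model
    cannot deform into an implausible face structure.

    bionic_vertices: (V, 3) reconstructed target vertices.
    mean_vertices:   (V, 3) average-face vertices.
    bases:           (K, V, 3) feature bases (offsets from the mean).
    """
    B = bases.reshape(bases.shape[0], -1).T          # (3V, K)
    residual = (bionic_vertices - mean_vertices).ravel()
    w, *_ = np.linalg.lstsq(B, residual, rcond=None)
    return np.clip(w, -threshold, threshold)
```

A clamp is only one way to satisfy the "less than or equal to a preset threshold" condition; a penalty term inside the optimization would serve the same purpose.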
Optionally, the optimization module 403 is further configured to control the target facial features in the optimized reference facial feature model to meet a set feature condition.
Optionally, the reconstruction module 402 is specifically configured to obtain the bionic face model by three-dimensional reconstruction based on the face image data and the average face model.
Optionally, the optimization module 403 is further configured to optimize the virtual facial expression model of the virtual object according to the target facial feature model to obtain a target facial expression model, where the target facial expression model includes the facial features of the target facial feature model; and to track the expressions of the real object based on the target facial expression model.
In the embodiments of the present disclosure, the electronic device first acquires face image data of a real object and obtains a bionic face model of the real object by three-dimensional reconstruction based on that data. It then optimizes the initial model parameters of a pre-established reference facial feature model according to the bionic face model to obtain target model parameters, adjusts the model parameters of the virtual facial feature model according to the target model parameters, and transfers the facial features characterized by the target model parameters into the virtual facial feature model, obtaining the target facial feature model of the virtual object. During the transfer, the facial features are first migrated into the reference facial feature model, and only then from the reference facial feature model into the virtual facial feature model of the virtual object. Using the reference facial feature model as a transition avoids migrating facial features directly from the bionic face model to the virtual facial feature model, which solves the problem of abnormal deformation of the virtual facial feature model when the face structure of the virtual object differs greatly from that of the real object, and thereby prevents the virtual object's face structure from becoming abnormal.
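The two-stage transfer above (fit parameters against the reconstructed model, then re-apply them on the virtual object's own base mesh) can be sketched as follows; the linear-basis assumption and all names are illustrative, since the embodiment does not fix a concrete representation:

```python
import numpy as np

def fit_params(base, bases, target_vertices):
    """Stage 1: fit parameters w so that base + sum_k w[k] * bases[k]
    matches the bionic face model reconstructed from the real object."""
    B = bases.reshape(bases.shape[0], -1).T          # (3V, K)
    w, *_ = np.linalg.lstsq(B, (target_vertices - base).ravel(), rcond=None)
    return w

def transfer_features(params, virtual_base, bases):
    """Stage 2: apply the fitted parameters to the virtual object's own
    base mesh. The parameters, not the raw geometry, are what crosses
    over, so the virtual face keeps its stylized structure."""
    return virtual_base + np.tensordot(params, bases, axes=1)
```

Because only the parameters are transferred, a large structural gap between the real face and the virtual face does not force the virtual mesh toward the real mesh's shape, which is the deformation problem the transition is meant to avoid.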
The apparatus for generating a face of a virtual object provided by the embodiments of the present disclosure has functional modules corresponding to the method for generating a face of a virtual object, can execute the method provided by the embodiments of the present disclosure, and can achieve the same beneficial effects.
In yet another embodiment provided by the present disclosure, an electronic device is provided, which may include a processor, a memory, and a computer program stored on the memory and executable on the processor. When the processor executes the program, each process of the above embodiments of the method for generating a face of a virtual object is implemented and the same technical effects are achieved; to avoid repetition, details are not repeated here.
As an example, FIG. 5 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present disclosure. The electronic device may specifically include: a processor 501, a storage device 502, a display screen 503 with a touch function, an input device 504, an output device 505, and a communication device 506. The number of processors 501 in the electronic device may be one or more; one processor 501 is taken as an example in FIG. 5. The processor 501, the storage device 502, the display screen 503, the input device 504, the output device 505, and the communication device 506 of the electronic device may be connected by a bus or in other ways.
In yet another embodiment provided by the present disclosure, a computer-readable storage medium is provided, in which instructions are stored; when run on a computer, the instructions cause the computer to execute the method for generating a face of a virtual object described in any one of the above embodiments.
In yet another embodiment provided by the present disclosure, a computer program product containing instructions is provided; when run on a computer, it causes the computer to execute the method for generating a face of a virtual object described in any one of the above embodiments.
It should be noted that, in this document, relational terms such as first and second are only used to distinguish one entity or operation from another and do not necessarily require or imply that any such actual relationship or order exists between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element qualified by the phrase "comprising a..." does not preclude the presence of additional identical elements in the process, method, article, or device that includes that element.
Each embodiment in this specification is described in a related manner; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, since the system embodiments are basically similar to the method embodiments, their description is relatively brief; for relevant details, refer to the description of the method embodiments.
The above are only preferred embodiments of the present disclosure and are not intended to limit its scope of protection. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present disclosure are included in the scope of protection of the present disclosure.

Claims (11)

  1. A method for generating a face of a virtual object, comprising:
    acquiring face image data of a real object;
    obtaining a bionic face model of the real object by three-dimensional reconstruction based on the face image data;
    optimizing initial model parameters of a pre-established reference facial feature model according to the bionic face model to obtain target model parameters, wherein the target model parameters characterize facial features of the real object included in the bionic face model; and
    adjusting model parameters of a virtual facial feature model according to the target model parameters, so as to transfer the facial features characterized by the target model parameters into the virtual facial feature model to obtain a target facial feature model, wherein the virtual facial feature model is a facial feature model of the virtual object that is pre-established according to the reference facial feature model.
  2. The method according to claim 1, wherein optimizing the initial model parameters of the pre-established reference facial feature model according to the bionic face model to obtain the target model parameters comprises:
    calibrating the reference facial feature model according to an average face model to obtain a calibrated reference facial feature model, wherein the average face model is a reference face model obtained by processing a face model data set, and a structure of the calibrated reference facial feature model matches a structure of the average face model; and
    optimizing initial model parameters of the calibrated reference facial feature model according to the bionic face model to obtain the target model parameters.
  3. The method according to claim 2, wherein optimizing the initial model parameters of the calibrated reference facial feature model according to the bionic face model to obtain the target model parameters comprises:
    controlling a structure of the optimized reference facial feature model to match a structure of the bionic face model.
  4. The method according to claim 2, wherein optimizing the initial model parameters of the calibrated reference facial feature model according to the bionic face model to obtain the target model parameters comprises:
    controlling the target model parameters to be less than or equal to a preset threshold, so that a structure of the optimized reference facial feature model meets a set structural condition.
  5. The method according to claim 4, wherein controlling the target model parameters to be less than or equal to the preset threshold, so that the structure of the optimized reference facial feature model meets the set structural condition, comprises:
    controlling target facial features in the optimized reference facial feature model to meet a set feature condition.
  6. The method according to claim 2, wherein obtaining the bionic face model of the real object by three-dimensional reconstruction based on the face image data comprises:
    obtaining the bionic face model by three-dimensional reconstruction based on the face image data and the average face model.
  7. The method according to any one of claims 1 to 6, further comprising, after adjusting the model parameters of the virtual facial feature model according to the target model parameters to transfer the facial features characterized by the target model parameters into the virtual facial feature model and obtain the target facial feature model:
    optimizing a virtual facial expression model of the virtual object according to the target facial feature model to obtain a target facial expression model, wherein the target facial expression model includes the facial features of the target facial feature model; and
    tracking expressions of the real object based on the target facial expression model.
  8. An apparatus for generating a face of a virtual object, comprising:
    an acquiring module, configured to acquire face image data of a real object;
    a reconstruction module, configured to obtain a bionic face model of the real object by three-dimensional reconstruction based on the face image data;
    an optimization module, configured to optimize initial model parameters of a pre-established reference facial feature model according to the bionic face model to obtain target model parameters, wherein the target model parameters characterize facial features of the real object included in the bionic face model; and
    an adjustment module, configured to adjust model parameters of a virtual facial feature model according to the target model parameters, so as to transfer the facial features characterized by the target model parameters into the virtual facial feature model to obtain a target facial feature model, wherein the virtual facial feature model is a facial feature model of the virtual object that is pre-established according to the reference facial feature model.
  9. An electronic device, comprising a processor, a memory, and a program or instruction stored on the memory and executable on the processor, wherein the program or instruction, when executed by the processor, implements the steps of the method for generating a face of a virtual object according to any one of claims 1 to 7.
  10. A readable storage medium, storing a program or instruction that, when executed by a processor, implements the steps of the method for generating a face of a virtual object according to any one of claims 1 to 7.
  11. A computer program product, comprising computer-readable code that, when run on an electronic device, causes the electronic device to perform the steps of the method for generating a face of a virtual object according to any one of claims 1 to 7.
PCT/CN2021/140590 2020-12-29 2021-12-22 Face generation method and apparatus for virtual object, and device and readable storage medium WO2022143354A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011607616.7 2020-12-29
CN202011607616.7A CN112699791A (en) 2020-12-29 2020-12-29 Face generation method, device and equipment of virtual object and readable storage medium

Publications (1)

Publication Number Publication Date
WO2022143354A1 (en)




Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115393486A (en) * 2022-10-27 2022-11-25 科大讯飞股份有限公司 Method, device and equipment for generating virtual image and storage medium
CN117274528A (en) * 2023-08-31 2023-12-22 北京百度网讯科技有限公司 Method and device for acquiring three-dimensional grid data, electronic equipment and readable storage medium

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
CN112699791A (en) * 2020-12-29 2021-04-23 百果园技术(新加坡)有限公司 Face generation method, device and equipment of virtual object and readable storage medium
CN114723890A (en) * 2022-04-12 2022-07-08 北京字跳网络技术有限公司 Virtual object generation method and device, readable medium and electronic equipment
CN115359220B (en) * 2022-08-16 2024-05-07 支付宝(杭州)信息技术有限公司 Method and device for updating virtual image of virtual world

Citations (5)

Publication number Priority date Publication date Assignee Title
US10489958B1 (en) * 2013-03-15 2019-11-26 Lucasfilm Entertainment Company Ltd. Facial animation models
CN110517340A (en) * 2019-08-30 2019-11-29 腾讯科技(深圳)有限公司 A kind of facial model based on artificial intelligence determines method and apparatus
CN110517337A (en) * 2019-08-29 2019-11-29 成都数字天空科技有限公司 Cartoon role expression generation method, animation method and electronic equipment
CN111739155A (en) * 2020-06-24 2020-10-02 网易(杭州)网络有限公司 Virtual character face pinching method and device and terminal equipment
CN112699791A (en) * 2020-12-29 2021-04-23 百果园技术(新加坡)有限公司 Face generation method, device and equipment of virtual object and readable storage medium



Also Published As

Publication number Publication date
CN112699791A (en) 2021-04-23


Legal Events

121 EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 21914091; country of ref document: EP; kind code of ref document: A1)
NENP: non-entry into the national phase (ref country code: DE)
122 EP: PCT application non-entry in European phase (ref document number: 21914091; country of ref document: EP; kind code of ref document: A1)