WO2020228385A1 - Method, apparatus and device for processing deformation of virtual object, and storage medium
- Publication number
- WO2020228385A1 (PCT/CN2020/074705)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- bone
- virtual object
- bone parameter
- object model
- three-dimensional target
- Prior art date
Images
Classifications
- G06T3/04: Context-preserving transformations, e.g. by using an importance map
- G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
- G06T13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
- G06T15/005: General purpose rendering architectures
- G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20: Finite element generation, e.g. wire-frame surface description, tesselation
- G06T19/00: Manipulating 3D models or images for computer graphics
- G06T19/006: Mixed reality
- G06T3/00: Geometric image transformations in the plane of the image
- G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/70: Determining position or orientation of objects or cameras
- G06T7/97: Determining parameters from multiple pictures
- A63F2300/5553: Details of game data or player data management using player registration data; user representation in the game field, e.g. avatar
- A63F2300/6607: Rendering three dimensional images for animating game characters, e.g. skeleton kinematics
- G06T2200/04: Indexing scheme for image data processing or generation involving 3D image data
- G06T2200/24: Indexing scheme for image data processing or generation involving graphical user interfaces [GUIs]
- G06T2207/20048: Transform domain processing
- G06T2207/20221: Image fusion; Image merging
- G06T2207/30008: Bone
- G06T2207/30201: Face
- G06T2210/44: Morphing
- G06T2219/2021: Shape modification
Definitions
- the present disclosure relates to the field of virtual reality, and in particular to a method, device, equipment and storage medium for deforming a virtual object.
- In related technologies, the original blend shape (blendshape) data is usually used to drive the expressions of a virtual object.
- However, after the virtual object model is reshaped, continuing to run expressions with the original blend shape data produces large errors.
- The purpose of one or more embodiments of this specification is to provide a method, apparatus, device, and storage medium for deformation processing of a virtual object.
- a method for deforming a virtual object includes:
- acquiring first bone parameters and first fusion deformation data corresponding to a three-dimensional preset virtual object model including a plurality of bones, where the first fusion deformation data is used to represent the degree of deformation of the preset virtual object;
- acquiring second bone parameters corresponding to a three-dimensional target virtual object model, and determining a transformation relationship between the second bone parameters and the first bone parameters; and
- determining, according to the transformation relationship and the first fusion deformation data, second fusion deformation data corresponding to the three-dimensional target virtual object model.
- In some embodiments, determining the second fusion deformation data corresponding to the three-dimensional target virtual object model includes: acquiring first grid data corresponding to the three-dimensional preset virtual object model; acquiring second grid data corresponding to the three-dimensional target virtual object model; and determining, according to the second grid data and the first grid data, the second fusion deformation data corresponding to the three-dimensional target virtual object model.
- In some embodiments, acquiring the first grid data corresponding to the three-dimensional preset virtual object model includes: acquiring standard grid data corresponding to the first bone parameters; and obtaining, according to the standard grid data and the first fusion deformation data, the first grid data corresponding to the first fusion deformation data.
- In some embodiments, acquiring the second grid data corresponding to the three-dimensional target virtual object model includes: determining transformation matrices of all bones in the three-dimensional target virtual object model according to the second bone parameters and the first bone parameters; and determining the second grid data by applying the transformation matrices to the first grid data.
- In some embodiments, determining the transformation matrices of all bones in the three-dimensional target virtual object model according to the second bone parameters and the first bone parameters includes: acquiring a first position matrix of each bone in the preset virtual object model according to the first bone parameters; acquiring a second position matrix of each bone in the target virtual object model according to the second bone parameters; and obtaining a transformation matrix of each bone according to the second position matrix and the first position matrix.
- In some embodiments, the virtual object model is a face model, and the bones included in the virtual object model are head bones. The three-dimensional target virtual object model is generated through the following steps: acquiring bone parameter adjustment information of a three-dimensional target face model; adjusting the head bone parameters corresponding to the three-dimensional target face model according to the bone parameter adjustment information; and generating the three-dimensional target face model according to the adjusted head bone parameters.
- In some embodiments, acquiring the bone parameter adjustment information of the three-dimensional target face model includes: receiving a bone parameter adjustment instruction; and determining the bone parameter adjustment information according to the bone parameter adjustment instruction.
- In some embodiments, adjusting the head bone parameters corresponding to the three-dimensional target face model according to the bone parameter adjustment information includes: acquiring, from the head bone parameters, the bone parameter of at least one head bone associated with the bone parameter adjustment information, and adjusting the bone parameter of the at least one head bone according to the bone parameter adjustment information.
- In some embodiments, the at least one head bone includes multiple head bones, and adjusting the bone parameters of the at least one head bone according to the bone parameter adjustment information includes: simultaneously adjusting the bone parameters of the multiple head bones according to the bone parameter adjustment information.
- In some embodiments, adjusting the bone parameters of the at least one head bone according to the bone parameter adjustment information includes: acquiring an adjustment range of the bone parameter adjustment information; acquiring an adjustment range of each bone parameter associated with the bone parameter adjustment information; and adjusting the bone parameter of the at least one head bone associated with the bone parameter adjustment information according to the change ratio of the bone parameter adjustment information within its adjustment range.
- In some embodiments, determining the bone parameter adjustment information according to the bone parameter adjustment instruction includes: acquiring an output change amount of a control set for the bone parameter adjustment instruction, and determining the bone parameter adjustment information according to the output change amount.
- In a second aspect, a virtual object deformation processing apparatus is provided, including:
- the first acquiring unit is configured to acquire first bone parameters and first fusion deformation data corresponding to a three-dimensional preset virtual object model including multiple bones, where the first fusion deformation data is used to represent the degree of deformation of the preset virtual object;
- the second acquiring unit is configured to acquire the second bone parameter corresponding to the three-dimensional target virtual object model, and determine the transformation relationship between the second bone parameter and the first bone parameter;
- the determining unit is configured to determine the second fusion deformation data corresponding to the three-dimensional target virtual object model according to the transformation relationship between the second bone parameter and the first bone parameter and the first fusion deformation data.
- In a third aspect, a virtual object deformation processing device is provided, which includes a storage medium and a processor.
- The storage medium is used to store machine-executable instructions that can run on the processor.
- When executed by the processor, the machine-executable instructions implement any of the virtual object deformation processing methods described in this specification.
- Also provided is a machine-readable storage medium on which machine-executable instructions are stored; when the machine-executable instructions are executed by a processor, any one of the virtual object deformation processing methods described in this specification is implemented.
- For a three-dimensional target virtual object model generated by adjusting the preset virtual object model, the second fusion deformation data corresponding to the target virtual object model is obtained based on the pre-acquired bone parameters and first fusion deformation data of the preset virtual object model together with the bone parameters corresponding to the target virtual object model. In this way, the data of the blend shape deformer can be updated synchronously with the adjustment of the preset virtual object model, so that the deformer is adapted to the new virtual object model (i.e., the target virtual object model). As a result, the accuracy of expression driving can be improved.
- Fig. 1 is an example of a method for deforming a virtual object provided by at least one embodiment of this specification.
- Figure 2A shows a schematic diagram of a preset face model established based on bones.
- Fig. 2B is an example of a grid corresponding to the preset face model in Fig. 2A.
- Fig. 3 is an example of a method for obtaining second fusion deformation data provided by at least one embodiment of this specification.
- Fig. 4 is an example of a method for obtaining a transformation matrix of a bone provided by at least one embodiment of this specification.
- Fig. 5 is an example of a method for generating a target face model provided by at least one embodiment of this specification.
- Fig. 6 is an example of setting adjustment parameters provided by at least one embodiment of this specification.
- FIG. 7A is an example of a two-layer parameter setting method for adjusting the overall size of the eyes provided by at least one embodiment of this specification.
- FIG. 7B is an example of a control for adjusting the overall size of the eyes provided by at least one embodiment of this specification.
- FIGS. 7C and 7D are examples of the face model before adjustment and the target face model provided by at least one embodiment of this specification.
- Fig. 8 is an example of a method for generating facial makeup provided by at least one embodiment of this specification.
- Fig. 9A is an example of a face map provided by at least one embodiment of this specification.
- FIG. 9B is an example of a texture of a replaceable component provided in at least one embodiment of this specification.
- FIG. 9C is an example of generating a map texture provided by at least one embodiment of this specification.
- FIGS. 10A and 10B are the face model before makeup and the face model after makeup provided by at least one embodiment of this specification.
- FIG. 11A is an example of a virtual object deformation processing apparatus provided by at least one embodiment of this specification.
- FIG. 11B is an example of another virtual object deformation processing apparatus provided by at least one embodiment of this specification.
- Fig. 12 is an example of a virtual object deformation processing device provided by at least one embodiment of this specification.
- the virtual object model in this specification can be a three-dimensional animal model or a three-dimensional human body model simulated by a computer, and is not limited.
- a face model is mainly used as an example to describe the deformation processing of the virtual object.
- the user can customize the virtual object model by himself, for example, the appearance of the model can be changed by adjusting the structural parameters (for example, bone parameters) of the preset virtual object model.
- Fusion deformation data will be used in the deformation processing of virtual objects.
- Fusion deformation data is the data stored in a blend shape deformer and is used to characterize the degree of deformation of a virtual object (for example, a human face). It can be the difference between the mesh data of a model fused from one shape to another. Because the standard fusion deformation data is made for the preset virtual object model, after the preset virtual object model is adjusted (for example, by face pinching), the standard fusion deformation data may no longer fit the adjusted model, so the standard fusion deformation data needs to be updated. In the following description, a human face is taken as the virtual object; the preset face may be a standard face generated using default parameters commonly used in related technologies.
- In this specification, the bone parameters corresponding to the preset virtual object model are called the first bone parameters and the corresponding fusion deformation data is called the first fusion deformation data; the bone parameters corresponding to the target virtual object model are called the second bone parameters and the corresponding fusion deformation data is called the second fusion deformation data.
- At least one embodiment of this specification provides a method for deformation processing of a virtual object. Fig. 1 shows an embodiment of the method, which may include the following steps.
- step 101 first bone parameters and first fusion deformation data corresponding to a three-dimensional preset virtual object model including multiple bones are acquired.
- the first fusion deformation data is used to characterize the degree of deformation of a preset virtual object (for example, a human face). That is, when the first fusion deformation data is applied to a preset face model, the preset face model can be deformed accordingly.
- a preset virtual object for example, a human face
- Each bone can have multiple parameters. For example, the parameters may include at least one of the following: a displacement parameter t, a rotation parameter r, and a scaling parameter s.
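- As an illustration only (the record layout and field names below are assumptions, not taken from the patent), a bone's parameters could be represented as follows:

```python
# Minimal illustrative sketch of a bone parameter record; the field names and
# default values are assumptions, not taken from the patent.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class BoneParams:
    translation: Tuple[float, float, float] = (0.0, 0.0, 0.0)  # displacement parameter t
    rotation: Tuple[float, float, float] = (0.0, 0.0, 0.0)     # rotation parameter r (e.g. Euler angles)
    scale: Tuple[float, float, float] = (1.0, 1.0, 1.0)        # scaling parameter s

# A model's bone parameters could then be kept as a mapping from bone name to BoneParams,
# e.g. {"eye_01": BoneParams(), "eye_02": BoneParams(), ...}.
```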
- Figure 2A shows a schematic diagram of a preset face model established based on bones.
- the preset face model is a model obtained by building a skeleton based on 48 bones and performing skinning processing on the basis of the skeleton.
- the white lines in FIG. 2A represent the bones.
- step 102 the second bone parameter corresponding to the three-dimensional target virtual object model is acquired, and the transformation relationship between the second bone parameter and the first bone parameter is determined.
- The target virtual object model may be obtained by adjusting the preset virtual object model, where the adjustment may be an adjustment of the first bone parameters of the preset virtual object model, thereby obtaining the target virtual object model. The second bone parameters of the target virtual object model have the same data format as the first bone parameters but differ in value.
- step 103 the second fusion deformation data corresponding to the three-dimensional target virtual object model is determined according to the transformation relationship between the second bone parameter and the first bone parameter and the first fusion deformation data.
- Since the second bone parameters are obtained by adjustment on the basis of the first bone parameters, there is a transformation relationship between the two. Applying the transformation relationship between the second bone parameters and the first bone parameters to the first fusion deformation data yields the second fusion deformation data corresponding to the second bone parameters, that is, corresponding to the three-dimensional target virtual object model.
- the three-dimensional target virtual object model can be generated by adjusting the preset virtual object model.
- the second fusion deformation data corresponding to the target virtual object model is acquired.
- the data of the fusion deformer can be synchronously updated for the adjustment of the preset virtual object model, so that the fusion deformer can be adapted to the new virtual object model (ie, the target virtual object model).
- the accuracy of expression driving can be improved.
- the first fusion deformation data of the preset virtual object model may be the difference between the mesh data corresponding to the expression and the standard mesh data.
- the standard mesh data here may include mesh vertex data corresponding to the bones of the preset virtual object model.
- After the preset virtual object model is adjusted, the bone parameters change, sometimes considerably. If expressions are still run with the standard fusion deformation data, the data no longer fits the model and large expression errors occur, affecting the authenticity and vividness of the model. Therefore, the fusion deformation data needs to be updated so that it adapts to the adjusted three-dimensional target virtual object model and expressions can be run more accurately.
- the method for obtaining the second fusion deformation data is shown in FIG. 3, and may include:
- step 301 first grid data corresponding to the three-dimensional preset virtual object model is acquired.
- Obtaining the first grid data corresponding to the three-dimensional preset virtual object model may include: acquiring the standard mesh data corresponding to the first bone parameters, and obtaining the first mesh data corresponding to the first fusion deformation data according to the standard mesh data and the first fusion deformation data.
- FIG. 2B shows an example of the standard mesh data corresponding to the bone parameters of the preset face model (with no expression running).
- The standard mesh data may include the mesh vertex data corresponding to the bones of the preset face model when no expression is run.
- Since the difference between the mesh data corresponding to an expression and the standard mesh data is stored in the blend shape deformer, after the standard mesh data is obtained, it is added to the difference stored in the first deformer (the deformer corresponding to the first fusion deformation data), and the first mesh data corresponding to the first fusion deformation data can be restored.
- For example, the standard mesh data corresponding to the preset face model can be a vector [1, 1, 1, ..., 1, 1], and the fusion deformation data stored in a deformer used to control the size of the eyes can be a vector [0, 0, 0.5, ..., 0, 0] (the two vectors have the same size). The vector [1, 1, 1.5, ..., 1, 1] obtained by adding the standard mesh data and the first fusion deformation data is then the first mesh data corresponding to the expression of enlarged eyes.
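- A minimal sketch of this restoration step, assuming the standard mesh data and the deformer's stored difference are plain equal-length vectors as in the example above:

```python
import numpy as np

# Standard mesh data of the preset face model (illustrative values from the example above).
standard_mesh = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
# Difference stored in the blend shape deformer that controls eye size (same length).
first_blendshape_delta = np.array([0.0, 0.0, 0.5, 0.0, 0.0])

# Restoring the first mesh data: standard mesh data plus the stored difference.
first_mesh = standard_mesh + first_blendshape_delta
print(first_mesh)  # [1.  1.  1.5 1.  1. ]
```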
- step 302 the second grid data corresponding to the three-dimensional target virtual object model is acquired.
- Acquiring the second grid data corresponding to the three-dimensional target virtual object model may include: determining the transformation matrices of all bones in the three-dimensional target virtual object model according to the second bone parameters and the first bone parameters, and determining the second grid data by applying the transformation matrices to the first grid data.
- Fig. 4 shows the flow of an embodiment of the method for obtaining the transformation matrix of each bone. As shown in Figure 4, the method may include the following steps.
- step 401 the first position matrix of each bone in the preset virtual object model is obtained according to the first bone parameter corresponding to the preset virtual object model.
- step 402 the second position matrix of each bone in the three-dimensional target virtual object model is obtained according to the second bone parameter corresponding to the target virtual object model.
- step 403 a transformation matrix of each bone is obtained according to the second position matrix and the first position matrix.
- the transformation matrix of each bone can be obtained.
- The transformation matrix of each bone can be calculated by the following formula: T = T_new · inverse(T_normal), where T represents the transformation matrix of the bone, T_new represents the second position matrix of the bone corresponding to the target virtual object model, T_normal represents the first position matrix of the bone corresponding to the preset virtual object model, and inverse() denotes matrix inversion.
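- A minimal sketch of this per-bone computation, assuming the position matrices are 4x4 homogeneous matrices; how they are built from the bone parameters is not shown:

```python
import numpy as np

def bone_transform(t_new: np.ndarray, t_normal: np.ndarray) -> np.ndarray:
    """T = T_new * inverse(T_normal).

    t_new    -- second position matrix of the bone (target virtual object model)
    t_normal -- first position matrix of the bone (preset virtual object model)
    Both are assumed to be 4x4 homogeneous matrices.
    """
    return t_new @ np.linalg.inv(t_normal)

# Example: the target bone differs from the preset bone by a 1.2x scale along x.
t_normal = np.eye(4)
t_new = np.diag([1.2, 1.0, 1.0, 1.0])
T = bone_transform(t_new, t_normal)  # transformation matrix later applied to bound vertices
```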
- By applying the transformation matrix of each bone to the first mesh data, new mesh vertex data can be obtained. This mainly uses the idea of skeleton skinning: the new mesh vertex coordinates are calculated according to the change state of the skeleton and the binding information of each mesh vertex, yielding the second mesh data corresponding to the target virtual object model. For example, the following formula can be used: Vertex_new = Σ_{i=1}^{k} weight_i · T_i · Vertex_ori, where Vertex_new and Vertex_ori represent the new and original mesh vertices respectively, k denotes the number of bones influencing the vertex, and T_i and weight_i denote the transformation matrix of the i-th influencing bone and its corresponding skinning weight.
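- A minimal sketch of applying the per-bone transformation matrices to the mesh vertices via linear blend skinning; the layout of the binding (index/weight) data is an assumption:

```python
import numpy as np

def skin_vertices(vertices, bone_transforms, bone_indices, bone_weights):
    """Linear blend skinning: Vertex_new = sum_i weight_i * T_i * Vertex_ori.

    vertices        -- (V, 3) original mesh vertex positions (first mesh data)
    bone_transforms -- (B, 4, 4) transformation matrix T of every bone
    bone_indices    -- (V, K) indices of the K bones influencing each vertex
    bone_weights    -- (V, K) corresponding skinning weights (each row sums to 1)
    Returns the (V, 3) new vertex positions (second mesh data).
    """
    hom = np.concatenate([vertices, np.ones((len(vertices), 1))], axis=1)  # homogeneous coords
    out = np.zeros_like(hom)
    for k in range(bone_indices.shape[1]):
        T_k = bone_transforms[bone_indices[:, k]]                   # (V, 4, 4) per-vertex matrix
        out += bone_weights[:, k:k + 1] * np.einsum('vij,vj->vi', T_k, hom)
    return out[:, :3]
```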
- step 303 the second fusion deformation data corresponding to the three-dimensional target virtual object model is determined according to the second grid data and the first grid data.
- the second fusion deformation data corresponding to the target virtual object model can be obtained.
- The first mesh data obtained in step 301 corresponds to the mesh data that achieves the enlarged-eyes effect on the preset face model.
- The second mesh data obtained in step 302 corresponds to the mesh data that achieves the enlarged-eyes effect on the target face model.
- the obtained second fusion deformation data is the data stored in the fusion deformer corresponding to the target face model.
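- The description above only states that the second fusion deformation data is determined from the second and first mesh data. The sketch below assumes, consistent with the definition of fusion deformation data as a difference from the standard mesh, that it is the difference between the transformed expressed mesh and the correspondingly transformed standard mesh:

```python
import numpy as np

# Illustrative 1-D "meshes", as in the eye-size example above (real meshes are vertex arrays).
standard_mesh = np.array([1.0, 1.0, 1.0, 1.0, 1.0])      # preset model, no expression
first_delta = np.array([0.0, 0.0, 0.5, 0.0, 0.0])         # first fusion deformation data
first_mesh = standard_mesh + first_delta                   # step 301: expressed mesh, preset model

scale = 1.2                                                # stand-in for the bone transforms
second_mesh = scale * first_mesh                           # step 302: transforms applied
target_standard_mesh = scale * standard_mesh               # same transforms on the neutral mesh (assumption)

second_delta = second_mesh - target_standard_mesh          # second fusion deformation data
print(second_delta)  # [0.  0.  0.6 0.  0. ]
```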
- In the above method, the blend shape deformer made for the preset virtual object model is updated by synchronously applying the bone transformation matrices to the target virtual object model, thereby improving the accuracy of expression driving.
- The virtual object deformation processing method in this embodiment is not limited to face models generated by the target-model generation method described in this specification; it can be used to update the fusion deformation data whenever any customization is performed on a preset virtual object model. For updating the fusion deformation data of a customized model, in the transformation matrix, T_new represents the position matrix of a bone corresponding to the customized model, and T_normal represents the position matrix of the bone corresponding to the model before customization.
- The target virtual object model to which the method is applied may be a face model, the bones included in the face model are head bones, and the face model may be generated in the following manner.
- Fig. 5 shows the flow of an embodiment of the method for generating a target face model. As shown in Fig. 5, the method may include the following steps.
- step 501 the bone parameter adjustment information of the three-dimensional target face model is obtained.
- The bone parameter adjustment information may be determined by receiving a bone parameter adjustment instruction and determining the information according to that instruction.
- The acquired bone parameter adjustment information may be at least one of the following: a change amount, a relative change amount, or a change ratio.
- step 502 the head bone parameters corresponding to the three-dimensional target face model are adjusted according to the bone parameter adjustment information.
- That is, the head bone parameters are adjusted based on the change amount, relative change amount, or change ratio contained in the bone parameter adjustment information, so that the head bone parameters change correspondingly, for example, increasing or decreasing by the corresponding change amount, relative change amount, or change ratio.
- a set of adjusted bone parameters can be obtained.
- step 503 a three-dimensional target face model is generated according to the adjusted head bone parameters.
- the head bone parameters are adjusted, that is, the bones of the three-dimensional target face model are adjusted, thereby changing the bone structure of the model, and obtaining the desired three-dimensional target face model.
- In this embodiment, the head bone parameters are adjusted through the bone parameter adjustment information, so that at least one bone parameter in the head is adjusted simultaneously; the overall shape and local details of the model can thus be adjusted at the same time, achieving both rapid adjustment and fine-tuning.
- the method for generating the three-dimensional target face model will be described in more detail.
- adjustment parameters for adjusting the bone parameters of the three-dimensional target virtual object model can be preset.
- the adjustment parameter is a specific implementation of the aforementioned bone parameter adjustment information.
- the adjustment parameters are set based on the bone parameters of the three-dimensional target virtual object model. Therefore, the bone parameters of the three-dimensional target virtual object model (that is, the head bone parameters) can be obtained first.
- For example, the bone parameters may be denoted Bi, where i indexes the bones of the model.
- one or more of the position, direction, and size of the bone can be changed, so that the bone structure of the three-dimensional target virtual object model can be changed.
- the adjustment parameters of the bone parameters of the three-dimensional target virtual object model are set to adjust the bone structure.
- the adjustment parameter is associated with the parameter of at least one bone in the head.
- the correlation here means that when the adjustment parameter is changed, one or more parameters of the at least one bone may change simultaneously. Therefore, the adjustment parameter can be regarded as a controller of the bone parameter associated with it.
- the at least one bone may be a bone belonging to at least one partial area in the head.
- the above adjustment parameters can be set by the following method.
- the method can include:
- Which bones each adjustment parameter is associated with can be preset.
- When the adjustment parameter is adjusted, the parameters of the bones associated with it are also adjusted at the same time.
- the parameters of multiple bones in a local area can be associated with the adjustment parameter. It can be all the bones in the local area or part of the bones.
- For example, for the eye bone adjuster used to adjust the overall size of the eyes, suppose there are E1 bones in the eye area but only E2 of them (E2 ≤ E1) need to be adjusted to control the change of eye size; then the eye bone adjuster only needs to control these E2 bones.
- the eye bone adjuster used to adjust the overall size of the eyes is associated with the parameters of the bones eye_01, eye_02, and eye_03 (eye_01, eye_02, and eye_03 represent the bones of the eye area) to make the eye bones The regulator can control these three bones.
- Next, the bone parameters associated with the adjustment parameter are obtained. For each associated bone, the adjustment parameter can control all 9 of its parameters (three components each of translation, rotation, and scaling) or only one or more of them; which parameters of each bone are associated with the adjustment parameter may be preset.
- As shown in Figure 6, controller1, controller2, ..., controllerM are the respective adjusters, where M is the number of adjusters. The adjuster controller1 can control three bones, bone1, bone2, and bone3, and the parameters of each bone it can control are as follows: the translation parameters (tx, ty) of bone1, the scaling parameters (sx, sy, sz) of bone2, and the rotation parameter (rx) of bone3. That is, by adjusting the adjustment parameter of controller1, the above parameters of the three bones can be adjusted simultaneously.
- Each bone parameter associated with the adjustment parameter is then set so that it is adjusted simultaneously according to the change of the adjustment parameter, thereby realizing the control of the associated bone parameters by the adjustment parameter.
- For example, one or more bone parameters associated with the adjustment parameter may be adjusted at the same change ratio as the adjustment parameter: when the value of the adjustment parameter is increased by 1/10 of its range, the values of the one or more associated bone parameters are also increased by 1/10 of their respective ranges at the same time.
- the adjustment of the adjustment parameter to the bone parameter can be realized by the following method.
- the method includes the following steps.
- First, the change range of the adjustment parameter is acquired. This range may be preset; since it is only used to determine the relative change of the adjustment parameter, the specific value of the range has no effect on the realization of the technical solution of the present disclosure.
- Next, the change range of each bone parameter associated with the adjustment parameter is acquired. This range may also be preset and can be set according to the displacement, direction, and distance by which the bone actually needs to be adjusted. For example, if the change range of a bone parameter is 0, this parameter cannot be adjusted, that is, it is not controlled by the adjustment parameter.
- Each bone parameter associated with the adjustment parameter is then set so that, according to the change ratio of the adjustment parameter within its change range, the bone parameter simultaneously changes by the same ratio within its own change range.
- The change ratio mentioned here can also be expressed as a relative change. For example, for controller1 in Figure 6, if the change range of its adjustment parameter is [0, 1] and the adjustment parameter value changes from 0.2 to 0.3, the relative change, that is, the change ratio, is 1/10. The bone parameter values of bone1, bone2, and bone3 associated with controller1 are then adjusted by 1/10 at the same time. In this way, simultaneous adjustment of the adjustment parameter and its associated bone parameters is achieved.
- the above bone parameters change in proportion to the adjustment parameters, which can be implemented by a linear interpolation algorithm.
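- A minimal sketch of this linear-interpolation mapping, assuming the change ranges are stored as (min, max) pairs; all names are illustrative:

```python
def apply_controller(ctrl_value, ctrl_range, bone_param_ranges):
    """Map an adjustment-parameter value onto its associated bone parameters.

    ctrl_value        -- current value of the adjustment parameter (e.g. a slider output)
    ctrl_range        -- (min, max) change range of the adjustment parameter, e.g. (0.0, 1.0)
    bone_param_ranges -- dict mapping bone-parameter name -> (min, max) change range
    Returns a dict of adjusted bone-parameter values, each placed at the same
    relative position (change ratio) within its own change range.
    """
    lo, hi = ctrl_range
    ratio = (ctrl_value - lo) / (hi - lo)           # change ratio of the adjustment parameter
    return {name: pmin + ratio * (pmax - pmin)      # linear interpolation per bone parameter
            for name, (pmin, pmax) in bone_param_ranges.items()}
```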
- different change modes can also be set for different bone parameters.
- For example, when the adjustment parameter value of controller1 changes from 0.2 to 0.3, the bone parameters of bone1 can be adjusted upward by 1/10 while the bone parameters of bone2 and bone3 are adjusted upward by 1/5.
- different bone parameters can have different changing trends.
- each bone parameter also changes from the minimum value to the maximum value within the respective change interval, but the change process of each bone parameter may be different.
- the above different bone parameters change in different trends with the adjustment parameters, which can be implemented by a nonlinear interpolation algorithm.
- obtaining the bone parameter adjustment information includes:
- the bone parameter adjustment information is determined according to the bone parameter adjustment instruction.
- the bone parameter adjustment instruction may refer to an adjustment instruction for the adjustment parameter. According to the adjustment instruction of the adjustment parameter, the bone parameter adjustment information can be determined.
- After the bone parameter adjustment information is acquired, the bone parameter of at least one head bone associated with the bone parameter adjustment information is acquired from the head bone parameters, that is, the bone parameter of at least one head bone associated with the adjustment parameter, and the bone parameter of the at least one head bone is adjusted according to the bone parameter adjustment information.
- In the case where the at least one head bone is a single bone, the bone parameter of that bone is adjusted according to the bone parameter adjustment information; in the case where the at least one head bone is multiple bones, the bone parameters of the multiple head bones are adjusted simultaneously according to the bone parameter adjustment information.
- adjusting the bone parameters of the at least one head bone according to the bone parameter adjustment information may include:
- the bone parameter of the at least one head bone associated with the bone parameter adjustment information is adjusted according to the change ratio of the bone parameter adjustment information within its adjustment range, wherein the adjustment of the bone parameter stays within the adjustment range of that bone parameter.
- In one example, a control such as a slider can be set for the adjustment parameter in advance, and the adjustment instruction for the adjustment parameter can be generated by operating the control. By acquiring the output change amount of the control, that is, the bone parameter adjustment information corresponding to the adjustment instruction, the associated bone parameters can be adjusted and the model thereby changed.
- the following takes the adjustment of the overall size of the eyes in FIG. 7A as an example to specifically describe the control of the adjustment parameters on the face model.
- First, the parameter change range of the adjuster controller1_eye used to adjust the overall size of the eyes is set to [0, 1], that is, the adjustment parameter value can be any value within [0, 1], and operating the slider makes the adjustment parameter value change within this interval, as shown in Figure 7B.
- the adjuster controller1_eye can control the three bones eye_01, eye_02, and eye_03 in the eye area.
- The parameters of each bone that can be controlled by the adjuster controller1_eye are as follows: the scaling parameters (sx, sy, sz) of the bone eye_01, the scaling parameters (sx, sy, sz) of the bone eye_02, and the scaling parameters (sx, sy, sz) of the bone eye_03. For example, the adjustment range of each of these parameters is [-1, 1], that is, each parameter value can be any number within this interval.
- the adjustment range of other parameters of these three bones is 0, that is, they cannot be adjusted by the adjuster controller1_eye, and are not displayed in the control list.
- When the slider is operated, the adjustment parameter value of controller1_eye changes and this value is obtained. The value is then used to linearly interpolate, within the corresponding adjustment ranges, the parameters of the above three bones controlled by the adjuster controller1_eye, yielding the adjusted value of each parameter. That is, based on the change of the adjustment parameter value of controller1_eye within its parameter range, the associated bone parameter values change proportionally within their adjustment ranges. This realizes the adjustment of the overall size of the eyes.
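- Following the linear-interpolation sketch above, this eye-size example could be computed as follows (values taken from this example):

```python
# Slider of controller1_eye moved to 0.3 within its [0, 1] change range; each associated
# scaling parameter (sx, sy, sz of eye_01, eye_02 and eye_03) has the adjustment range [-1, 1].
slider = 0.3
ratio = (slider - 0.0) / (1.0 - 0.0)           # change ratio of the adjustment parameter
scale_value = -1.0 + ratio * (1.0 - (-1.0))    # same ratio applied within [-1, 1]
print(scale_value)  # -0.4, written to sx, sy, sz of eye_01, eye_02 and eye_03
```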
- Figure 7C shows a schematic diagram of the original overall eye size, where each frame on the left represents bone parameters: the overall head parameter (head_all_p_N), the overall left cheek parameter (cheek_p_L), the left cheek parameters (cheek_01_p_L, cheek_02_p_L, cheek_03_p_L), the overall right cheek parameter (cheek_p_R), the right cheek parameters (cheek_01_p_R, cheek_02_p_R, cheek_03_p_R), the overall eye parameter (eye_p_L), and the eye parameters (eye_01_p_L, eye_02_p_L, eye_03_p_L, eye_04_p_L), together with the slider control of each adjuster.
- What FIGS. 7C and 7D show is the three-dimensional model obtained by skinning on the basis of the skeleton rather than the skeleton itself, but the effect of the bone changes can still be seen from them.
- In one example, when a bone is adjusted, the parameters of the bone symmetrical to the adjusted bone change accordingly. Most bones of a face are symmetrical, and when the bones on one side are adjusted, the bones on the other side change accordingly; that is, the parameters of symmetrical bones are related, and if one of them changes, the others change accordingly.
- the local area mentioned in the present disclosure may be one area or multiple areas that need to be controlled by adjusting the parameters in order to achieve certain effects.
- the control corresponding to the adjustment parameter can be named based on the effect that the adjustment parameter can achieve. As shown in Fig. 7C to Fig. 7D, the controls named “left and right eyebrows” and “up and down eyebrows” are included, which intuitively reflects the effects that can be achieved by the control and is convenient for users to operate.
- In related technologies, custom makeup is usually generated by model separation: a separate model is split out for each replaceable part on the basis of the character model. However, this method causes the CPU to issue multiple calls to the graphics programming interface (draw calls) during the rendering phase, which seriously affects the performance of the program.
- a method for generating facial makeup for the generated face model is proposed. As shown in FIG. 8, the following steps may be included.
- step 801 a face map is generated based on the bone parameters of the face model.
- a face map can be generated according to the bone parameters of the face model.
- the face map may include multiple regions corresponding to makeup. For different bone parameters, the size and position of each rectangular region are usually different.
- The makeup mentioned here refers to face parts that can be replaced on the face model, such as eyebrows, blush, lips, or a beard, rather than the irreplaceable facial parts already present in the generated model.
- the replaceable facial parts are not limited to the above, and may also include other facial makeup.
- the face texture may also be a face texture created and generated in other ways, and is not limited to the generation based on the bone parameters of the aforementioned face model.
- the face map includes rectangular areas corresponding to eyebrows, blush, lips or lips and mustache.
- each rectangular area may include at least one of the following parameters: width, height, coordinate lateral offset value, coordinate longitudinal offset value.
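- A minimal sketch of such a rectangular makeup region; the field names and example values are illustrative, not taken from the patent:

```python
# Minimal sketch of a rectangular makeup region in the face map; field names and
# example values are illustrative only.
from dataclasses import dataclass

@dataclass
class MakeupRegion:
    width: int      # width of the rectangular area in pixels
    height: int     # height of the rectangular area in pixels
    offset_x: int   # coordinate lateral (horizontal) offset within the face map
    offset_y: int   # coordinate longitudinal (vertical) offset within the face map

# Example layout for one set of bone parameters (made-up values).
regions = {
    "eyebrow": MakeupRegion(width=256, height=64, offset_x=128, offset_y=96),
    "lips":    MakeupRegion(width=192, height=96, offset_x=160, offset_y=384),
}
```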
- step 802 a map of replaceable parts corresponding to the makeup is obtained for the region.
- the replaceable component textures of the corresponding makeup can be produced and generated according to the parameters of the rectangular area, and the corresponding textures can also be called and imported.
- An example of the texture is shown in Figure 9B, where the texture of each replaceable part is consistent with the width and height of the corresponding rectangular area.
- the color of the texture of each replaceable part can be changed, and a layer of detailed texture can also be added to the texture.
- The texture of the replaceable part map can be generated by blending the transparency information and the texture information of the replaceable part map, where the texture information is the texture selected for the replaceable part. In the blending formula, Color_final represents the final color of the texture; in regions where the replaceable part map is transparent, the color of the face map is displayed.
- step 803 the replaceable component textures acquired for each makeup are merged with the facial textures to obtain a merged texture.
- The replaceable part map of each makeup can be merged with the face map in the following way: according to the coordinate lateral offset value and coordinate longitudinal offset value of the rectangular area corresponding to the replaceable part map, the replaceable part texture is copied to the corresponding rectangular area of the face map, and the face map and the replaceable part texture are blended according to the transparency information.
- the transparency information is the transparency information of the texture of the replaceable component.
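- A minimal sketch of this merge step, treating the maps as float image arrays and assuming straightforward per-pixel alpha blending is intended:

```python
import numpy as np

def merge_part_into_face(face_map, part_rgba, offset_x, offset_y):
    """Copy a replaceable part texture into its rectangular area of the face map
    and blend it with the face map according to the part's transparency.

    face_map  -- (H, W, 3) float RGB face map, modified in place
    part_rgba -- (h, w, 4) float RGBA replaceable part texture, alpha in [0, 1]
    offset_x, offset_y -- lateral / longitudinal offsets of the rectangular area
    """
    h, w = part_rgba.shape[:2]
    alpha = part_rgba[..., 3:4]                                        # transparency information
    base = face_map[offset_y:offset_y + h, offset_x:offset_x + w]      # face-map pixels under the rectangle
    blended = alpha * part_rgba[..., :3] + (1.0 - alpha) * base
    face_map[offset_y:offset_y + h, offset_x:offset_x + w] = blended
    return face_map
```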
- In one example, the merging may be performed on a render texture (RenderTexture), and the face map together with its corresponding lighting model can be copied to the render texture.
- step 804 the merged texture is mapped to the face model to generate facial makeup of the face model.
- Specifically, the merged face map can first be rendered to a frame buffer object (FrameBufferObject), and the frame buffer object is associated, on the GPU, with the map object corresponding to the face model according to the texture coordinates (UV) of the face model, thereby realizing the mapping of the merged texture onto the face model.
- FIG. 10A shows the initial face model
- FIG. 10B shows the face model after the face makeup is generated by the above method.
- Compared with separating a model per replaceable part, the above method merges the maps first and then draws them, which improves rendering efficiency.
- FIG. 11A provides a virtual object deformation processing device. As shown in FIG. 11A, the device may include:
- the first acquiring unit 1101 is configured to acquire first bone parameters and first fusion deformation data corresponding to a three-dimensional preset virtual object model including multiple bones, where the first fusion deformation data is used to represent the degree of deformation of the preset virtual object ;
- the second acquiring unit 1102 is configured to acquire the second bone parameter corresponding to the three-dimensional target virtual object model, and determine the transformation relationship between the second bone parameter and the first bone parameter;
- The determining unit 1103 is configured to determine, according to the transformation relationship between the second bone parameters and the first bone parameters and the first fusion deformation data, the second fusion deformation data corresponding to the three-dimensional target virtual object model.
- Fig. 11B provides another virtual object deformation processing device.
- In this device, the second acquiring unit 1102 includes a face model generation subunit 1102_1, which is used to generate the target face model and is specifically configured to: acquire bone parameter adjustment information of a three-dimensional target face model; adjust the head bone parameters corresponding to the three-dimensional target face model according to the bone parameter adjustment information; and generate the three-dimensional target face model according to the adjusted head bone parameters.
- FIG. 12 is a virtual object deformation processing device provided by at least one embodiment of this specification.
- The device may include a storage medium 1201 and a processor 1202. The storage medium 1201 is configured to store machine-executable instructions that can run on the processor, and the processor 1202 is configured to implement the virtual object deformation processing method described in any embodiment of this specification when executing the machine-executable instructions.
- At least one embodiment of this specification also provides a machine-readable storage medium on which machine-executable instructions are stored, and when the machine-executable instructions are executed by a processor, any one of the virtual object deformation processing methods described in this specification is implemented .
- One or more embodiments of this specification can be provided as a method, a system, or a computer program product. Therefore, one or more embodiments of this specification may take the form of a pure hardware embodiment, a pure software embodiment, or an embodiment combining software and hardware. Moreover, one or more embodiments of this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
- The embodiments of the subject matter and functional operations described in this specification can be implemented in digital electronic circuits, tangible computer software or firmware, computer hardware including the structures disclosed in this specification and their structural equivalents, or a combination of one or more of them.
- The embodiments of the subject matter described in this specification can be implemented as one or more computer programs, that is, one or more modules of computer program instructions encoded on a tangible non-transitory program carrier to be executed by a data processing apparatus or to control the operation of the data processing apparatus.
- The program instructions may also be encoded on an artificially generated propagated signal, such as a machine-generated electrical, optical, or electromagnetic signal, which is generated to encode information and transmit it to a suitable receiver device for execution by a data processing apparatus.
- the computer storage medium may be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
- the processing and logic flow described in this specification can be executed by one or more programmable computers executing one or more computer programs to perform corresponding functions by operating according to input data and generating output.
- the processing and logic flow can also be executed by a dedicated logic circuit, such as FPGA (Field Programmable Gate Array) or ASIC (Application Specific Integrated Circuit), and the device can also be implemented as a dedicated logic circuit.
- Computers suitable for executing computer programs include, for example, general-purpose and/or special-purpose microprocessors, or any other type of central processing unit.
- the central processing unit will receive instructions and data from a read-only memory and/or random access memory.
- the basic components of a computer include a central processing unit for implementing or executing instructions and one or more memory devices for storing instructions and data.
- Generally, a computer will also include one or more mass storage devices for storing data, such as magnetic disks, magneto-optical disks, or optical disks, or the computer will be operatively coupled to such a mass storage device to receive data from it, transmit data to it, or both. However, the computer does not have to have such devices.
- In addition, the computer can be embedded in another device, such as a mobile phone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a global positioning system (GPS) receiver, or a portable storage device with a universal serial bus (USB) flash drive, to name a few.
- Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media, and memory devices, including, for example, semiconductor memory devices (such as EPROM, EEPROM, and flash memory devices), magnetic disks (such as internal hard disks or removable disks), magneto-optical disks, and CD-ROM and DVD-ROM disks.
- the processor and the memory can be supplemented by or incorporated into a dedicated logic circuit.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Architecture (AREA)
- Geometry (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Processing Or Creating Images (AREA)
Claims (25)
- 1. A virtual object deformation processing method, characterized in that the method includes: acquiring first bone parameters and first fusion deformation data corresponding to a three-dimensional preset virtual object model including a plurality of bones, the first fusion deformation data being used to characterize a degree of deformation of a preset virtual object; acquiring second bone parameters corresponding to a three-dimensional target virtual object model, and determining a transformation relationship between the second bone parameters and the first bone parameters; and determining, according to the transformation relationship between the second bone parameters and the first bone parameters and the first fusion deformation data, second fusion deformation data corresponding to the three-dimensional target virtual object model.
- 2. The method according to claim 1, characterized in that determining, according to the transformation relationship between the second bone parameters and the first bone parameters and the first fusion deformation data, the second fusion deformation data corresponding to the three-dimensional target virtual object model includes: acquiring first grid data corresponding to the three-dimensional preset virtual object model; acquiring second grid data corresponding to the three-dimensional target virtual object model; and determining, according to the second grid data and the first grid data, the second fusion deformation data corresponding to the three-dimensional target virtual object model.
- 3. The method according to claim 2, characterized in that acquiring the first grid data corresponding to the three-dimensional preset virtual object model includes: acquiring standard grid data corresponding to the first bone parameters; and obtaining, according to the standard grid data and the first fusion deformation data, the first grid data corresponding to the first fusion deformation data.
- 4. The method according to claim 2 or 3, characterized in that acquiring the second grid data corresponding to the three-dimensional target virtual object model includes: determining transformation matrices of all bones in the three-dimensional target virtual object model according to the second bone parameters and the first bone parameters; and determining the second grid data corresponding to the three-dimensional target virtual object model by applying the transformation matrices to the first grid data.
- 5. The method according to claim 4, characterized in that determining the transformation matrices of all bones in the three-dimensional target virtual object model according to the second bone parameters and the first bone parameters includes: acquiring a first position matrix of each bone in the three-dimensional preset virtual object model according to the first bone parameters corresponding to the three-dimensional preset virtual object model; acquiring a second position matrix of each bone in the three-dimensional target virtual object model according to the second bone parameters corresponding to the three-dimensional target virtual object model; and obtaining a transformation matrix of each bone according to the second position matrix and the first position matrix.
- 6. The method according to any one of claims 1 to 5, characterized in that the virtual object model is a face model, the bones included in the virtual object model are head bones, and the three-dimensional target virtual object model is generated through the following steps: acquiring bone parameter adjustment information of a three-dimensional target face model; adjusting head bone parameters corresponding to the three-dimensional target face model according to the bone parameter adjustment information; and generating the three-dimensional target face model according to the adjusted head bone parameters.
- 7. The method according to claim 6, characterized in that acquiring the bone parameter adjustment information of the three-dimensional target face model includes: receiving a bone parameter adjustment instruction; and determining the bone parameter adjustment information according to the bone parameter adjustment instruction.
- 8. The method according to claim 6 or 7, characterized in that adjusting the head bone parameters corresponding to the three-dimensional target face model according to the bone parameter adjustment information includes: acquiring, from the head bone parameters, a bone parameter of at least one head bone associated with the bone parameter adjustment information, and adjusting the bone parameter of the at least one head bone according to the bone parameter adjustment information.
- 9. The method according to claim 8, characterized in that the at least one head bone includes multiple head bones, and adjusting the bone parameters of the at least one head bone according to the bone parameter adjustment information includes: simultaneously adjusting the bone parameters of the multiple head bones according to the bone parameter adjustment information.
- 10. The method according to claim 8 or 9, characterized in that adjusting the bone parameters of the at least one head bone according to the bone parameter adjustment information includes: acquiring an adjustment range of the bone parameter adjustment information; acquiring an adjustment range of each bone parameter associated with the bone parameter adjustment information; and adjusting the bone parameter of the at least one head bone associated with the bone parameter adjustment information according to a change ratio of the bone parameter adjustment information within its adjustment range.
- 11. The method according to any one of claims 6 to 10, characterized in that determining the bone parameter adjustment information according to the bone parameter adjustment instruction includes: acquiring an output change amount of a control set for the bone parameter adjustment instruction, and determining the bone parameter adjustment information according to the output change amount.
- 12. The method according to any one of claims 6 to 11, characterized in that the method further includes: generating a face map based on the bone parameters of the three-dimensional target face model, where the face map includes multiple regions corresponding to makeup; acquiring, for each region, a replaceable part map of the corresponding makeup; merging the replaceable part maps acquired for each makeup with the face map to obtain a merged map; and mapping the merged map onto the three-dimensional target face model to generate facial makeup of the three-dimensional target face model.
- 13. A virtual object deformation processing apparatus, characterized by including: a first acquiring unit, configured to acquire first bone parameters and first fusion deformation data corresponding to a three-dimensional preset virtual object model including a plurality of bones, where the first fusion deformation data is used to characterize a degree of deformation of a preset virtual object; a second acquiring unit, configured to acquire second bone parameters corresponding to a three-dimensional target virtual object model and determine a transformation relationship between the second bone parameters and the first bone parameters; and a determining unit, configured to determine, according to the transformation relationship between the second bone parameters and the first bone parameters and the first fusion deformation data, second fusion deformation data corresponding to the three-dimensional target virtual object model.
- 14. The apparatus according to claim 13, characterized in that the determining unit is specifically configured to: acquire first grid data corresponding to the three-dimensional preset virtual object model; acquire second grid data corresponding to the three-dimensional target virtual object model; and determine, according to the second grid data and the first grid data, the second fusion deformation data corresponding to the three-dimensional target virtual object model.
- 15. The apparatus according to claim 14, characterized in that, when acquiring the first grid data corresponding to the three-dimensional preset virtual object model, the determining unit is specifically configured to: acquire standard grid data corresponding to the first bone parameters; and obtain, according to the standard grid data and the first fusion deformation data, the first grid data corresponding to the first fusion deformation data.
- 16. The apparatus according to claim 14 or 15, characterized in that, when acquiring the second grid data corresponding to the three-dimensional target virtual object model, the determining unit is specifically configured to: determine transformation matrices of all bones in the three-dimensional target virtual object model according to the second bone parameters and the first bone parameters; and determine the second grid data corresponding to the three-dimensional target virtual object model by applying the transformation matrices to the first grid data.
- 17. The apparatus according to claim 16, characterized in that, when determining the transformation matrices of all bones in the three-dimensional target virtual object model according to the second bone parameters and the first bone parameters, the determining unit is specifically configured to: acquire a first position matrix of each bone in the three-dimensional preset virtual object model according to the first bone parameters corresponding to the three-dimensional preset virtual object model; acquire a second position matrix of each bone in the three-dimensional target virtual object model according to the second bone parameters corresponding to the three-dimensional target virtual object model; and obtain a transformation matrix of each bone according to the second position matrix and the first position matrix.
- 18. The apparatus according to any one of claims 13 to 17, characterized in that the virtual object model is a face model, the bones included in the virtual object model are head bones, and the second acquiring unit further includes a three-dimensional target face model generation unit configured to: acquire bone parameter adjustment information of a three-dimensional target face model; adjust head bone parameters corresponding to the three-dimensional target face model according to the bone parameter adjustment information; and generate the three-dimensional target face model according to the adjusted head bone parameters.
- 19. The apparatus according to claim 18, characterized in that, when acquiring the bone parameter adjustment information of the three-dimensional target face model, the three-dimensional target face model generation unit is specifically configured to: receive a bone parameter adjustment instruction; and determine the bone parameter adjustment information according to the bone parameter adjustment instruction.
- 20. The apparatus according to claim 18 or 19, characterized in that, when adjusting the head bone parameters corresponding to the three-dimensional target face model according to the bone parameter adjustment information, the three-dimensional target face model generation unit is specifically configured to: acquire, from the head bone parameters, a bone parameter of at least one head bone associated with the bone parameter adjustment information, and adjust the bone parameter of the at least one head bone according to the bone parameter adjustment information.
- 21. The apparatus according to claim 20, characterized in that the at least one head bone includes multiple head bones, and, when adjusting the bone parameters of the at least one head bone according to the bone parameter adjustment information, the three-dimensional target face model generation unit is specifically configured to: simultaneously adjust the bone parameters of the multiple head bones according to the bone parameter adjustment information.
- 22. The apparatus according to claim 20 or 21, characterized in that, when adjusting the bone parameters of the at least one head bone according to the bone parameter adjustment information, the three-dimensional target face model generation unit is specifically configured to: acquire an adjustment range of the bone parameter adjustment information; acquire an adjustment range of each bone parameter associated with the bone parameter adjustment information; and adjust the bone parameter of the at least one head bone associated with the bone parameter adjustment information according to a change ratio of the bone parameter adjustment information within its adjustment range.
- 23. The apparatus according to any one of claims 18 to 22, characterized in that, when adjusting the head bone parameters according to the bone parameter adjustment information, the three-dimensional target face model generation unit is specifically configured to: acquire an output change amount of a control set for the bone parameter adjustment instruction, and determine the bone parameter adjustment information according to the output change amount.
- A device for processing deformation of a virtual object, wherein the device comprises a storage medium and a processor, the storage medium being configured to store machine-executable instructions runnable on the processor, and the processor being configured to implement the method according to any one of claims 1 to 12 when executing the machine-executable instructions.
- A machine-readable storage medium having machine-executable instructions stored thereon, wherein the machine-executable instructions, when executed by a processor, implement the method according to any one of claims 1 to 12.
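Illustrative sketch for claims 1 to 5. The claimed pipeline derives the target model's second fusion deformation data by (i) adding the first fusion deformation data to the standard mesh to obtain the first mesh data (claim 3), (ii) computing one transformation matrix per bone from the preset and target bone position matrices (claim 5), (iii) applying those matrices to the first mesh data to obtain the second mesh data (claim 4), and (iv) comparing the result with the target model's own mesh (claims 1 and 2). The Python/NumPy sketch below is a minimal illustration under stated assumptions, not the patented implementation: bone position matrices are taken as 4x4 matrices, vertices as a (V, 3) array bound to bones by linear blend skinning weights, and every name (skin_weights, target_neutral_mesh, and so on) is hypothetical.

```python
import numpy as np

def bone_transform_matrices(first_bind, second_bind):
    # Claim 5: per-bone transformation matrix mapping the preset pose to the
    # target pose, T_i = P2_i @ inv(P1_i), built from the position matrices.
    return [p2 @ np.linalg.inv(p1) for p1, p2 in zip(first_bind, second_bind)]

def apply_bone_transforms(vertices, transforms, skin_weights):
    # Claim 4: apply the per-bone transformation matrices to the mesh vertices
    # using linear blend skinning; skin_weights has shape (V, num_bones).
    homo = np.concatenate([vertices, np.ones((len(vertices), 1))], axis=1)  # (V, 4)
    out = np.zeros_like(homo)
    for b, T in enumerate(transforms):
        out += skin_weights[:, b:b + 1] * (homo @ T.T)
    return out[:, :3]

def second_fusion_deformation(standard_mesh, first_fusion_deformation,
                              first_bind, second_bind, skin_weights,
                              target_neutral_mesh):
    # Claim 3: first mesh data = standard mesh data + first fusion deformation data.
    first_mesh = standard_mesh + first_fusion_deformation
    # Claim 5: transformation matrices of all bones.
    transforms = bone_transform_matrices(first_bind, second_bind)
    # Claim 4: second mesh data = transformation matrices applied to first mesh data.
    second_mesh = apply_bone_transforms(first_mesh, transforms, skin_weights)
    # Claims 1-2: second fusion deformation data relative to the target model's mesh.
    return second_mesh - target_neutral_mesh
```

Under these assumptions the second fusion deformation data is simply the per-vertex offset between the transformed deformed mesh and the target model's neutral mesh; the exact mesh representation and skinning scheme are left to the implementation and are not fixed by the claims.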
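Illustrative sketch for claims 6 to 11. A bone parameter adjustment instruction is received from a control (for example a slider); the control's output change, taken as a change ratio within the control's adjustment range, is applied to the bone parameters of every associated head bone within each parameter's own adjustment range, and the target face model is regenerated from the adjusted parameters. The sketch below is an assumption-laden illustration rather than the claimed implementation; BoneParam, the control ranges, and the build_model callback are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class BoneParam:
    # One adjustable head-bone parameter with its own adjustment range (claim 10).
    name: str
    value: float
    lo: float
    hi: float

def adjust_from_control(control_value: float, control_lo: float, control_hi: float,
                        associated_params: List[BoneParam]) -> None:
    # Claims 9-11: the control's output change, expressed as a change ratio within
    # the control's adjustment range, is applied to every associated bone parameter
    # within that parameter's own adjustment range (several head bones at once).
    ratio = (control_value - control_lo) / (control_hi - control_lo)
    for p in associated_params:
        p.value = p.lo + ratio * (p.hi - p.lo)

def generate_target_face_model(control_value: float, control_lo: float, control_hi: float,
                               associated_params: List[BoneParam],
                               build_model: Callable[[List[BoneParam]], object]):
    # Claims 6-8: determine the adjustment information from the control, adjust the
    # bone parameters of the associated head bones, then regenerate the face model.
    adjust_from_control(control_value, control_lo, control_hi, associated_params)
    return build_model(associated_params)
```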
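Illustrative sketch for claim 12. A face map generated from the target face model's bone parameters contains regions corresponding to makeup (for example eyebrows and lips); a replaceable component map is acquired for each region, the component maps are merged with the face map, and the merged map is mapped onto the three-dimensional target face model. The Pillow-based sketch below only illustrates the merging step and assumes each component map is an RGBA image pasted at a known region offset; the region names and offsets are hypothetical, and the final UV mapping onto the model is engine-specific and omitted.

```python
from PIL import Image

def merge_makeup(face_map: Image.Image, components: dict) -> Image.Image:
    # components maps a region name (e.g. "eyebrow", "lip") to a tuple
    # (component_map_rgba, (x, y)), where (x, y) is the region's top-left corner
    # in the face map. Each replaceable component map is alpha-composited into
    # its region to produce the merged map.
    merged = face_map.convert("RGBA")
    for _region, (component, offset) in components.items():
        merged.alpha_composite(component.convert("RGBA"), dest=offset)
    return merged

# The merged map would then be assigned as the texture of the three-dimensional
# target face model by the rendering engine; the UV mapping step is omitted here.
```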
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020558622A JP7126001B2 (ja) | 2019-05-15 | 2020-02-11 | 仮想オブジェクトの変形処理方法及び装置、機器並びに記憶媒体 |
EP20806857.7A EP3971819A4 (en) | 2019-05-15 | 2020-02-11 | SHAPING PROCESSING METHOD, APPARATUS AND DEVICE FOR VIRTUAL OBJECT AND MEMORY MEDIA |
KR1020207031891A KR20200139240A (ko) | 2019-05-15 | 2020-02-11 | 가상 객체 변형 처리 방법, 장치, 기기 및 저장 매체 |
SG11202010607UA SG11202010607UA (en) | 2019-05-15 | 2020-02-11 | Method, apparatus and device for processing deformation of virtual object, and storage medium |
US17/079,980 US11100709B2 (en) | 2019-05-15 | 2020-10-26 | Method, apparatus and device for processing deformation of virtual object, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910403876.3A CN110111247B (zh) | 2019-05-15 | 2019-05-15 | 人脸变形处理方法、装置及设备 |
CN201910403876.3 | 2019-05-15 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/079,980 Continuation US11100709B2 (en) | 2019-05-15 | 2020-10-26 | Method, apparatus and device for processing deformation of virtual object, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020228385A1 (zh) | 2020-11-19 |
Family ID: 67490278
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/074705 WO2020228385A1 (zh) | 2019-05-15 | 2020-02-11 | 虚拟对象的变形处理方法、装置、设备及存储介质 |
Country Status (8)
Country | Link |
---|---|
US (1) | US11100709B2 (zh) |
EP (1) | EP3971819A4 (zh) |
JP (1) | JP7126001B2 (zh) |
KR (1) | KR20200139240A (zh) |
CN (1) | CN110111247B (zh) |
SG (1) | SG11202010607UA (zh) |
TW (1) | TWI752494B (zh) |
WO (1) | WO2020228385A1 (zh) |
Families Citing this family (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110111247B (zh) * | 2019-05-15 | 2022-06-24 | 浙江商汤科技开发有限公司 | 人脸变形处理方法、装置及设备 |
GB2585078B (en) * | 2019-06-28 | 2023-08-09 | Sony Interactive Entertainment Inc | Content generation system and method |
CN110624244B (zh) * | 2019-10-24 | 2023-04-18 | 网易(杭州)网络有限公司 | 游戏中脸部模型的编辑方法、装置和终端设备 |
CN110766777B (zh) * | 2019-10-31 | 2023-09-29 | 北京字节跳动网络技术有限公司 | 虚拟形象的生成方法、装置、电子设备及存储介质 |
CN111651152A (zh) * | 2020-04-27 | 2020-09-11 | 北京编程猫科技有限公司 | 一种基于图形化编程的变换人物图画的方法及装置 |
CN111714885A (zh) * | 2020-06-22 | 2020-09-29 | 网易(杭州)网络有限公司 | 游戏角色模型生成、角色调整方法、装置、设备及介质 |
CN112017295B (zh) * | 2020-08-28 | 2024-02-09 | 重庆灵翎互娱科技有限公司 | 一种可调节动态头模型生成方法、终端和计算机存储介质 |
CN112107865A (zh) * | 2020-09-27 | 2020-12-22 | 完美世界(北京)软件科技发展有限公司 | 一种面部动画模型处理方法、装置、电子设备及存储介质 |
CN112419454B (zh) * | 2020-11-25 | 2023-11-28 | 北京市商汤科技开发有限公司 | 一种人脸重建方法、装置、计算机设备及存储介质 |
CN112330805B (zh) * | 2020-11-25 | 2023-08-08 | 北京百度网讯科技有限公司 | 人脸3d模型生成方法、装置、设备及可读存储介质 |
CN112419485B (zh) * | 2020-11-25 | 2023-11-24 | 北京市商汤科技开发有限公司 | 一种人脸重建方法、装置、计算机设备及存储介质 |
CN112562043B (zh) * | 2020-12-08 | 2023-08-08 | 北京百度网讯科技有限公司 | 图像处理方法、装置和电子设备 |
CN112967212A (zh) * | 2021-02-01 | 2021-06-15 | 北京字节跳动网络技术有限公司 | 一种虚拟人物的合成方法、装置、设备及存储介质 |
CN112967364A (zh) * | 2021-02-09 | 2021-06-15 | 咪咕文化科技有限公司 | 一种图像处理方法、装置及设备 |
CN113050795A (zh) * | 2021-03-24 | 2021-06-29 | 北京百度网讯科技有限公司 | 虚拟形象的生成方法及装置 |
CN113362435B (zh) * | 2021-06-16 | 2023-08-08 | 网易(杭州)网络有限公司 | 虚拟对象模型的虚拟部件变化方法、装置、设备及介质 |
CN113470148B (zh) * | 2021-06-30 | 2022-09-23 | 完美世界(北京)软件科技发展有限公司 | 表情动画制作方法及装置、存储介质、计算机设备 |
CN113422977B (zh) * | 2021-07-07 | 2023-03-14 | 上海商汤智能科技有限公司 | 直播方法、装置、计算机设备以及存储介质 |
CN113546420B (zh) * | 2021-07-23 | 2024-04-09 | 网易(杭州)网络有限公司 | 虚拟对象的控制方法、装置、存储介质及电子设备 |
CN113610992B (zh) * | 2021-08-04 | 2022-05-20 | 北京百度网讯科技有限公司 | 骨骼驱动系数确定方法、装置、电子设备及可读存储介质 |
CN113658307A (zh) * | 2021-08-23 | 2021-11-16 | 北京百度网讯科技有限公司 | 图像处理方法及装置 |
CN113986015B (zh) * | 2021-11-08 | 2024-04-30 | 北京字节跳动网络技术有限公司 | 虚拟道具的处理方法、装置、设备和存储介质 |
CN114677476A (zh) * | 2022-03-30 | 2022-06-28 | 北京字跳网络技术有限公司 | 一种脸部处理方法、装置、计算机设备及存储介质 |
CN115601484B (zh) * | 2022-11-07 | 2023-03-28 | 广州趣丸网络科技有限公司 | 虚拟人物面部驱动方法、装置、终端设备和可读存储介质 |
KR102652652B1 (ko) * | 2022-11-29 | 2024-03-29 | 주식회사 일루니 | 아바타 생성 장치 및 방법 |
CN115937373B (zh) * | 2022-12-23 | 2023-10-03 | 北京百度网讯科技有限公司 | 虚拟形象驱动方法、装置、设备以及存储介质 |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100545871C (zh) * | 2006-05-12 | 2009-09-30 | 中国科学院自动化研究所 | 一种直接传递三维模型姿态的方法 |
GB2450757A (en) | 2007-07-06 | 2009-01-07 | Sony Comp Entertainment Europe | Avatar customisation, transmission and reception |
JP4977742B2 (ja) | 2008-10-17 | 2012-07-18 | 株式会社スクウェア・エニックス | 3次元モデル表示システム |
CN101968891A (zh) * | 2009-07-28 | 2011-02-09 | 上海冰动信息技术有限公司 | 用于游戏的照片自动生成三维图形系统 |
CN101783026B (zh) * | 2010-02-03 | 2011-12-07 | 北京航空航天大学 | 三维人脸肌肉模型的自动构造方法 |
CN101833788B (zh) * | 2010-05-18 | 2011-09-07 | 南京大学 | 一种采用手绘草图的三维人体建模方法 |
CN102054296A (zh) * | 2011-01-20 | 2011-05-11 | 西北大学 | 一种局部刚性网格变形方法 |
US8922553B1 (en) * | 2011-04-19 | 2014-12-30 | Disney Enterprises, Inc. | Interactive region-based linear 3D face models |
CN102157010A (zh) * | 2011-05-25 | 2011-08-17 | 上海大学 | 基于分层建模及多体驱动的三维人脸动画实现方法 |
JP6207210B2 (ja) | 2013-04-17 | 2017-10-04 | キヤノン株式会社 | 情報処理装置およびその方法 |
US9378576B2 (en) * | 2013-06-07 | 2016-06-28 | Faceshift Ag | Online modeling for real-time facial animation |
US9202300B2 (en) * | 2013-06-20 | 2015-12-01 | Marza Animation Planet, Inc | Smooth facial blendshapes transfer |
CN104978764B (zh) * | 2014-04-10 | 2017-11-17 | 华为技术有限公司 | 三维人脸网格模型处理方法和设备 |
CN104376594B (zh) * | 2014-11-25 | 2017-09-29 | 福建天晴数码有限公司 | 三维人脸建模方法和装置 |
CN104537630A (zh) * | 2015-01-22 | 2015-04-22 | 厦门美图之家科技有限公司 | 一种基于年龄估计的图像美颜方法和装置 |
GB2543893A (en) * | 2015-08-14 | 2017-05-03 | Metail Ltd | Methods of generating personalized 3D head models or 3D body models |
CN105654537B (zh) * | 2015-12-30 | 2018-09-21 | 中国科学院自动化研究所 | 一种实现与虚拟角色实时互动的表情克隆方法及装置 |
CN107633542A (zh) * | 2016-07-19 | 2018-01-26 | 珠海金山网络游戏科技有限公司 | 一种捏脸编辑和动画附加融合方法和系统 |
WO2018195485A1 (en) * | 2017-04-21 | 2018-10-25 | Mug Life, LLC | Systems and methods for automatically creating and animating a photorealistic three-dimensional character from a two-dimensional image |
CN107705365A (zh) * | 2017-09-08 | 2018-02-16 | 郭睿 | 可编辑的三维人体模型创建方法、装置、电子设备及计算机程序产品 |
CN109191570B (zh) * | 2018-09-29 | 2023-08-22 | 网易(杭州)网络有限公司 | 游戏角色脸部模型的调整方法、装置、处理器及终端 |
CN109711335A (zh) * | 2018-12-26 | 2019-05-03 | 北京百度网讯科技有限公司 | 通过人体特征对目标图片进行驱动的方法及装置 |
CN109727302B (zh) * | 2018-12-28 | 2023-08-08 | 网易(杭州)网络有限公司 | 骨骼创建方法、装置、电子设备及存储介质 |
Application events:
- 2019-05-15: CN application CN201910403876.3A (publication CN110111247B), status: active
- 2020-02-11: SG application SG11202010607UA, status: unknown
- 2020-02-11: JP application JP2020558622A (publication JP7126001B2), status: active
- 2020-02-11: PCT application PCT/CN2020/074705 (publication WO2020228385A1), status: unknown
- 2020-02-11: EP application EP20806857.7A (publication EP3971819A4), status: pending
- 2020-02-11: KR application KR1020207031891A (publication KR20200139240A), status: application discontinued
- 2020-05-13: TW application TW109115911A (publication TWI752494B), status: active
- 2020-10-26: US application US17/079,980 (publication US11100709B2), status: active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106846499A (zh) * | 2017-02-09 | 2017-06-13 | 腾讯科技(深圳)有限公司 | 一种虚拟模型的生成方法及装置 |
CN107146199A (zh) * | 2017-05-02 | 2017-09-08 | 厦门美图之家科技有限公司 | 一种人脸图像的融合方法、装置及计算设备 |
CN109395390A (zh) * | 2018-10-26 | 2019-03-01 | 网易(杭州)网络有限公司 | 游戏角色脸部模型的处理方法、装置、处理器及终端 |
CN110111247A (zh) * | 2019-05-15 | 2019-08-09 | 浙江商汤科技开发有限公司 | 人脸变形处理方法、装置及设备 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3971819A4 * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112581573A (zh) * | 2020-12-15 | 2021-03-30 | 北京百度网讯科技有限公司 | 虚拟形象驱动方法、装置、设备、介质和程序产品 |
CN112581573B (zh) * | 2020-12-15 | 2023-08-04 | 北京百度网讯科技有限公司 | 虚拟形象驱动方法、装置、设备、介质和程序产品 |
CN113298948A (zh) * | 2021-05-07 | 2021-08-24 | 中国科学院深圳先进技术研究院 | 三维网格重建方法、装置、设备及存储介质 |
CN113350792A (zh) * | 2021-06-16 | 2021-09-07 | 网易(杭州)网络有限公司 | 虚拟模型的轮廓处理方法、装置、计算机设备及存储介质 |
CN113350792B (zh) * | 2021-06-16 | 2024-04-09 | 网易(杭州)网络有限公司 | 虚拟模型的轮廓处理方法、装置、计算机设备及存储介质 |
CN113379880A (zh) * | 2021-07-02 | 2021-09-10 | 福建天晴在线互动科技有限公司 | 一种表情自动化生产方法及其装置 |
CN113379880B (zh) * | 2021-07-02 | 2023-08-11 | 福建天晴在线互动科技有限公司 | 一种表情自动化生产方法及其装置 |
CN113450452A (zh) * | 2021-07-05 | 2021-09-28 | 网易(杭州)网络有限公司 | 三维模型文件的转换方法和装置 |
CN113450452B (zh) * | 2021-07-05 | 2023-05-26 | 网易(杭州)网络有限公司 | 三维模型文件的转换方法和装置 |
Also Published As
Publication number | Publication date |
---|---|
CN110111247B (zh) | 2022-06-24 |
CN110111247A (zh) | 2019-08-09 |
TWI752494B (zh) | 2022-01-11 |
US20210043000A1 (en) | 2021-02-11 |
EP3971819A1 (en) | 2022-03-23 |
JP2021527862A (ja) | 2021-10-14 |
US11100709B2 (en) | 2021-08-24 |
KR20200139240A (ko) | 2020-12-11 |
SG11202010607UA (en) | 2020-12-30 |
TW202046250A (zh) | 2020-12-16 |
JP7126001B2 (ja) | 2022-08-25 |
EP3971819A4 (en) | 2022-07-20 |
Similar Documents
Publication | Title |
---|---|
WO2020228385A1 (zh) | 虚拟对象的变形处理方法、装置、设备及存储介质 | |
TWI748432B (zh) | 三維局部人體模型的生成方法、裝置、設備及電腦可讀儲存介質 | |
US9639914B2 (en) | Portrait deformation method and apparatus | |
WO2018201551A1 (zh) | 一种人脸图像的融合方法、装置及计算设备 | |
CN107452049B (zh) | 一种三维头部建模方法及装置 | |
CN109151540B (zh) | 视频图像的交互处理方法及装置 | |
US20120113106A1 (en) | Method and apparatus for generating face avatar | |
JP2023526566A (ja) | 高速で深い顔面変形 | |
KR20200107957A (ko) | 이미지 처리 방법 및 장치, 전자 기기 및 저장 매체 | |
KR20190043925A (ko) | 헤어 스타일 시뮬레이션 서비스를 제공하는 방법, 시스템 및 비일시성의 컴퓨터 판독 가능 기록 매체 | |
US10297036B2 (en) | Recording medium, information processing apparatus, and depth definition method | |
WO2022183723A1 (zh) | 特效控制方法及装置 | |
WO2021155828A1 (en) | Method and system for implementing dynamic input resolution for vslam systems | |
CN117152382A (zh) | 虚拟数字人面部表情计算方法和装置 | |
Ostrovka et al. | Development of a method for changing the surface properties of a three-dimensional user avatar | |
CN117132713A (zh) | 模型训练方法、数字人驱动方法及相关装置 | |
JP2023171165A (ja) | スタイル変換プログラム、スタイル変換装置、およびスタイル変換方法 | |
JP5975092B2 (ja) | 画像処理装置、画像処理方法及びプログラム | |
CN117315115A (zh) | 基于透明度纹理贴图的头发重建方法及电子设备 | |
CN116958338A (zh) | 对象姿态的调整方法、装置、设备、介质及产品 | |
CN114663628A (zh) | 图像处理方法、装置、电子设备及存储介质 | |
JP2010033187A (ja) | 画像処理装置、画像処理方法及びプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| ENP | Entry into the national phase | Ref document number: 2020558622; Country of ref document: JP; Kind code of ref document: A |
| ENP | Entry into the national phase | Ref document number: 20207031891; Country of ref document: KR; Kind code of ref document: A |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20806857; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| ENP | Entry into the national phase | Ref document number: 2020806857; Country of ref document: EP; Effective date: 20211215 |