WO2022110791A1 - Method and apparatus for face reconstruction, and computer device, and storage medium

Info

Publication number
WO2022110791A1
Authority
WO
WIPO (PCT)
Prior art keywords: face, target, data, real, virtual
Application number
PCT/CN2021/102431
Other languages
French (fr)
Chinese (zh)
Inventor
徐胜伟
王权
钱晨
Original Assignee
北京市商汤科技开发有限公司
Application filed by 北京市商汤科技开发有限公司
Priority to JP2022520004A (publication JP2023507863A)
Priority to KR1020227010819A (publication KR20220075339A)
Publication of WO2022110791A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation
    • G06T 13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50: Controlling the output signals based on the game progress
    • A63F 13/60: Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F 13/65: Generating or modifying game content automatically by game devices or servers from real world data, e.g. measurement in live racing competition
    • A63F 13/655: Generating or modifying game content automatically by importing photos, e.g. of the player
    • A63F 2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60: Methods for processing data by generating or executing the game program
    • A63F 2300/69: Involving elements of the real world in the game world, e.g. measurement in live races, real video
    • A63F 2300/695: Imported photos, e.g. of the player

Abstract

The present invention provides a method and apparatus for face reconstruction, and a computer device, and a storage medium. The method comprises: generating a first real face model on the basis of a target image; performing, by using multiple pre-generated second real face models, fitting processing on the first real face model to obtain fitting coefficients respectively corresponding to the multiple second real face models; generating target bone data and a target skin morph coefficient on the basis of the fitting coefficients respectively corresponding to the multiple second real face models, and virtual face models having preset styles and respectively corresponding to the multiple second real face models; and generating, on the basis of the target bone data and the target skin morph coefficient, a target virtual face model corresponding to the first real face model.

Description

重建人脸的方法、装置、计算机设备及存储介质Method, device, computer equipment and storage medium for reconstructing human face
相关申请的交叉引用CROSS-REFERENCE TO RELATED APPLICATIONS
本专利申请要求于2020年11月25日提交的、申请号为202011337901.1、发明名称为“一种人脸重建方法、装置、计算机设备及存储介质”的中国专利申请的优先权，该申请以引用的方式并入本文中。This patent application claims priority to the Chinese patent application filed on November 25, 2020, with application number 202011337901.1 and entitled "A method, apparatus, computer device and storage medium for face reconstruction", which is incorporated herein by reference.
技术领域technical field
本公开涉及图像处理技术领域,具体而言,涉及一种重建人脸的方法、装置、计算机设备及存储介质。The present disclosure relates to the technical field of image processing, and in particular, to a method, an apparatus, a computer device and a storage medium for reconstructing a human face.
背景技术 Background Art
通常，能够根据真实人脸或自身喜好建立虚拟人脸三维模型，以实现人脸的重建，在游戏、动漫、虚拟社交等领域具有广泛应用。例如在游戏中，玩家可以通过游戏程序提供的人脸重建系统来依照玩家提供的图像中包括的真实人脸而生成虚拟人脸三维模型，并利用所生成的虚拟人脸三维模型更有代入感的参与游戏。Usually, a three-dimensional virtual face model can be built from a real face or according to one's own preferences to realize face reconstruction, which is widely used in games, animation, virtual social interaction and other fields. For example, in a game, a player can use the face reconstruction system provided by the game program to generate a three-dimensional virtual face model from the real face contained in an image provided by the player, and participate in the game with a stronger sense of immersion through the generated three-dimensional virtual face model.
目前，在基于人像图像中包括的真实人脸进行人脸重建时，通常是基于人脸图像来提取人脸轮廓特征，然后将提取的人脸轮廓特征和预先生成的虚拟三维模型进行匹配、融合，以生成与真实人脸对应的虚拟人脸三维模型；但是，由于人脸轮廓特征与预先生成的虚拟三维模型的匹配度较低，使得生成的虚拟人脸三维模型与真实人脸形象之间的相似度较低。At present, when face reconstruction is performed based on a real face contained in a portrait image, face contour features are usually extracted from the face image, and the extracted face contour features are then matched and fused with a pre-generated virtual three-dimensional model to generate a three-dimensional virtual face model corresponding to the real face. However, because the matching degree between the face contour features and the pre-generated virtual three-dimensional model is low, the similarity between the generated three-dimensional virtual face model and the real face is low.
发明内容SUMMARY OF THE INVENTION
本公开实施例至少提供一种重建人脸的方法、装置、计算机设备及存储介质。The embodiments of the present disclosure provide at least a method, an apparatus, a computer device, and a storage medium for reconstructing a human face.
第一方面，本公开实施例提供了一种重建人脸的方法，包括：基于目标图像生成第一真实人脸模型；利用预先生成的多个第二真实人脸模型对所述第一真实人脸模型进行拟合处理，得到多个第二真实人脸模型分别对应的拟合系数；基于所述多个第二真实人脸模型分别对应的拟合系数、所述多个第二真实人脸模型分别对应的具有预设风格的虚拟人脸模型，生成目标骨骼数据以及目标蒙皮变形系数；基于所述目标骨骼数据以及所述目标蒙皮变形系数，生成与所述第一真实人脸模型对应的目标虚拟人脸模型。In a first aspect, an embodiment of the present disclosure provides a method for reconstructing a face, including: generating a first real face model based on a target image; performing fitting processing on the first real face model by using a plurality of pre-generated second real face models to obtain fitting coefficients respectively corresponding to the plurality of second real face models; generating target bone data and a target skin deformation coefficient based on the fitting coefficients respectively corresponding to the plurality of second real face models and virtual face models which have a preset style and respectively correspond to the plurality of second real face models; and generating, based on the target bone data and the target skin deformation coefficient, a target virtual face model corresponding to the first real face model.
该实施方式中，利用拟合系数作为媒介，建立了多个第二真实人脸模型与第一真实人脸模型之间的关联关系，该关联关系能够表征基于第二真实人脸模型建立的虚拟人脸模型、和基于第一真实人脸模型建立的目标虚拟人脸模型之间的关联；另外，目标蒙皮变形系数能够表征目标图像中人脸蒙皮发生变形的特征，如骨骼相同的情况下，存在可以由蒙皮表征的胖瘦差异；基于拟合系数以及目标蒙皮变形系数确定的目标虚拟人脸模型，既具有预设风格、及第一真实人脸模型对应的原始人脸的特征，又可以体现原始人脸的胖瘦特征，所生成的目标虚拟人脸模型，和第一真实人脸模型对应的原始人脸之间具有更高的相似度。In this embodiment, the fitting coefficients are used as a medium to establish an association between the plurality of second real face models and the first real face model; this association can represent the relationship between the virtual face models built from the second real face models and the target virtual face model built from the first real face model. In addition, the target skin deformation coefficient can represent how the face skin in the target image is deformed; for example, with the same bones there may be differences in fatness or thinness that can be expressed by the skin. The target virtual face model determined based on the fitting coefficients and the target skin deformation coefficient therefore has both the preset style and the features of the original face corresponding to the first real face model, and can also reflect the fatness or thinness of the original face, so the generated target virtual face model has a higher similarity to the original face corresponding to the first real face model.
在一种可选的实施方式中，基于所述多个第二真实人脸模型分别对应的拟合系数、所述多个第二真实人脸模型分别对应的具有预设风格的虚拟人脸模型，生成目标蒙皮变形系数，包括：基于所述多个第二真实人脸模型分别对应的拟合系数、以及多个所述虚拟人脸模型分别包括的蒙皮变形系数，生成所述目标蒙皮变形系数。In an optional implementation, generating the target skin deformation coefficient based on the fitting coefficients respectively corresponding to the plurality of second real face models and the virtual face models which have the preset style and respectively correspond to the plurality of second real face models includes: generating the target skin deformation coefficient based on the fitting coefficients respectively corresponding to the plurality of second real face models and the skin deformation coefficients respectively included in the plurality of virtual face models.
该实施方式中，将标准虚拟人脸模型的标准蒙皮数据作为基准，在确定了虚拟人脸模型的蒙皮变形系数后，能够基于表征虚拟人脸模型和目标虚拟人脸模型之间的关联关系的拟合系数，确定目标虚拟人脸的目标蒙皮变形系数，从而能够基于目标蒙皮变形系数更准确的确定目标虚拟人脸的蒙皮数据，使得生成的目标虚拟人脸模型和第一真实人脸模型对应的原始人脸之间具有更高的相似度。In this embodiment, the standard skin data of the standard virtual face model is used as a reference. After the skin deformation coefficients of the virtual face models are determined, the target skin deformation coefficient of the target virtual face can be determined based on the fitting coefficients that represent the relationship between the virtual face models and the target virtual face model, so that the skin data of the target virtual face can be determined more accurately based on the target skin deformation coefficient, and the generated target virtual face model has a higher similarity to the original face corresponding to the first real face model.
在一种可选的实施方式中，所述基于所述多个第二真实人脸模型分别对应的拟合系数、以及多个所述虚拟人脸模型分别包括的蒙皮变形系数，生成所述目标蒙皮变形系数，包括：对所述多个第二真实人脸模型分别对应的拟合系数进行归一化处理；基于归一化处理后的拟合系数、以及所述虚拟人脸模型分别包括的蒙皮变形系数，得到所述目标蒙皮变形系数。In an optional implementation, generating the target skin deformation coefficient based on the fitting coefficients respectively corresponding to the plurality of second real face models and the skin deformation coefficients respectively included in the plurality of virtual face models includes: normalizing the fitting coefficients respectively corresponding to the plurality of second real face models; and obtaining the target skin deformation coefficient based on the normalized fitting coefficients and the skin deformation coefficients respectively included in the virtual face models.
该实施方式中，通过对所述多个第二真实人脸模型分别对应的拟合系数进行归一化处理，使得基于归一化处理后的拟合系数、以及所述虚拟人脸模型分别包括的蒙皮变形系数得到目标蒙皮变形系数时，数据的表达更加的简单，简化了处理过程，提高了后续在使用拟合结果进行人脸重建的处理速度。In this embodiment, by normalizing the fitting coefficients respectively corresponding to the plurality of second real face models, the data expression when obtaining the target skin deformation coefficient based on the normalized fitting coefficients and the skin deformation coefficients respectively included in the virtual face models becomes simpler, which simplifies the processing and improves the processing speed of subsequent face reconstruction using the fitting results.
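By way of illustration only, the following sketch shows how this normalization and weighted combination could be carried out; the array names, shapes, and the assumption that each virtual face model's skin deformation coefficients are expressed over R preset channels are hypothetical choices for this example, not definitions from the disclosure.

```python
import numpy as np

def target_skin_deform_coeff(alpha, skin_coeffs):
    """Hypothetical sketch: combine the skin deformation coefficients of the
    N virtual face models into the target skin deformation coefficient,
    using normalized fitting coefficients as weights.

    alpha:       (N,)  fitting coefficients of the N second real face models
    skin_coeffs: (N, R) skin deformation coefficients of the N virtual face
                 models, expressed over R assumed preset deformation channels
    """
    alpha = np.asarray(alpha, dtype=float)
    alpha_norm = alpha / alpha.sum()                            # normalization step
    return alpha_norm @ np.asarray(skin_coeffs, dtype=float)   # shape (R,)

# Example: three second real face models, four deformation channels.
coeff = target_skin_deform_coeff([0.2, 0.5, 0.3], np.random.rand(3, 4))
```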
在一种可选的实施方式中，所述基于所述目标骨骼数据、以及所述目标蒙皮变形系数，生成与所述第一真实人脸模型对应的目标虚拟人脸模型，包括：基于所述目标骨骼数据、以及标准虚拟人脸模型中标准骨骼数据与标准蒙皮数据之间的关联关系，对所述标准蒙皮数据进行位置变换处理，生成中间蒙皮数据；基于所述目标蒙皮变形系数，对所述中间蒙皮数据进行变形处理，得到目标蒙皮数据；基于所述目标骨骼数据、以及所述目标蒙皮数据，构成所述目标虚拟人脸模型。In an optional implementation, generating the target virtual face model corresponding to the first real face model based on the target bone data and the target skin deformation coefficient includes: performing position transformation processing on standard skin data based on the target bone data and an association between standard bone data and the standard skin data in a standard virtual face model, to generate intermediate skin data; deforming the intermediate skin data based on the target skin deformation coefficient to obtain target skin data; and forming the target virtual face model based on the target bone data and the target skin data.
该实施方式中，在生成中间蒙皮数据后，利用目标蒙皮变形系数对中间蒙皮数据进行变形处理，得到的目标蒙皮数据不仅可以表征第一真实人脸模型的外貌特征，还能够表现出第一真实人脸的胖瘦程度，生成的目标虚拟人脸模型不仅具有外貌上的差异，还具有胖瘦程度的差异，使得在生成不同的目标虚拟人脸时，与第一真实人脸模型对应的原始人脸具有更高的相似度。In this embodiment, after the intermediate skin data is generated, the intermediate skin data is deformed by using the target skin deformation coefficient, so the obtained target skin data can represent not only the appearance features of the first real face model but also how fat or thin the first real face is. The generated target virtual face models then differ not only in appearance but also in degree of fatness or thinness, so that when different target virtual faces are generated, each has a higher similarity to the original face corresponding to the first real face model.
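A minimal sketch of the position transformation of the standard skin data followed by the deformation with the target skin deformation coefficient is given below, assuming a linear-blend-skinning style data layout (per-bone 4x4 transforms, per-point bone weights, and per-channel vertex offsets for the preset deformations); this layout is an assumption made for illustration and is not prescribed by the disclosure.

```python
import numpy as np

def build_target_skin(std_vertices, bone_transforms, skin_weights,
                      blend_deltas, target_deform_coeff):
    """Hypothetical sketch with an assumed data layout:
    std_vertices:        (V, 3) standard skin position points
    bone_transforms:     (B, 4, 4) per-bone transforms derived from the target
                         bone data relative to the standard bone data
    skin_weights:        (V, B) association weights between points and bones
    blend_deltas:        (R, V, 3) per-channel offsets of the preset skin deformations
    target_deform_coeff: (R,) target skin deformation coefficient
    """
    v = std_vertices.shape[0]
    homo = np.concatenate([std_vertices, np.ones((v, 1))], axis=1)       # (V, 4)
    # Position-transform the standard skin data with the bone transforms
    # to obtain the intermediate skin data.
    per_bone = np.einsum('bij,vj->vbi', bone_transforms, homo)[..., :3]  # (V, B, 3)
    intermediate = np.einsum('vb,vbi->vi', skin_weights, per_bone)       # (V, 3)
    # Deform the intermediate skin data with the target skin deformation
    # coefficient to obtain the target skin data.
    return intermediate + np.einsum('r,rvi->vi', target_deform_coeff, blend_deltas)

# Example: one bone with an identity transform and no deformation channels.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
target_skin = build_target_skin(verts, np.eye(4)[None], np.ones((2, 1)),
                                np.zeros((0, 2, 3)), np.zeros(0))
```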
在一种可选的实施方式中，所述目标骨骼数据包括以下至少一种：目标骨骼位置数据、目标骨骼缩放数据、以及目标骨骼旋转数据；所述多个虚拟人脸模型分别对应的骨骼数据包括以下至少一种：所述虚拟人脸的多块人脸骨骼中每块人脸骨骼对应的骨骼旋转数据、骨骼位置数据、以及骨骼缩放数据。In an optional implementation, the target bone data includes at least one of the following: target bone position data, target bone scaling data, and target bone rotation data; and the bone data respectively corresponding to the plurality of virtual face models includes at least one of the following: bone rotation data, bone position data, and bone scaling data corresponding to each of a plurality of face bones of the virtual face.
该实施方式中,利用骨骼数据能够更精确的表征多块人脸骨骼中每块骨骼对应的骨骼数据,并且利用目标骨骼数据,能够更精确的确定目标虚拟人脸模型。In this embodiment, the bone data corresponding to each of the multiple face bones can be more accurately represented by using the bone data, and the target virtual face model can be determined more accurately by using the target bone data.
在一种可选的实施方式中，基于所述多个第二真实人脸模型分别对应的拟合系数、所述多个第二真实人脸模型分别对应的具有预设风格的虚拟人脸模型，生成目标骨骼数据，包括：基于所述多个第二真实人脸模型分别对应的拟合系数，对所述多个虚拟人脸模型分别对应的骨骼位置数据进行插值处理，得到所述目标骨骼位置数据。In an optional implementation, generating the target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and the virtual face models which have the preset style and respectively correspond to the plurality of second real face models includes: performing interpolation processing on the bone position data respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone position data.
在一种可选的实施方式中，基于所述多个第二真实人脸模型分别对应的拟合系数、所述多个第二真实人脸模型分别对应的具有预设风格的虚拟人脸模型，生成目标骨骼数据，包括：基于所述多个第二真实人脸模型分别对应的拟合系数，对所述多个虚拟人脸模型分别对应的骨骼缩放数据进行插值处理，得到所述目标骨骼缩放数据。In an optional implementation, generating the target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and the virtual face models which have the preset style and respectively correspond to the plurality of second real face models includes: performing interpolation processing on the bone scaling data respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone scaling data.
在一种可选的实施方式中，基于所述多个第二真实人脸模型分别对应的拟合系数、所述多个第二真实人脸模型分别对应的具有预设风格的虚拟人脸模型，生成目标骨骼数据，包括：将所述多个虚拟人脸模型分别对应的骨骼旋转数据转换为四元数，并对所述多个虚拟人脸模型分别对应的四元数进行正则化处理，得到正则化四元数；基于所述多个第二真实人脸模型分别对应的拟合系数，对所述多个虚拟人脸模型分别对应的所述正则化四元数进行插值处理，得到所述目标骨骼旋转数据。In an optional implementation, generating the target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and the virtual face models which have the preset style and respectively correspond to the plurality of second real face models includes: converting the bone rotation data respectively corresponding to the plurality of virtual face models into quaternions, and performing regularization processing on the quaternions respectively corresponding to the plurality of virtual face models to obtain regularized quaternions; and performing interpolation processing on the regularized quaternions respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone rotation data.
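For illustration only, the quaternion conversion, regularization (normalization), and fitting-coefficient-weighted interpolation described above could be sketched as follows; the Euler-angle input convention and all names are assumptions for this example rather than definitions from the disclosure.

```python
import numpy as np

def blend_bone_rotations(alpha, eulers):
    """Hypothetical sketch: convert per-model bone rotation data (assumed to be
    intrinsic XYZ Euler angles in radians) to quaternions, regularize
    (normalize) them, and blend them with the fitting coefficients."""
    def euler_to_quat(rx, ry, rz):
        cx, sx = np.cos(rx / 2), np.sin(rx / 2)
        cy, sy = np.cos(ry / 2), np.sin(ry / 2)
        cz, sz = np.cos(rz / 2), np.sin(rz / 2)
        # Quaternion as (w, x, y, z) for the rotation qx * qy * qz.
        return np.array([
            cx * cy * cz - sx * sy * sz,
            sx * cy * cz + cx * sy * sz,
            cx * sy * cz - sx * cy * sz,
            cx * cy * sz + sx * sy * cz,
        ])

    quats = np.array([euler_to_quat(*e) for e in eulers])
    quats /= np.linalg.norm(quats, axis=1, keepdims=True)    # regularization
    # Keep all quaternions on the same hemisphere before blending.
    signs = np.sign(quats @ quats[0])
    signs[signs == 0] = 1.0
    quats *= signs[:, None]
    alpha = np.asarray(alpha, dtype=float)
    blended = (alpha / alpha.sum()) @ quats                   # interpolation
    return blended / np.linalg.norm(blended)                  # back to a unit quaternion

# Example: blend the rotation data of one bone from two virtual face models.
q = blend_bone_rotations([0.6, 0.4], [(0.0, 0.0, 0.0), (0.2, 0.1, 0.0)])
```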
在一种可选的实施方式中，所述基于目标图像生成第一真实人脸模型，包括：获取包括原始人脸的目标图像；对所述目标图像中包括的所述原始人脸进行三维人脸重建，得到所述第一真实人脸模型。In an optional implementation, generating the first real face model based on the target image includes: acquiring a target image including an original face; and performing three-dimensional face reconstruction on the original face included in the target image to obtain the first real face model.
该实施方式中,利用对原始人脸进行三维人脸重建得到的第一真实人脸模型,可以更准确且全面的表征目标图像中原始人脸的人脸特征。In this embodiment, the face features of the original face in the target image can be more accurately and comprehensively represented by using the first real face model obtained by reconstructing the original face in three dimensions.
在一种可选的实施方式中,根据以下方式预先生成多个所述第二真实人脸模型:获取包括参考人脸的多张参考图像;针对所述多张参考图像中的每张参考图像,对所述参考图像中包括的所述参考人脸进行三维人脸重建,得到所述参考图像对应的所述第二真实人脸模型。In an optional implementation manner, multiple second real face models are pre-generated according to the following methods: acquiring multiple reference images including reference faces; for each reference image in the multiple reference images , performing three-dimensional face reconstruction on the reference face included in the reference image to obtain the second real face model corresponding to the reference image.
该实施方式中，利用多张参考图像，可以尽量覆盖到较为广泛的人脸外形特征，因此，基于多张参考图像中的每张参考图像进行三维人脸重建得到的第二真实人脸模型同样可以尽量覆盖到较为广泛的人脸外形特征。In this embodiment, using multiple reference images makes it possible to cover a relatively wide range of face shape features; therefore, the second real face models obtained by performing three-dimensional face reconstruction based on each of the multiple reference images can likewise cover a relatively wide range of face shape features.
在一种可选的实施方式中，还包括针对所述多个第二真实人脸模型中的每个第二真实人脸模型，采用下述方式获取所述第二真实人脸模型对应的具有预设风格的虚拟人脸模型：生成所述第二真实人脸模型对应的具有预设风格的中间虚拟人脸模型；基于相对于标准虚拟人脸模型的多组预设蒙皮变形系数，生成与所述第二真实人脸模型对应的虚拟人脸模型相对于所述标准虚拟人脸模型的蒙皮变形系数；利用所述蒙皮变形系数，对所述中间虚拟人脸模型中的中间蒙皮数据进行调整；基于调整后的中间蒙皮数据、以及所述中间虚拟人脸模型的中间骨骼数据，生成所述每个第二真实人脸模型的虚拟人脸模型。In an optional implementation, the method further includes, for each second real face model among the plurality of second real face models, acquiring the virtual face model with the preset style corresponding to the second real face model in the following manner: generating an intermediate virtual face model with the preset style corresponding to the second real face model; generating, based on multiple sets of preset skin deformation coefficients relative to a standard virtual face model, skin deformation coefficients of the virtual face model corresponding to the second real face model relative to the standard virtual face model; adjusting intermediate skin data in the intermediate virtual face model by using the skin deformation coefficients; and generating the virtual face model of each second real face model based on the adjusted intermediate skin data and intermediate bone data of the intermediate virtual face model.
该实施方式中，通过蒙皮变形系数，对第二真实人脸模型对应的中间虚拟人脸模型的中间蒙皮数据进行调整，使得生成的虚拟人脸模型不仅具有预设风格、以及第二真实人脸模型的外貌特征，还能够表征与第二真实人脸模型对应的参考人脸胖瘦程度，使得虚拟人脸模型和对应的参考人脸之间具有更高的相似度。In this embodiment, the intermediate skin data of the intermediate virtual face model corresponding to the second real face model is adjusted by means of the skin deformation coefficients, so that the generated virtual face model not only has the preset style and the appearance features of the second real face model, but can also represent the fatness or thinness of the reference face corresponding to the second real face model, so that the virtual face model has a higher similarity to the corresponding reference face.
在一种可选的实施方式中，所述利用预先生成的多个第二真实人脸模型对所述第一真实人脸模型进行拟合处理，得到多个第二真实人脸模型分别对应的拟合系数，包括：对所述多个第二真实人脸模型以及所述第一真实人脸模型进行最小二乘处理，得到所述多个第二真实人脸模型分别对应的拟合系数。In an optional implementation, performing fitting processing on the first real face model by using the plurality of pre-generated second real face models to obtain the fitting coefficients respectively corresponding to the plurality of second real face models includes: performing least squares processing on the plurality of second real face models and the first real face model to obtain the fitting coefficients respectively corresponding to the plurality of second real face models.
该实施方式中,利用拟合系数,可以准确的表征在利用多个第二真实人脸模型拟合第一真实人脸模型时的拟合情况。In this embodiment, by using the fitting coefficient, the fitting situation when the first real face model is fitted by using a plurality of second real face models can be accurately represented.
第二方面，本公开实施例还提供一种重建人脸的装置，包括：第一生成模块，用于基于目标图像生成第一真实人脸模型；处理模块，用于利用预先生成的多个第二真实人脸模型对所述第一真实人脸模型进行拟合处理，得到多个第二真实人脸模型分别对应的拟合系数；第二生成模块，用于基于所述多个第二真实人脸模型分别对应的拟合系数、及所述多个第二真实人脸模型分别对应的具有预设风格的虚拟人脸模型，生成目标骨骼数据以及目标蒙皮变形系数；第三生成模块，用于基于所述目标骨骼数据以及所述目标蒙皮变形系数，生成与所述第一真实人脸模型对应的目标虚拟人脸模型。In a second aspect, an embodiment of the present disclosure further provides an apparatus for reconstructing a face, including: a first generation module configured to generate a first real face model based on a target image; a processing module configured to perform fitting processing on the first real face model by using a plurality of pre-generated second real face models to obtain fitting coefficients respectively corresponding to the plurality of second real face models; a second generation module configured to generate target bone data and a target skin deformation coefficient based on the fitting coefficients respectively corresponding to the plurality of second real face models and virtual face models which have a preset style and respectively correspond to the plurality of second real face models; and a third generation module configured to generate a target virtual face model corresponding to the first real face model based on the target bone data and the target skin deformation coefficient.
一种可选的实施方式中，所述第二生成模块在基于所述多个第二真实人脸模型分别对应的拟合系数、所述多个第二真实人脸模型分别对应的具有预设风格的虚拟人脸模型，生成目标蒙皮变形系数时，用于：基于所述多个第二真实人脸模型分别对应的拟合系数、以及多个所述虚拟人脸模型分别包括的蒙皮变形系数，生成所述目标蒙皮变形系数。In an optional implementation, when generating the target skin deformation coefficient based on the fitting coefficients respectively corresponding to the plurality of second real face models and the virtual face models which have the preset style and respectively correspond to the plurality of second real face models, the second generation module is configured to: generate the target skin deformation coefficient based on the fitting coefficients respectively corresponding to the plurality of second real face models and the skin deformation coefficients respectively included in the plurality of virtual face models.
一种可选的实施方式中，所述第二生成模块在基于所述多个第二真实人脸模型分别对应的拟合系数、以及多个所述虚拟人脸模型分别包括的蒙皮变形系数，生成所述目标蒙皮变形系数时，用于：对所述多个第二真实人脸模型分别对应的拟合系数进行归一化处理；基于归一化处理后的拟合系数、以及所述虚拟人脸模型分别包括的蒙皮变形系数，得到所述目标蒙皮变形系数。In an optional implementation, when generating the target skin deformation coefficient based on the fitting coefficients respectively corresponding to the plurality of second real face models and the skin deformation coefficients respectively included in the plurality of virtual face models, the second generation module is configured to: normalize the fitting coefficients respectively corresponding to the plurality of second real face models; and obtain the target skin deformation coefficient based on the normalized fitting coefficients and the skin deformation coefficients respectively included in the virtual face models.
一种可选的实施方式中，所述第三生成模块在基于所述目标骨骼数据、以及所述目标蒙皮变形系数，生成与所述第一真实人脸模型对应的目标虚拟人脸模型时，用于：基于所述目标骨骼数据、以及标准虚拟人脸模型中标准骨骼数据与标准蒙皮数据之间的关联关系，对所述标准蒙皮数据进行位置变换处理，生成中间蒙皮数据；基于所述目标蒙皮变形系数，对所述中间蒙皮数据进行变形处理，得到目标蒙皮数据；基于所述目标骨骼数据、以及所述目标蒙皮数据，构成所述目标虚拟人脸模型。In an optional implementation, when generating the target virtual face model corresponding to the first real face model based on the target bone data and the target skin deformation coefficient, the third generation module is configured to: perform position transformation processing on standard skin data based on the target bone data and an association between standard bone data and the standard skin data in a standard virtual face model, to generate intermediate skin data; deform the intermediate skin data based on the target skin deformation coefficient to obtain target skin data; and form the target virtual face model based on the target bone data and the target skin data.
一种可选的实施方式中，所述目标骨骼数据包括以下至少一种：目标骨骼位置数据、目标骨骼缩放数据、以及目标骨骼旋转数据；所述多个虚拟人脸模型分别对应的骨骼数据包括以下至少一种：所述虚拟人脸的多块人脸骨骼中每块人脸骨骼对应的骨骼旋转数据、骨骼位置数据、以及骨骼缩放数据。In an optional implementation, the target bone data includes at least one of the following: target bone position data, target bone scaling data, and target bone rotation data; and the bone data respectively corresponding to the plurality of virtual face models includes at least one of the following: bone rotation data, bone position data, and bone scaling data corresponding to each of a plurality of face bones of the virtual face.
一种可选的实施方式中，所述第二生成模块在基于所述多个第二真实人脸模型分别对应的拟合系数、所述多个第二真实人脸模型分别对应的具有预设风格的虚拟人脸模型，生成目标骨骼数据时，用于：基于所述多个第二真实人脸模型分别对应的拟合系数，对所述多个虚拟人脸模型分别对应的骨骼位置数据进行插值处理，得到所述目标骨骼位置数据。In an optional implementation, when generating the target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and the virtual face models which have the preset style and respectively correspond to the plurality of second real face models, the second generation module is configured to: perform interpolation processing on the bone position data respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone position data.
一种可选的实施方式中，第二生成模块在基于所述多个第二真实人脸模型分别对应的拟合系数、所述多个第二真实人脸模型分别对应的具有预设风格的虚拟人脸模型，生成目标骨骼数据时，用于：基于所述多个第二真实人脸模型分别对应的拟合系数，对所述多个虚拟人脸模型分别对应的骨骼缩放数据进行插值处理，得到所述目标骨骼缩放数据。In an optional implementation, when generating the target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and the virtual face models which have the preset style and respectively correspond to the plurality of second real face models, the second generation module is configured to: perform interpolation processing on the bone scaling data respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone scaling data.
一种可选的实施方式中，所述第二生成模块在基于所述多个第二真实人脸模型分别对应的拟合系数、所述多个第二真实人脸模型分别对应的具有预设风格的虚拟人脸模型，生成目标骨骼数据时，用于：将所述多个虚拟人脸模型分别对应的骨骼旋转数据转换为四元数，并对所述多个虚拟人脸模型分别对应的四元数进行正则化处理，得到正则化四元数；基于所述多个第二真实人脸模型分别对应的拟合系数，对所述多个虚拟人脸模型分别对应的所述正则化四元数进行插值处理，得到所述目标骨骼旋转数据。In an optional implementation, when generating the target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and the virtual face models which have the preset style and respectively correspond to the plurality of second real face models, the second generation module is configured to: convert the bone rotation data respectively corresponding to the plurality of virtual face models into quaternions, and perform regularization processing on the quaternions respectively corresponding to the plurality of virtual face models to obtain regularized quaternions; and perform interpolation processing on the regularized quaternions respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone rotation data.
一种可选的实施方式中，所述第一生成模块在基于目标图像生成第一真实人脸模型时，用于：获取包括原始人脸的目标图像；对所述目标图像中包括的所述原始人脸进行三维人脸重建，得到所述第一真实人脸模型。In an optional implementation, when generating the first real face model based on the target image, the first generation module is configured to: acquire a target image including an original face; and perform three-dimensional face reconstruction on the original face included in the target image to obtain the first real face model.
一种可选的实施方式中，所述处理模块根据以下方式预先生成所述多个第二真实人脸模型：获取包括参考人脸的多张参考图像；针对所述多张参考图像中的每张参考图像，对所述参考图像中包括的参考人脸进行三维人脸重建，得到所述参考图像对应的所述第二真实人脸模型。In an optional implementation, the processing module pre-generates the plurality of second real face models in the following manner: acquiring a plurality of reference images including reference faces; and, for each of the plurality of reference images, performing three-dimensional face reconstruction on the reference face included in the reference image to obtain the second real face model corresponding to the reference image.
一种可选的实施方式中，还包括获取模块，用于针对所述多个第二真实人脸模型中的每个第二真实人脸模型，采用下述方式获取所述第二真实人脸模型对应的具有预设风格的虚拟人脸模型：生成所述第二真实人脸模型对应的具有预设风格的中间虚拟人脸模型；基于相对于标准虚拟人脸模型的多组预设蒙皮变形系数，生成与所述第二真实人脸模型对应的虚拟人脸模型相对于所述标准虚拟人脸模型的蒙皮变形系数；利用所述蒙皮变形系数，对所述中间虚拟人脸模型中的中间蒙皮数据进行调整；基于调整后的中间蒙皮数据、以及所述中间虚拟人脸模型的中间骨骼数据，生成所述第二真实人脸模型的虚拟人脸模型。In an optional implementation, the apparatus further includes an acquisition module configured to acquire, for each second real face model among the plurality of second real face models, the virtual face model with the preset style corresponding to the second real face model in the following manner: generating an intermediate virtual face model with the preset style corresponding to the second real face model; generating, based on multiple sets of preset skin deformation coefficients relative to a standard virtual face model, skin deformation coefficients of the virtual face model corresponding to the second real face model relative to the standard virtual face model; adjusting intermediate skin data in the intermediate virtual face model by using the skin deformation coefficients; and generating the virtual face model of the second real face model based on the adjusted intermediate skin data and intermediate bone data of the intermediate virtual face model.
一种可选的实施方式中，所述处理模块利用预先生成的多个第二真实人脸模型对所述第一真实人脸模型进行拟合处理，得到多个第二真实人脸模型分别对应的拟合系数时，用于：对所述多个第二真实人脸模型以及所述第一真实人脸模型进行最小二乘处理，得到所述多个第二真实人脸模型分别对应的拟合系数。In an optional implementation, when performing fitting processing on the first real face model by using the plurality of pre-generated second real face models to obtain the fitting coefficients respectively corresponding to the plurality of second real face models, the processing module is configured to: perform least squares processing on the plurality of second real face models and the first real face model to obtain the fitting coefficients respectively corresponding to the plurality of second real face models.
第三方面，本公开可选实现方式还提供一种计算机设备，包括处理器、存储器，所述存储器存储有所述处理器可执行的机器可读指令，所述处理器用于执行所述存储器中存储的机器可读指令，所述机器可读指令被所述处理器执行时执行上述第一方面，或第一方面中任一种可能的实施方式中的步骤。In a third aspect, an optional implementation of the present disclosure further provides a computer device, including a processor and a memory, where the memory stores machine-readable instructions executable by the processor and the processor is configured to execute the machine-readable instructions stored in the memory; when the machine-readable instructions are executed by the processor, the steps in the above first aspect or in any possible implementation of the first aspect are performed.
第四方面，本公开可选实现方式还提供一种计算机可读存储介质，该计算机可读存储介质上存储有计算机程序，该计算机程序被运行时执行上述第一方面，或第一方面中任一种可能的实施方式中的步骤。In a fourth aspect, an optional implementation of the present disclosure further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when run, performs the steps in the above first aspect or in any possible implementation of the first aspect.
关于上述重建人脸的装置、计算机设备、及计算机可读存储介质的效果描述参见上述重建人脸的方法的说明,这里不再赘述。为使本公开的上述目的、特征和优点能更明显易懂,下文特举较佳实施例,并配合所附附图,作详细说明如下。For a description of the effects of the above-mentioned apparatus for reconstructing a human face, computer equipment, and a computer-readable storage medium, please refer to the description of the above-mentioned method for reconstructing a human face, which will not be repeated here. In order to make the above-mentioned objects, features and advantages of the present disclosure more obvious and easy to understand, the preferred embodiments are exemplified below, and are described in detail as follows in conjunction with the accompanying drawings.
附图说明Description of drawings
为了更清楚地说明本公开实施例的技术方案，下面将对实施例中所需要使用的附图作简单地介绍。这些附图示出了符合本公开的实施例，并与说明书一起用于说明本公开的技术方案。应当理解，以下附图仅示出了本公开的某些实施例，因此不应被看作是对范围的限定，对于本领域普通技术人员来讲，在不付出创造性劳动的前提下，还可以根据这些附图获得其他相关的附图。In order to describe the technical solutions of the embodiments of the present disclosure more clearly, the accompanying drawings required in the embodiments are briefly introduced below. These drawings illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings only show some embodiments of the present disclosure and therefore should not be regarded as limiting the scope; for those of ordinary skill in the art, other related drawings can also be obtained from these drawings without creative effort.
图1示出了本公开一实施例所提供的一种重建人脸的方法的流程图;FIG. 1 shows a flowchart of a method for reconstructing a human face provided by an embodiment of the present disclosure;
图2示出了本公开另一实施例所提供的一种重建人脸的方法的流程图;FIG. 2 shows a flowchart of a method for reconstructing a human face provided by another embodiment of the present disclosure;
图3示出了本公开实施例所提供的一种得到目标蒙皮变形系数的具体方法的流程图;FIG. 3 shows a flowchart of a specific method for obtaining a target skin deformation coefficient provided by an embodiment of the present disclosure;
图4示出了本公开实施例所提供的一种基于目标骨骼数据、以及目标蒙皮变形系数,生成与第一真实人脸模型对应的目标虚拟人脸模型的具体方法的流程图;4 shows a flowchart of a specific method for generating a target virtual face model corresponding to a first real face model based on target bone data and target skin deformation coefficients provided by an embodiment of the present disclosure;
图5示出了本公开实施例提供的一种重建人脸的方法中涉及的多个人脸以及人脸模型的示例;FIG. 5 shows an example of multiple faces and face models involved in a method for reconstructing a face provided by an embodiment of the present disclosure;
图6示出了本公开实施例提供的一种重建人脸的装置的示意图;FIG. 6 shows a schematic diagram of an apparatus for reconstructing a human face provided by an embodiment of the present disclosure;
图7示出了本公开实施例所提供的一种计算机设备的示意图。FIG. 7 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
具体实施方式 Detailed Description
为使本公开实施例的目的、技术方案和优点更加清楚，下面将结合本公开实施例中附图，对本公开实施例中的技术方案进行清楚、完整地描述，显然，所描述的实施例仅仅是本公开一部分实施例，而不是全部的实施例。通常在此处描述和示出的本公开实施例的组件可以以各种不同的配置来布置和设计。因此，以下对本公开的实施例的详细描述并非旨在限制要求保护的本公开的范围，而是仅仅表示本公开的选定实施例。基于本公开的实施例，本领域技术人员在没有做出创造性劳动的前提下所获得的所有其他实施例，都属于本公开保护的范围。In order to make the purposes, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, but not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure generally described and illustrated herein may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present disclosure is not intended to limit the scope of the disclosure as claimed, but merely represents selected embodiments of the disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by those skilled in the art without creative work fall within the protection scope of the present disclosure.
经研究发现，利用人脸重建的方法可以根据真实人脸或自身喜好建立虚拟人脸三维模型。其中，在基于人像图像中的真实人脸进行人脸重建的情况下，通常先对人像图像中的真实人脸进行特征提取，以得到人脸轮廓特征，再将人脸轮廓特征与预先生成的虚拟三维模型中的特征进行匹配，并基于匹配的结果，将人脸轮廓特征与虚拟三维模型进行融合，以获取与人像图像中的真实人脸对应的虚拟人脸三维模型。由于在将人脸轮廓特征与预先生成的虚拟三维模型中的特征进行匹配时，匹配的准确率较低，使得虚拟三维模型与人脸轮廓特征之间匹配的误差较大，容易造成依据匹配结果对人脸轮廓特征与人脸虚拟三维模型进行融合得到的虚拟人脸三维模型与人像图像中的人脸相似度较低的问题。Research has found that a three-dimensional virtual face model can be established from a real face or according to one's own preferences by means of face reconstruction. When face reconstruction is performed based on a real face in a portrait image, feature extraction is usually performed on the real face in the portrait image first to obtain face contour features; the face contour features are then matched against features in a pre-generated virtual three-dimensional model, and based on the matching result the face contour features are fused with the virtual three-dimensional model to obtain a three-dimensional virtual face model corresponding to the real face in the portrait image. Because the matching accuracy is low when matching the face contour features with the features in the pre-generated virtual three-dimensional model, the matching error between the virtual three-dimensional model and the face contour features is large, which easily causes the three-dimensional virtual face model obtained by fusing the face contour features with the virtual three-dimensional face model according to the matching result to have a low similarity to the face in the portrait image.
针对以上方案所存在的缺陷，本公开实施例提供了一种重建人脸的方法，能够生成具有预设风格并且具有第一真实人脸模型对应的原始人脸的特征的目标虚拟人脸模型，该目标虚拟人脸模型可以体现原始人脸的胖瘦特征，与第一真实人脸模型对应的原始人脸之间具有较高的相似度。In view of the defects of the above solution, embodiments of the present disclosure provide a method for reconstructing a face, which can generate a target virtual face model that has a preset style and the features of the original face corresponding to the first real face model; the target virtual face model can reflect the fatness or thinness of the original face and has a high similarity to the original face corresponding to the first real face model.
为便于对本实施例进行理解，首先对本公开实施例所公开的一种重建人脸的方法进行详细介绍，本公开实施例所提供的重建人脸的方法的执行主体一般为具有一定计算能力的计算机设备，该计算机设备例如包括：终端设备或服务器或其它处理设备，终端设备可以为用户设备(User Equipment，UE)、移动设备、用户终端、终端、蜂窝电话、无绳电话、个人数字助理(Personal Digital Assistant，PDA)、手持设备、计算设备、车载设备、可穿戴设备等。在一些可能的实现方式中，该重建人脸的方法可以通过处理器调用存储器中存储的计算机可读指令的方式来实现。In order to facilitate understanding of this embodiment, a method for reconstructing a face disclosed in an embodiment of the present disclosure is first introduced in detail. The execution subject of the method for reconstructing a face provided by the embodiments of the present disclosure is generally a computer device with certain computing capability, for example a terminal device, a server, or another processing device; the terminal device may be a user equipment (User Equipment, UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the method for reconstructing a face may be implemented by a processor calling computer-readable instructions stored in a memory.
下面对本公开实施例提供的重建人脸的方法加以说明。The following describes the method for reconstructing a human face provided by the embodiments of the present disclosure.
图1为本公开实施例提供的重建人脸的方法的流程图,如图1所示,所述方法包括步骤S101至S104,其中:FIG. 1 is a flowchart of a method for reconstructing a face provided by an embodiment of the present disclosure. As shown in FIG. 1 , the method includes steps S101 to S104, wherein:
S101:基于目标图像生成第一真实人脸模型。S101: Generate a first real face model based on the target image.
S102:利用预先生成的多个第二真实人脸模型对第一真实人脸模型进行拟合处理,得到多个第二真实人脸模型分别对应的拟合系数。S102: Perform fitting processing on the first real face model by using multiple pre-generated second real face models to obtain fitting coefficients corresponding to the multiple second real face models respectively.
S103:基于多个第二真实人脸模型分别对应的拟合系数、以及多个第二真实人脸模型分别对应的具有预设风格的虚拟人脸模型,生成目标骨骼数据以及目标蒙皮变形系数。其中,目标蒙皮变形系数可表示待生成的目标人脸模型的蒙皮数据相对于预先生成的标准虚拟人脸模型的标准蒙皮数据的变形。S103: Generate target skeleton data and target skin deformation coefficients based on fitting coefficients corresponding to the plurality of second real face models and virtual face models with preset styles corresponding to the plurality of second real face models respectively . The target skin deformation coefficient may represent the deformation of the skin data of the target face model to be generated relative to the standard skin data of the pre-generated standard virtual face model.
S104:基于目标骨骼数据以及目标蒙皮变形系数,生成与第一真实人脸模型对应的目标虚拟人脸模型。S104: Generate a target virtual face model corresponding to the first real face model based on the target bone data and the target skin deformation coefficient.
本公开实施例提供了一种重建人脸的方法，将拟合系数作为媒介，建立了多个第二真实人脸模型与第一真实人脸模型之间的关联关系，该关联关系能够表征基于第二真实人脸模型建立的虚拟人脸模型、和基于第一真实人脸模型建立的目标虚拟人脸模型之间的关联，同时，通过目标蒙皮变形系数表征目标图像中人脸蒙皮变形的特征，如骨骼相同的情况下，存在的胖瘦差异，从而基于拟合系数、以及虚拟人脸模型生成了目标虚拟人脸模型，该目标虚拟人脸模型既具有预设风格又具有第一真实人脸模型对应的原始人脸的特征，而且还可以体现原始人脸的胖瘦特征，所生成的目标虚拟人脸模型和第一真实人脸模型对应的原始人脸之间具有较高的相似度。An embodiment of the present disclosure provides a method for reconstructing a face. The fitting coefficients are used as a medium to establish an association between the plurality of second real face models and the first real face model; this association can represent the relationship between the virtual face models built from the second real face models and the target virtual face model built from the first real face model. At the same time, the target skin deformation coefficient represents how the face skin in the target image is deformed, such as differences in fatness or thinness when the bones are the same. The target virtual face model is thus generated based on the fitting coefficients and the virtual face models; this target virtual face model has both the preset style and the features of the original face corresponding to the first real face model, and can also reflect the fatness or thinness of the original face, so the generated target virtual face model has a high similarity to the original face corresponding to the first real face model.
下面对上述步骤S101至S104加以详细说明。The above steps S101 to S104 will be described in detail below.
针对上述步骤S101,目标图像例如为获取的包括人脸的图像,例如,在利用诸如相机等的拍摄设备对某一对象进行拍摄时获取的包括人脸的图像。此时,例如可以将图像中包括的任一张人脸确定为原始人脸,并将原始人脸作为人脸重建的对象。For the above step S101, the target image is, for example, an acquired image including a human face, for example, an image including a human face acquired when a certain object is photographed with a photographing device such as a camera. At this time, for example, any face included in the image can be determined as the original face, and the original face can be used as the object of face reconstruction.
在将本公开实施例提供的重建人脸的方法应用于不同的场景下时,目标图像的获取方法也有所区别。When the method for reconstructing a face provided by the embodiment of the present disclosure is applied to different scenarios, the method for acquiring the target image is also different.
例如，在将该重建人脸的方法应用于游戏中的情况下，可以通过游戏设备中安装的图像获取设备获取包括了游戏玩家的脸部的图像，或者可以从游戏设备的相册中选择包括了游戏玩家的脸部的图像、并将获取的包括了游戏玩家的脸部的图像作为目标图像。For example, when the method for reconstructing a face is applied to a game, an image including the face of the game player can be acquired by an image acquisition device installed in the game device, or an image including the face of the game player can be selected from the album of the game device, and the acquired image including the face of the game player is used as the target image.
又例如,在将重建人脸的方法应用于手机等终端设备的情况下,可以由终端设备的摄像头采集包括用户人脸的图像,或者从终端设备的相册中选择包括了用户人脸的图像,或者从终端设备中安装的其他应用程序中接收包括用户的脸部的图像。For another example, when the method for reconstructing a face is applied to a terminal device such as a mobile phone, an image including the user's face can be collected by the camera of the terminal device, or an image including the user's face can be selected from the album of the terminal device, Or receive images including the user's face from other applications installed in the terminal device.
又例如，在将重建人脸的方法应用于直播场景下，可以从直播设备获取的视频流中包括的多帧视频帧图像中获取包含人脸的视频帧图像；并将包含人脸的视频帧图像作为目标图像。此处，目标图像例如可以有多帧；多帧目标图像例如可以是对视频流进行采样获得。For another example, when the method for reconstructing a face is applied to a live-streaming scenario, a video frame image containing a face can be obtained from the multiple video frame images included in the video stream acquired by the live-streaming device, and the video frame image containing the face is used as the target image. Here, there may be, for example, multiple target images; the multiple target images may be obtained, for example, by sampling the video stream.
在基于目标图像生成第一真实人脸模型时,例如可以采用下述方式:获取包括原始人脸的目标图像;对目标图像中包括的原始人脸进行三维人脸重建,得到第一真实人脸模型。When generating the first real face model based on the target image, for example, the following methods may be used: acquiring a target image including the original face; performing three-dimensional face reconstruction on the original face included in the target image to obtain the first real face Model.
此处,在对目标图像中包括的原始人脸进行三维人脸重建时,例如可以采用三维可变形人脸模型(3 Dimensions Morphable Models,3DMM)得到原始人脸对应的第一真实人脸模型。其中,第一真实人脸模型例如包括目标图像中原始人脸的多个关键点中每个关键点在预设的相机坐标系中的位置信息。Here, when performing three-dimensional face reconstruction on the original face included in the target image, for example, a three-dimensional deformable face model (3 Dimensions Morphable Models, 3DMM) can be used to obtain the first real face model corresponding to the original face. Wherein, the first real face model includes, for example, the position information of each key point in the preset camera coordinate system among the multiple key points of the original face in the target image.
针对上述步骤S102，第二真实人脸模型是基于包括参考人脸的参考图像生成的。其中，不同参考图像中的参考人脸可以不同；示例性地，可以选取性别、年龄、肤色、胖瘦程度等中至少一项不同的多个人，针对多个人中的每个人，获取每个人的人脸图像，并将获取的人脸图像作为参考图像。这样，基于多个参考图像获取的多个第二真实人脸模型，能够尽量覆盖到较为广泛的人脸外形特征。For the above step S102, the second real face models are generated based on reference images including reference faces. The reference faces in different reference images may be different; exemplarily, multiple people who differ in at least one of gender, age, skin color, degree of fatness or thinness, and the like may be selected, and for each of the multiple people, a face image of that person is acquired and used as a reference image. In this way, the multiple second real face models obtained based on the multiple reference images can cover a relatively wide range of face shape features.
其中,参考人脸例如包括N个不同对象对应的人脸,(N为大于1的整数)。示例 性地,可以通过对N个不同对象分别进行拍摄,得到分别对应于N个不同对象的N张照片,且每张照片均对应一个参考人脸。此时,可以将此N张照片作为N张参考图像;或者,从预先拍摄好的包括不同人脸的多张图像中,确定N张参考图像。The reference face includes, for example, faces corresponding to N different objects, (N is an integer greater than 1). Exemplarily, by shooting N different objects respectively, N photos corresponding to the N different objects can be obtained, and each photo corresponds to a reference face. At this time, the N photos may be used as N reference images; or, N reference images may be determined from a plurality of pre-shot images including different faces.
示例性地，生成多个第二真实人脸模型的方法包括：获取包括参考人脸的多张参考图像；针对多张参考图像中的每张参考图像，对该参考图像中包括的参考人脸进行三维人脸重建，得到该参考图像对应的第二真实人脸模型。Exemplarily, the method for generating the multiple second real face models includes: acquiring multiple reference images including reference faces; and, for each of the multiple reference images, performing three-dimensional face reconstruction on the reference face included in the reference image to obtain the second real face model corresponding to the reference image.
其中,对参考人脸进行三维人脸重建的方法与上述对原始人脸进行三维人脸重建的方法类似,在此不再赘述。所得到的第二真实人脸模型,包括参考图像中参考人脸的多个关键点中每个关键点在预设的相机坐标系中的位置信息。此时,该第二真实人脸模型的坐标系和第一真实人脸模型的坐标系可以为同一坐标系。Wherein, the method for performing 3D face reconstruction on the reference face is similar to the above-mentioned method for performing 3D face reconstruction on the original face, and will not be repeated here. The obtained second real face model includes position information of each key point in the preset camera coordinate system among the multiple key points of the reference face in the reference image. At this time, the coordinate system of the second real face model and the coordinate system of the first real face model may be the same coordinate system.
利用预先生成的多个第二真实人脸模型对第一真实人脸模型进行拟合处理，得到多个第二真实人脸模型分别对应的多个拟合系数，例如可以采用下述方式来实现：对多个第二真实人脸模型以及第一真实人脸模型进行最小二乘处理，得到多个第二真实人脸模型分别对应的拟合系数。Performing fitting processing on the first real face model by using the multiple pre-generated second real face models to obtain the multiple fitting coefficients respectively corresponding to the multiple second real face models may be implemented, for example, as follows: least squares processing is performed on the multiple second real face models and the first real face model to obtain the fitting coefficients respectively corresponding to the multiple second real face models.
示例性地，可以将第一真实人脸模型对应的模型数据表示为D_a，将第二真实人脸模型对应的模型数据表示为D_bi(i∈[1,N])，其中，D_bi表示N个第二真实人脸模型中的第i个第二真实人脸模型。Exemplarily, the model data corresponding to the first real face model may be denoted as D_a, and the model data corresponding to the second real face models may be denoted as D_bi (i ∈ [1, N]), where D_bi denotes the i-th second real face model among the N second real face models.
利用D_a对D_b1至D_bN中的每一项进行最小二乘处理，可以得到N个拟合值，该拟合值表示为α_i(i∈[1,N])。其中，α_i表征第i个第二真实人脸模型对应的拟合值。利用N个拟合值，可以确定拟合系数Alpha，例如可以用系数矩阵表示，也即 Using D_a to perform least squares processing on each of D_b1 to D_bN, N fitting values can be obtained, expressed as α_i (i ∈ [1, N]), where α_i represents the fitting value corresponding to the i-th second real face model. Using the N fitting values, the fitting coefficient Alpha can be determined and can be represented, for example, by a coefficient matrix, that is,
Alpha = [α_1, α_2, …, α_N].
此处,在通过多个第二真实人脸模型拟合第一真实人脸模型的过程中,通过多个拟合系数对多个第二真实人脸模型进行加权求和后得到的数据,可以与第一真实人脸模型的数据尽可能接近。Here, in the process of fitting the first real face model by using the plurality of second real face models, the data obtained after the weighted summation of the plurality of second real face models by using the plurality of fitting coefficients can be as close as possible to the data of the first real face model.
该拟合系数又可视为利用多个第二真实人脸模型表达第一真实人脸模型时每个第二真实人脸模型的表达系数。也即利用多个第二真实人脸模型分别在表达系数中对应的多个拟合值,可以将第二真实人脸模型向第一真实人脸模型进行转化拟合。The fitting coefficient can also be regarded as an expression coefficient of each second real face model when the first real face model is expressed by using a plurality of second real face models. That is, the second real face model can be converted and fitted to the first real face model by using the respective fitting values corresponding to the expression coefficients of the multiple second real face models.
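A small numerical sketch of this least-squares fitting step is given below, under the assumption (made only for illustration) that each face model is flattened into a vector of key point coordinates; the function and variable names are hypothetical.

```python
import numpy as np

def fit_coefficients(second_models, first_model):
    """Hypothetical sketch: solve for Alpha = [a_1, ..., a_N] such that the
    weighted sum of the N second real face models approximates the first real
    face model in the least-squares sense."""
    # Stack the flattened second real face models as columns of the design matrix.
    d_b = np.stack([np.asarray(m).reshape(-1) for m in second_models], axis=1)
    d_a = np.asarray(first_model).reshape(-1)
    alpha, *_ = np.linalg.lstsq(d_b, d_a, rcond=None)
    return alpha

# Example: N = 3 second real face models, each with 5 key points in 3D.
models = [np.random.rand(5, 3) for _ in range(3)]
target = 0.2 * models[0] + 0.5 * models[1] + 0.3 * models[2]
print(fit_coefficients(models, target))   # approximately [0.2, 0.5, 0.3]
```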
针对上述步骤S103,预设风格例如可以为卡通风格、古代风格或抽象风格等,可以根据实际的需要进行具体地设定。示例性地,针对预设风格为卡通风格的情况,具有预设风格的虚拟人脸模型例如可以为具有某种卡通风格的虚拟人脸模型。For the above step S103, the preset style may be, for example, a cartoon style, an ancient style, or an abstract style, and may be specifically set according to actual needs. Exemplarily, for the case where the preset style is a cartoon style, the virtual face model with the preset style may be, for example, a virtual face model with a certain cartoon style.
其中,虚拟人脸模型可包括骨骼数据、以及蒙皮数据和/或蒙皮变形系数。蒙皮变形系数表示虚拟人脸模型的蒙皮数据相对于预先生成的标准虚拟人脸模型的标准蒙皮数据的变形。Wherein, the virtual face model may include bone data, and skin data and/or skin deformation coefficients. The skin deformation coefficient represents the deformation of the skin data of the virtual face model relative to the standard skin data of the pre-generated standard virtual face model.
参见图2所示，本公开实施例提供了针对所述多个第二真实人脸模型中的每个第二真实人脸模型，生成该第二真实人脸模型对应的具有预设风格的虚拟人脸模型的具体方法，包括：Referring to FIG. 2, an embodiment of the present disclosure provides a specific method for generating, for each second real face model among the plurality of second real face models, a virtual face model with a preset style corresponding to the second real face model, including:
S201:生成该第二真实人脸模型对应的具有预设风格的中间虚拟人脸模型。S201: Generate an intermediate virtual face model with a preset style corresponding to the second real face model.
此处,生成与该第二真实人脸模型对应的具有预设风格的中间虚拟人脸模型的方法例如包括下述(a1)和(a2)中至少一种。Here, the method for generating an intermediate virtual face model with a preset style corresponding to the second real face model includes, for example, at least one of the following (a1) and (a2).
(a1)可以基于参考图像制作具有参考人脸特征的、且具有预设风格的虚拟人脸图像，并对虚拟人脸图像中的虚拟人脸进行三维建模，得到虚拟人脸图像中虚拟人脸的骨骼数据以及蒙皮数据。(a1) A virtual face image that has the features of the reference face and has the preset style may be produced based on the reference image, and three-dimensional modeling may be performed on the virtual face in the virtual face image to obtain the bone data and skin data of the virtual face in the virtual face image.
其中,骨骼数据包括为虚拟人脸预设的多个骨骼在预设坐标系中的骨骼旋转数据、骨骼缩放数据、以及骨骼位置数据。此处,多个骨骼例如可以进行多层级的划分;例如包括根(root)骨骼、五官骨骼和五官细节骨骼;其中五官骨骼可以包括:眉骨骼、鼻骨骼、颧骨骨骼、下颌骨骼和嘴骨骼等;五官细节骨骼例如又可以将不同的五官骨骼再进行进一步的详细划分。可以根据不同风格的虚拟图像需求进行具体地设定,在此不做限定。The skeleton data includes skeleton rotation data, skeleton scaling data, and skeleton position data of multiple bones preset for the virtual face in a preset coordinate system. Here, for example, multiple bones can be divided into multiple levels; for example, they include root (root) bones, facial bones, and facial detail bones; wherein facial bones may include: eyebrow bones, nasal bones, zygomatic bones, mandibular bones, and mouth bones Etc.; the facial features detail bones, for example, can further divide different facial features bones into further details. Specific settings can be made according to different styles of virtual image requirements, which are not limited here.
蒙皮数据包括虚拟人脸的表面中多个位置点在预设的模型坐标系中的位置信息、以及每个位置点与多个骨骼中至少一个骨骼的关联关系信息。其中，该模型坐标系为针对虚拟人脸模型建立的三维坐标系。The skin data includes position information of multiple position points on the surface of the virtual face in a preset model coordinate system, and association information between each position point and at least one of the multiple bones. The model coordinate system is a three-dimensional coordinate system established for the virtual face model.
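Purely as an illustration of how the bone data and skin data described above might be organized, the following sketch uses hypothetical field and bone names; it is not the data format of the disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Bone:
    """One face bone: rotation, scaling and position data in the preset
    coordinate system (field names are illustrative only)."""
    rotation: Vec3 = (0.0, 0.0, 0.0)
    scaling: Vec3 = (1.0, 1.0, 1.0)
    position: Vec3 = (0.0, 0.0, 0.0)

@dataclass
class SkinPoint:
    """One skin position point in the model coordinate system, plus its
    association with the bones that drive it (bone name -> weight)."""
    position: Vec3
    bone_weights: Dict[str, float] = field(default_factory=dict)

@dataclass
class VirtualFaceModel:
    bones: Dict[str, Bone]      # e.g. "root", "brow", "nose", "jaw", ...
    skin: List[SkinPoint]

# Example of a tiny model with a root bone and one skin point.
model = VirtualFaceModel(
    bones={"root": Bone()},
    skin=[SkinPoint(position=(0.0, 0.0, 0.0), bone_weights={"root": 1.0})],
)
```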
将对虚拟人脸图像中的虚拟人脸进行三维建模得到的虚拟模型作为第二真实人脸模型对应的中间虚拟人脸模型。The virtual model obtained by performing three-dimensional modeling on the virtual face in the virtual face image is used as an intermediate virtual face model corresponding to the second real face model.
(a2)预先生成一具有预设风格的标准虚拟人脸模型。该标准虚拟人脸模型同样包括标准骨骼数据、标准蒙皮数据、以及标准骨骼数据与标准蒙皮数据之间的关联关系。基于多张参考图像中的每张参考图像对应的参考人脸的人脸特征，对标准虚拟人脸模型中的标准骨骼数据进行调整，以使调整后的标准虚拟人脸模型在具有预设风格的同时，还包括了参考图像中参考人脸的特征；然后，基于标准骨骼数据与标准蒙皮数据之间的关联关系，对标准蒙皮数据进行调整，同时还可以为标准蒙皮数据添加参考人脸所具有的特征信息，基于修改后的标准骨骼数据和修改后的标准蒙皮数据，生成第二真实人脸模型对应的中间虚拟人脸模型。(a2) A standard virtual face model with the preset style is generated in advance. The standard virtual face model also includes standard bone data, standard skin data, and an association between the standard bone data and the standard skin data. Based on the face features of the reference face corresponding to each of the multiple reference images, the standard bone data in the standard virtual face model is adjusted, so that the adjusted standard virtual face model not only has the preset style but also includes the features of the reference face in the reference image; then, based on the association between the standard bone data and the standard skin data, the standard skin data is adjusted, and feature information of the reference face may also be added to the standard skin data; and an intermediate virtual face model corresponding to the second real face model is generated based on the modified standard bone data and the modified standard skin data.
此处,中间虚拟人脸模型的具体数据表示可以参见上述(a1)中所描述的,在此不再赘述。Here, for the specific data representation of the intermediate virtual face model, reference may be made to the description in (a1) above, which will not be repeated here.
S202:基于相对于标准虚拟人脸模型的多组预设蒙皮变形系数,生成与该第二真实人脸模型对应的虚拟人脸模型相对于标准虚拟人脸模型的蒙皮变形系数。S202: Generate skin deformation coefficients of the virtual face model corresponding to the second real face model relative to the standard virtual face model based on multiple sets of preset skin deformation coefficients relative to the standard virtual face model.
Here, for the standard virtual face model, the generated sets of skin deformation coefficients are adjustment coefficients that, with the bones of the standard virtual face model unchanged, only adjust at least some of the position points in the standard skin data of the standard virtual face model that correspond to specific positions of the model, such as the cheekbones.
Each set of skin deformation coefficients represents a result of adjusting the positions, in the model coordinate system, of at least some of the position points in the standard skin data, so that the parts of the standard virtual face model corresponding to the adjusted position points appear fatter or thinner.
When the skin deformation coefficients corresponding to the reference face are combined from the multiple sets of preset skin data, the multiple sets of preset skin data may, for example, be fitted so that the fitted result is similar to the face shape of the reference face.
S203: Adjust the intermediate skin data in the intermediate virtual face model by using the skin deformation coefficients, and generate the virtual face model corresponding to the second real face model based on the adjusted intermediate skin data and the intermediate bone data of the intermediate virtual face model.
For example, in a possible implementation, R sets of preset skin deformation coefficients Blendshape may be obtained; each set of preset skin deformation coefficients includes a deformation coefficient value for each of a plurality of position points in the skin data. Illustratively, if there are W position points in the skin data and each position point corresponds to one deformation coefficient value, the dimension of each of the R sets of preset skin deformation coefficients is W.
Here, Blendshape_i (i∈[1,R]) denotes the i-th set of preset skin deformation coefficients. Using the R sets of preset skin deformation coefficients, the fatness and thinness of the standard virtual face model can be modified, so as to obtain R standard virtual face models with adjusted fatness and thinness features.
When generating a virtual face model, the skin deformation coefficients of the virtual face model can be obtained by combining the R sets of preset skin deformation coefficients Blendshape. For example, a corresponding weight may be assigned to each preset set, and the R sets of preset skin deformation coefficients are weighted and summed with these weights to obtain the skin deformation coefficients of a certain virtual face model.
Illustratively, when N second real face models are generated in advance and R sets of preset skin deformation coefficients are obtained, the dimension of the skin deformation coefficient Blendshape_i of the i-th real face is R×W. The skin deformation coefficients respectively corresponding to the N second real face models can form a matrix of dimension N×R×W; this matrix includes the skin deformation coefficients of the virtual face models respectively corresponding to the N second real face models.
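As an illustrative sketch of the weighted combination described above, the R preset sets could be combined as follows; the function name, the (R, W) array layout and the concrete weight values are assumptions, not part of the disclosure:

```python
import numpy as np

def combine_preset_blendshapes(preset_blendshapes, weights):
    """Weighted sum of R preset skin deformation coefficient sets.

    preset_blendshapes: array of shape (R, W), one deformation value per position point.
    weights: array of shape (R,), one (hypothetical) weight per preset set.
    Returns the combined skin deformation coefficients of shape (W,).
    """
    preset_blendshapes = np.asarray(preset_blendshapes, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return weights @ preset_blendshapes  # (R,) x (R, W) -> (W,)
```

For example, calling the function with uniform weights would reproduce the average of the R preset adjustments; non-uniform weights bias the result towards the presets that best match the reference face shape.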
In addition, when the skin data in the intermediate virtual face model is adjusted by using the skin deformation coefficients, the bone data of the intermediate virtual face model may also be fine-tuned to optimize the facial detail features of the generated virtual face model, so that the generated virtual face model has a higher similarity with the reference face.
After the virtual face models respectively corresponding to the N second real face models are obtained, the target virtual face model can be fitted by using the N virtual face models and the corresponding fitting coefficients, so as to generate the target bone data and the target skin deformation data.
具体地,目标虚拟人脸模型包括:目标骨骼数据、以及目标蒙皮数据;其中目标蒙皮数据是基于目标骨骼数据、以及目标虚拟人脸模型的目标蒙皮变形数据确定的。Specifically, the target virtual face model includes: target bone data and target skin data; wherein the target skin data is determined based on the target bone data and target skin deformation data of the target virtual face model.
In the embodiments of the present disclosure, obtaining the target bone data based on the fitting coefficients corresponding to the plurality of second real face models and the bone data respectively corresponding to the plurality of virtual face models includes, for example: performing interpolation processing on the bone data respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone data.
其中,虚拟人脸模型对应的骨骼数据包括以下至少一种:虚拟人脸的多块人脸骨骼中每块人脸骨骼对应的骨骼旋转数据、骨骼位置数据和骨骼缩放数据。得到的目标骨骼数据包括以下至少一种:目标骨骼位置数据、目标骨骼缩放数据、以及目标骨骼旋转数据。The bone data corresponding to the virtual face model includes at least one of the following: bone rotation data, bone position data and bone scaling data corresponding to each face bone among the multiple face bones of the virtual face. The obtained target bone data includes at least one of the following: target bone position data, target bone scaling data, and target bone rotation data.
Illustratively, when interpolation processing is performed on the bone data respectively corresponding to the plurality of virtual face models based on the fitting coefficients corresponding to the plurality of second real face models to obtain the target bone data, for example, at least one of the following (b1) to (b3) may be used:
(b1)基于多个第二真实人脸模型分别对应的拟合系数,对多个虚拟人脸模型分别对应的骨骼位置数据进行插值处理,得到目标骨骼位置数据。(b1) Based on the fitting coefficients corresponding to the plurality of second real face models respectively, perform interpolation processing on the bone position data corresponding to the plurality of virtual face models respectively to obtain target bone position data.
(b2)基于多个第二真实人脸模型分别对应的拟合系数,对多个虚拟人脸模型分别对应的骨骼缩放数据进行插值处理,得到目标骨骼缩放数据。(b2) Based on the fitting coefficients corresponding to the plurality of second real face models respectively, perform interpolation processing on the skeleton scaling data corresponding to the plurality of virtual face models respectively to obtain the target skeleton scaling data.
(b3) Convert the bone rotation data respectively corresponding to the plurality of virtual face models into quaternions, and perform regularization processing on the obtained quaternions to obtain regularized quaternions; based on the fitting coefficients respectively corresponding to the plurality of second real face models, perform interpolation processing on the regularized quaternions respectively corresponding to the plurality of virtual face models, to obtain the target bone rotation data.
In specific implementation, for the above methods (b1) and (b2), obtaining the bone position data and the bone scaling data further includes determining the bones of each level, and the local coordinate system corresponding to each level of bone, based on the plurality of second real face models. When the face model is divided into bone levels, the bone levels may, for example, be determined directly according to a biological bone hierarchy, or according to the requirements of face reconstruction; the specific division may be determined according to the actual situation and is not repeated here.
After each bone level is determined, a bone coordinate system corresponding to each bone level can be established based on that level. Illustratively, the bone of each level can be denoted as Bone_i.
In this case, the bone position data may include the three-dimensional coordinate values of each level of bone Bone_i of the virtual face model in the corresponding bone coordinate system; the bone scaling data may include, for each level of bone Bone_i of the virtual face model in the corresponding bone coordinate system, a percentage representing the degree of bone scaling, for example 80%, 90% or 100%.
In a possible implementation, the bone position data corresponding to the i-th virtual face model is denoted as Pos_i, and the bone scaling data corresponding to the i-th virtual face model is denoted as Scaling_i. Here, the bone position data Pos_i contains the bone position data respectively corresponding to the plurality of levels of bones, and the bone scaling data Scaling_i contains the bone scaling data respectively corresponding to the plurality of levels of bones.
The corresponding fitting coefficient is a_i. Based on the fitting coefficients respectively corresponding to the M second real face models, interpolation processing is performed on the bone position data Pos_i respectively corresponding to the M virtual face models, to obtain the target bone position data.
Illustratively, the fitting coefficients may be used as the weights of the respective virtual face models, and the bone position data Pos_i respectively corresponding to the M virtual face models are weighted and summed to implement the interpolation processing. In this case, the target bone position data Pos_new satisfies the following formula (1):
$$\mathrm{Pos}_{\mathrm{new}} = \sum_{i=1}^{M} a_i \cdot \mathrm{Pos}_i \qquad (1)$$
Similarly, based on the fitting coefficients respectively corresponding to the M second real face models, interpolation processing is performed on the bone scaling data respectively corresponding to the M virtual face models to obtain the target bone scaling data. Denoting the bone scaling data corresponding to the i-th virtual face model as Scaling_i, the fitting coefficients respectively corresponding to the M second real face models may be used as the weights of the corresponding virtual face models, and the bone scaling data respectively corresponding to the M virtual face models are weighted and summed to implement the interpolation over the M virtual face models. In this case, the target bone scaling data Scaling_new satisfies the following formula (2):
$$\mathrm{Scaling}_{\mathrm{new}} = \sum_{i=1}^{M} a_i \cdot \mathrm{Scaling}_i \qquad (2)$$
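A minimal sketch of the fitting-coefficient-weighted interpolation of formulas (1) and (2) might look as follows; the function name and array shapes are assumptions for illustration:

```python
import numpy as np

def interpolate_bone_data(per_model_values, alpha):
    """Fitting-coefficient-weighted interpolation, as in formulas (1) and (2).

    per_model_values: shape (M, ...) — e.g. (M, num_bones, 3) bone positions
                      or (M, num_bones) scaling percentages of the M virtual face models.
    alpha: shape (M,) — fitting coefficients a_i of the M second real face models.
    Returns the interpolated target data with the leading M axis summed out.
    """
    per_model_values = np.asarray(per_model_values, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    return np.tensordot(alpha, per_model_values, axes=1)  # sum_i a_i * value_i

# Pos_new = interpolate_bone_data(Pos, alpha); Scaling_new = interpolate_bone_data(Scaling, alpha)
```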
For the above method (b3), the bone rotation data may include, for each bone of the virtual face model in its corresponding bone coordinate system, vector values representing the degree of rotational coordinate transformation of the bone, for example including a rotation axis and a rotation angle. In a possible implementation, the bone rotation data corresponding to the i-th virtual face model is denoted as Trans_i. Since the rotation angles contained in the bone rotation data suffer from the gimbal lock problem, the bone rotation data is converted into quaternions, and the quaternions are regularized to obtain regularized quaternion data, denoted as Trans'_i, so as to prevent overfitting when the quaternions are directly weighted and summed.
When interpolation processing is performed on the regularized quaternions Trans'_i respectively corresponding to the M virtual face models based on the fitting coefficients respectively corresponding to the M second real face models, the fitting coefficients respectively corresponding to the M second real face models may likewise be used as weights, and the regularized quaternions respectively corresponding to the M virtual face models are weighted and summed. In this case, the target bone rotation data Trans_new satisfies the following formula (3):
$$\mathrm{Trans}_{\mathrm{new}} = \sum_{i=1}^{M} a_i \cdot \mathrm{Trans}'_i \qquad (3)$$
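For formula (3), a hedged sketch of converting rotations to quaternions, regularizing them to unit norm and interpolating with the fitting coefficients is given below; the axis-angle input format and the (w, x, y, z) quaternion convention are assumptions, since the text does not fix a particular representation:

```python
import numpy as np

def axis_angle_to_quat(axis, angle):
    """Convert a rotation axis (3,) and angle (radians) to a unit quaternion (w, x, y, z)."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    half = 0.5 * angle
    return np.concatenate([[np.cos(half)], np.sin(half) * axis])

def interpolate_rotations(axes, angles, alpha):
    """Formula (3): weighted sum of regularized (unit-norm) quaternions for one bone.

    axes: (M, 3) rotation axes; angles: (M,) rotation angles of the bone across the
    M virtual face models; alpha: (M,) fitting coefficients a_i.
    """
    alpha = np.asarray(alpha, dtype=float)
    quats = np.stack([axis_angle_to_quat(ax, an) for ax, an in zip(axes, angles)])
    quats /= np.linalg.norm(quats, axis=1, keepdims=True)  # regularization step
    trans_new = alpha @ quats                              # sum_i a_i * q_i
    return trans_new
```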
In addition, other interpolation methods may also be used to obtain the target bone position data Pos_new, the target bone scaling data Scaling_new and the target bone rotation data Trans_new; the specific method may be determined according to actual needs and is not limited in the present disclosure.
After the target bone position data Pos_new, the target bone scaling data Scaling_new and the target bone rotation data Trans_new are obtained in (b1), (b2) and (b3) above, the target bone data, denoted as Bone_new, can be determined. Illustratively, the target bone data can be expressed in vector form as:
(Pos_new, Scaling_new, Trans_new).
When the fitting coefficients respectively corresponding to the plurality of second real face models and the virtual face models with the preset style respectively corresponding to the plurality of second real face models have been determined, the target skin deformation coefficient may, for example, be generated in the following manner: generating the target skin deformation coefficient based on the fitting coefficients respectively corresponding to the plurality of second real face models and the skin deformation coefficients respectively included in the plurality of virtual face models. The skin deformation coefficient of a virtual face model represents the deformation of the skin data of the virtual face model relative to the standard skin data of the pre-generated standard virtual face model.
参见图3所示,本公开实施例还提供了一种得到目标蒙皮变形系数的具体方法,包括:Referring to FIG. 3 , an embodiment of the present disclosure further provides a specific method for obtaining a target skin deformation coefficient, including:
S301:对多个第二真实人脸模型分别对应的拟合系数进行归一化处理。S301: Normalize the fitting coefficients corresponding to the plurality of second real face models respectively.
When the fitting coefficients respectively corresponding to the plurality of second real face models are normalized, a normalization function (Softmax) may, for example, be used to obtain probability values representing the proportion of the fitting coefficient of each second real face model among the plurality of fitting coefficients; the normalized fitting coefficients are denoted as Alpha_Norm.
Illustratively, when there are N second real face models, the dimension of the fitting coefficient Alpha_Norm obtained by the normalization processing is N.
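A minimal sketch of the Softmax normalization of S301, assuming the fitting coefficients are held in a NumPy array; the function name is hypothetical:

```python
import numpy as np

def normalize_fitting_coefficients(alpha):
    """Softmax normalization of the N fitting coefficients (S301)."""
    alpha = np.asarray(alpha, dtype=float)
    shifted = alpha - alpha.max()   # subtract the maximum for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()          # Alpha_Norm: dimension N, entries sum to 1
```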
S302:基于归一化后的拟合系数,对多个虚拟人脸模型分别包括的蒙皮变形系数进行插值处理,得到目标蒙皮变形系数。S302: Based on the normalized fitting coefficients, perform interpolation processing on the skin deformation coefficients respectively included in the multiple virtual face models to obtain the target skin deformation coefficients.
Here, the skin deformation coefficients included in the virtual face models are fitted by using the fitting coefficients respectively corresponding to the plurality of second real face models; the obtained fitting result can represent the influence of the plurality of second real face models on the virtual face models, and the target skin deformation coefficient is thereby generated. The target skin deformation coefficient may, for example, adjust the fatness and thinness of the face, so that the obtained target virtual face model is consistent with the fatness and thinness features of the face in the target image.
Illustratively, based on the normalized fitting coefficients, the skin deformation coefficients respectively corresponding to the plurality of virtual face models may be weighted and summed, so as to implement the interpolation processing on the skin deformation coefficients respectively corresponding to the plurality of virtual face models and obtain the target skin deformation coefficient.
The fitting coefficient Alpha_Norm obtained by the normalization processing can be represented as a first vector of dimension N, and the skin deformation coefficients respectively corresponding to the N virtual face models can form a second vector of dimension N×R. In this case, when the skin deformation coefficients respectively corresponding to the plurality of virtual face models are weighted and summed, for example, the first vector and the second vector may be directly multiplied to obtain the target skin deformation coefficient.
示例性地,例如可以采用下述公式得到目标蒙皮变形系数,表示为Blendshape',且Blendshape'满足下述公式(4):Exemplarily, for example, the following formula can be used to obtain the target skin deformation coefficient, which is expressed as Blendshape', and Blendshape' satisfies the following formula (4):
$$\mathrm{Blendshape}' = \mathrm{Blendshape} \times \mathrm{Alpha}_{\mathrm{Norm}} \qquad (4)$$
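Formula (4) reduces to a matrix-vector product; the sketch below assumes the N virtual face models' coefficients are stacked row-wise into an N×R matrix, which is one possible layout and not fixed by the text:

```python
import numpy as np

def target_blendshape(blendshape_matrix, alpha_norm):
    """Formula (4): target skin deformation coefficient from normalized fitting coefficients.

    blendshape_matrix: (N, R) skin deformation coefficients of the N virtual face models.
    alpha_norm:        (N,)   normalized fitting coefficients Alpha_Norm.
    Returns the (R,) target skin deformation coefficient Blendshape'.
    """
    blendshape_matrix = np.asarray(blendshape_matrix, dtype=float)
    alpha_norm = np.asarray(alpha_norm, dtype=float)
    return alpha_norm @ blendshape_matrix  # (N,) x (N, R) -> (R,)
```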
For the above S104, referring to FIG. 4, an embodiment of the present disclosure further provides a specific method for generating the target virtual face model corresponding to the first real face model based on the target bone data and the target skin deformation coefficient, including:
S401:基于目标骨骼数据、以及标准虚拟人脸模型中标准骨骼数据与标准蒙皮数据之间的关联关系,对蒙皮数据进行位置变换处理,生成中间蒙皮数据。S401: Based on the target bone data and the relationship between the standard bone data and the standard skin data in the standard virtual face model, perform position transformation processing on the skin data to generate intermediate skin data.
其中,标准虚拟人脸模型中标准骨骼数据与标准蒙皮数据之间的关联关系,可 例如为各层级骨骼对应的标准骨骼数据与标准蒙皮数据之间的关联关系。基于此关联关系,即可将蒙皮绑定在虚拟人脸模型中的骨骼上。Wherein, the association relationship between the standard skeleton data and the standard skin data in the standard virtual face model may be, for example, the association relationship between the standard skeleton data corresponding to each level of bones and the standard skin data. Based on this relationship, the skin can be bound to the bones in the virtual face model.
Using the target bone data and the association relationship between the standard bone data and the standard skin data in the standard virtual face model, position transformation processing can be performed on the skin data at the positions corresponding to the bones of the various levels, so that the positions of the corresponding levels of bones reflected in the generated skin data are consistent with the positions in the corresponding target bone data. In this case, for example, the skin data after the position transformation processing may be used as the generated intermediate skin data.
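The position transformation of S401 is not fully specified in the text; the sketch below assumes a simple linear-blend step in which each skin position point is displaced by the weighted offsets of its associated bones. All names and the simplification to pure translation are assumptions for illustration only:

```python
import numpy as np

def transform_skin_to_bones(skin_positions, bone_weights, bone_offsets):
    """Simplified sketch of the S401 position transformation.

    skin_positions: (W, 3) standard skin data in the model coordinate system.
    bone_weights:   (W, B) bone-skin association weights between position points and bones.
    bone_offsets:   (B, 3) displacement of each bone from the standard bone data to the
                    target bone data.
    Returns the (W, 3) intermediate skin data. A production skinning step would normally
    apply full per-bone transforms (rotation/scale/translation) rather than translation only.
    """
    skin_positions = np.asarray(skin_positions, dtype=float)
    bone_weights = np.asarray(bone_weights, dtype=float)
    bone_offsets = np.asarray(bone_offsets, dtype=float)
    return skin_positions + bone_weights @ bone_offsets
```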
S402:基于目标蒙皮变形系数,对中间蒙皮数据进行变形处理,得到目标蒙皮数据。S402: Perform deformation processing on the intermediate skin data based on the target skin deformation coefficient to obtain target skin data.
S403:基于目标骨骼数据、以及目标蒙皮数据,构成目标虚拟人脸模型。S403: Construct a target virtual face model based on the target bone data and the target skin data.
Here, the target bone data can be used to determine the bones of each level used to construct the target virtual face model, and the target skin data can be used to determine the skin that is bound to these bones, thereby constituting the target virtual face model.
The method for determining the target virtual face model includes at least one of the following: directly establishing the target virtual face model based on the target bone data and the target skin data; or replacing the corresponding bone data of each level in the first real face model with the target bone data corresponding to each level of bone, and then establishing the target virtual face model by using the target skin data. The specific method for establishing the target virtual face model may be determined according to the actual situation and is not repeated here.
An embodiment of the present disclosure further provides a description of a specific process of obtaining, by using the method for reconstructing a face provided by the embodiments of the present disclosure, a target virtual face model Mod_Aim corresponding to an original face A in a target image Pic_A.
Determining the target virtual face model Mod_Aim includes the following steps (c1) to (c6):
(c1)准备素材;其中,包括:准备标准虚拟人脸模型的素材;以及准备虚拟图片的素材。(c1) Preparing materials; including: preparing materials for standard virtual face models; and preparing materials for virtual pictures.
When preparing the material of the standard virtual face model, taking a cartoon style as the preset style as an example, a cartoon-style standard virtual face model Mod_Base is first set up.
Nine sets of preset skin deformation coefficients are generated; by using the nine sets of skin deformation coefficients to change different parts and/or different degrees of the standard skin data of the standard virtual face model, the fatness and thinness of the standard virtual face can be adjusted so as to cover the vast majority of face shape features.
When preparing the material of the virtual pictures, 24 virtual pictures Pic_1 to Pic_24 are collected; the numbers of male and female faces among the virtual faces B_1 to B_24 in the collected 24 virtual pictures are balanced, and the pictures contain as wide a distribution of facial-feature characteristics as possible.
(c2) Face model reconstruction, including: generating a first real face model Mod_fst from the original face A in the target image Pic_A; and generating second real face models Mod_snd-1 to Mod_snd-24 from the virtual faces B_1 to B_24 in the virtual pictures.
When the first real face model Mod_fst is generated from the original face A, the face in the target image is first aligned and cropped, and then a pre-trained RGB reconstruction neural network is used to generate the first real face model Mod_fst corresponding to the original face A. Similarly, the second real face models Mod_snd-1 to Mod_snd-24 respectively corresponding to the virtual faces B_1 to B_24 can be determined by using the pre-trained RGB reconstruction neural network.
After the second real face models Mod_snd-1 to Mod_snd-24 are determined, the method further includes: determining, by means of manual adjustment according to the preset style, the virtual face models Mod_fic-1 to Mod_fic-24 with the preset style respectively corresponding to the second real face models Mod_snd-1 to Mod_snd-24.
另外,还会基于9组预设蒙皮变形系数,生成24个虚拟人脸模型的蒙皮变形系数。In addition, based on 9 groups of preset skin deformation coefficients, the skin deformation coefficients of 24 virtual face models will be generated.
(c3)拟合处理;其中,包括:利用多个第二真实人脸模型对第一真实人脸模型进行拟合处理,得到多个第二真实人脸模型分别对应的拟合系数:(c3) fitting processing; wherein, including: using a plurality of second real face models to perform fitting processing on the first real face model, to obtain the fitting coefficients corresponding to the plurality of second real face models respectively:
alpha = [alpha_snd-1, alpha_snd-2, …, alpha_snd-24].
在利用多个第二真实人脸模型对第一真实人脸模型进行拟合时,选取最小二乘法的方法进行拟合,得到24维系数alpha。When using a plurality of second real face models to fit the first real face model, the method of least squares is selected for fitting, and a 24-dimensional coefficient alpha is obtained.
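A sketch of the least-squares fitting in (c3), assuming each face model can be flattened into a vector; the flattening dimension D and the function name are hypothetical:

```python
import numpy as np

def fit_coefficients(second_models, first_model):
    """Least-squares fit of the first real face model by the 24 second real face models.

    second_models: (24, D) array — each row is a flattened second real face model,
                   where D is a hypothetical flattened vertex dimension.
    first_model:   (D,) array — the flattened first real face model.
    Returns the 24-dimensional coefficient vector alpha minimizing
    || second_models.T @ alpha - first_model ||.
    """
    second_models = np.asarray(second_models, dtype=float)
    first_model = np.asarray(first_model, dtype=float)
    alpha, *_ = np.linalg.lstsq(second_models.T, first_model, rcond=None)
    return alpha
```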
(c4)确定目标蒙皮变形系数;其中,在确定目标蒙皮变形系数时,还包括下述(c4-1)、(c4-2)、以及(c4-3)。(c4) Determining the target skin deformation coefficient; wherein, when determining the target skin deformation coefficient, the following (c4-1), (c4-2), and (c4-3) are also included.
(c4-1) Read the skin deformation coefficients blendshape_fic-1 to blendshape_fic-24 respectively corresponding to the virtual face models Mod_fic-1 to Mod_fic-24 with the preset style;
(c4-2) normalize the fitting coefficients alpha respectively corresponding to the plurality of second real face models;
(c4-3) interpolate the skin deformation coefficients blendshape_fic-1 to blendshape_fic-24 respectively included in the plurality of virtual face models by using the fitting coefficients alpha respectively corresponding to the plurality of second real face models, to generate the target skin deformation coefficient blendshape_Aim.
(c5)确定目标骨骼数据;其中,在确定目标骨骼数据时,还包括下述(c5-1)以及(c5-2)。(c5) Determining target bone data; wherein, when determining the target bone data, the following (c5-1) and (c5-2) are also included.
(c5-1) Read the bone data, which includes the bone position data Pos_i, the bone scaling data Scaling_i and the bone rotation data Trans_i of the preset-style virtual face models Mod_fic-1 to Mod_fic-24 for each level of bone Bone_i.
(c5-2) Interpolate, by using the fitting coefficients alpha, the bone data respectively corresponding to the preset-style virtual face models Mod_fic-1 to Mod_fic-24, to generate the target bone data Bone_new, which includes the target bone position data Pos_new, the target bone scaling data Scaling_new and the target bone rotation data Trans_new.
(c6)生成目标虚拟人脸模型。(c6) Generate the target virtual face model.
Based on the target bone data and the target skin deformation coefficient, the target bone data is substituted into the standard virtual face model Mod_Base, and the skin is fitted to the bones by using the target skin deformation coefficient blendshape_Aim, so as to generate the target virtual face model corresponding to the first real face model.
Referring to FIG. 5, an example of specific data used in the multiple processes included in the above specific example is provided according to an embodiment of the present disclosure. In FIG. 5, a denotes the target image and 51 denotes the original face A; b denotes a schematic diagram of the cartoon-style standard virtual face model; c denotes a schematic diagram of the relative positional relationship of the position points in the target skin data obtained after the position points in the standard skin data are adjusted by using the target skin deformation coefficient; and d denotes a schematic diagram of the target virtual face model generated for the original face A.
此处,值得注意的是,上述(c1)至(c6)仅是完成重建人脸的方法一个具体示例,不对本公开实施例提供的重建人脸的方法造成限定。Here, it is worth noting that the above (c1) to (c6) are only a specific example of the method for reconstructing a human face, and do not limit the method for reconstructing a human face provided by the embodiments of the present disclosure.
Those skilled in the art can understand that, in the above methods of the specific implementations, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, an embodiment of the present disclosure further provides an apparatus for reconstructing a face corresponding to the method for reconstructing a face. Since the principle by which the apparatus in the embodiments of the present disclosure solves the problem is similar to that of the above method for reconstructing a face in the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated description is omitted.
参照图6所示,本公开实施例提供了一种重建人脸的装置,所述装置包括:第一生成模块61、处理模块62、第二生成模块63、及第三生成模块64。Referring to FIG. 6 , an embodiment of the present disclosure provides an apparatus for reconstructing a human face. The apparatus includes: a first generation module 61 , a processing module 62 , a second generation module 63 , and a third generation module 64 .
第一生成模块61,用于基于目标图像生成第一真实人脸模型。The first generating module 61 is configured to generate a first real face model based on the target image.
处理模块62,用于利用预先生成的多个第二真实人脸模型对所述第一真实人脸模型进行拟合处理,得到多个第二真实人脸模型分别对应的拟合系数。The processing module 62 is configured to perform fitting processing on the first real face model by using a plurality of pre-generated second real face models to obtain fitting coefficients corresponding to the plurality of second real face models respectively.
The second generation module 63 is configured to generate target bone data and a target skin deformation coefficient based on the fitting coefficients respectively corresponding to the plurality of second real face models and the virtual face models with a preset style respectively corresponding to the plurality of second real face models.
第三生成模块64,用于基于所述目标骨骼数据以及所述目标蒙皮变形系数,生成与所述第一真实人脸模型对应的目标虚拟人脸模型。The third generation module 64 is configured to generate a target virtual face model corresponding to the first real face model based on the target bone data and the target skin deformation coefficient.
一种可选的实施方式中,所述虚拟人脸模型包括表示所述虚拟人脸模型的蒙皮数据相对于预先生成的标准虚拟人脸模型的标准蒙皮数据的形变的蒙皮变形系数。In an optional embodiment, the virtual face model includes a skin deformation coefficient representing the deformation of the skin data of the virtual face model relative to the standard skin data of the pre-generated standard virtual face model.
When generating the target skin deformation coefficient based on the fitting coefficients respectively corresponding to the plurality of second real face models and the virtual face models with the preset style respectively corresponding to the plurality of second real face models, the second generation module 63 is configured to: generate the target skin deformation coefficient based on the fitting coefficients respectively corresponding to the plurality of second real face models and the skin deformation coefficients respectively included in the plurality of virtual face models.
In an optional implementation, when generating the target skin deformation coefficient based on the fitting coefficients respectively corresponding to the plurality of second real face models and the skin deformation coefficients respectively included in the plurality of virtual face models, the second generation module 63 is configured to: normalize the fitting coefficients respectively corresponding to the plurality of second real face models; and obtain the target skin deformation coefficient based on the normalized fitting coefficients and the skin deformation coefficients respectively included in the virtual face models.
In an optional implementation, when generating the target virtual face model corresponding to the first real face model based on the target bone data and the target skin deformation coefficient, the third generation module 64 is configured to: perform position transformation processing on the standard skin data based on the target bone data and the association relationship between the standard bone data and the standard skin data in the standard virtual face model, to generate intermediate skin data; perform deformation processing on the intermediate skin data based on the target skin deformation coefficient, to obtain target skin data; and generate the target virtual face model based on the target bone data and the target skin data.
一种可选的实施方式中,所述目标骨骼数据包括以下至少一种:目标骨骼位置数据、目标骨骼缩放数据、以及目标骨骼旋转数据。In an optional implementation manner, the target bone data includes at least one of the following: target bone position data, target bone scaling data, and target bone rotation data.
所述虚拟人脸模型对应的骨骼数据包括以下至少一种:所述虚拟人脸的多块人脸骨骼中每块人脸骨骼对应的骨骼旋转数据、骨骼位置数据、以及骨骼缩放数据。The bone data corresponding to the virtual face model includes at least one of the following: bone rotation data, bone position data, and bone scaling data corresponding to each face bone among the multiple face bones of the virtual face.
In an optional implementation, when generating the target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and the virtual face models with the preset style respectively corresponding to the plurality of second real face models, the second generation module 63 is configured to: perform interpolation processing on the bone position data respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone position data.
In an optional implementation, when generating the target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and the virtual face models with the preset style respectively corresponding to the plurality of second real face models, the second generation module 63 is configured to: perform interpolation processing on the bone scaling data respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone scaling data.
In an optional implementation, when generating the target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and the virtual face models with the preset style respectively corresponding to the plurality of second real face models, the second generation module 63 is configured to: convert the bone rotation data respectively corresponding to the plurality of virtual face models into quaternions, and perform regularization processing on the quaternions respectively corresponding to the plurality of virtual face models to obtain regularized quaternions; and perform interpolation processing on the regularized quaternions respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone rotation data.
一种可选的实施方式中,在基于目标图像生成第一真实人脸模型时,所述第一生成模块61用于:获取包括原始人脸的目标图像;对所述目标图像中包括的所述原始人脸进行三维人脸重建,得到所述第一真实人脸模型。In an optional embodiment, when generating the first real face model based on the target image, the first generation module 61 is used to: acquire a target image including the original face; 3D face reconstruction is performed on the original face to obtain the first real face model.
一种可选的实施方式中,所述处理模块62根据以下方式预先生成所述多个第二真实人脸模型:获取包括参考人脸的多张参考图像;针对所述多张参考图像中的每张参考图像,对所述参考图像中包括的参考人脸进行三维人脸重建,得到所述参考图像对应的第二真实人脸模型。In an optional implementation manner, the processing module 62 pre-generates the multiple second real face models according to the following methods: acquiring multiple reference images including reference faces; For each reference image, three-dimensional face reconstruction is performed on the reference face included in the reference image to obtain a second real face model corresponding to the reference image.
In an optional implementation, the apparatus for reconstructing a face further includes an acquisition module 65, configured to acquire, for each second real face model among the plurality of second real face models, the virtual face model with a preset style corresponding to the second real face model in the following manner: generating an intermediate virtual face model with the preset style corresponding to the second real face model; generating, based on multiple sets of preset skin deformation coefficients relative to a standard virtual face model, skin deformation coefficients of the virtual face model corresponding to the second real face model relative to the standard virtual face model; adjusting the intermediate skin data in the intermediate virtual face model by using the skin deformation coefficients; and generating the virtual face model of the second real face model based on the adjusted intermediate skin data and the intermediate bone data of the intermediate virtual face model.
In an optional implementation, when performing fitting processing on the first real face model by using the plurality of pre-generated second real face models to obtain the fitting coefficients respectively corresponding to the plurality of second real face models, the processing module 62 is configured to: perform least-squares processing on the plurality of second real face models and the first real face model, to obtain the fitting coefficients respectively corresponding to the plurality of second real face models.
关于装置中的各模块的处理流程、以及各模块之间的交互流程的描述可以参照上述方法实施例中的相关说明,这里不再详述。For the description of the processing flow of each module in the apparatus and the interaction flow between the modules, reference may be made to the relevant descriptions in the foregoing method embodiments, which will not be described in detail here.
如图7所示,本公开实施例还提供了一种计算机设备,包括:处理器71和存储器72。As shown in FIG. 7 , an embodiment of the present disclosure further provides a computer device, including: a processor 71 and a memory 72 .
The memory 72 stores machine-readable instructions executable by the processor 71, and the processor 71 is configured to execute the machine-readable instructions stored in the memory 72. When the machine-readable instructions are executed by the processor 71, the processor 71 performs the following steps: generating a first real face model based on a target image; performing fitting processing on the first real face model by using a plurality of pre-generated second real face models, to obtain fitting coefficients respectively corresponding to the plurality of second real face models; generating target bone data and a target skin deformation coefficient based on the fitting coefficients respectively corresponding to the plurality of second real face models and virtual face models with a preset style respectively corresponding to the plurality of second real face models; and generating, based on the target bone data and the target skin deformation coefficient, a target virtual face model corresponding to the first real face model.
The above memory 72 includes an internal memory 721 and an external memory 722; the internal memory 721, also called main memory, is used to temporarily store operation data in the processor 71 and data exchanged with the external memory 722 such as a hard disk, and the processor 71 exchanges data with the external memory 722 through the internal memory 721.
上述指令的具体执行过程可以参考本公开实施例中所述的重建人脸的方法,此处不再赘述。For the specific execution process of the above instruction, reference may be made to the method for reconstructing a human face described in the embodiments of the present disclosure, and details are not described herein again.
本公开实施例还提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行上述方法实施例中所述的重建人脸的方法。其中,该存储介质可以是易失性或非易失的计算机可读取存储介质。Embodiments of the present disclosure further provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is run by a processor, the method for reconstructing a face described in the foregoing method embodiments is executed. Wherein, the storage medium may be a volatile or non-volatile computer-readable storage medium.
Embodiments of the present disclosure further provide a computer program product. The computer program product carries program code, and the instructions included in the program code can be used to execute the method for reconstructing a face described in the above method embodiments; for details, reference may be made to the above method embodiments, which are not repeated here.
The above computer program product may be implemented by hardware, software or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK) and the like.
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统和装置的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。在本公开所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,又例如,多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些通信接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。Those skilled in the art can clearly understand that, for the convenience and brevity of description, for the specific working process of the system and device described above, reference may be made to the corresponding process in the foregoing method embodiments, which will not be repeated here. In the several embodiments provided by the present disclosure, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. The apparatus embodiments described above are only illustrative. For example, the division of the units is only a logical function division. In actual implementation, there may be other division methods. For example, multiple units or components may be combined or Can be integrated into another system, or some features can be ignored, or not implemented. On the other hand, the shown or discussed mutual coupling or direct coupling or communication connection may be through some communication interfaces, indirect coupling or communication connection of devices or units, which may be in electrical, mechanical or other forms.
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
另外,在本公开各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。In addition, each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个处理器可执行的非易失的计算机可读取存储介质中。基于这样的理解,本公开的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本公开各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。The functions, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a processor-executable non-volatile computer-readable storage medium. Based on this understanding, the technical solutions of the present disclosure can be embodied in the form of software products in essence, or the parts that make contributions to the prior art or the parts of the technical solutions. The computer software products are stored in a storage medium, including Several instructions are used to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in various embodiments of the present disclosure. The aforementioned storage medium includes: U disk, mobile hard disk, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk or optical disk and other media that can store program codes .
Finally, it should be noted that the above embodiments are merely specific implementations of the present disclosure, intended to illustrate rather than limit the technical solutions of the present disclosure, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that any person familiar with the technical field can still, within the technical scope disclosed by the present disclosure, modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent replacements of some of the technical features; such modifications, changes or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (15)

  1. 一种重建人脸的方法,包括:A method of reconstructing a human face, comprising:
    基于目标图像生成第一真实人脸模型;generating a first real face model based on the target image;
    利用预先生成的多个第二真实人脸模型对所述第一真实人脸模型进行拟合处理,得到多个第二真实人脸模型分别对应的拟合系数;Perform fitting processing on the first real face model by using a plurality of pre-generated second real face models to obtain fitting coefficients corresponding to the plurality of second real face models respectively;
    基于所述多个第二真实人脸模型分别对应的拟合系数、及所述多个第二真实人脸模型分别对应的具有预设风格的虚拟人脸模型,生成目标骨骼数据以及目标蒙皮变形系数;Based on the fitting coefficients corresponding to the plurality of second real face models, and the virtual face models with preset styles corresponding to the plurality of second real face models, respectively, target skeleton data and target skin are generated deformation coefficient;
    基于所述目标骨骼数据以及所述目标蒙皮变形系数,生成与所述第一真实人脸模型对应的目标虚拟人脸模型。Based on the target bone data and the target skin deformation coefficient, a target virtual face model corresponding to the first real face model is generated.
  2. The method for reconstructing a face according to claim 1, characterized in that generating the target skin deformation coefficient based on the fitting coefficients respectively corresponding to the plurality of second real face models and the virtual face models with a preset style respectively corresponding to the plurality of second real face models comprises:
    基于所述多个第二真实人脸模型分别对应的拟合系数、以及所述多个虚拟人脸模型分别包括的蒙皮变形系数,生成所述目标蒙皮变形系数。The target skin deformation coefficient is generated based on the fitting coefficients corresponding to the plurality of second real face models and the skin deformation coefficients respectively included in the plurality of virtual face models.
  3. The method for reconstructing a face according to claim 2, characterized in that generating the target skin deformation coefficient based on the fitting coefficients respectively corresponding to the plurality of second real face models and the skin deformation coefficients respectively included in the plurality of virtual face models comprises:
    对所述多个第二真实人脸模型分别对应的拟合系数进行归一化处理;normalizing the fitting coefficients corresponding to the plurality of second real face models respectively;
    基于归一化处理后的拟合系数、以及所述多个虚拟人脸模型分别包括的蒙皮变形系数,得到所述目标蒙皮变形系数。The target skin deformation coefficient is obtained based on the normalized fitting coefficient and the skin deformation coefficient respectively included in the plurality of virtual face models.
  4. The method for reconstructing a face according to any one of claims 1 to 3, characterized in that generating the target virtual face model corresponding to the first real face model based on the target bone data and the target skin deformation coefficient comprises:
    基于所述目标骨骼数据、以及标准虚拟人脸模型中的标准骨骼数据与标准蒙皮数据之间的关联关系,对所述标准蒙皮数据进行位置变换处理,生成中间蒙皮数据;Based on the target skeleton data and the association between the standard skeleton data in the standard virtual face model and the standard skin data, perform position transformation processing on the standard skin data to generate intermediate skin data;
    基于所述目标蒙皮变形系数对所述中间蒙皮数据进行变形处理,得到目标蒙皮数据;Deformation processing is performed on the intermediate skin data based on the target skin deformation coefficient to obtain target skin data;
    基于所述目标骨骼数据、以及所述目标蒙皮数据,构成所述目标虚拟人脸模型。The target virtual face model is constructed based on the target bone data and the target skin data.
  5. 根据权利要求1至4任一项所述的重建人脸的方法,其特征在于,The method for reconstructing a human face according to any one of claims 1 to 4, wherein,
    所述目标骨骼数据包括以下至少一种:目标骨骼位置数据、目标骨骼缩放数据、以及目标骨骼旋转数据;The target bone data includes at least one of the following: target bone position data, target bone scaling data, and target bone rotation data;
    所述多个虚拟人脸模型分别对应的骨骼数据包括以下至少一种:所述虚拟人脸的多块人脸骨骼中每块人脸骨骼对应的骨骼旋转数据、骨骼位置数据、以及骨骼缩放数据。The skeleton data corresponding to the plurality of virtual face models respectively include at least one of the following: skeleton rotation data, skeleton position data, and skeleton scaling data corresponding to each face skeleton in the plurality of face skeletons of the virtual face .
  6. The method for reconstructing a face according to claim 5, characterized in that generating the target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and the virtual face models with a preset style respectively corresponding to the plurality of second real face models comprises:
    基于所述多个第二真实人脸模型分别对应的拟合系数,对所述多个虚拟人脸模型分别对应的骨骼位置数据进行插值处理,得到所述目标骨骼位置数据。Based on the fitting coefficients corresponding to the plurality of second real face models respectively, interpolation processing is performed on the bone position data corresponding to the plurality of virtual face models, to obtain the target bone position data.
  7. The method for reconstructing a face according to claim 5 or 6, characterized in that generating the target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and the virtual face models with a preset style respectively corresponding to the plurality of second real face models comprises:
    基于所述多个第二真实人脸模型分别对应的拟合系数,对所述多个虚拟人脸模型分别对应的骨骼缩放数据进行插值处理,得到所述目标骨骼缩放数据。Based on the respective fitting coefficients corresponding to the multiple second real face models, interpolation processing is performed on the skeleton scaling data corresponding to the multiple virtual face models respectively, to obtain the target skeleton scaling data.
  8. The method for reconstructing a face according to any one of claims 5 to 7, wherein the generating target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and the virtual face models with a preset style respectively corresponding to the plurality of second real face models comprises:
    converting the bone rotation data respectively corresponding to the plurality of virtual face models into quaternions;
    performing regularization processing on the quaternions respectively corresponding to the plurality of virtual face models to obtain regularized quaternions; and
    performing interpolation processing on the regularized quaternions respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone rotation data.
  9. The method for reconstructing a face according to any one of claims 1 to 8, wherein the generating a first real face model based on a target image comprises:
    acquiring a target image comprising an original face; and
    performing three-dimensional face reconstruction on the original face included in the target image, to obtain the first real face model.
  10. The method for reconstructing a face according to any one of claims 1 to 9, wherein the plurality of second real face models are pre-generated in the following manner:
    acquiring a plurality of reference images comprising reference faces; and
    for each reference image of the plurality of reference images, performing three-dimensional face reconstruction on the reference face included in the reference image, to obtain the second real face model corresponding to the reference image.
  11. The method for reconstructing a face according to any one of claims 1 to 10, further comprising: for each second real face model of the plurality of second real face models, acquiring the virtual face model with the preset style corresponding to the second real face model in the following manner:
    generating an intermediate virtual face model with the preset style corresponding to the second real face model;
    generating, based on a plurality of sets of preset skin deformation coefficients relative to a standard virtual face model, skin deformation coefficients of the virtual face model corresponding to the second real face model relative to the standard virtual face model;
    adjusting intermediate skin data in the intermediate virtual face model by using the skin deformation coefficients; and
    generating the virtual face model of the second real face model based on the adjusted intermediate skin data and intermediate bone data of the intermediate virtual face model.
  12. The method for reconstructing a face according to any one of claims 1 to 11, wherein the performing fitting processing on the first real face model by using the plurality of pre-generated second real face models to obtain the fitting coefficients respectively corresponding to the plurality of second real face models comprises:
    performing least squares processing on the plurality of second real face models and the first real face model, to obtain the fitting coefficients respectively corresponding to the plurality of second real face models.
  13. An apparatus for reconstructing a face, comprising:
    a first generation module, configured to generate a first real face model based on a target image;
    a processing module, configured to perform fitting processing on the first real face model by using a plurality of pre-generated second real face models, to obtain fitting coefficients respectively corresponding to the plurality of second real face models;
    a second generation module, configured to generate target bone data and a target skin deformation coefficient based on the fitting coefficients respectively corresponding to the plurality of second real face models and virtual face models with a preset style respectively corresponding to the plurality of second real face models; and
    a third generation module, configured to generate a target virtual face model corresponding to the first real face model based on the target bone data and the target skin deformation coefficient.
  14. A computer device, comprising a processor and a memory, wherein the memory stores machine-readable instructions executable by the processor, the processor is configured to execute the machine-readable instructions stored in the memory, and when the machine-readable instructions are executed by the processor, the processor performs the method for reconstructing a face according to any one of claims 1 to 12.
  15. A computer-readable storage medium, having a computer program stored thereon, wherein when the computer program is run by a computer device, the computer device performs the method for reconstructing a face according to any one of claims 1 to 12.
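
As an illustrative reading of claim 3 (not part of the claimed subject matter), the sketch below normalizes the fitting coefficients and blends the per-model skin deformation coefficients into a single target set; all function names and array shapes are assumptions introduced here.

```python
import numpy as np

def blend_skin_deformation(fit_coeffs, skin_coeffs_per_model):
    # Normalize the fitting coefficients so that they sum to 1 (claim 3, first step).
    w = np.asarray(fit_coeffs, dtype=float)
    w = w / w.sum()
    # Weighted combination of each virtual face model's skin deformation
    # coefficients into the target skin deformation coefficient (second step).
    C = np.stack([np.asarray(c, dtype=float) for c in skin_coeffs_per_model])
    return w @ C  # shape: (num_deformation_coeffs,)
```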
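
Claim 4 drives the standard skin data with the target bone data and then deforms the intermediate skin data with the target skin deformation coefficient. A minimal sketch, assuming a linear-blend-skinning association between standard bones and standard skin and a per-coefficient offset basis (both hypothetical details not given in the claim):

```python
import numpy as np

def build_target_skin(std_vertices, skin_weights, bone_transforms,
                      deform_coeffs, deform_basis):
    # std_vertices: (V, 3) standard skin positions
    # skin_weights: (V, B) association between standard bones and standard skin
    # bone_transforms: (B, 4, 4) target bone transforms
    # deform_basis: (C, V, 3) per-coefficient vertex offsets
    homo = np.concatenate([std_vertices, np.ones((len(std_vertices), 1))], axis=1)
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, homo)[..., :3]
    intermediate = np.einsum('vb,bvi->vi', skin_weights, per_bone)  # intermediate skin data
    # Apply the target skin deformation coefficient to obtain the target skin data.
    return intermediate + np.einsum('c,cvi->vi', deform_coeffs, deform_basis)
```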
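
Claims 6 to 8 interpolate the per-model bone position, scaling, and rotation data with the fitting coefficients, converting rotations to quaternions and regularizing them first. A minimal sketch, assuming the rotations are already stored as quaternions and using a normalized weighted average as the interpolation (one possible choice; spherical interpolation is another):

```python
import numpy as np

def interpolate_bone_data(fit_coeffs, positions, scales, rotations_quat):
    # positions, scales: (M, B, 3); rotations_quat: (M, B, 4) for M models and B bones
    w = np.asarray(fit_coeffs, dtype=float)
    w = w / w.sum()
    target_pos = np.einsum('m,mbi->bi', w, positions)   # claim 6: bone position data
    target_scale = np.einsum('m,mbi->bi', w, scales)    # claim 7: bone scaling data
    # Claim 8: regularize each quaternion to unit length, then interpolate.
    q = rotations_quat / np.linalg.norm(rotations_quat, axis=-1, keepdims=True)
    target_rot = np.einsum('m,mbi->bi', w, q)
    target_rot /= np.linalg.norm(target_rot, axis=-1, keepdims=True)
    return target_pos, target_scale, target_rot
```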
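
Claim 12 obtains the fitting coefficients by least squares processing on the second real face models and the first real face model. One plausible formulation, with hypothetical names, flattens each model's vertices and solves an ordinary least-squares problem for the combination weights:

```python
import numpy as np

def fit_coefficients(first_model_verts, second_model_verts_list):
    # Flatten the first real face model into a single target vector.
    x = np.asarray(first_model_verts, dtype=float).reshape(-1)            # (3V,)
    # Each column is one flattened second real face model.
    B = np.stack([np.asarray(m, dtype=float).reshape(-1)
                  for m in second_model_verts_list], axis=1)              # (3V, M)
    coeffs, *_ = np.linalg.lstsq(B, x, rcond=None)
    return coeffs  # one fitting coefficient per second real face model
```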
PCT/CN2021/102431 2020-11-25 2021-06-25 Method and apparatus for face reconstruction, and computer device, and storage medium WO2022110791A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2022520004A JP2023507863A (en) 2020-11-25 2021-06-25 Face reconstruction method, apparatus, computer device, and storage medium
KR1020227010819A KR20220075339A (en) 2020-11-25 2021-06-25 Face reconstruction method, apparatus, computer device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011337901.1 2020-11-25
CN202011337901.1A CN112419454B (en) 2020-11-25 2020-11-25 Face reconstruction method, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2022110791A1 true WO2022110791A1 (en) 2022-06-02

Family

ID=74842193

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/102431 WO2022110791A1 (en) 2020-11-25 2021-06-25 Method and apparatus for face reconstruction, and computer device, and storage medium

Country Status (5)

Country Link
JP (1) JP2023507863A (en)
KR (1) KR20220075339A (en)
CN (1) CN112419454B (en)
TW (1) TWI773458B (en)
WO (1) WO2022110791A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419454B (en) * 2020-11-25 2023-11-28 北京市商汤科技开发有限公司 Face reconstruction method, device, computer equipment and storage medium
CN113160418A (en) * 2021-05-10 2021-07-23 上海商汤智能科技有限公司 Three-dimensional reconstruction method, device and system, medium and computer equipment
CN113808249B (en) * 2021-08-04 2022-11-25 北京百度网讯科技有限公司 Image processing method, device, equipment and computer storage medium
CN113610992B (en) * 2021-08-04 2022-05-20 北京百度网讯科技有限公司 Bone driving coefficient determining method and device, electronic equipment and readable storage medium
CN113805532B (en) * 2021-08-26 2023-05-23 福建天泉教育科技有限公司 Method and terminal for manufacturing physical robot actions
CN114529640B (en) * 2022-02-17 2024-01-26 北京字跳网络技术有限公司 Moving picture generation method, moving picture generation device, computer equipment and storage medium


Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5178662B2 (en) * 2009-07-31 2013-04-10 富士フイルム株式会社 Image processing apparatus and method, data processing apparatus and method, and program
KR101696007B1 (en) * 2013-01-18 2017-01-13 한국전자통신연구원 Method and device for creating 3d montage
JP6207210B2 (en) * 2013-04-17 2017-10-04 キヤノン株式会社 Information processing apparatus and method
KR101757642B1 (en) * 2016-07-20 2017-07-13 (주)레벨소프트 Apparatus and method for 3d face modeling
CN110111417B (en) * 2019-05-15 2021-04-27 浙江商汤科技开发有限公司 Method, device and equipment for generating three-dimensional local human body model
CN110111247B (en) * 2019-05-15 2022-06-24 浙江商汤科技开发有限公司 Face deformation processing method, device and equipment
CN110599573B (en) * 2019-09-03 2023-04-11 电子科技大学 Method for realizing real-time human face interactive animation based on monocular camera
CN111724457A (en) * 2020-03-11 2020-09-29 长沙千博信息技术有限公司 Realistic virtual human multi-modal interaction implementation method based on UE4
CN111784821B (en) * 2020-06-30 2023-03-14 北京市商汤科技开发有限公司 Three-dimensional model generation method and device, computer equipment and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140085293A1 (en) * 2012-09-21 2014-03-27 Luxand, Inc. Method of creating avatar from user submitted image
CN109395390A (en) * 2018-10-26 2019-03-01 网易(杭州)网络有限公司 Processing method, device, processor and the terminal of game role facial model
CN110675475A (en) * 2019-08-19 2020-01-10 腾讯科技(深圳)有限公司 Face model generation method, device, equipment and storage medium
CN111695471A (en) * 2020-06-02 2020-09-22 北京百度网讯科技有限公司 Virtual image generation method, device, equipment and storage medium
CN111714885A (en) * 2020-06-22 2020-09-29 网易(杭州)网络有限公司 Game role model generation method, game role model generation device, game role adjustment device and game role adjustment medium
CN112419485A (en) * 2020-11-25 2021-02-26 北京市商汤科技开发有限公司 Face reconstruction method and device, computer equipment and storage medium
CN112419454A (en) * 2020-11-25 2021-02-26 北京市商汤科技开发有限公司 Face reconstruction method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
KR20220075339A (en) 2022-06-08
TW202221651A (en) 2022-06-01
JP2023507863A (en) 2023-02-28
CN112419454A (en) 2021-02-26
CN112419454B (en) 2023-11-28
TWI773458B (en) 2022-08-01

Similar Documents

Publication Publication Date Title
WO2022110791A1 (en) Method and apparatus for face reconstruction, and computer device, and storage medium
WO2022110790A1 (en) Face reconstruction method and apparatus, computer device, and storage medium
WO2020192568A1 (en) Facial image generation method and apparatus, device and storage medium
US20200402284A1 (en) Animating avatars from headset cameras
CN110399849A (en) Image processing method and device, processor, electronic equipment and storage medium
CN111784821B (en) Three-dimensional model generation method and device, computer equipment and storage medium
CN111971713A (en) 3D face capture and modification using image and time tracking neural networks
WO2021253788A1 (en) Three-dimensional human body model construction method and apparatus
JP2013524357A (en) Method for real-time cropping of real entities recorded in a video sequence
EP3980974A1 (en) Single image-based real-time body animation
WO2013078404A1 (en) Perceptual rating of digital image retouching
WO2023077742A1 (en) Video processing method and apparatus, and neural network training method and apparatus
CN115601484B (en) Virtual character face driving method and device, terminal equipment and readable storage medium
WO2022110855A1 (en) Face reconstruction method and apparatus, computer device, and storage medium
CN112419144A (en) Face image processing method and device, electronic equipment and storage medium
CN114333034A (en) Face pose estimation method and device, electronic equipment and readable storage medium
CN108717730B (en) 3D character reconstruction method and terminal
CN114429518A (en) Face model reconstruction method, device, equipment and storage medium
CN111275610B (en) Face aging image processing method and system
CN115393487A (en) Virtual character model processing method and device, electronic equipment and storage medium
CN114612614A (en) Human body model reconstruction method and device, computer equipment and storage medium
US11423616B1 (en) Systems and methods for rendering avatar with high resolution geometry
CN114049250B (en) Method, device and medium for correcting face pose of certificate photo
WO2023005359A1 (en) Image processing method and device
WO2023132261A1 (en) Information processing system, information processing method, and information processing program

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2022520004

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21896299

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21896299

Country of ref document: EP

Kind code of ref document: A1