WO2023030381A1 - Three-dimensional head reconstruction method, apparatus, device, and medium - Google Patents

Three-dimensional head reconstruction method, apparatus, device, and medium

Info

Publication number
WO2023030381A1
WO2023030381A1 (PCT/CN2022/116162)
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional
head
model
projection
sample
Prior art date
Application number
PCT/CN2022/116162
Other languages
English (en)
French (fr)
Inventor
陈志兴
邓启力
刘志超
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2023030381A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • the present disclosure relates to the technical field of image processing, and in particular to a method, device, equipment and medium for three-dimensional human head reconstruction.
  • portrait special effects functions have come into wide use.
  • 3D reconstruction is widely used in portrait special effects.
  • some portrait special effects can only be added after the portrait is deformed, and deforming the portrait often has to be performed on a 3D model reconstructed from the portrait.
  • however, current 3D reconstruction methods can only realize 3D reconstruction of the face of a portrait, not of its head; therefore, they cannot be used to add special effects to the head of a portrait.
  • the present disclosure provides a method, device, equipment and medium for 3D human head reconstruction.
  • the present disclosure provides a three-dimensional head reconstruction method, including:
  • the target model is pre-trained with multiple training samples, the training samples are generated according to a sample portrait image and a sample 3D head model, and the sample 3D head model is obtained by iteratively fitting a standard three-dimensional face statistical model according to two-dimensional feature information related to the portrait in the sample portrait image, where the two-dimensional feature information includes face feature points and head projection contour lines;
  • a target three-dimensional head model corresponding to the target portrait image is generated according to the output result.
  • the present disclosure provides a three-dimensional head reconstruction device, including:
  • a first acquisition unit configured to acquire a target portrait image
  • the first processing unit is configured to input the target portrait image into the target model to obtain the output result of the target model; wherein the target model is pre-trained with a plurality of training samples, the training samples are generated according to a sample portrait image and a sample three-dimensional human head model, and the sample three-dimensional head model is obtained by iteratively fitting a standard 3D face statistical model according to two-dimensional feature information related to the portrait in the sample portrait image;
  • the two-dimensional feature information includes face feature points and head projection contour lines;
  • the first generation unit is configured to generate a target three-dimensional head model corresponding to the target portrait image according to the output result.
  • the present disclosure provides a three-dimensional head reconstruction device, including:
  • a processor and a memory storing executable instructions, wherein the processor is configured to read the executable instructions from the memory and execute them to implement the three-dimensional human head reconstruction method described in the first aspect.
  • the present disclosure provides a computer-readable storage medium, the storage medium stores a computer program, and when the computer program is executed by a processor, the processor implements the three-dimensional head reconstruction method described in the first aspect.
  • the 3D head reconstruction method, device, equipment and medium of the embodiments of the present disclosure can acquire a target portrait image, input the target portrait image into a pre-trained target model to obtain the output result of the target model, and generate the target 3D head model corresponding to the target portrait image according to the output result. Because the 3D head models used to generate the training samples of the target model are obtained by iteratively fitting the standard 3D face statistical model according to two-dimensional feature information related to the portrait in the sample portrait image, and the two-dimensional feature information used for the iterative fitting includes face feature points and head contour lines, the 3D head models used to generate the training samples can express both the face features and the head contour features of the portrait in the sample portrait image; consequently, the target model trained on these samples can detect information related to the face features and head contour features of a portrait.
  • the output result of the target model for the target portrait image can therefore express information related to the face features and head contour features of the portrait in the target portrait image, so that a target 3D head model expressing both the face features and the head contour features can be generated from the output result, and this target 3D head model can be used to add special effects to the head of the portrait.
  • FIG. 1 is a schematic flowchart of a three-dimensional head reconstruction method provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of another 3D head reconstruction method provided by an embodiment of the present disclosure
  • FIG. 3 is a flow chart of extracting a portrait projection contour line provided by an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of a three-dimensional human head model fitting process provided by an embodiment of the present disclosure
  • FIG. 5 is a schematic structural diagram of a three-dimensional head reconstruction device provided by an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of a three-dimensional head reconstruction device provided by an embodiment of the present disclosure.
  • the term “comprise” and its variations are open-ended, i.e., “including but not limited to”.
  • the term “based on” is “based at least in part on”.
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one further embodiment”; the term “some embodiments” means “at least some embodiments.” Relevant definitions of other terms will be given in the description below.
  • Embodiments of the present disclosure provide a three-dimensional head reconstruction method, device, equipment and medium capable of performing three-dimensional reconstruction of a human head in a portrait image.
  • the three-dimensional head reconstruction method provided by the embodiment of the present disclosure will be described below with reference to FIG. 1 to FIG. 4 .
  • the 3D head reconstruction method may be executed by a 3D head reconstruction device, and the 3D head reconstruction device may be an electronic device or a server, which is not limited here.
  • electronic devices may include devices with communication functions such as mobile phones, tablet computers, desktop computers, notebook computers, vehicle terminals, wearable electronic devices, all-in-one computers, and smart home devices, and may also be devices simulated by virtual machines or simulators.
  • the server may include cloud servers or server clusters and other devices with storage and computing functions.
  • Fig. 1 shows a schematic flowchart of a three-dimensional human head reconstruction method provided by an embodiment of the present disclosure.
  • the 3D head reconstruction method may include the following steps.
  • the three-dimensional head reconstruction device may acquire a target portrait image that requires three-dimensional reconstruction of the human head, and the target portrait image may be a two-dimensional image.
  • the 3D head reconstruction device can capture an image in real time, and use the image as a target portrait image.
  • the three-dimensional head reconstruction device may also acquire an image locally selected by the user, and use this image as the target portrait image.
  • the three-dimensional head reconstruction device may receive images sent by other devices, and use the images as target portrait images.
  • the 3D head reconstruction device may extract any image from a local image library, and use this image as the target portrait image.
  • the 3D head reconstruction device can input the target portrait image into the pre-trained target model, so that the target model extracts the information related to the 3D reconstruction of the human head, and the output result of the target model is obtained.
  • the face feature points may be used to represent the face features of the portraits in the sample portrait images
  • the head contour lines may be used to represent the head contour features of the portraits in the sample portrait images.
  • the standard 3D face statistical model may include a standard 3D morphable model of the human head (3D Morphable Model, 3DMM).
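The 3DMM referred to above is a linear statistical model: a head mesh is expressed as a mean shape plus weighted combinations of identity and expression basis vectors. The patent contains no code; the numpy sketch below is only illustrative, with made-up shapes and data:

```python
import numpy as np

# Illustrative 3DMM-style linear model (toy sizes; a real head model
# has tens of thousands of vertices and many more basis vectors).
N_VERTS = 5
N_ID, N_EXP = 4, 3

rng = np.random.default_rng(0)
mean_shape = rng.normal(size=(N_VERTS, 3))
id_basis = rng.normal(size=(N_ID, N_VERTS, 3))
exp_basis = rng.normal(size=(N_EXP, N_VERTS, 3))

def synthesize(id_coef, exp_coef):
    """Return an (N_VERTS, 3) mesh for the given identity/expression coefficients."""
    return (mean_shape
            + np.tensordot(id_coef, id_basis, axes=1)
            + np.tensordot(exp_coef, exp_basis, axes=1))

# Zero coefficients reproduce the mean (standard) head exactly.
assert np.allclose(synthesize(np.zeros(N_ID), np.zeros(N_EXP)), mean_shape)
```

Fitting such a model amounts to searching for the coefficient values that best explain the observed 2D evidence, which is what the iterative fitting described below does.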
  • the two-dimensional feature information may also include shoulder feature points.
  • the shoulder feature points can be used to characterize the neck posture features of the portrait in the sample portrait image.
  • the 3D head reconstruction device may generate the target 3D head model corresponding to the target portrait image from the output result, according to a preset head model generation method.
  • the target portrait image can be obtained and input into the pre-trained target model to obtain the output result of the target model, so that the target 3D head model corresponding to the target portrait image is generated according to the output result.
  • because the 3D head model used to generate the training samples of the target model is obtained by iteratively fitting the standard 3D face statistical model according to the 2D feature information related to the portrait in the sample portrait image, and the 2D feature information used for the iterative fitting includes face feature points and head contour lines, the 3D head model used to generate the training samples can express the face features and head contour features of the portrait in the sample portrait image, and the target model trained on these samples can detect information related to the face features and head contour features of the portrait.
  • the output result of the target model for the target portrait image can thus express information related to the face features and head contour features of the portrait in the target portrait image, so that a target 3D head model expressing both the face features and the head contour features can be generated according to the output result, and the target 3D head model can be used to add special effects to the head of the portrait.
  • the training samples may include sample statistical model parameters corresponding to the sample portrait image and the sample 3D human head model.
  • the sample statistical model parameters are model parameters extracted from the sample 3D head model for characterizing the head features of the portrait; they are also the model parameters whose values differ between the sample 3D head model and the standard 3D face statistical model.
  • the model parameters may include the identity coefficient of the portrait in the image, the expression coefficient of the portrait in the image, the posture coefficient of the portrait in the image, the rotation coefficient of the portrait in the image, the translation coefficient of the portrait in the image, and the projection parameters of the acquisition device that captured the image.
  • the projection parameter may be an extrinsic parameter of the camera of the acquisition device.
  • the above training samples can be used to train the target model used to extract model parameters related to the three-dimensional reconstruction of the human head from the portrait image.
  • the 3D head reconstruction device may input the target portrait image into the target model obtained based on the above sample training to obtain an output result of the target model.
  • the output result may include target statistical model parameters.
  • the target statistical model parameters may be model parameters related to the three-dimensional reconstruction of the human head.
  • the target model can be used to quickly extract the target statistical model parameters corresponding to the target portrait image.
  • the 3D head reconstruction method may further include:
  • learning the mapping relationship between the sample portrait image and the sample statistical model parameters in each training sample, so as to obtain the target model.
  • the three-dimensional head reconstruction device may first acquire a plurality of training samples, and use the plurality of training samples to train the target model.
  • the 3D head reconstruction device can first scan a plurality of pre-established 3D head models and obtain a standard 3D face statistical model through the principal component analysis (Principal Components Analysis, PCA) method; then acquire a plurality of sample portrait images and extract the two-dimensional feature information related to the portrait in each sample portrait image; then iteratively fit the standard 3D face statistical model according to each piece of two-dimensional feature information to obtain the sample 3D head model corresponding to each sample portrait image; and then extract the sample statistical model parameters from each sample 3D head model and use each sample portrait image together with its corresponding sample statistical model parameters as a training sample, thereby obtaining multiple training samples. Then, the 3D head reconstruction device can directly use the first regression loss function to regress the sample statistical model parameters corresponding to each sample portrait image through solution optimization algorithms such as gradient descent or the Gauss-Newton method, so as to obtain the target model.
  • the first regression loss function may be a Smooth-L1 loss function.
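The Smooth-L1 loss named here is quadratic for small residuals and linear for large ones. A rough numpy version follows, with an optional per-parameter weight vector of the kind the second regression loss function (described further below) uses to emphasize the identity coefficient; the weighting interface is an assumption for illustration, not the patent's exact formulation:

```python
import numpy as np

def smooth_l1(pred, target, weights=None, beta=1.0):
    """Smooth-L1 loss: 0.5*d^2/beta for |d| < beta, |d| - 0.5*beta otherwise.
    `weights` optionally scales each parameter's contribution, e.g. to give
    identity coefficients a larger weight than other coefficients."""
    diff = np.abs(np.asarray(pred, float) - np.asarray(target, float))
    per_param = np.where(diff < beta, 0.5 * diff ** 2 / beta, diff - 0.5 * beta)
    if weights is not None:
        per_param = per_param * np.asarray(weights, float)
    return per_param.sum()

loss = smooth_l1([0.5, 3.0], [0.0, 0.0])  # 0.125 + 2.5 = 2.625
```

Raising the weight on a coefficient simply scales that coefficient's term in the sum, which is how the second regression loss can "focus" training on head and facial-feature shape.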
  • a large amount of labeled data can be automatically collected through the above-mentioned method of generating a sample three-dimensional head model corresponding to a sample portrait image, and the cost of collecting training samples can be reduced.
  • the three-dimensional head reconstruction device can acquire multiple preset portrait images with different face angles and different expressions, and each preset portrait image includes a portrait head and portrait shoulders.
  • a 3D head reconstruction device can acquire 100,000 to 300,000 preset portrait images.
  • the three-dimensional head reconstruction device can respectively perform processing such as rotation and translation on each preset portrait image, so as to augment the image data.
  • the 3D head reconstruction device can use the augmented portrait image as a sample portrait image.
  • the 3D head reconstruction method may further include:
  • after the target model has been trained with the first regression loss function, a target loss function can be used to continue learning the mapping relationship between the sample portrait images and the sample statistical model parameters in each training sample through solution optimization algorithms such as gradient descent or the Gauss-Newton method, so as to optimize the target model and obtain the optimized target model.
  • the target loss function may include a second regression loss function and a projection loss function, and the weight value of the identity coefficient in the second regression loss function is greater than the weight value of the identity coefficient in the first regression loss function.
  • the second regression loss function can also be a Smooth-L1 loss function; however, the weight value of the identity coefficient in the second regression loss function can be set greater than the weight value of the identity coefficient in the first regression loss function, so that training focuses on the shape of the human head and of the facial features, thereby optimizing the target model.
  • specifically, the weight values of the head shape coefficient, eye shape coefficient and mouth shape coefficient within the identity coefficient in the second regression loss function can be set greater than the corresponding weight values in the first regression loss function, so that training focuses on the head shape, eye shape and mouth shape that most clearly distinguish the characteristics of the portrait, thereby optimizing the target model.
  • by means of the projection loss, the shape of the human head and the shape of the facial features can be optimized, and the reliability of the detection results of the model parameter detection model can be improved.
  • S130 may specifically include: generating a target three-dimensional human head model according to target statistical model parameters and a standard three-dimensional human face statistical model.
  • specifically, the 3D head reconstruction device can replace the standard statistical model parameters in the standard 3D face statistical model with the target statistical model parameters to generate the target 3D head model corresponding to the target portrait image, thereby generating the target 3D human head model efficiently.
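The generation step above amounts to substituting the predicted target statistical model parameters for the standard model's parameter values and re-synthesizing the mesh. A hedged numpy sketch; parameter names, shapes and data are illustrative assumptions rather than the patent's actual model:

```python
import numpy as np

# Standard model parameters (all zeros -> the mean head) and the target
# parameters predicted by the network for one portrait image (made up here).
standard_params = {"identity": np.zeros(4), "expression": np.zeros(3)}
target_params = {"identity": np.array([0.2, -0.1, 0.0, 0.4]),
                 "expression": np.array([0.1, 0.0, -0.3])}

rng = np.random.default_rng(2)
mean_shape = rng.normal(size=(6, 3))                  # toy 6-vertex mesh
bases = {"identity": rng.normal(size=(4, 6, 3)),
         "expression": rng.normal(size=(3, 6, 3))}

def build_head(params):
    """Synthesize a mesh from a dict of coefficient vectors."""
    mesh = mean_shape.copy()
    for name, coef in params.items():
        mesh += np.tensordot(coef, bases[name], axes=1)
    return mesh

standard_head = build_head(standard_params)            # equals the mean shape
target_head = build_head({**standard_params, **target_params})
```

Because only parameter values change while the bases stay fixed, generating the target head model is a cheap linear combination, which is why this step is fast at inference time.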
  • Fig. 2 shows a schematic flowchart of another 3D human head reconstruction method provided by an embodiment of the present disclosure.
  • the three-dimensional head reconstruction method may further include the following steps.
  • the three-dimensional head reconstruction device may acquire a sample portrait image that requires three-dimensional reconstruction of the human head, and the sample portrait image may be a two-dimensional image.
  • the sample portrait image is similar to the target portrait image in the embodiment shown in FIG. 1 , and details are not described here.
  • the 3D head reconstruction device may perform portrait feature analysis on the sample portrait image to extract two-dimensional feature information related to the portrait in the sample portrait image.
  • the two-dimensional feature information may include facial feature points and human head contour lines.
  • S220 may specifically include:
  • face feature point detection is performed on the sample portrait image to obtain the face feature points in the sample portrait image;
  • the head contour line in the sample portrait image is extracted.
  • the 3D head reconstruction device can input the sample portrait image into the pre-trained face detection model to perform face feature point detection on the sample portrait image, and obtain the face feature points in the sample portrait image output by the face detection model.
  • the 3D head reconstruction device can input the sample portrait image into the pre-trained head prediction model to perform head contour detection on the sample portrait image, and obtain the head contour line in the sample portrait image output by the head prediction model.
  • S220 may also specifically include:
  • the 3D head reconstruction device can input the sample portrait image into the pre-trained face detection model to perform face feature point detection on the sample portrait image, and obtain the face feature points in the sample portrait image output by the face detection model.
  • the 3D head reconstruction device can input the sample portrait image into the pre-trained portrait segmentation model to perform portrait contour detection on the sample portrait image and obtain the portrait contour line in the sample portrait image output by the portrait segmentation model; it can then determine the head part of the portrait contour line and use that head part as the head contour line.
  • the three-dimensional head reconstruction device can use the two-dimensional feature information as supervision information with which to fit the standard three-dimensional face statistical model.
  • the preset number of times may be any number preset according to needs, and there is no limitation here.
  • the standard 3D facial statistical model may include 3DMM.
  • the standard 3D face statistical model can be obtained by scanning multiple 3D head models and using the PCA method.
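The PCA construction mentioned here can be sketched with numpy: flatten each registered scan into a row, subtract the mean, and take the leading right singular vectors as the shape basis. All counts and sizes below are toy values, not the patent's:

```python
import numpy as np

rng = np.random.default_rng(1)
n_scans, n_verts = 20, 50
scans = rng.normal(size=(n_scans, n_verts * 3))    # each scan flattened to a row

mean_shape = scans.mean(axis=0)
centered = scans - mean_shape
# SVD of the centered data matrix; rows of vt are the principal components.
u, s, vt = np.linalg.svd(centered, full_matrices=False)
n_components = 5
basis = vt[:n_components]                          # (n_components, n_verts*3)
stddev = s[:n_components] / np.sqrt(n_scans - 1)   # per-component std deviation

# Any head is then approximated as mean + coefficients @ basis.
coeffs = centered @ basis.T                        # project scans onto the basis
recon = mean_shape + coeffs[0] @ basis             # low-rank reconstruction of scan 0
```

The coefficients in this basis are exactly the statistical model parameters that the iterative fitting later adjusts.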
  • the 3D head reconstruction device can iteratively fit the standard 3D face statistical model based on the 2D feature information, so as to adjust the statistical model parameters in the standard 3D face statistical model, obtain the sample statistical model parameters, and complete the fitting of the sample 3D human head model.
  • the model parameters may include the identity coefficient of the portrait in the image, the expression coefficient of the portrait in the image, the pose coefficient of the portrait in the image, the rotation coefficient of the portrait in the image, the translation coefficient of the portrait in the image, and the projection parameters of the acquisition device that captured the image.
  • the projection parameter may be an extrinsic parameter of the camera of the acquisition device.
  • the 2D feature information can be used as the supervision information with which to fit the 3DMM of the human head, so as to achieve the purpose of reconstructing the 3D human head in real time.
  • because the two-dimensional feature information used for the iterative fitting includes face feature points and head contour lines, the face features and head contour features of the portrait in the sample portrait image can be used to realize three-dimensional reconstruction of the head of the portrait in the sample portrait image, so as to reconstruct a 3D head model that expresses both the face features and the head contour features; the reconstructed 3D head model can then be used to add special effects to the head of the portrait.
  • the 3D head reconstruction method may further include: performing plane projection on the standard 3D face statistical model to obtain the projection feature information corresponding to the two-dimensional feature information.
  • S230 may specifically include: based on the two-dimensional feature information and the projection feature information, iteratively fitting the standard three-dimensional face statistical model through the third regression loss function to obtain the sample three-dimensional head model.
  • specifically, the 3D head reconstruction device can planarly project the standard 3D face statistical model to obtain the projection feature information corresponding to the 2D feature information, and then use the 2D feature information as the supervision information and the projection feature information as the information to be optimized; using the third regression loss function, the standard 3D face statistical model is iteratively fitted through solution optimization algorithms such as gradient descent or the Gauss-Newton method, so that the statistical model parameters in the standard 3D face statistical model are adjusted, the sample statistical model parameters are obtained, and the fitting of the sample 3D human head model is completed.
  • the standard 3D face statistical model can be reliably fitted by using the 2D feature information and the projection feature information, so as to obtain the sample 3D head model corresponding to the sample portrait image.
  • the 3D head reconstruction method may also include:
  • performing plane projection on the standard 3D face statistical model to obtain the projection feature information corresponding to the 2D feature information may specifically include:
  • when the standard 3D face statistical model is in the head posture, projecting the standard 3D face statistical model onto the imaging plane of the acquisition device according to the projection parameters of the acquisition device of the sample portrait image, to obtain the projection feature information.
  • specifically, after the 3D head reconstruction device acquires the sample portrait image, it can use a pre-trained pose detection model to detect the head pose of the portrait in the sample portrait image and obtain the head pose in the sample portrait image; the posture of the standard 3D face statistical model is then adjusted so that the model is in that head pose. Next, the projection parameters of the acquisition device of the sample portrait image are obtained from the image information of the sample portrait image and, while the standard 3D face statistical model is in the head pose, the model is projected onto the imaging plane of the acquisition device according to those projection parameters, so that the projection feature information corresponding to the two-dimensional feature information is obtained. This improves the reliability of the planar projection of the standard 3D face statistical model.
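Projecting the posed model onto the imaging plane of the acquisition device is a standard rigid transform followed by a pinhole projection. A minimal sketch, assuming a simple intrinsic matrix K and extrinsics R, t; all numerical values are illustrative:

```python
import numpy as np

def project_to_image(verts, R, t, K):
    """Project (N, 3) model vertices onto the imaging plane.
    R, t place the model in the head pose within the camera frame (extrinsics);
    K is a pinhole intrinsic matrix."""
    cam = verts @ R.T + t          # rigid transform into camera coordinates
    uv = cam @ K.T                 # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]  # perspective divide -> pixel coordinates

# A point on the optical axis projects to the principal point (cx, cy).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
uv = project_to_image(np.array([[0.0, 0.0, 0.0]]), R, t, K)  # -> [[320., 240.]]
```

The projected 2D points are then compared against the image-derived 2D feature information during the iterative fitting.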
  • the projection feature information may include face projection feature points.
  • performing plane projection on the standard 3D face statistical model to obtain the projection feature information corresponding to the 2D feature information may specifically include:
  • the face projection feature points are determined according to the vertex projection points.
  • specifically, when the standard 3D face statistical model is in the head posture, the 3D head reconstruction device may first project each vertex of the standard 3D face statistical model onto the 2D image according to the projection parameters of the acquisition device of the sample portrait image, so as to obtain the vertex projection points corresponding to the standard 3D face statistical model. Then, according to the correspondence between the vertices of the mesh to which each pre-calibrated 3D face feature point belongs in the standard 3D face statistical model and the vertex projection points, the vertex projection points corresponding to the mesh of each 3D face feature point are extracted, and the center-of-gravity projection point of each mesh is calculated from the vertex projection points corresponding to that mesh, so that the center-of-gravity projection point of each mesh corresponds to the 3D face feature point to which the mesh belongs. Each calculated center-of-gravity projection point is then used as the face projection feature point corresponding to its 3D face feature point.
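The center-of-gravity projection points described above can be computed by averaging the projected vertices of each landmark's mesh triangle. A small numpy sketch with made-up data (the triangle indexing scheme is an assumption for illustration):

```python
import numpy as np

# Projected vertex positions of the model (toy values), and the triangle of
# vertex indices that each pre-calibrated 3D face feature point belongs to.
vertex_proj = np.array([[10.0, 10.0],   # projected vertex 0
                        [20.0, 10.0],   # projected vertex 1
                        [15.0, 20.0],   # projected vertex 2
                        [40.0, 40.0]])  # projected vertex 3 (not a landmark tri)
landmark_tris = np.array([[0, 1, 2]])   # one triangle per 3D face feature point

def face_projection_points(vertex_proj, landmark_tris):
    """Centroid of each landmark triangle's projected vertices -> (L, 2)."""
    return vertex_proj[landmark_tris].mean(axis=1)

pts = face_projection_points(vertex_proj, landmark_tris)
```

Each resulting centroid is the 2D counterpart of one 3D face feature point and is compared against the detected face feature point during fitting.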
  • the projection feature information may also include the projection contour of the human head.
  • planar projection of the standard 3D face statistical model to obtain the projection feature information corresponding to the 2D feature information may also specifically include: generating the head projection contour line corresponding to the standard 3D face statistical model according to each vertex projection point.
  • the 3D head reconstruction device may perform image processing on each vertex projection point according to a preset contour line generation method to generate a head projection contour line corresponding to a standard 3D face statistical model.
  • generating the head projection contour line corresponding to the standard three-dimensional face statistical model may specifically include:
  • Dilation processing is performed on the vertex projection points to obtain a first head region image;
  • Erosion processing is performed on the first head region image to obtain a second head region image;
  • Edge extraction is performed on the second head region image to obtain the human head projection contour line.
  • Fig. 3 shows a flow chart of extracting a projected contour line of a portrait provided by an embodiment of the present disclosure.
  • specifically, the 3D head reconstruction device can first project each vertex of the standard 3D face statistical model into the two-dimensional space to obtain a projection image 301 containing the vertex projection points; then perform dilation processing on the vertex projection points in the projection image 301 to fill the gaps between them and obtain the first head region image 302; then perform erosion processing on the first head region image 302 to eliminate the noise caused by the dilation and obtain the second head region image 303; finally, the Canny edge detection algorithm is used to extract the edge of the second head region image 303 to obtain the contour projection image 304 containing the head projection contour line.
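The dilate → erode → edge-extract pipeline of FIG. 3 can be illustrated with numpy-only binary morphology; for brevity this sketch uses a simple boundary extraction (mask minus its erosion) in place of the Canny detector the patent names, and the rasterized "projection points" are made up:

```python
import numpy as np

def dilate(mask):
    """3x3 binary dilation via shifted copies (numpy-only, for illustration)."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + mask.shape[0], 1 + dx:1 + dx + mask.shape[1]]
    return out

def erode(mask):
    """Erosion is dilation of the complement."""
    return 1 - dilate(1 - mask)

# Sparse "vertex projection points" rasterized into a tiny mask.
mask = np.zeros((7, 7), dtype=np.uint8)
mask[2, 2] = mask[2, 4] = mask[4, 2] = mask[4, 4] = 1

filled = dilate(mask)                   # fill gaps between projection points
cleaned = erode(filled)                 # remove the noise added by dilation
edge = cleaned & (1 - erode(cleaned))   # boundary pixels (Canny stand-in)
```

Because each step is a local windowed operation on a binary image, the whole pipeline parallelizes well, which is consistent with running it on a GPU in batches as described below.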
  • the graphics processing unit (Graphics Processing Unit, GPU) of the 3D head reconstruction device can be used to implement the process of generating the head projection contour line from the vertex projection points, so as to generate the head projection contour lines corresponding to different portrait images in batches and improve the speed of generating them.
  • the face feature points and the head contour line in the two-dimensional feature information can be used as the supervision information, and the face projection feature points and the head projection contour line can be used as the information to be optimized; according to the correspondence between the face feature points and the face projection feature points and the correspondence between the head contour line and the head projection contour line, the third regression loss function is used to iteratively fit the standard 3D face statistical model to obtain the sample 3D head model corresponding to the sample portrait image.
  • head contour feature points can also be extracted from the head contour line, and the standard 3D face statistical model can be iteratively fitted based on the face feature points and the head contour feature points to obtain the sample 3D head model corresponding to the sample portrait image.
  • the projection feature information may include face projection feature points and head projection contour lines.
  • the 3D head reconstruction method may also include:
  • the head contour line and the head projection contour line are randomly sampled to obtain head contour feature points and head projection contour feature points.
  • iteratively fitting the standard three-dimensional face statistical model through the third regression loss function to obtain the sample three-dimensional head model may specifically include:
  • the standard 3D face statistical model is iteratively fitted through the third regression loss function, and the sample 3D head model corresponding to the sample portrait image is obtained.
  • specifically, the 3D head reconstruction device can randomly sample the head contour line and the head projection contour line, respectively, to obtain a preset number of head contour feature points on the head contour line and a preset number of head projection contour feature points on the head projection contour line, and determine, according to the closest-distance principle, the head projection contour feature point corresponding to each head contour feature point.
  • the face feature points and head contour feature points are then used as the supervision information, the face projection feature points and head projection contour feature points as the information to be optimized, and the standard 3D face statistical model is iteratively fitted using the third regression loss function to obtain the sample 3D head model corresponding to the sample portrait image.
  • because sampled feature points rather than full contour lines are used, the amount of data computation in the iterative fitting process can be reduced, thereby improving the efficiency of the iterative fitting.
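The random sampling and closest-distance matching step described above can be sketched as follows. This is an illustrative outline only: the function name `sample_and_match`, the sample counts, and the toy circular contours standing in for real head contours are all assumptions, not taken from the patent.

```python
import numpy as np

def sample_and_match(contour, proj_contour, n_points=64, seed=0):
    """Randomly sample a preset number of points from the head contour line and
    from the head projection contour line, then pair each sampled contour point
    with its nearest projected point (the closest-distance principle)."""
    rng = np.random.default_rng(seed)
    # Both contours are (N, 2) arrays of 2-D points along a polyline.
    c_pts = contour[rng.choice(len(contour), n_points, replace=False)]
    p_pts = proj_contour[rng.choice(len(proj_contour), n_points, replace=False)]
    # Pairwise Euclidean distances, shape (n_points, n_points).
    d = np.linalg.norm(c_pts[:, None, :] - p_pts[None, :, :], axis=-1)
    # For every supervision point, pick the closest candidate to optimize.
    return c_pts, p_pts[d.argmin(axis=1)]

# Toy contours: two concentric circles standing in for the image-space head
# contour (radius 100) and the projected model contour (radius 90).
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
head_contour = np.stack([100 * np.cos(t), 100 * np.sin(t)], axis=1)
proj_contour = np.stack([90 * np.cos(t), 90 * np.sin(t)], axis=1)
supervision, to_optimize = sample_and_match(head_contour, proj_contour, n_points=32)
```

Matching a fixed number of sampled points instead of whole polylines is what keeps the per-iteration cost of the fit bounded.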
  • the two-dimensional feature information may also include shoulder feature points, so that the generated sample 3D head model corresponding to the sample portrait image can also express neck posture features.
  • S220 may also specifically include:
  • the 3D head reconstruction device can input the sample portrait image into a pre-trained human body detection model to detect human body feature points in the sample portrait image, obtain the human body feature points of the portrait output by the human body detection model, and then extract, from the human body feature points, the feature points that are pre-marked as shoulder features.
  • the shoulder feature point may be a shoulder joint point of a human body.
  • the projection feature information may also include shoulder projection feature points, and the shoulder projection feature points may cooperate with the shoulder feature points for iterative fitting to implement model training.
  • performing plane projection on the standard three-dimensional face statistical model to obtain the projection feature information corresponding to the two-dimensional feature information may also specifically include:
  • determining the shoulder projection feature points according to the vertex projection points.
  • specifically, the 3D head reconstruction device can, based on the correspondence between each vertex and each vertex projection point, extract from the vertex projection points those belonging to the mesh triangles corresponding to the three-dimensional human body feature points of the clavicle and upper trapezius muscle, and compute, from the extracted vertex projection points of each triangle, the projection point of that triangle's center of gravity, so that the center-of-gravity projection point of the triangle to which the clavicle belongs corresponds to the three-dimensional feature point of the clavicle, and the center-of-gravity projection point of the triangle to which the upper trapezius muscle belongs corresponds to the three-dimensional feature point of the upper trapezius muscle.
  • the computed center-of-gravity projection points are then used as the shoulder projection feature points corresponding to the three-dimensional feature points of the clavicle and upper trapezius muscle.
  • the shoulder projection feature point may be a vertex projection point corresponding to a three-dimensional shoulder joint point in a standard three-dimensional face statistical model.
  • a mapping between the center-of-gravity coordinates of the clavicle and upper-trapezius mesh triangles and the three-dimensional coordinates of the annotated three-dimensional shoulder feature points can be constructed by interpolation, yielding a calculation formula for the shoulder feature points; plane projection of the three-dimensional shoulder feature points can then be realized through this formula.
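One way such a formula can work is to pin each shoulder feature point to a mesh triangle with fixed barycentric weights, so that the point can be recomputed from the deforming mesh and then projected like any other vertex. The sketch below is a minimal illustration of this idea; the helper name `shoulder_points`, the toy mesh, and the equal weights are assumptions, not the patent's actual formula.

```python
import numpy as np

def shoulder_points(vertices, triangles, bary_weights):
    """Recover 3-D shoulder feature points from the mesh: each point is tied to
    one triangle (three vertex indices) with fixed barycentric weights, so it
    moves consistently with the deforming mesh and can then be plane-projected
    like any vertex.
    vertices: (V, 3); triangles: (K, 3) int; bary_weights: (K, 3), rows sum to 1."""
    tri = vertices[triangles]                  # (K, 3, 3): K triangles x 3 verts
    return np.einsum('kij,ki->kj', tri, bary_weights)

# Toy mesh: one triangle of a hypothetical clavicle region; equal weights give
# the triangle's center of gravity, matching the interpolation described above.
verts = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 3.0, 0.0], [3.0, 3.0, 0.0]])
tris = np.array([[0, 1, 2]])
weights = np.array([[1 / 3, 1 / 3, 1 / 3]])
pts3d = shoulder_points(verts, tris, weights)
```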
  • on this basis, the face feature points, the head contour line (or head contour feature points), and the shoulder feature points in the two-dimensional feature information can each be used as the supervision information, with the face projection feature points, the head projection contour line (or head projection contour feature points), and the shoulder projection feature points each used as the information to be optimized, and the standard 3D face statistical model is iteratively fitted using the third regression loss function to obtain the sample 3D head model.
  • in this way, the sample 3D head model corresponding to the sample portrait image can express the head contour features, face features, face movements, face angles and neck angles, improving the reliability of the generated sample 3D head model.
  • the following uses FIG. 4 as an example to describe a fitting process of a three-dimensional head model provided by an embodiment of the present disclosure.
  • Fig. 4 shows a schematic diagram of a three-dimensional head model fitting process provided by an embodiment of the present disclosure.
  • the fitting process of the three-dimensional human head model may include the following steps.
  • first, the 3D head reconstruction device can acquire a portrait image whose head needs 3D reconstruction, then extract the face feature points in the portrait image based on a pre-trained face detection model, extract the head contour line in the portrait image based on a pre-trained portrait segmentation model, and extract the shoulder feature points in the portrait image based on a pre-trained human body detection model.
  • the standard 3D face statistical model is then iteratively fitted using the face feature points, head contour line and shoulder feature points to obtain the 3D head model corresponding to the portrait image.
  • a target 3D head model that can express both face features and head contour features can be reliably and efficiently generated, and the target 3D head model can be used to add special effects to the head of a portrait.
  • An embodiment of the present disclosure also provides a three-dimensional head reconstruction device for implementing the above three-dimensional head reconstruction method, which will be described below with reference to FIG. 5 .
  • the 3D head reconstruction device may be set in a 3D head reconstruction device, which may be an electronic device or a server, and there is no limitation here.
  • electronic devices may include devices with communication functions such as mobile phones, tablet computers, desktop computers, notebook computers, vehicle terminals, wearable electronic devices, all-in-one computers, and smart home devices, and may also be devices simulated by virtual machines or simulators.
  • the server may include cloud servers or server clusters and other devices with storage and computing functions.
  • Fig. 5 shows a schematic structural diagram of a three-dimensional human head reconstruction device provided by an embodiment of the present disclosure.
  • the 3D head reconstruction apparatus 500 may include a first acquisition unit 510 , a first processing unit 520 and a first generation unit 530 .
  • the first acquiring unit 510 may be configured to acquire a target portrait image.
  • the first processing unit 520 may be configured to input the target portrait image into the target model to obtain an output result of the target model; wherein the target model is pre-trained with a plurality of training samples, the training samples are generated according to a sample portrait image and a sample three-dimensional head model, the sample three-dimensional head model is obtained by iteratively fitting a standard three-dimensional face statistical model according to two-dimensional feature information related to the portrait in the sample portrait image, and the two-dimensional feature information includes face feature points and a head projection contour line.
  • the first generating unit 530 may be configured to generate a target three-dimensional head model corresponding to the target portrait image according to the output result.
  • with this arrangement, the target portrait image can be acquired and input into the pre-trained target model to obtain the output result of the target model, from which the target 3D head model corresponding to the target portrait image is generated.
  • because the 3D head models used to generate the training samples of the target model are obtained by iteratively fitting the standard 3D face statistical model according to the two-dimensional feature information related to the portrait in the sample portrait image, and that two-dimensional feature information includes face feature points and head contour lines, those 3D head models can express both the face features and the head contour features of the portrait in the sample portrait image.
  • the target model trained on such samples can therefore detect information related to a portrait's face features and head contour features, so the output result of the target model for the target portrait image expresses information related to the face features and head contour features of the portrait in that image.
  • a target 3D head model that can express both face features and head contour features can thus be generated according to the output result, and the target 3D head model can be used to add special effects to the head of a portrait.
  • the training samples may include sample statistical model parameters corresponding to sample portrait images and sample three-dimensional head models, and the output results include target statistical model parameters.
  • the 3D head reconstruction apparatus 500 may further include a second acquisition unit, a first training unit, and a second training unit.
  • the second acquiring unit may be configured to acquire a plurality of training samples before acquiring the target portrait image.
  • the first training unit may be configured to learn the mapping relationship between the sample portrait images in each training sample and the sample statistical model parameters through the first regression loss function to obtain the target model.
  • the second training unit may be configured to continue to learn the mapping relationship between the sample portrait image and the sample statistical model parameters in each training sample through a target loss function to obtain an optimized target model; wherein the target loss function includes a second regression loss function and a projection loss function, and the weight value of the identity coefficient in the second regression loss function is greater than the weight value of the identity coefficient in the first regression loss function.
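The effect of re-weighting the identity coefficient between the two training stages can be illustrated with a minimal sketch. The function `regression_loss`, the parameter layout, and the specific weight values are assumptions for illustration only, not the patent's actual loss functions.

```python
import numpy as np

def regression_loss(pred, target, id_slice, id_weight):
    """Squared-error regression loss over predicted statistical-model
    parameters, with a separate weight applied to the identity-coefficient
    sub-vector (a larger id_weight in the second training stage)."""
    err = (pred - target) ** 2
    w = np.ones_like(err)
    w[id_slice] = id_weight
    return float((w * err).sum())

# Toy parameter vectors: the first two entries play the role of identity
# coefficients, the remaining entries of expression/pose coefficients.
pred = np.array([0.5, 0.2, 1.0, 0.0])
target = np.array([0.0, 0.0, 1.0, 0.0])
stage1_loss = regression_loss(pred, target, slice(0, 2), id_weight=1.0)  # first stage
stage2_loss = regression_loss(pred, target, slice(0, 2), id_weight=4.0)  # second stage
```

The same identity error is penalized more heavily in the second stage, which pushes the optimized model toward more accurate identity coefficients.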
  • the first generation unit 530 may be further configured to generate the target 3D head model according to the target statistical model parameters and the standard 3D face statistical model.
  • the 3D head reconstruction apparatus 500 may further include a third acquisition unit, a first extraction unit and a third training unit.
  • the third acquiring unit may be configured to acquire a sample portrait image before acquiring the target portrait image.
  • the first extraction unit may be configured to extract two-dimensional feature information from the sample portrait image.
  • the third training unit may be configured to iteratively fit a standard 3D face statistical model based on 2D feature information to obtain a sample 3D head model.
  • the 3D head reconstruction device 500 may also include a second extraction unit, which may be configured to, before the standard 3D face statistical model is iteratively fitted based on the two-dimensional feature information to obtain the sample 3D head model, perform plane projection on the standard 3D face statistical model to obtain the projection feature information corresponding to the two-dimensional feature information.
  • the third training unit may be further configured to iteratively fit the standard 3D face statistical model through the third regression loss function based on the 2D feature information and projection feature information to obtain a sample 3D head model.
  • the projection feature information may include face projection feature points.
  • the second extraction unit may include a first sub-extraction unit and a second sub-extraction unit.
  • the first sub-extraction unit may be configured to project each vertex of the standard 3D face statistical model into two-dimensional space to obtain the vertex projection points corresponding to the standard 3D face statistical model.
  • the second sub-extraction unit may be configured to determine the face projection feature points according to each vertex projection point.
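As a sketch of such a plane projection, the snippet below applies a pinhole camera model to model vertices. The function name, camera parameters, and toy vertices are illustrative assumptions; the patent does not specify a particular camera model.

```python
import numpy as np

def project_vertices(vertices, R, t, f, c):
    """Pinhole plane projection: rotate/translate model vertices into camera
    coordinates, then divide by depth and map to pixel coordinates.
    R: (3, 3) rotation; t: (3,) translation; f: focal length; c: (2,) center."""
    cam = vertices @ R.T + t              # camera-space coordinates
    return f * cam[:, :2] / cam[:, 2:3] + c

# Three toy model vertices, identity rotation, camera 5 units in front.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
proj_pts = project_vertices(verts, np.eye(3), np.array([0.0, 0.0, 5.0]),
                            f=100.0, c=np.array([50.0, 50.0]))
```

Each vertex projection point produced this way stays associated with its source vertex, which is what allows the later steps to reason about per-vertex correspondences.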
  • the projection feature information may also include a projection contour of a human head.
  • the second extraction unit may further include a third sub-extraction unit, a fourth sub-extraction unit, and a fifth sub-extraction unit.
  • the third sub-extraction unit may be configured to perform dilation processing on each vertex projection point to obtain the first head region image.
  • the fourth sub-extraction unit may be configured to perform erosion processing on the first head region image to obtain the second head region image.
  • the fifth sub-extraction unit may be configured to perform edge extraction on the second head region image to obtain the projection contour of the human head.
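The dilate-then-erode-then-edge pipeline of the third to fifth sub-extraction units can be sketched with plain NumPy morphology. This is a simplified stand-in (a real implementation would typically use an image-processing library), and all sizes and names here are toy assumptions.

```python
import numpy as np

def binary_dilate(mask, it=1):
    """8-neighbour dilation: a pixel turns on if any neighbour is on."""
    for _ in range(it):
        p = np.pad(mask, 1)
        mask = (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2]
                | p[1:-1, 2:] | p[:-2, :-2] | p[:-2, 2:] | p[2:, :-2] | p[2:, 2:])
    return mask

def binary_erode(mask, it=1):
    """Erosion expressed as the complement of dilating the complement."""
    for _ in range(it):
        mask = ~binary_dilate(~mask)
    return mask

def head_projection_contour(points, shape, it=2):
    """Rasterise the vertex projection points, dilate to close the gaps between
    them (first head region image), erode back (second head region image), then
    take the region minus its erosion as the head projection contour."""
    img = np.zeros(shape, dtype=bool)
    img[points[:, 1], points[:, 0]] = True
    region = binary_erode(binary_dilate(img, it), it)
    return region & ~binary_erode(region)

# Sparse "vertex projections": every other pixel of a 10x10 block.
ys, xs = np.mgrid[5:15, 5:15]
pts = np.stack([xs.ravel(), ys.ravel()], axis=1)[::2]
edge = head_projection_contour(pts, (20, 20))
```

Dilation closes the gaps between sparse projection points into a solid region, erosion restores the region's original extent, and the difference between the region and its erosion is a one-pixel contour ring.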
  • the two-dimensional feature information may further include shoulder feature points, and the projection feature information may also include shoulder projection feature points.
  • the second extraction unit may further include a sixth sub-extraction unit, and the sixth sub-extraction unit may be configured to determine shoulder projection feature points according to each vertex projection point.
  • the projection feature information may include face projection feature points and head projection contour lines.
  • the 3D head reconstruction device 500 may also include a random sampling unit, which may be configured to, before the standard 3D face statistical model is iteratively fitted through the third regression loss function based on the two-dimensional feature information and the projection feature information, randomly sample the head contour line and the head projection contour line, respectively, to obtain head contour feature points and head projection contour feature points.
  • the third training unit may be further configured to iteratively fit the standard 3D face statistical model through the third regression loss function based on the face feature points, the head contour feature points, the face projection feature points and the head projection contour feature points, to obtain the sample 3D head model.
  • the 3D head reconstruction device 500 may also include a posture detection unit, which may be configured to detect the head posture in the sample portrait image before plane projection is performed on the standard 3D face statistical model to obtain the projection feature information corresponding to the two-dimensional feature information.
  • the second extraction unit may be further configured to project the standard 3D face statistical model, posed in the detected head posture, onto the imaging plane of the acquisition device according to the projection parameters of the device that acquired the sample portrait image, to obtain the projection feature information.
  • the three-dimensional head reconstruction device 500 shown in FIG. 5 can execute each step in the method embodiments shown in FIG. 1 to FIG. 2, and realize each process and effect in those method embodiments, which will not be repeated here.
  • An embodiment of the present disclosure also provides a three-dimensional head reconstruction device, the three-dimensional head reconstruction device may include a processor and a memory, and the memory may be used to store executable instructions.
  • the processor can be used to read executable instructions from the memory, and execute the executable instructions to implement the three-dimensional head reconstruction method in the above-mentioned embodiments.
  • FIG. 6 shows a schematic structural diagram of a three-dimensional head reconstruction device 600 suitable for implementing an embodiment of the present disclosure.
  • the three-dimensional head reconstruction device 600 may be an electronic device or a server, which is not limited here.
  • electronic devices may include devices with communication functions such as mobile phones, tablet computers, desktop computers, notebook computers, vehicle terminals, wearable electronic devices, all-in-one computers, and smart home devices, and may also be devices simulated by virtual machines or simulators.
  • the server may include cloud servers or server clusters and other devices with storage and computing functions.
  • the three-dimensional head reconstruction device 600 shown in FIG. 6 is only an example, and should not limit the functions and scope of use of this embodiment of the present disclosure.
  • the three-dimensional head reconstruction device 600 may include a processing device (such as a central processing unit or a graphics processing unit) 601, which can execute various appropriate actions and processes according to a program stored in the read-only memory (ROM) 602 or a program loaded from the storage device 608 into the random access memory (RAM) 603.
  • the RAM 603 also stores various programs and data required for the operation of the three-dimensional head reconstruction device 600.
  • the processing device 601, ROM 602, and RAM 603 are connected to each other through a bus 604.
  • An input/output (I/O) interface 605 is also connected to the bus 604 .
  • the following devices can be connected to the I/O interface 605: an input device 606 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 607 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 608 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 609.
  • the communication means 609 may allow the three-dimensional head reconstruction device 600 to perform wireless or wired communication with other devices to exchange data.
  • FIG. 6 shows a three-dimensional head reconstruction apparatus 600 with various means, it should be understood that it is not a requirement to implement or possess all of the means shown. More or fewer means may alternatively be implemented or provided.
  • An embodiment of the present disclosure also provides a computer-readable storage medium, the storage medium stores a computer program, and when the computer program is executed by a processor, the processor implements the three-dimensional head reconstruction method in the above-mentioned embodiments.
  • Embodiments of the present disclosure also provide a computer program product including program instructions, and when the program instructions are run on the electronic device, the electronic device is made to execute the three-dimensional head reconstruction method in the above embodiments.
  • embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer readable medium, where the computer program includes program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from a network via communication means 609, or from storage means 608, or from ROM 602.
  • when the computer program is executed by the processing device 601, the above-mentioned functions defined in the three-dimensional head reconstruction method of the embodiments of the present disclosure are executed.
  • the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the above two.
  • a computer readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to, an electrical connection with one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted by any appropriate medium, including but not limited to wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
  • clients and servers can communicate using any currently known or future developed network protocol, such as HTTP, and can be interconnected with any form or medium of digital data communication (eg, a communication network).
  • examples of communication networks include local area networks ("LANs"), wide area networks ("WANs"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any network currently known or developed in the future.
  • the above-mentioned computer-readable medium may be included in the above-mentioned three-dimensional head reconstruction device; or it may exist independently without being assembled into the three-dimensional head reconstruction device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the 3D head reconstruction device, the 3D head reconstruction device is caused to:
  • acquire a target portrait image; input the target portrait image into a target model to obtain an output result of the target model, wherein the target model is pre-trained with multiple training samples, the training samples are generated according to a sample portrait image and a sample 3D head model, the sample 3D head model is obtained by iteratively fitting a standard 3D face statistical model according to two-dimensional feature information related to the portrait in the sample portrait image, and the two-dimensional feature information includes face feature points and a head projection contour line; and generate, according to the output result, the target 3D head model corresponding to the target portrait image.
  • computer program code for performing the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in the flowchart or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. Wherein, the name of a unit does not constitute a limitation of the unit itself under certain circumstances.
  • the functions described above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer discs, hard drives, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.


Abstract

A three-dimensional head reconstruction method, apparatus, device and medium. The three-dimensional head reconstruction method includes: acquiring a target portrait image; inputting the target portrait image into a target model to obtain an output result of the target model, the target model being pre-trained with a plurality of training samples, the training samples being generated according to a sample portrait image and a sample three-dimensional head model, the sample three-dimensional head model being obtained by iteratively fitting a standard three-dimensional face statistical model according to two-dimensional feature information related to the portrait in the sample portrait image, the two-dimensional feature information including face feature points and a head projection contour line; and generating, according to the output result, a target three-dimensional head model corresponding to the target portrait image. According to embodiments of the method, three-dimensional reconstruction of the head of a portrait can be realized.

Description

Three-dimensional head reconstruction method, apparatus, device and medium
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 202111022097.2, filed on September 1, 2021 and entitled "Three-dimensional head reconstruction method, apparatus, device and medium", the entire contents of which are incorporated herein by reference.
Technical field
The present disclosure relates to the technical field of image processing, and in particular to a three-dimensional head reconstruction method, apparatus, device and medium.
Background
With the popularity of video applications and portrait beautification applications, various portrait special-effect functions have also been widely used. Three-dimensional reconstruction, as an effective portrait representation technique, is widely applied in portrait special effects. For example, some portrait special effects can only be added after the portrait is deformed, and deforming the portrait often has to be performed on a three-dimensional model reconstructed for the portrait.
However, current three-dimensional reconstruction methods can only three-dimensionally reconstruct the face of a portrait, not the head; they therefore cannot be used to add special effects to the head of a portrait.
Summary
In order to solve the above technical problems, or at least partially solve them, the present disclosure provides a three-dimensional head reconstruction method, apparatus, device and medium.
In a first aspect, the present disclosure provides a three-dimensional head reconstruction method, including:
acquiring a target portrait image;
inputting the target portrait image into a target model to obtain an output result of the target model, wherein the target model is pre-trained with a plurality of training samples, the training samples are generated according to a sample portrait image and a sample three-dimensional head model, the sample three-dimensional head model is obtained by iteratively fitting a standard three-dimensional face statistical model according to two-dimensional feature information related to the portrait in the sample portrait image, and the two-dimensional feature information includes face feature points and a head projection contour line; and
generating, according to the output result, a target three-dimensional head model corresponding to the target portrait image.
In a second aspect, the present disclosure provides a three-dimensional head reconstruction apparatus, including:
a first acquisition unit configured to acquire a target portrait image;
a first processing unit configured to input the target portrait image into a target model to obtain an output result of the target model, wherein the target model is pre-trained with a plurality of training samples, the training samples are generated according to a sample portrait image and a sample three-dimensional head model, the sample three-dimensional head model is obtained by iteratively fitting a standard three-dimensional face statistical model according to two-dimensional feature information related to the portrait in the sample portrait image, and the two-dimensional feature information includes face feature points and a head projection contour line; and
a first generation unit configured to generate, according to the output result, a target three-dimensional head model corresponding to the target portrait image.
In a third aspect, the present disclosure provides a three-dimensional head reconstruction device, including:
a processor; and
a memory for storing executable instructions;
wherein the processor is configured to read the executable instructions from the memory and execute them to implement the three-dimensional head reconstruction method described in the first aspect.
In a fourth aspect, the present disclosure provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the three-dimensional head reconstruction method described in the first aspect.
Compared with the prior art, the technical solutions provided by the embodiments of the present disclosure have the following advantages:
The three-dimensional head reconstruction method, apparatus, device and medium of the embodiments of the present disclosure can acquire a target portrait image and input it into a pre-trained target model to obtain the output result of the target model, so as to generate, according to the output result, the target three-dimensional head model corresponding to the target portrait image. Because the three-dimensional head models used to generate the training samples of the target model are obtained by iteratively fitting the standard three-dimensional face statistical model according to the two-dimensional feature information related to the portrait in the sample portrait image, and the two-dimensional feature information used for the iterative fitting includes face feature points and head contour lines, those three-dimensional head models can express the face features and head contour features of the portrait in the sample portrait image, so that the target model trained on such samples can detect information related to a portrait's face features and head contour features. The output result of the target model for the target portrait image therefore expresses information related to the face features and head contour features of the portrait in the target portrait image, so that a target three-dimensional head model that can express both face features and head contour features can be generated according to the output result; this target three-dimensional head model can be used to add special effects to the head of a portrait.
Brief description of the drawings
The above and other features, advantages and aspects of the embodiments of the present disclosure will become more apparent in conjunction with the accompanying drawings and with reference to the following detailed description. Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.
Fig. 1 is a schematic flowchart of a three-dimensional head reconstruction method provided by an embodiment of the present disclosure;
Fig. 2 is a schematic flowchart of another three-dimensional head reconstruction method provided by an embodiment of the present disclosure;
Fig. 3 is a flowchart of extracting a portrait projection contour line provided by an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of a three-dimensional head model fitting process provided by an embodiment of the present disclosure;
Fig. 5 is a schematic structural diagram of a three-dimensional head reconstruction apparatus provided by an embodiment of the present disclosure;
Fig. 6 is a schematic structural diagram of a three-dimensional head reconstruction device provided by an embodiment of the present disclosure.
Detailed description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although some embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure can be implemented in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration only and are not intended to limit the scope of protection of the present disclosure.
It should be understood that the steps described in the method embodiments of the present disclosure may be executed in different orders and/or in parallel. In addition, the method embodiments may include additional steps and/or omit execution of the illustrated steps. The scope of the present disclosure is not limited in this respect.
As used herein, the term "include" and its variants are open-ended, i.e. "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules or units, and are not used to limit the order of the functions performed by these apparatuses, modules or units or their interdependence.
It should be noted that the modifiers "a/an" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
The names of messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of these messages or information.
Embodiments of the present disclosure provide a three-dimensional head reconstruction method, apparatus, device and medium capable of three-dimensionally reconstructing the head in a portrait image.
The three-dimensional head reconstruction method provided by the embodiments of the present disclosure is first described below with reference to Fig. 1 to Fig. 4.
In the embodiments of the present disclosure, the three-dimensional head reconstruction method may be executed by a three-dimensional head reconstruction device, which may be an electronic device or a server, without limitation here. The electronic device may include a mobile phone, tablet computer, desktop computer, notebook computer, vehicle-mounted terminal, wearable electronic device, all-in-one computer, smart home device or other device with a communication function, and may also be a device simulated by a virtual machine or a simulator. The server may include a cloud server, a server cluster, or another device with storage and computing functions.
图1示出了本公开实施例提供的一种三维人头重建方法的流程示意图。
如图1所示,该三维人头重建方法可以包括如下步骤。
S110、获取目标人像图像。
在本公开实施例中,三维人头重建设备可以获取需要对人头进行三维重建的目标人像图像,该目标人像图像可以为二维图像。
在一些实施例中,三维人头重建设备可以实时拍摄图像,并将该图像作为目标人像图像。
在另一些实施例中,三维人头重建设备也可以获取用户在本地所选择的图像,并将该图像作为目标人像图像。
在又一些实施例中,三维人头重建设备可以接收其他设备所发送的图像,并将该图像作为目标人像图像。
在再一些实施例中,三维人头重建设备可以从本地图像库中提取任意图像,并将该图像作为目标人像图像。
S120、将目标人像图像输入目标模型,得到目标模型的输出结果;其中,目标模型由多个训练样本预先训练得到,训练样本根据样本人像图像和样本三维人头模型生成,样本三维人头模型根据样本人像图像中与人像相关的二维特征信息对标准三维人脸统计模型进行迭代拟合得到,二维特征信息包括人脸特征点和人头投影轮廓线。
在本公开实施例中,三维人头重建设备可以在获取到目标人像图像之后,将目标人像图像输入预先训练得到的目标模型,以对目标模型中与人头三维重建相关的信息进行提取,得到目标模型的输出结果。
其中,人脸特征点可以用于表征样本人像图像中人像的人脸特征,人头轮廓线可以用于表征样本人像图像中人像的人头轮廓特征。
可选地,标准三维人脸统计模型可以包括标准人头三维形变模型(3D Morphable Model,3DMM)。
在本公开实施例中,进一步地,二维特征信息还可以包括肩部特征点。其中,肩部特征点可以用于表征样本人像图像中人像的颈部姿态特征。
S130、根据输出结果,生成目标人像图像对应的目标三维人头模型。
在本公开实施例中,三维人头重建设备可以在获取到输出结果之后,按照预设的人头模型生成方式,根据输出结果,生成目标人像图像对应的目标三维人头模型。
在本公开实施例中,能够获取目标人像图像,并将目标人像图像输入预先训练得到的目标模型,得到目标模型的输出结果,以根据输出结果,生成目标人像图像对应的目标三维人头模型,由于用于生成该目标模型的训练样本的三维人头模型是根据样本人像图像中与人像相关的二维特征信息对标准三维人脸统计模型进行迭代拟合所得到的,并且用于进行迭代拟合的二维特征信息包括人脸特征点和人头轮廓线,使得用于生成该目标模型的训练样本的三维人头模型可以表达样本人像图像中人像的人脸特征和人头轮廓特征,进而使得基于该训练样本训练得到的目标模型能够用于检测与人像的人脸特征和人头轮廓特征相关的信息,因此,目标模型针对目标人像图像的输出结果可以用于表达与目标人像图像中人像的人脸特征和人头轮廓特征相关的信息,从而使得根据输出结果可以生成既可以表述人脸特征又可以表述人头轮廓特征的目标三维人头模型,该目标三维人头模型可以用于为人像的人头添加特效。
在本公开另一种实施方式中,为了提高三维人头的重建效率,训练样本可以包括样本人像图像和样本三维人头模型对应的样本统计模型参数。
其中,样本统计模型参数为从样本三维人头模型中提取的用于表征人像的人头特征的模型参数,该样本统计模型参数也是样本三维人头模型中数值与标准三维人脸统计模型不同的模型参数。
可选地,模型参数可以包括图像中人像的身份系数、图像中人像的表情系数、图像中人像的姿态系数、图像中人像的旋转系数、图像中人像的平移系数和采集图像的采集设备的投影参数。其中,投影参数可以为采集设备的相机外参数。
因此,在本公开实施例中,利用上述的训练样本可以训练得到用于从人像图像中提取与人头三维重建相关的模型参数的目标模型。
在本公开实施例中,当三维人头重建设备接收到目标人像图像之后,可以将目标人像图像输入基于上述样本训练得到的目标模型,得到目标模型的输出结果。
其中,输出结果可以包括目标统计模型参数。目标统计模型参数可以为与人头三维重建相关的模型参数。
由此,在本公开实施例中,可以通过目标模型实现对目标人像图像对应的目标统计模型参数的快速提取。
在本公开一些实施例中,在S110之前,该三维人头重建方法还可以包括:
获取多个训练样本;
通过第一回归损失函数,学习每个训练样本中的样本人像图像和样本统计模型参数之间的映射关系,得到目标模型。
在本公开实施例中,在获取目标人像图像之前,三维人头重建设备可以首先获取多个训练样本,并利用多个训练样本训练目标模型。
具体地,三维人头重建设备可以首先通过扫描多个预先建立好的三维人头模型,并通过主成分分析(Principal Components Analysis,PCA)方法获得标准三维人脸统计模型,然后获取多个样本人像图像,并提取每个样本人像图像中与人像相关的二维特征信息,进而根据每个二维特征信息对标准三维人脸统计模型进行迭代拟合得到每个样本人像图像对应的样本三维人头模型,然后从每个样本三维人头模型提取样本统计模型参数,并将每个样本人像图像和该样本人像图像对应的样本统计模型参数作为一个训练样本,由此,可以获取多个训练样本。接着,三维人头重建设备可以直接利用第一回归损失函数,通过梯度下降或高斯-牛顿法等优化求解算法回归每个样本人像图像对应的样本统计模型参数,得到目标模型。
可选地,第一回归损失函数可以为Smooth-L1损失函数。
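作为示意,下面给出Smooth-L1损失函数的一个最小Python实现草图(其中的函数名与参数beta均为示意性假设,并非本公开限定的实现):

```python
import numpy as np

def smooth_l1_loss(pred, target, beta=1.0):
    """Smooth-L1 损失:误差小于 beta 时取二次项,否则取线性项。"""
    diff = np.abs(pred - target)
    loss = np.where(diff < beta, 0.5 * diff ** 2 / beta, diff - 0.5 * beta)
    return loss.mean()

# 小误差落在二次区间,大误差落在线性区间
small = smooth_l1_loss(np.array([0.5]), np.array([0.0]))  # 0.5 * 0.25 = 0.125
large = smooth_l1_loss(np.array([2.0]), np.array([0.0]))  # 2.0 - 0.5 = 1.5
```

该损失对小误差平滑可导,对离群点又不像L2那样敏感,因此常被用于回归任务。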
由此,在本公开实施例中,可以通过上述的生成样本人像图像对应的样本三维人头模型的方法自动采集大量标注数据,降低训练样本的采集成本。
进一步地,三维人头重建设备可以获取多个具有不同人脸角度、不同表情的预设人像图像,每个预设人像图像均包括人像头部和人像肩部。例如三维人头重建设备可以获取10-30万个预设人像图像。然后,三维人头重建设备可以对各个预设人像图像分别进行旋转、平移等处理,以实现图像数据的扩增。最后,三维人头重建设备可以将扩增后的人像图像作为样本人像图像。
由此,在本公开实施例中,可以实现对用于从人像图像中提取统计模型参数的目标模型的快速训练。
在本公开另一些实施例中,在得到目标模型之后,该三维人头重建方法还可以包括:
通过目标损失函数,继续学习每个训练样本中的样本人像图像和样本统计模型参数之间的映射关系,得到优化后的目标模型。
在本公开实施例中,在利用第一回归损失函数训练得到目标模型之后,还可以利用目标损失函数,通过梯度下降或高斯-牛顿法等优化求解算法继续学习每个训练样本中的样本人像图像和样本统计模型参数之间的映射关系,以对目标模型进行模型优化,得到优化后的目标模型。
其中,目标损失函数可以包括第二回归损失函数和投影损失函数,第二回归损失函数中身份系数的权重值大于第一回归损失函数中身份系数的权重值。
可选地,第二回归损失函数也可以为Smooth-L1损失函数,但是,第二回归损失函数中身份系数的权重值可以被设置为大于第一回归损失函数中身份系数的权重值,使得可以对人头形状和五官形状进行着重训练,以对目标模型进行模型优化。
进一步地,第二回归损失函数中身份系数中的人头形状系数、眼睛形状系数和嘴巴形状系数的权重值可以被设置为大于第一回归损失函数中身份系数中的人头形状系数、眼睛形状系数和嘴巴形状系数的权重值,以对能够明显区分人像特征的人头形状、眼睛形状和嘴巴形状进行着重训练,以对目标模型进行模型优化。
由此,在本公开实施例中,可以对投影损失、人头形状和五官形状进行优化,提高目标模型的检测结果的可靠性。
在本公开又一些实施例中,S130可以具体包括:根据目标统计模型参数和标准三维人脸统计模型,生成目标三维人头模型。
在本公开实施例中,在提取出目标统计模型参数之后,三维人头重建设备可以将标准三维人脸统计模型中的标准统计模型参数替换为目标统计模型参数,生成目标人像图像对应的目标三维人头模型,以高效地生成目标三维人头模型。
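作为示意,标准三维人脸统计模型(如3DMM)通常将三维顶点表示为平均形状与各基底的线性组合,替换统计模型参数即可生成新的三维人头。下面是一个基于该假设的最小草图(其中的变量名与维度均为示意,并非本公开限定的实现):

```python
import numpy as np

def reconstruct_head(mean_shape, id_basis, expr_basis, id_coef, expr_coef):
    """按统计模型参数线性组合生成三维顶点:S = S_mean + B_id·α + B_expr·β。"""
    return mean_shape + id_basis @ id_coef + expr_basis @ expr_coef

# 示意数据:4 个顶点展平为 12 维;3 个身份基、2 个表情基
rng = np.random.default_rng(0)
mean_shape = rng.normal(size=12)
id_basis = rng.normal(size=(12, 3))
expr_basis = rng.normal(size=(12, 2))
# 系数全零时应恢复平均形状
verts = reconstruct_head(mean_shape, id_basis, expr_basis, np.zeros(3), np.zeros(2))
```

将目标模型输出的目标统计模型参数代入这一线性组合,即对应于"将标准统计模型参数替换为目标统计模型参数"的生成过程。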
在本公开又一种实施方式中,为了提高对人像图像中的人头进行三维重建的效率,还提供了另一种三维人头重建方法,下面结合图2进行说明。
图2示出了本公开实施例提供的另一种三维人头重建方法的流程示意图。
如图2所示,在图1所示的S110之前,该三维人头重建方法还可以包括如下步骤。
S210、获取样本人像图像。
在本公开实施例中,三维人头重建设备可以获取需要对人头进行三维重建的样本人像图像,该样本人像图像可以为二维图像。
其中,样本人像图像与图1所示实施例中的目标人像图像相似,在此不做赘述。
S220、从样本人像图像中提取二维特征信息。
在本公开实施例中,在获取到样本人像图像之后,三维人头重建设备可以对样本人像图像进行人像特征分析,以提取样本人像图像中与人像相关的二维特征信息。
其中,二维特征信息可以包括人脸特征点和人头轮廓线。
在一些实施例中,S220可以具体包括:
基于预先训练得到的人脸检测模型,提取样本人像图像中的人脸特征点;
基于预先训练得到的人头预测模型,提取样本人像图像中的人头轮廓线。
具体地,三维人头重建设备可以将样本人像图像输入预先训练得到的人脸检测模型,以对样本人像图像进行人脸特征点检测,得到人脸检测模型输出的样本人像图像中的人脸特征点。三维人头重建设备可以将样本人像图像输入预先训练得到的人头预测模型,以对样本人像图像进行人头轮廓检测,得到人头预测模型输出的样本人像图像中的人头轮廓线。
在另一些实施例中,S220还可以具体包括:
基于预先训练得到的人脸检测模型,提取样本人像图像中的人脸特征点;
基于预先训练得到的人像分割模型,提取样本人像图像中的人像轮廓线;
将人像轮廓线中的人头部分作为人头轮廓线。
具体地,三维人头重建设备可以将样本人像图像输入预先训练得到的人脸检测模型,以对样本人像图像进行人脸特征点检测,得到人脸检测模型输出的样本人像图像中的人脸特征点。三维人头重建设备可以将样本人像图像输入预先训练得到的人像分割模型,以对样本人像图像进行人像轮廓检测,得到人像分割模型输出的样本人像图像中的人像轮廓线,进而确定人像轮廓线中的人头部分,并将人像轮廓线中的人头部分作为人头轮廓线。
S230、基于二维特征信息对标准三维人脸统计模型进行迭代拟合,得到样本三维人头模型。
在本公开实施例中,在提取出样本人像图像中的二维特征信息之后,三维人头重建设备可以将二维特征信息作为监督信息,以基于二维特征信息对标准三维人脸统计模型进行预设次数的迭代拟合,实现对标准三维人脸统计模型的模型训练,进而得到样本人像图像对应的样本三维人头模型,使得该样本三维人头模型可以具有样本人像图像中人像的人脸特征和人头轮廓特征。
其中,预设次数可以为根据需要预先设置的任意次数,在此不作限制。
可选地,标准三维人脸统计模型可以包括3DMM。标准三维人脸统计模型可以通过扫描多个三维人头模型,并通过PCA方法获得。
可选地,三维人头重建设备可以基于二维特征信息对标准三维人脸统计模型进行迭代拟合,以实现对标准三维人脸统计模型中的统计模型参数的调整,进而得到样本统计模型参数,完成对样本三维人头模型的模型训练。
可选地,模型参数可以包括图像中人像的身份系数、图像中人像的表情系数、图像中人像的姿态系数、图像中人像的旋转系数、图像中人像的平移系数和采集图像的采集设备的投影参数。其中,投影参数可以为采集设备的相机外参数。
由此,在本公开实施例中,可以将二维特征信息作为监督信息预测人头的3DMM的模型,从而达到实时重建三维人头的目的。由于用于进行迭代拟合的二维特征信息包括人脸特征点和人头轮廓线,因此,可以利用样本人像图像中人像的人脸特征和人头轮廓特征实现对样本人像图像中人像的人头的三维重建,以重建得到既可以表述人脸特征又可以表述人头轮廓特征的三维人头模型,进而可以利用重建得到的三维人头模型为人像的人头添加特效。
在本公开一些实施例中,为了提高迭代拟合的可靠性,在S230之前,该三维人头重建方法还可以包括:对标准三维人脸统计模型进行平面投影,得到二维特征信息对应的投影特征信息。
相应地,S230可以具体包括:基于二维特征信息和投影特征信息,通过第三回归损失函数对标准三维人脸统计模型进行迭代拟合,得到样本三维人头模型。
具体地,三维人头重建设备在获取样本人像图像之后,可以对标准三维人脸统计模型进行平面投影,得到二维特征信息对应的投影特征信息,进而可以将二维特征信息作为监督信息,将投影特征信息作为待优化信息,利用第三回归损失函数,通过梯度下降或高斯-牛顿法等优化求解算法对标准三维人脸统计模型进行迭代拟合,以对标准三维人脸统计模型中的统计模型参数进行调整,进而得到样本统计模型参数,完成对样本三维人头模型的模型训练。
在本公开实施例中,需要说明的是,在进行迭代拟合的过程中,在每一次拟合后,均需要重新对拟合得到的标准三维人脸统计模型进行平面投影,得到拟合得到的标准三维人脸统计模型对应的投影特征信息,进而利用二维特征信息和重新投影后的投影特征信息继续进行下一次拟合,直至迭代拟合结束。
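上述“每次拟合后重新投影、再继续拟合”的迭代过程,可以用如下简化的Python草图示意(这里以二维点的尺度与平移参数的梯度下降拟合为例,步数、学习率等均为示意性假设,并非本公开实施例的具体优化目标):

```python
import numpy as np

def fit_iteratively(model_pts, target_pts, steps=200, lr=0.05):
    """迭代拟合示意:每一轮先用当前参数重新“投影”,再按与监督信息的差异更新参数。"""
    scale, tx, ty = 1.0, 0.0, 0.0
    for _ in range(steps):
        proj = scale * model_pts + np.array([tx, ty])   # 重新投影
        diff = proj - target_pts                        # 与二维监督信息比较
        grad_s = 2.0 * np.sum(diff * model_pts) / len(model_pts)
        grad_t = 2.0 * diff.mean(axis=0)
        scale -= lr * grad_s                            # 更新待优化参数
        tx -= lr * grad_t[0]
        ty -= lr * grad_t[1]
    return scale, tx, ty

# 示意数据:目标点由“真实”参数 scale=2、t=(0.5, -0.3) 生成
model_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
target_pts = 2.0 * model_pts + np.array([0.5, -0.3])
scale, tx, ty = fit_iteratively(model_pts, target_pts)
```

实际拟合中待优化的是统计模型参数(身份、表情、姿态等系数),且投影为三维到二维的相机投影,但“投影—比较—更新”的循环结构与此一致。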
由此,在本公开实施例中,可以利用二维特征信息和投影特征信息对标准三维人脸统计模型进行可靠地训练,以得到样本人像图像对应的样本三维人头模型。
可选地,在对标准三维人脸统计模型进行平面投影,得到二维特征信息对应的投影特征信息之前,该三维人头重建方法还可以包括:
检测样本人像图像中的头部姿态。
相应地,对标准三维人脸统计模型进行平面投影,得到二维特征信息对应的投影特征信息可以具体包括:
在标准三维人脸统计模型处于该头部姿态下,按照样本人像图像的采集设备的投影参数,将标准三维人脸统计模型投影到采集设备的成像平面上,得到投影特征信息。
具体地,三维人头重建设备在获取样本人像图像之后,可以利用预先训练得到的姿态检测模型,对样本人像图像中人像的头部姿态进行检测,得到样本人像图像中的头部姿态,然后,对标准三维人脸统计模型的姿态进行调整,使标准三维人脸统计模型处于该头部姿态下,接着,从样本人像图像的图像信息中获取样本人像图像的采集设备的投影参数,并在标准三维人脸统计模型处于该头部姿态下,按照获取的样本人像图像的采集设备的投影参数,将标准三维人脸统计模型投影到采集设备的成像平面上,得到二维特征信息对应的投影特征信息,以提高对标准三维人脸统计模型进行平面投影的可靠性。
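作为示意,按照投影参数将处于给定姿态下的模型顶点投影到成像平面的过程,可以用如下针孔投影的最小草图表示(其中的焦距focal、主点center等参数名均为示意性假设,实际投影参数以采集设备为准):

```python
import numpy as np

def project_vertices(vertices, R, t, focal, center):
    """将处于姿态 (R, t) 下的三维顶点按针孔模型投影到成像平面。"""
    cam = vertices @ R.T + t            # 世界坐标系 -> 相机坐标系
    x = focal * cam[:, 0] / cam[:, 2] + center[0]
    y = focal * cam[:, 1] / cam[:, 2] + center[1]
    return np.stack([x, y], axis=1)

# 位于光轴上、深度为 2 的顶点应投影到主点处
pts = project_vertices(np.array([[0.0, 0.0, 2.0]]),
                       np.eye(3), np.zeros(3), 500.0, (320.0, 240.0))
```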
在一些实施例中,投影特征信息可以包括人脸投影特征点。
相应地,对标准三维人脸统计模型进行平面投影,得到二维特征信息对应的投影特征信息可以具体包括:
将标准三维人脸统计模型的各个顶点投影到二维空间,得到标准三维人脸统计模型对应的各个顶点投影点;
根据各个顶点投影点,确定人脸投影特征点。
具体地,三维人头重建设备可以首先在标准三维人脸统计模型处于该头部姿态下,按照获取的样本人像图像的采集设备的投影参数,将标准三维人脸统计模型的各个顶点投影到二维空间如采集设备的成像平面上,得到标准三维人脸统计模型对应的各个顶点投影点,然后,按照预先在标准三维人脸统计模型中标定的各个三维人脸特征点所属的网格的各个顶点与各个顶点投影点之间的对应关系,从各个顶点投影点中,提取各个三维人脸特征点所属的网格对应的顶点投影点,并基于每个网格对应的顶点投影点计算每个网格对应的重心投影点,使每个网格对应的重心投影点与该网格所属的三维人脸特征点相对应,进而将计算得到的各个重心投影点作为各个三维人脸特征点对应的人脸投影特征点。
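作为示意,“基于网格对应的顶点投影点计算重心投影点”的过程可以用如下草图表示(这里以三角形网格为例,取三个顶点投影点的重心,变量名均为示意):

```python
import numpy as np

def landmark_from_triangle(vertex_proj_points, tri_indices):
    """取特征点所属三角形网格三个顶点投影点的重心,作为该特征点的投影位置。"""
    return vertex_proj_points[list(tri_indices)].mean(axis=0)

# 示意:某三维人脸特征点所属网格的三个顶点投影点
proj = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
lm = landmark_from_triangle(proj, (0, 1, 2))
```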
在另一些实施例中,投影特征信息还可以包括人头投影轮廓线。
相应地,对标准三维人脸统计模型进行平面投影,得到二维特征信息对应的投影特征信息还可以具体包括:根据各个顶点投影点,生成标准三维人脸统计模型对应的人头投影轮廓线。
具体地,三维人头重建设备可以按照预先设置的轮廓线生成方式,对各个顶点投影点进行图像处理,生成标准三维人脸统计模型对应的人头投影轮廓线。
可选地,根据各个顶点投影点,生成标准三维人脸统计模型对应的人头投影轮廓线可以具体包括:
对各个顶点投影点进行膨胀处理,得到第一头部区域图像;
对第一头部区域图像进行腐蚀处理,得到第二头部区域图像;
对第二头部区域图像进行边缘提取,得到人头投影轮廓线。
图3示出了本公开实施例提供的一种人像投影轮廓线的提取流程图。
如图3所示,三维人头重建设备可以首先将标准三维人脸统计模型的各个顶点投影到二维空间,得到具有各个顶点投影点的投影图像301,然后对投影图像301中的各个顶点投影点进行膨胀处理,以将各个顶点投影点之间的空隙进行填充,得到第一头部区域图像302,接着对第一头部区域图像302进行腐蚀处理,消除因膨胀而产生的噪声,得到第二头部区域图像303,最后采用Canny边缘检测算法对第二头部区域图像303进行边缘提取,得到具有人头投影轮廓线的轮廓线投影图像304。
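作为示意,上述“膨胀填充、腐蚀消噪、边缘提取”的流程可以用如下NumPy草图表示(这里用四邻域十字结构元实现形态学运算,并以“区域减去其内缩”近似边缘提取;实际实现中也可使用Canny等边缘检测算法):

```python
import numpy as np

def shift_or(mask):
    """四邻域膨胀一次(十字结构元):任一邻居为真则置真。"""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def erode(mask):
    """四邻域腐蚀一次:膨胀的对偶运算。"""
    return ~shift_or(~mask)

def contour_from_points(point_mask, iterations=3):
    """膨胀填充顶点投影点间空隙 -> 腐蚀消除膨胀噪声 -> 提取边缘。"""
    m = point_mask
    for _ in range(iterations):
        m = shift_or(m)
    for _ in range(iterations):
        m = erode(m)
    return m & ~erode(m)  # 区域减去其内缩一圈,即为轮廓

# 示意:稀疏的顶点投影点
mask = np.zeros((20, 20), dtype=bool)
mask[5:15:2, 5:15:2] = True
edge = contour_from_points(mask)
```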
在本公开实施例中,可选地,在同时训练多个样本人像图像对应的样本三维人头模型的过程中,可以利用三维人头重建设备的图形处理器(Graphics Processing Unit,GPU)实现根据各个顶点投影点生成人头投影轮廓线的过程,以批量生成不同的人像图像对应的人头投影轮廓线,提高生成不同的人像图像对应的人头投影轮廓线的速度。
由此,在本公开实施例中,可以将二维特征信息中的人脸特征点和人头轮廓线分别作为监督信息,将人脸投影特征点和人头投影轮廓线分别作为待优化信息,根据人脸特征点与人脸投影特征点之间的对应关系以及人头轮廓线与人头投影轮廓线之间的对应关系,利用第三回归损失函数对标准三维人脸统计模型进行迭代拟合,得到样本人像图像对应的样本三维人头模型。
进一步地,为了提高迭代拟合的效率,在基于二维特征信息对标准三维人脸统计模型进行迭代拟合,得到样本人像图像对应的样本三维人头模型之前,还可以从人头轮廓线中提取人头轮廓特征点,并基于人脸特征点和人头轮廓特征点对标准三维人脸统计模型进行迭代拟合,得到样本人像图像对应的样本三维人头模型。
在本公开另一些实施例中,投影特征信息可以包括人脸投影特征点和人头投影轮廓线。
在这些实施例中,在基于二维特征信息和投影特征信息,通过第三回归损失函数对标准三维人脸统计模型进行迭代拟合,得到样本三维人头模型之前,该三维人头重建方法还可以包括:
对人头轮廓线和人头投影轮廓线分别进行随机采样,得到人头轮廓特征点和人头投影轮廓特征点。
相应地,基于二维特征信息和投影特征信息,通过第三回归损失函数对标准三维人脸统计模型进行迭代拟合,得到样本三维人头模型可以具体包括:
基于人脸特征点、人头轮廓特征点、人脸投影特征点和人头投影轮廓特征点,通过第三回归损失函数对标准三维人脸统计模型进行迭代拟合,得到样本人像图像对应的样本三维人头模型。
具体地,三维人头重建设备可以分别对人头轮廓线和人头投影轮廓线进行随机采样,得到人头轮廓线中的预设数量的人头轮廓特征点和人头投影轮廓线中的预设数量的人头投影轮廓特征点,并按照距离最近原则,确定每个人头轮廓特征点对应的人头投影轮廓特征点,进而将人脸特征点和人头轮廓特征点分别作为监督信息,将人脸投影特征点和人头投影轮廓特征点分别作为待优化信息,根据人脸特征点与人脸投影特征点之间的对应关系以及人头轮廓特征点与人头投影轮廓特征点之间的对应关系,利用第三回归损失函数对标准三维人脸统计模型进行迭代拟合,得到样本人像图像对应的样本三维人头模型。
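作为示意,“分别随机采样并按距离最近原则确定对应点”的过程可以用如下草图表示(其中的采样数量与随机种子均为示意性假设):

```python
import numpy as np

def sample_and_match(contour, proj_contour, n, seed=0):
    """分别随机采样 n 个点,并按距离最近原则为每个人头轮廓特征点确定对应的投影轮廓特征点。"""
    rng = np.random.default_rng(seed)
    pts = contour[rng.choice(len(contour), n, replace=False)]
    proj = proj_contour[rng.choice(len(proj_contour), n, replace=False)]
    # 两两距离矩阵,按行取最近的投影轮廓特征点
    dist = np.linalg.norm(pts[:, None, :] - proj[None, :, :], axis=-1)
    return pts, proj[dist.argmin(axis=1)]

# 示意:投影轮廓与图像轮廓只差一个很小的偏移
contour = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
proj_contour = contour + 0.1
pts, matched = sample_and_match(contour, proj_contour, n=4)
```

只在采样得到的特征点之间计算对应关系,而不是在整条轮廓线上逐像素比较,正是减少迭代拟合计算量的来源。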
由此,在本公开实施例中,可以减少迭代拟合过程中的数据计算量,进而提高迭代拟合的效率。
在本公开又一些实施例中,二维特征信息还可以包括肩部特征点,使得所生成的样本人像图像对应的样本三维人头模型还可以用于表达颈部姿态特征。
相应地,S220还可以具体包括:
基于预先训练得到的人体检测模型,提取样本人像图像中的人体特征点;
从人体特征点中提取肩部特征点。
具体地,三维人头重建设备可以将样本人像图像输入预先训练得到的人体检测模型,以对样本人像图像进行人体特征点检测,得到人体检测模型输出的样本人像图像中人像的人体特征点,进而从人体特征点中提取预先被标记为肩部特征的肩部特征点。
可选地,肩部特征点可以为人体的肩关节点。
在这些实施例中,可选地,投影特征信息还可以包括肩部投影特征点,该肩部投影特征点可以与肩部特征点配合进行迭代拟合,以实现模型训练。
相应地,对标准三维人脸统计模型进行平面投影,得到二维特征信息对应的投影特征信息还可以具体包括:
根据各个顶点投影点,确定肩部投影特征点。
具体地,在得到标准三维人脸统计模型对应的各个顶点投影点之后,三维人头重建设备可以按照预先在标准三维人脸统计模型中标定的各个三维人体特征点所属的网格的各个顶点与各个顶点投影点之间的对应关系,从各个顶点投影点中,提取锁骨和上斜方肌部位对应的三维人体特征点所属的网格对应的顶点投影点,并基于所提取的各个网格对应的顶点投影点计算各个网格对应的重心投影点,使锁骨所属的网格对应的重心投影点与锁骨的三维人体特征点相对应、上斜方肌部位所属的网格对应的重心投影点与上斜方肌部位的三维人体特征点相对应,进而将计算得到的各个重心投影点分别作为锁骨和上斜方肌部位的三维人体特征点对应的投影特征点,然后将锁骨和上斜方肌部位对应的投影特征点输入预设的肩部特征点计算公式中,得到肩部特征点计算公式输出的肩部投影特征点。
可选地,肩部投影特征点可以为标准三维人脸统计模型中的三维肩关节点对应的顶点投影点。
可选地,由于3DMM中不包含肩关节点面信息,因此,可以通过插值计算的方式构建锁骨和上斜方肌部位的网格的重心坐标与所标定的三维肩部特征点的三维坐标之间的映射关系,进而得到肩部特征点计算公式,以通过肩部特征点计算公式实现对三维肩部特征点的平面投影。
由此,在本公开实施例中,可以将二维特征信息中的人脸特征点、人头轮廓线或人头轮廓特征点、以及肩部特征点分别作为监督信息,将人脸投影特征点、人头投影轮廓线或人头投影轮廓特征点、以及肩部投影特征点分别作为待优化信息,根据各个特征点或轮廓线之间对应关系,利用第三回归损失函数对标准三维人脸统计模型进行迭代拟合,得到样本三维人头模型,以使样本人像图像对应的样本三维人头模型可以用于对人头轮廓特征、人脸特征、人脸动作、人脸角度以及颈部角度进行表述,提高生成的样本三维人头模型的可靠性。
下面以图4为例对本公开实施例提供的一种三维人头模型拟合过程进行说明。
图4示出了本公开实施例提供的一种三维人头模型拟合过程的示意图。
如图4所示,该三维人头模型拟合过程可以包括如下步骤。
三维人头重建设备可以获取需要进行人头三维重建的人像图像,然后基于预先训练得到的人脸检测模型提取人像图像中的人脸特征点、基于预先训练得到的人像分割模型提取人像图像中的人头轮廓线以及基于预先训练得到的人体检测模型提取人像图像中的肩部特征点,接着利用人脸特征点、人头轮廓线和肩部特征点对标准三维人脸统计模型进行迭代拟合,得到人像图像对应的三维人头模型。
由此,在本公开实施例中,可以快速地生成人像图像对应的三维人头模型以及获取人像图像对应的三维人头模型中的统计模型参数。
综上,在本公开实施例中,可以可靠、高效地生成既可以表述人脸特征又可以表述人头轮廓特征的目标三维人头模型,该目标三维人头模型可以用于为人像的人头添加特效。
本公开实施例还提供了一种用于实现上述的三维人头重建方法的三维人头重建装置,下面结合图5进行说明。
在本公开实施例中,该三维人头重建装置可以设置于三维人头重建设备中,该三维人头重建设备可以为电子设备,也可以为服务器,在此不作限制。其中,电子设备可以包括移动电话、平板电脑、台式计算机、笔记本电脑、车载终端、可穿戴电子设备、一体机、智能家居设备等具有通信功能的设备,也可以是虚拟机或者模拟器模拟的设备。服务器可以包括云服务器或者服务器集群等具有存储及计算功能的设备。
图5示出了本公开实施例提供的一种三维人头重建装置的结构示意图。
如图5所示,该三维人头重建装置500可以包括第一获取单元510、第一处理单元520和第一生成单元530。
该第一获取单元510可以配置为获取目标人像图像。
该第一处理单元520可以配置为将目标人像图像输入目标模型,得到目标模型的输出结果;其中,目标模型由多个训练样本预先训练得到,训练样本根据样本人像图像和样本三维人头模型生成,样本三维人头模型根据样本人像图像中与人像相关的二维特征信息对标准三维人脸统计模型进行迭代拟合得到,二维特征信息包括人脸特征点和人头投影轮廓线。
该第一生成单元530可以配置为根据输出结果,生成目标人像图像对应的目标三维人头模型。
在本公开实施例中,能够获取目标人像图像,并将目标人像图像输入预先训练得到的目标模型,得到目标模型的输出结果,以根据输出结果,生成目标人像图像对应的目标三维人头模型,由于用于生成该目标模型的训练样本的三维人头模型是根据样本人像图像中与人像相关的二维特征信息对标准三维人脸统计模型进行迭代拟合所得到的,并且用于进行迭代拟合的二维特征信息包括人脸特征点和人头轮廓线,使得用于生成该目标模型的训练样本的三维人头模型可以表达样本人像图像中人像的人脸特征和人头轮廓特征,进而使得基于该训练样本训练得到的目标模型能够用于检测与人像的人脸特征和人头轮廓特征相关的信息,因此,目标模型针对目标人像图像的输出结果可以用于表达与目标人像图像中人像的人脸特征和人头轮廓特征相关的信息,从而使得根据输出结果可以生成既可以表述人脸特征又可以表述人头轮廓特征的目标三维人头模型,该目标三维人头模型可以用于为人像的人头添加特效。
在本公开一些实施例中,训练样本可以包括样本人像图像和样本三维人头模型对应的样本统计模型参数,输出结果包括目标统计模型参数。
在本公开一些实施例中,该三维人头重建装置500还可以包括第二获取单元、第一训练单元和第二训练单元。
该第二获取单元可以配置为在获取目标人像图像之前,获取多个训练样本。
该第一训练单元可以配置为通过第一回归损失函数,学习每个训练样本中的样本人像图像和样本统计模型参数之间的映射关系,得到目标模型。
该第二训练单元可以配置为通过目标损失函数,继续学习每个训练样本中的样本人像图像和样本统计模型参数之间的映射关系,得到优化后的目标模型;其中,目标损失函数包括第二回归损失函数和投影损失函数,第二回归损失函数中身份系数的权重值大于第一回归损失函数中身份系数的权重值。
在本公开一些实施例中,该第一生成单元530可以进一步配置为根据目标统计模型参数和标准三维人脸统计模型,生成目标三维人头模型。
在本公开一些实施例中,该三维人头重建装置500还可以包括第三获取单元、第一提取单元和第三训练单元。
该第三获取单元可以配置为在获取目标人像图像之前,获取样本人像图像。
该第一提取单元可以配置为从样本人像图像中提取二维特征信息。
该第三训练单元可以配置为基于二维特征信息对标准三维人脸统计模型进行迭代拟合,得到样本三维人头模型。
在本公开一些实施例中,该三维人头重建装置500还可以包括第二提取单元,该第二提取单元可以配置为在基于二维特征信息对标准三维人脸统计模型进行迭代拟合,得到样本三维人头模型之前,对标准三维人脸统计模型进行平面投影,得到二维特征信息对应的投影特征信息。
相应地,该第三训练单元可以进一步配置为基于二维特征信息和投影特征信息,通过第三回归损失函数对标准三维人脸统计模型进行迭代拟合,得到样本三维人头模型。
在本公开一些实施例中,投影特征信息可以包括人脸投影特征点。
相应地,该第二提取单元可以包括第一子提取单元和第二子提取单元。
该第一子提取单元可以配置为将标准三维人脸统计模型的各个顶点投影到二维空间,得到标准三维人脸统计模型对应的各个顶点投影点。
该第二子提取单元可以配置为根据各个顶点投影点,确定人脸投影特征点。
在本公开一些实施例中,投影特征信息还可以包括人头投影轮廓线。
相应地,该第二提取单元还可以包括第三子提取单元、第四子提取单元和第五子提取单元。
该第三子提取单元可以配置为对各个顶点投影点进行膨胀处理,得到第一头部区域图像。
该第四子提取单元可以配置为对第一头部区域图像进行腐蚀处理,得到第二头部区域图像。
该第五子提取单元可以配置为对第二头部区域图像进行边缘提取,得到人头投影轮廓线。
在本公开一些实施例中,二维特征信息还可以包括肩部特征点,投影特征信息还可以包括肩部投影特征点。
相应地,该第二提取单元还可以包括第六子提取单元,该第六子提取单元可以配置为根据各个顶点投影点,确定肩部投影特征点。
在本公开一些实施例中,投影特征信息可以包括人脸投影特征点和人头投影轮廓线。
相应地,该三维人头重建装置500还可以包括随机采样单元,该随机采样单元可以配置为在基于二维特征信息和投影特征信息,通过第三回归损失函数对标准三维人脸统计模型进行迭代拟合,得到样本三维人头模型之前,对人头轮廓线和人头投影轮廓线分别进行随机采样,得到人头轮廓特征点和人头投影轮廓特征点。
相应地,该第三训练单元可以进一步配置为基于人脸特征点、人头轮廓特征点、人脸投影特征点和人头投影轮廓特征点,通过第三回归损失函数对标准三维人脸统计模型进行迭代拟合,得到样本三维人头模型。
在本公开一些实施例中,该三维人头重建装置500还可以包括姿态检测单元,该姿态检测单元可以配置为在对标准三维人脸统计模型进行平面投影,得到二维特征信息对应的投影特征信息之前,检测样本人像图像中的头部姿态。
相应地,该第二提取单元可以进一步配置为在标准三维人脸统计模型处于头部姿态下,按照样本人像图像的采集设备的投影参数,将标准三维人脸统计模型投影到采集设备的成像平面上,得到投影特征信息。
需要说明的是,图5所示的三维人头重建装置500可以执行图1至图2所示的方法实施例中的各个步骤,并且实现图1至图2所示的方法实施例中的各个过程和效果,在此不做赘述。
本公开实施例还提供了一种三维人头重建设备,该三维人头重建设备可以包括处理器和存储器,存储器可以用于存储可执行指令。其中,处理器可以用于从存储器中读取可执行指令,并执行可执行指令以实现上述实施例中的三维人头重建方法。
图6示出了本公开实施例提供的一种三维人头重建设备的结构示意图。下面具体参考图6,其示出了适于用来实现本公开实施例中的三维人头重建设备600的结构示意图。
在本公开实施例中,该三维人头重建设备600可以为电子设备,也可以为服务器,在此不作限制。其中,电子设备可以包括移动电话、平板电脑、台式计算机、笔记本电脑、车载终端、可穿戴电子设备、一体机、智能家居设备等具有通信功能的设备,也可以是虚拟机或者模拟器模拟的设备。服务器可以包括云服务器或者服务器集群等具有存储及计算功能的设备。
需要说明的是,图6示出的三维人头重建设备600仅仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。
如图6所示,该三维人头重建设备600可以包括处理装置(例如中央处理器、图形处理器等)601,其可以根据存储在只读存储器(ROM)602中的程序或者从存储装置608加载到随机访问存储器(RAM)603中的程序而执行各种适当的动作和处理。在RAM 603中,还存储有三维人头重建设备600操作所需的各种程序和数据。处理装置601、ROM 602以及RAM 603通过总线604彼此相连。输入/输出(I/O)接口605也连接至总线604。
通常,以下装置可以连接至I/O接口605:包括例如触摸屏、触摸板、键盘、鼠标、摄像头、麦克风、加速度计、陀螺仪等的输入装置606;包括例如液晶显示器(LCD)、扬声器、振动器等的输出装置607;包括例如磁带、硬盘等的存储装置608;以及通信装置609。通信装置609可以允许三维人头重建设备600与其他设备进行无线或有线通信以交换数据。虽然图6示出了具有各种装置的三维人头重建设备600,但是应理解的是,并不要求实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。
本公开实施例还提供了一种计算机可读存储介质,该存储介质存储有计算机程序,当计算机程序被处理器执行时,使得处理器实现上述实施例中的三维人头重建方法。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。
本公开实施例还提供了一种包括程序指令的计算机程序产品,当程序指令在电子设备上运行时,使得电子设备执行上述实施例中的三维人头重建方法。
例如,本公开的实施例包括一种计算机程序产品,其包括承载在非暂态计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信装置609从网络上被下载和安装,或者从存储装置608被安装,或者从ROM 602被安装。在该计算机程序被处理装置601执行时,执行本公开实施例的三维人头重建方法中限定的上述功能。
需要说明的是,本公开上述的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本公开中,计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读信号介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:电线、光缆、RF(射频)等等,或者上述的任意合适的组合。
在一些实施方式中,客户端、服务器可以利用诸如HTTP之类的任何当前已知或未来研发的网络协议进行通信,并且可以与任意形式或介质的数字数据通信(例如,通信网络)互连。通信网络的示例包括局域网(“LAN”),广域网(“WAN”),网际网(例如,互联网)以及端对端网络(例如,ad hoc端对端网络),以及任何当前已知或未来研发的网络。
上述计算机可读介质可以是上述三维人头重建设备中所包含的;也可以是单独存在,而未装配入该三维人头重建设备中。
上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该三维人头重建设备执行时,使得该三维人头重建设备执行:
获取目标人像图像;将目标人像图像输入目标模型,得到目标模型的输出结果;其中,目标模型由多个训练样本预先训练得到,训练样本根据样本人像图像和样本三维人头模型生成,样本三维人头模型根据样本人像图像中与人像相关的二维特征信息对标准三维人脸统计模型进行迭代拟合得到,二维特征信息包括人脸特征点和人头投影轮廓线;根据输出结果,生成目标人像图像对应的目标三维人头模型。
在本公开实施例中,可以以一种或多种程序设计语言或其组合来编写用于执行本公开的操作的计算机程序代码,上述程序设计语言包括但不限于面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
附图中的流程图和框图,图示了按照本公开各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本公开实施例中所涉及到的单元可以通过软件的方式实现,也可以通过硬件的方式来实现。其中,单元的名称在某种情况下并不构成对该单元本身的限定。
本文中以上描述的功能可以至少部分地由一个或多个硬件逻辑部件来执行。例如,非限制性地,可以使用的示范类型的硬件逻辑部件包括:现场可编程门阵列(FPGA)、专用集成电路(ASIC)、专用标准产品(ASSP)、片上系统(SOC)、复杂可编程逻辑设备(CPLD)等等。
在本公开的上下文中,机器可读介质可以是有形的介质,其可以包含或存储以供指令执行系统、装置或设备使用或与指令执行系统、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体系统、装置或设备,或者上述内容的任何合适组合。机器可读存储介质的更具体示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM或快闪存储器)、光纤、便捷式紧凑盘只读存储器(CD-ROM)、光学储存设备、磁储存设备、或上述内容的任何合适组合。
以上描述仅为本公开的较佳实施例以及对所运用技术原理的说明。本领域技术人员应当理解,本公开中所涉及的公开范围,并不限于上述技术特征的特定组合而成的技术方案,同时也应涵盖在不脱离上述公开构思的情况下,由上述技术特征或其等同特征进行任意组合而形成的其它技术方案。例如上述特征与本公开中公开的(但不限于)具有类似功能的技术特征进行互相替换而形成的技术方案。
此外,虽然采用特定次序描绘了各操作,但是这不应当理解为要求这些操作以所示出的特定次序或以顺序次序执行来执行。在一定环境下,多任务和并行处理可能是有利的。同样地,虽然在上面论述中包含了若干具体实现细节,但是这些不应当被解释为对本公开的范围的限制。在单独的实施例的上下文中描述的某些特征还可以组合地实现在单个实施例中。相反地,在单个实施例的上下文中描述的各种特征也可以单独地或以任何合适的子组合的方式实现在多个实施例中。
尽管已经采用特定于结构特征和/或方法逻辑动作的语言描述了本主题,但是应当理解所附权利要求书中所限定的主题未必局限于上面描述的特定特征或动作。相反,上面所描述的特定特征和动作仅仅是实现权利要求书的示例形式。

Claims (15)

  1. 一种三维人头重建方法,其特征在于,包括:
    获取目标人像图像;
    将所述目标人像图像输入目标模型,得到所述目标模型的输出结果;其中,所述目标模型由多个训练样本预先训练得到,所述训练样本根据样本人像图像和样本三维人头模型生成,所述样本三维人头模型根据所述样本人像图像中与人像相关的二维特征信息对标准三维人脸统计模型进行迭代拟合得到,所述二维特征信息包括人脸特征点和人头投影轮廓线;
    根据所述输出结果,生成所述目标人像图像对应的目标三维人头模型。
  2. 根据权利要求1所述的方法,其特征在于,所述训练样本包括所述样本人像图像和所述样本三维人头模型对应的样本统计模型参数,所述输出结果包括目标统计模型参数。
  3. 根据权利要求2所述的方法,其特征在于,在所述获取目标人像图像之前,所述方法还包括:
    获取所述多个训练样本;
    通过第一回归损失函数,学习每个所述训练样本中的样本人像图像和样本统计模型参数之间的映射关系,得到所述目标模型;
    通过目标损失函数,继续学习每个所述训练样本中的样本人像图像和样本统计模型参数之间的映射关系,得到优化后的目标模型;其中,所述目标损失函数包括第二回归损失函数和投影损失函数,所述第二回归损失函数中身份系数的权重值大于所述第一回归损失函数中身份系数的权重值。
  4. 根据权利要求2所述的方法,其特征在于,所述根据所述输出结果,生成所述目标人像图像对应的目标三维人头模型,包括:
    根据所述目标统计模型参数和所述标准三维人脸统计模型,生成所述目标三维人头模型。
  5. 根据权利要求1所述的方法,其特征在于,在所述获取目标人像图像之前,所述方法还包括:
    获取所述样本人像图像;
    从所述样本人像图像中提取所述二维特征信息;
    基于所述二维特征信息对所述标准三维人脸统计模型进行迭代拟合,得到所述样本三维人头模型。
  6. 根据权利要求5所述的方法,其特征在于,在所述基于所述二维特征信息对所述标准三维人脸统计模型进行迭代拟合,得到所述样本三维人头模型之前,所述方法还包括:
    对所述标准三维人脸统计模型进行平面投影,得到所述二维特征信息对应的投影特征信息;
    其中,所述基于所述二维特征信息对所述标准三维人脸统计模型进行迭代拟合,得到所述样本三维人头模型,包括:
    基于所述二维特征信息和所述投影特征信息,通过第三回归损失函数对所述标准三维人脸统计模型进行迭代拟合,得到所述样本三维人头模型。
  7. 根据权利要求6所述的方法,其特征在于,所述投影特征信息包括人脸投影特征点;
    其中,所述对所述标准三维人脸统计模型进行平面投影,得到所述二维特征信息对应的投影特征信息,包括:
    将所述标准三维人脸统计模型的各个顶点投影到二维空间,得到所述标准三维人脸统计模型对应的各个顶点投影点;
    根据所述各个顶点投影点,确定所述人脸投影特征点。
  8. 根据权利要求7所述的方法,其特征在于,所述投影特征信息还包括人头投影轮廓线;
    其中,所述对所述标准三维人脸统计模型进行平面投影,得到所述二维特征信息对应的投影特征信息,还包括:
    对所述各个顶点投影点进行膨胀处理,得到第一头部区域图像;
    对所述第一头部区域图像进行腐蚀处理,得到第二头部区域图像;
    对所述第二头部区域图像进行边缘提取,得到所述人头投影轮廓线。
  9. 根据权利要求7或8所述的方法,其特征在于,所述二维特征信息还包括肩部特征点,所述投影特征信息还包括肩部投影特征点;
    其中,所述对所述标准三维人脸统计模型进行平面投影,得到所述二维特征信息对应的投影特征信息,还包括:
    根据所述各个顶点投影点,确定所述肩部投影特征点。
  10. 根据权利要求6所述的方法,其特征在于,所述投影特征信息包括人脸投影特征点和人头投影轮廓线;
    其中,在所述基于所述二维特征信息和所述投影特征信息,通过第三回归损失函数对所述标准三维人脸统计模型进行迭代拟合,得到所述样本三维人头模型之前,所述方法还包括:
    对所述人头轮廓线和所述人头投影轮廓线分别进行随机采样,得到人头轮廓特征点和人头投影轮廓特征点;
    其中,所述基于所述二维特征信息和所述投影特征信息,通过第三回归损失函数对所述标准三维人脸统计模型进行迭代拟合,得到所述样本三维人头模型,包括:
    基于所述人脸特征点、所述人头轮廓特征点、所述人脸投影特征点和所述人头投影轮廓特征点,通过所述第三回归损失函数对所述标准三维人脸统计模型进行迭代拟合,得到所述样本三维人头模型。
  11. 根据权利要求6所述的方法,其特征在于,在所述对所述标准三维人脸统计模型进行平面投影,得到所述二维特征信息对应的投影特征信息之前,所述方法还包括:
    检测所述样本人像图像中的头部姿态;
    其中,所述对所述标准三维人脸统计模型进行平面投影,得到所述二维特征信息对应的投影特征信息,包括:
    在所述标准三维人脸统计模型处于所述头部姿态下,按照所述样本人像图像的采集设备的投影参数,将所述标准三维人脸统计模型投影到所述采集设备的成像平面上,得到所述投影特征信息。
  12. 一种三维人头重建装置,其特征在于,包括:
    第一获取单元,配置为获取目标人像图像;
    第一处理单元,配置为将所述目标人像图像输入目标模型,得到所述目标模型的输出结果;其中,所述目标模型由多个训练样本预先训练得到,所述训练样本根据样本人像图像和样本三维人头模型生成,所述样本三维人头模型根据所述样本人像图像中与人像相关的二维特征信息对标准三维人脸统计模型进行迭代拟合得到,所述二维特征信息包括人脸特征点和人头投影轮廓线;
    第一生成单元,配置为根据所述输出结果,生成所述目标人像图像对应的目标三维人头模型。
  13. 一种三维人头重建设备,其特征在于,包括:
    处理器;
    存储器,用于存储可执行指令;
    其中,所述处理器用于从所述存储器中读取所述可执行指令,并执行所述可执行指令以实现上述权利要求1-11中任一项所述的三维人头重建方法。
  14. 一种计算机可读存储介质,其特征在于,所述存储介质存储有计算机程序,当所述计算机程序被处理器执行时,使得处理器实现上述权利要求1-11中任一项所述的三维人头重建方法。
  15. 一种包含程序指令的计算机程序产品,其特征在于,当所述程序指令在电子设备上运行时,使得所述电子设备执行上述权利要求1-11中任一项所述的三维人头重建方法。
PCT/CN2022/116162 2021-09-01 2022-08-31 三维人头重建方法、装置、设备及介质 WO2023030381A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111022097.2 2021-09-01
CN202111022097.2A CN115731341A (zh) 2021-09-01 2021-09-01 三维人头重建方法、装置、设备及介质

Publications (1)

Publication Number Publication Date
WO2023030381A1 true WO2023030381A1 (zh) 2023-03-09

Family

ID=85292199

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/116162 WO2023030381A1 (zh) 2021-09-01 2022-08-31 三维人头重建方法、装置、设备及介质

Country Status (2)

Country Link
CN (1) CN115731341A (zh)
WO (1) WO2023030381A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117058329A (zh) * 2023-10-11 2023-11-14 湖南马栏山视频先进技术研究院有限公司 一种人脸快速三维建模方法及系统
CN117456144A (zh) * 2023-11-10 2024-01-26 中国人民解放军海军航空大学 基于可见光遥感图像的目标建筑物三维模型优化方法

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106652025A (zh) * 2016-12-20 2017-05-10 五邑大学 一种基于视频流与人脸多属性匹配的三维人脸建模方法和打印装置
CN108510437A (zh) * 2018-04-04 2018-09-07 科大讯飞股份有限公司 一种虚拟形象生成方法、装置、设备以及可读存储介质
CN109255830A (zh) * 2018-08-31 2019-01-22 百度在线网络技术(北京)有限公司 三维人脸重建方法和装置
EP3657440A1 (de) * 2018-11-23 2020-05-27 Fielmann Ventures GmbH Verfahren und system zur dreidimensionalen rekonstruktion eines menschlichen kopfes aus mehreren bildern
CN112884889A (zh) * 2021-04-06 2021-06-01 北京百度网讯科技有限公司 模型训练、人头重建方法,装置,设备以及存储介质
CN113129425A (zh) * 2019-12-31 2021-07-16 Tcl集团股份有限公司 一种人脸图像三维重建方法、存储介质及终端设备


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117058329A (zh) * 2023-10-11 2023-11-14 湖南马栏山视频先进技术研究院有限公司 一种人脸快速三维建模方法及系统
CN117058329B (zh) * 2023-10-11 2023-12-26 湖南马栏山视频先进技术研究院有限公司 一种人脸快速三维建模方法及系统
CN117456144A (zh) * 2023-11-10 2024-01-26 中国人民解放军海军航空大学 基于可见光遥感图像的目标建筑物三维模型优化方法
CN117456144B (zh) * 2023-11-10 2024-05-07 中国人民解放军海军航空大学 基于可见光遥感图像的目标建筑物三维模型优化方法

Also Published As

Publication number Publication date
CN115731341A (zh) 2023-03-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22863516

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE