WO2020134925A1 - Illumination detection method, apparatus, device and storage medium for face image - Google Patents

Illumination detection method, apparatus, device and storage medium for face image

Info

Publication number
WO2020134925A1
WO2020134925A1 (application PCT/CN2019/123006, CN2019123006W)
Authority
WO
WIPO (PCT)
Prior art keywords
face
feature point
face image
key feature
brightness
Prior art date
Application number
PCT/CN2019/123006
Other languages
English (en)
French (fr)
Inventor
章菲倩
刘更代
颜乐驹
Original Assignee
广州市百果园信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州市百果园信息技术有限公司
Priority to SG11202106040PA priority Critical patent/SG11202106040PA/en
Priority to US17/417,003 priority patent/US11908236B2/en
Priority to EP19905268.9A priority patent/EP3905103A4/en
Publication of WO2020134925A1 publication Critical patent/WO2020134925A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/60Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Definitions

  • Embodiments of the present application relate to the field of image processing, for example, to an illumination detection method, apparatus, device, and storage medium for a face image.
  • AR: Augmented Reality.
  • Embodiments of the present application provide an illumination detection method, apparatus, device, and storage medium for a face image, which simplify the illumination detection operation in a face image and improve the accuracy of illumination detection.
  • In an embodiment, an embodiment of the present application provides an illumination detection method for a face image. The method includes: acquiring a face image to be detected and a three-dimensional face mesh template; deforming the three-dimensional face mesh template according to the face image to obtain a reconstructed face mesh model; determining, according to the deformation positions of the key feature points of the three-dimensional face mesh template in the reconstructed face mesh model, the brightness of the feature points corresponding to the key feature points in the face image; and determining the illumination information of the face image according to the preset brightness-illumination correspondence of the key feature points and the brightness of the feature points corresponding to the key feature points in the face image.
  • In an embodiment, an embodiment of the present application provides an illumination detection apparatus for a face image, which includes:
  • an image acquisition module, configured to acquire a face image to be detected and a three-dimensional face mesh template;
  • a face reconstruction module, configured to deform the three-dimensional face mesh template according to the face image to obtain a reconstructed face mesh model;
  • a brightness determination module, configured to determine, according to the deformation positions of the key feature points of the three-dimensional face mesh template in the reconstructed face mesh model, the brightness of the feature points corresponding to the key feature points in the face image;
  • an illumination information determination module, configured to determine the illumination information of the face image according to the preset brightness-illumination correspondence of the key feature points and the brightness of the feature points corresponding to the key feature points in the face image.
  • In an embodiment, an embodiment of the present application provides a device, which includes: one or more processors; and a storage apparatus configured to store one or more programs. When the one or more programs are executed by the one or more processors, the one or more processors implement the method described in any embodiment of the present application.
  • In an embodiment, an embodiment of the present application provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the method described in any embodiment of the present application is implemented.
  • FIG. 1A is a flowchart of an illumination detection method for a face image according to an embodiment of the present application;
  • FIG. 1B is a schematic diagram of a three-dimensional face mesh template in the method provided by an embodiment of the present application;
  • FIG. 2 is a schematic diagram of the principle of the process of predetermining the preset brightness-illumination correspondence of key feature points in the method provided by an embodiment of the present application;
  • FIG. 3 is a schematic diagram of the principle of the illumination detection process for a face image provided by an embodiment of the present application;
  • FIG. 4 is a schematic structural diagram of an illumination detection apparatus for a face image according to an embodiment of the present application;
  • FIG. 5 is a schematic structural diagram of a device provided by an embodiment of the present application.
  • Since face images have similar geometric positions, a three-dimensional face model can be used to generate a corresponding reference light source image, and illumination can be detected by matching the face image with the reference light source image; for example, the binocular positions in the face image are marked manually to establish a coordinate system, and sampling points at the same position of different face images in this binocular coordinate system are taken as the same geometric corresponding points, so that illumination is detected according to the brightness of each geometric corresponding point in different face images. However, this method needs to use the three-dimensional face model to determine the reference light source image; since the shapes of different face images change accordingly, the three-dimensional face model is limited by the face shapes of the different face images, the geometric corresponding points determined according to the reference light source image carry a certain error, and the illumination detection therefore has a large error.
  • The embodiments of the present application address the problem that, because the three-dimensional face model is limited by the face shapes in different face images, the key feature points determined according to the reference light source image carry a certain error in different face images, so that illumination detection has large errors. A face mesh model corresponding to the face image is reconstructed through a deformable three-dimensional face mesh template, and the brightness of the corresponding feature points in the face image is determined according to the deformation positions of the key feature points in the reconstructed face mesh model, so that the illumination information of the face image is obtained from this brightness and the preset brightness-illumination correspondence. This solves the problem in the related art that illumination detection through a three-dimensional face model has large errors, and improves the detection efficiency and accuracy of illumination detection.
  • The method in the embodiments of the present application can be applied to actual AR products: according to the illumination information detected in the face image, the same illumination is added to the virtual objects in the corresponding AR product, so that the virtual objects can be seamlessly embedded into the real face image, giving users the feeling that the virtual objects are real and improving the user experience of AR products, for example AR products for virtually trying on hats, glasses and other virtual objects; it also helps to improve the recognition and tracking performance of face recognition and face tracking in face images.
  • FIG. 1A is a flowchart of an illumination detection method for a face image provided by an embodiment of the present application.
  • This embodiment is applicable to any illumination information processing device that detects a face image by reconstructing a face mesh model.
  • The solution of the embodiment of the present application may be applied to the case of how to detect the illumination information in a face image.
  • The method provided in this embodiment may be executed by the illumination detection apparatus for a face image provided in the embodiments of the present application; the apparatus may be implemented by software and/or hardware and integrated in the device that executes the method, and the device may be any kind of processing device with image processing capability.
  • The method may include S110 to S140.
  • The face image is collected image data containing a face picture. Since this embodiment is applied to illumination detection of face images collected in AR products, so that the virtual objects in the AR product can be seamlessly embedded into the real face image, the current face image can be collected through an image collector, such as a camera, configured in the AR product.
  • When processing the face image to be detected, in order to speed up processing, the pixel information contained in the face image needs to be reduced; the face image is therefore required to be a grayscale image. If the initially collected face image is a color image, grayscale processing is performed on the color image to obtain the face image to be detected in this embodiment.
  • The three-dimensional face mesh template is a mesh model of a standard face shape.
  • In an embodiment, the three-dimensional face mesh template is a three-dimensional face model, created in advance from a standard face shape, with a mesh distribution laid over it; in the creation process, templates of different accuracy or different mesh shapes can be created depending on the selected model pixel resolution and mesh shape, so that the mesh distribution density or mesh distribution shape of the created templates differ. The three-dimensional face mesh template shown in FIG. 1B is composed of quadrilateral meshes; a template composed of triangular meshes can also be created as required.
  • Before the illumination information of a face image is detected, a pre-created three-dimensional face mesh template is stored in the processing device that executes the illumination detection method for the face image. Because a three-dimensional face model is limited by the face shapes in different face images, illumination detection has large errors; in this embodiment, the three-dimensional face mesh template is matched with face images containing different face shapes and can be deformed correspondingly according to each face shape, so that for different face images to be detected, the template can subsequently be matched with the face image to obtain the reconstructed face mesh model.
  • The currently collected face image, that is, the face image to be detected, and the pre-stored three-dimensional face mesh template, that is, the mesh model of the standard face, are acquired. Processing is performed on the same three-dimensional face mesh template, which reduces noise and defect interference in reconstructing the corresponding face mesh models from different face images and solves the problem that mesh models directly generated from different face images do not have a consistent topology.
  • S120: Deform the three-dimensional face mesh template according to the face image to obtain a reconstructed face mesh model.
  • The three-dimensional face mesh template is deformed according to the face shape in the acquired face image, so that the face pose in the deformed template is approximately the same as that in the face image; the deformed template is then used as the face mesh model reconstructed from the face image. For different face images, a unique face mesh model obtained by correspondingly deforming the template's meshes can be reconstructed, and the reconstructed face mesh model has the same mesh points as the template.
  • In an embodiment, local feature points in the three-dimensional face mesh template can be extracted hierarchically, and the template is deformed in turn according to the pose matching positions, in the face image, of the hierarchically extracted local feature points, to obtain a target face mesh model; then the global feature points in the target face mesh model are acquired, and the target face mesh model is deformed according to their pose matching positions in the face image, to obtain the reconstructed face mesh model.
  • The feature points are pixel points in the model that can effectively represent the skin characteristics of the human face. Since the skin points of the face show relatively obvious brightness changes under different illumination, the feature points in this embodiment refer to the pixel points corresponding to the skin points in the three-dimensional face mesh template, that is, the multiple mesh vertices or mesh patch center points distributed in the template, such as the center points of quadrilateral meshes. Local feature points are some of the feature points contained in the template.
  • The pose matching position is the matching position in the face image of the face target part represented by multiple feature points in the template; for example, the pose matching position in the face image of the feature points representing the cheeks in the template is the position of the cheeks in the face image.
  • Global feature points are all mesh vertices or mesh patch center points distributed in the target face mesh model.
  • In an embodiment, this embodiment extracts the local feature points of the template hierarchically, determines in turn the pose matching positions in the face image of the multiple local feature points extracted at each level, and deforms the template accordingly, level by level, to obtain the target face mesh model matching the face image; the global feature points of the target face mesh model are acquired, and the target face mesh model is deformed, following the above deformation process, according to the pose matching positions of the multiple global feature points in the face image, ensuring that the reconstructed face mesh model is sufficiently smooth while reconstructing as many detail features as possible.
  • S130: Determine, according to the deformation positions of the key feature points of the three-dimensional face mesh template in the reconstructed face mesh model, the brightness of the feature points corresponding to the key feature points in the face image.
  • The key feature points are local feature points pre-selected in the three-dimensional face mesh template that can represent the part information of the face image; they may be the same as or different from the local feature points selected for the deformation process above, which is not limited. The key feature points in this embodiment are, optionally, feature points among all mesh vertices or mesh patch center points that can clearly express, with a certain geometric-correspondence meaning, the positions of skin points such as the cheeks or chin in the template, other than facial feature points such as the lips and eyes.
  • The brightness of a feature point refers to the tone information representing the position of the feature point in the face image, that is, the lightness/darkness parameter of the image at that position. The face image in this embodiment is a grayscale image, so the brightness of the feature point corresponding to a key feature point in the face image is the pixel gray value of that feature point.
  • The pre-selected key feature points can be determined in the three-dimensional face mesh template. The key feature points move correspondingly with the deformation of the template and satisfy certain constraints, ensuring the consistency of the movement of the feature points during the deformation of the template.
  • The brightness of the corresponding feature points in the face image is acquired in advance. With the key feature points determined in the template and the template deformed into the reconstructed face mesh model, the corresponding deformation positions of the key feature points, after they have moved with the deformation, can be determined. Since the face pose of the reconstructed face mesh model is approximately the same as that in the face image to be detected, the feature points at the same positions in the face image can be determined, according to the corresponding deformation positions of the key feature points in the reconstructed face mesh model, as the corresponding feature points, and their brightness is acquired, so that the illumination information of the face image can subsequently be determined according to this brightness and the preset brightness-illumination correspondence of the key feature points.
  • S140: Determine the illumination information of the face image according to the preset brightness-illumination correspondence of the key feature points and the brightness of the feature points corresponding to the key feature points in the face image.
  • The preset brightness-illumination correspondence refers to the transformation relationship between the brightness of feature points in a face image and the illumination information of that face image. It can be trained from a large number of historical face images whose illumination information has already been determined, together with the brightness of the feature points corresponding to the key feature points of the three-dimensional face mesh template in the multiple historical face images, yielding the preset brightness-illumination correspondences of the multiple key feature points.
  • The illumination information refers to the illumination condition of the real ambient light at the moment the face image to be detected is collected, and may include the real light direction corresponding to the collection of the face image.
  • The predetermined preset brightness-illumination correspondences of the key feature points can be acquired in advance, and the brightness of the feature points corresponding to the key feature points of the template in the face image is transformed according to them, yielding the illumination information of the real environment at the time the face image was collected. This illumination information can later be used to add the same illumination to the virtual objects in the corresponding AR product, that is, to perform the same lighting rendering on the virtual objects, so that the virtual objects can be seamlessly embedded into the real face image, giving users the feeling that the virtual objects are real and improving the user experience of AR products.
  • In this solution, the three-dimensional face mesh template is deformed according to the face image to be detected to obtain a reconstructed face mesh model; the brightness of the feature points corresponding to the key feature points in the face image is determined according to the deformation positions of the key feature points in the reconstructed face mesh model; and the illumination information of the face image is then determined according to this brightness and the preset brightness-illumination correspondence. There is no need to perform a heavy image acquisition process, and the position matching between the face model and the face image is high, which solves the problem in the related art that illumination detection through a three-dimensional face model has large errors due to the limitation of the face shapes of different face images, simplifies the illumination detection operation in the face image, and improves the detection efficiency and accuracy of illumination detection.
  • FIG. 2 is a schematic diagram of the principle of the process of predetermining the preset brightness-illumination correspondence of key feature points in a method provided by an embodiment of the present application.
  • This embodiment is an optional embodiment based on the above embodiment.
  • In this embodiment, the training process of the preset brightness-illumination correspondences of multiple key feature points of the three-dimensional face mesh template, using a large number of historical face images and the template, is explained.
  • This embodiment may include S210 to S240.
  • A historical face image is an image containing a face picture whose illumination information has already been determined.
  • Before the preset brightness-illumination correspondence of the key feature points of the three-dimensional face mesh template is determined, a large number of historical face images with known illumination information are collected to construct the corresponding training set.
  • When this embodiment executes the training process of the preset brightness-illumination correspondences of the multiple key feature points, a large number of historical face images and the pre-stored three-dimensional face mesh template can be acquired from the training set, and the historical illumination information of the multiple historical face images is determined; the historical illumination information includes the light direction of the historical face image.
  • To avoid errors in the face parts represented by the feature points corresponding to the key feature points of the template in different historical face images, which would make the illumination detection result carry large errors, the same pre-stored three-dimensional face mesh template is deformed according to the multiple historical face images to obtain the multiple reconstructed historical face mesh models respectively corresponding to the multiple historical face images, so that the multiple historical face mesh models have a consistent mesh topology, which also ensures the consistency of the face parts represented by the feature points corresponding to the key feature points in different historical face images.
  • S230: Determine, according to the deformation positions of the key feature points of the three-dimensional face mesh template in the reconstructed historical face mesh models, the brightness of the feature points corresponding to the key feature points in the historical face images.
  • Since the key feature points of the template move correspondingly with the deformation of the template, the deformation positions of the key feature points in the different historical face mesh models respectively reconstructed for the multiple historical face images can be determined; according to these corresponding deformation positions, the feature points at the same positions in the multiple historical face images are determined as the corresponding feature points, and their brightness in the multiple historical face images is acquired.
  • S240: Determine the preset brightness-illumination correspondence of the key feature points according to the historical illumination information of the historical face images and the brightness of the feature points corresponding to the key feature points in the historical face images.
  • When the brightness of the corresponding feature points in the historical face images is obtained and the illumination information of the multiple historical face images has been determined, for each key feature point, its preset brightness-illumination correspondence is determined from the brightness of its corresponding feature points in the multiple historical face images and the illumination information of the corresponding historical face images; in the same way, the preset brightness-illumination correspondences respectively corresponding to the multiple key feature points of the three-dimensional face mesh template are determined.
  • When determining the preset brightness-illumination correspondence of the key feature points, determining it according to the historical illumination information of the historical face images and the brightness of the corresponding feature points may include the following.
  • Since the training set includes a large number of historical face images, the same processing operation is performed on each historical face image when determining the preset brightness-illumination correspondence of the key feature points.
  • For each historical face image, when the brightness of the feature points corresponding to the multiple key feature points of the template in the historical face image is obtained, the initial brightness-illumination correspondences of the multiple key feature points corresponding to the historical face image are determined from the illumination information of the historical face image and the brightness of the corresponding feature points.
  • The illumination information in this embodiment may be represented by spherical harmonic (SH) lighting.
  • SH lighting is a vector composed of illumination feature values in corresponding dimensions and capable of representing the corresponding illumination information.
  • For key feature point $x_j$ of the three-dimensional face mesh template, where $j$ is the index of the key feature point, the brightness of the corresponding feature point in historical face image $i$ is $I_{j,i}$; the initial brightness-illumination correspondence of this historical face image is expressed through SH lighting as $I_{j,i} = T_{j,i}^{\top} Y(\theta_i, \varphi_i)$, where $Y(\theta_i, \varphi_i)$ is the SH lighting representation for each key feature point corresponding to this historical face image and $(\theta_i, \varphi_i)$ are the spherical coordinates of the light direction in the illumination information of the image.
  • $L$ is the degree of the SH lighting representation; any integer value may be selected, which is not limited in this embodiment.
  • After the spherical coordinates of the light direction in the illumination information of each historical face image are obtained, the SH lighting representations of the multiple key feature points in the historical face image can be obtained according to the above process, and then, according to the formula $I_{j,i} = T_j^{\top} Y(\theta_i, \varphi_i)$, the correspondence between the initial brightness of the key feature points and the illumination can be obtained for each historical face image, where $T_j$ is the preset brightness-illumination correspondence of the key feature points.
  • The least squares method is used to process the initial brightness-illumination correspondences of the key feature points respectively corresponding to the multiple historical face images, obtaining the preset brightness-illumination correspondence of the key feature points.
  • When the initial brightness-illumination correspondence of each key feature point in the multiple historical face images is obtained, the correspondences respectively corresponding to the multiple historical face images can be processed by the least squares method: by minimizing the sum of squared errors, the best match between the brightness and the illumination of the multiple key feature points over the multiple historical face images is found, obtaining the preset brightness-illumination correspondence of each key feature point, such that the sum of squared errors between the initial brightness-illumination correspondences obtained for different historical face images and the preset brightness-illumination correspondence is minimized.
  • The technical solution provided in this embodiment deforms the three-dimensional face mesh template according to a large number of historical face images to obtain the historical face mesh models respectively reconstructed for the multiple historical face images, and then obtains the preset brightness-illumination correspondence of each key feature point from the historical illumination information of the multiple historical face images and the brightness of the feature points corresponding to the multiple key feature points in the multiple historical face images. Acquiring a large number of historical face images in advance in this data-driven way improves the accuracy of the preset brightness-illumination correspondence of the key feature points.
  • FIG. 3 is a schematic diagram of the principle of the illumination detection process for a face image provided by an embodiment of the present application. This embodiment is an optional embodiment based on the above embodiments. In this embodiment, the illumination detection process in the face image is explained.
  • S310 to S370 may be included in this embodiment.
  • Since preset key feature points exist in the three-dimensional face mesh template, in this embodiment, when the template is deformed according to the face image to obtain the reconstructed face mesh model, the preset key feature points are determined in the template. Since this embodiment manually marks the corresponding feature points in the template, the corresponding feature mark points can be acquired from the template as the preset key feature points.
  • S340: Determine the mapping positions of the key feature points in the face image according to the deformation positions of the key feature points of the three-dimensional face mesh template in the reconstructed face mesh model.
  • When the key feature points are determined in the template, during the deformation of the template the key feature points also move correspondingly with the deformation and satisfy certain constraints, ensuring the consistency of the movement of multiple feature points during the deformation; the deformation positions of the key feature points in the reconstructed face mesh model, after they have moved with the deformation, can then be determined. Since the face pose of the reconstructed face mesh model is approximately the same as that in the face image to be detected, the mapping positions of the moved key feature points in the face image can be determined according to these deformation positions.
  • After the mapping positions of the moved key feature points in the face image are determined, the feature points at the mapping positions in the face image can be directly acquired as the feature points corresponding to the key feature points in the face image; the brightness of the feature points at the mapping positions, acquired directly from the face image, can be taken as the brightness of the feature points corresponding to the key feature points in the face image.
  • S360: Determine the spherical harmonic illumination coefficient of the face image according to the preset brightness-illumination correspondence of the key feature points and the brightness of the feature points corresponding to the key feature points in the face image.
  • The illumination information of the face image can be represented by SH lighting. When the brightness of the corresponding feature points is obtained, the predetermined preset brightness-illumination correspondences of the multiple key feature points can be acquired, so that the SH illumination coefficient of the face image is determined from $I_j = T_j^{\top} \lambda$, where $T_j$ is the preset brightness-illumination correspondence of the key feature points, $I_j$ is the brightness of the feature point corresponding to the key feature point in the face image, and $\lambda$ is the SH illumination coefficient of the face image.
  • S370: The illumination information of the face image can then be determined through the SH illumination coefficient; for example, components can be selected from $\lambda$ as the light direction of the face image.
  • The technical solution provided in this embodiment deforms the three-dimensional face mesh template according to the face image to be detected to obtain a reconstructed face mesh model, determines the brightness of the feature points corresponding to the key feature points in the face image according to the deformation positions of the key feature points in the reconstructed face mesh model, and then determines the illumination information of the face image according to this brightness and the preset brightness-illumination correspondence, without performing a cumbersome image acquisition process and with a high position matching between the face model and the face image. This solves the problem in the related art that illumination detection through a three-dimensional face model has large errors due to the limitation of the face shapes of different face images, simplifies the illumination detection operation in the face image, and improves the detection efficiency and accuracy of illumination detection.
  • FIG. 4 is a schematic structural diagram of an illumination detection apparatus for a face image provided by an embodiment of the present application.
  • The apparatus may include an image acquisition module 410, a face reconstruction module 420, a brightness determination module 430 and an illumination information determination module 440.
  • The image acquisition module 410 is configured to acquire a face image to be detected and a three-dimensional face mesh template;
  • the face reconstruction module 420 is configured to deform the three-dimensional face mesh template according to the face image to obtain a reconstructed face mesh model;
  • the brightness determination module 430 is configured to determine the brightness of the feature points corresponding to the key feature points in the face image according to the deformation positions of the key feature points of the three-dimensional face mesh template in the reconstructed face mesh model;
  • the illumination information determination module 440 is configured to determine the illumination information of the face image according to the preset brightness-illumination correspondence of the key feature points and the brightness of the feature points corresponding to the key feature points in the face image.
  • In this solution, the three-dimensional face mesh template is deformed according to the face image to be detected to obtain a reconstructed face mesh model; the brightness of the feature points corresponding to the key feature points in the face image is determined according to the deformation positions of the key feature points in the reconstructed face mesh model; and the illumination information of the face image is then determined according to this brightness and the preset brightness-illumination correspondence, without performing a heavy image acquisition process and with a high position matching between the face model and the face image, which solves the problem in the related art that illumination detection through a three-dimensional face model has large errors due to the limitation of the face shapes of different face images, simplifies the illumination detection operation in the face image, and improves the detection efficiency and accuracy of illumination detection.
  • The illumination detection apparatus for a face image provided in this embodiment is applicable to the illumination detection method for a face image provided in any of the above embodiments, and has corresponding functions and beneficial effects.
  • FIG. 5 is a schematic structural diagram of a device provided by an embodiment of the present application.
  • The device includes a processor 50, a storage apparatus 51 and a communication apparatus 52; the number of processors 50 in the device may be one or more, and one processor 50 is taken as an example; the processor 50, the storage apparatus 51 and the communication apparatus 52 in the device may be connected through a bus or in other ways, and connection through a bus is taken as an example.
  • The storage apparatus 51 may be configured to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the methods provided in the embodiments of the present application.
  • The processor 50 runs the software programs, instructions and modules stored in the storage apparatus 51, thereby executing various functional applications and data processing of the device, that is, implementing the above method.
  • The communication apparatus 52 may be configured to implement network connection or mobile data connection between devices.
  • The device provided in this embodiment may be configured to execute the method provided in any of the above embodiments, and has corresponding functions and effects.
  • Embodiments of the present application also provide a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the method in any of the above embodiments can be implemented.
  • The method may include: acquiring a face image to be detected and a three-dimensional face mesh template; deforming the three-dimensional face mesh template according to the face image to obtain a reconstructed face mesh model; determining the brightness of the feature points corresponding to the key feature points in the face image according to the deformation positions of the key feature points of the three-dimensional face mesh template in the reconstructed face mesh model; and determining the illumination information of the face image according to the preset brightness-illumination correspondence of the key feature points and the brightness of the corresponding feature points.
  • An embodiment of the present application provides a storage medium containing computer-executable instructions; the computer-executable instructions are not limited to the method operations described above and can also execute the method provided by any embodiment of the present application.
  • The units and modules included in the apparatus embodiments are only divided according to functional logic, but the division is not limited to the above, as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for the convenience of distinguishing them from each other.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

An illumination detection method, apparatus, device and storage medium for a face image. The method includes: acquiring a face image to be detected and a three-dimensional face mesh template (S110); deforming the three-dimensional face mesh template according to the face image to obtain a reconstructed face mesh model (S120); determining, according to the deformation positions of key feature points of the three-dimensional face mesh template in the reconstructed face mesh model, the brightness of the feature points corresponding to the key feature points in the face image (S130); and determining the illumination information of the face image according to the preset brightness-illumination correspondence of the key feature points and the brightness of the feature points corresponding to the key feature points in the face image (S140).

Description

Illumination detection method, apparatus, device and storage medium for face image
This application claims priority to Chinese patent application No. 201811627209.5 filed with the China Patent Office on December 28, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present application relate to the field of image processing, for example, to an illumination detection method, apparatus, device and storage medium for a face image.
Background
With the rapid development of computer technology, Augmented Reality (AR) products have been widely applied in people's daily lives. Virtual content in AR products needs to be seamlessly embedded into real images, so as to give users the feeling that the virtual content really exists. Detecting the ambient illumination in the real image and then adding the same illumination environment to the virtual object improves the realism of the fusion between the virtual content and the real environment.
Since the propagation of light is affected by factors such as the geometry and reflectance of objects in the scene, acquiring illumination information in a real scene suffers from complicated image acquisition operations and large errors in illumination detection.
Summary
Embodiments of the present application provide an illumination detection method, apparatus, device and storage medium for a face image, which simplify the illumination detection operation in a face image and improve the accuracy of illumination detection.
In an embodiment, an embodiment of the present application provides an illumination detection method for a face image, the method including:
acquiring a face image to be detected and a three-dimensional face mesh template;
deforming the three-dimensional face mesh template according to the face image to obtain a reconstructed face mesh model;
determining, according to the deformation positions of key feature points of the three-dimensional face mesh template in the reconstructed face mesh model, the brightness of the feature points corresponding to the key feature points in the face image;
determining the illumination information of the face image according to the preset brightness-illumination correspondence of the key feature points and the brightness of the feature points corresponding to the key feature points in the face image.
In an embodiment, an embodiment of the present application provides an illumination detection apparatus for a face image, the apparatus including:
an image acquisition module, configured to acquire a face image to be detected and a three-dimensional face mesh template;
a face reconstruction module, configured to deform the three-dimensional face mesh template according to the face image to obtain a reconstructed face mesh model;
a brightness determination module, configured to determine, according to the deformation positions of the key feature points of the three-dimensional face mesh template in the reconstructed face mesh model, the brightness of the feature points corresponding to the key feature points in the face image;
an illumination information determination module, configured to determine the illumination information of the face image according to the preset brightness-illumination correspondence of the key feature points and the brightness of the feature points corresponding to the key feature points in the face image.
In an embodiment, an embodiment of the present application provides a device, the device including:
one or more processors;
a storage apparatus, configured to store one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the method described in any embodiment of the present application.
In an embodiment, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the method described in any embodiment of the present application is implemented.
Brief Description of the Drawings
FIG. 1A is a flowchart of an illumination detection method for a face image according to an embodiment of the present application;
FIG. 1B is a schematic diagram of a three-dimensional face mesh template in the method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the principle of the process of predetermining the preset brightness-illumination correspondence of key feature points in the method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the principle of the illumination detection process for a face image according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of an illumination detection apparatus for a face image according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a device according to an embodiment of the present application.
Detailed Description
The present application is described below with reference to the drawings and embodiments. The embodiments described herein are merely intended to explain the present application, not to limit it. The drawings show only some rather than all of the structures related to the present application. In addition, the embodiments of the present application and the features in the embodiments may be combined with each other without conflict.
Since the propagation of light is affected by factors such as the geometry and reflectance of objects in the scene, the following three ways are adopted to acquire the illumination information in a real scene:
1) Special image acquisition methods are adopted, such as detecting illumination by acquiring images of a mirror sphere (light probe) in the real scene, using a fisheye camera to acquire omnidirectional real images, or using a depth camera to recover the three-dimensional information and lighting conditions of the real scene so as to measure its lighting conditions; however, this kind of method performs a heavy image acquisition process and is relatively complicated to operate.
2) Illumination detection is performed according to markers of predefined geometry and reflection properties placed in the real scene in advance, such as ping-pong balls and planar markers, square markers or cube markers; this method does not need to perform a heavy image acquisition process, but it has certain requirements on the predefined geometric objects and markers and is difficult to apply in many practical scenarios.
3) Considering that face images have similar geometric positions, a three-dimensional face model can be used to generate a corresponding reference light source image, and illumination can be detected by matching the face image with the reference light source image; for example, the binocular positions in the face image are marked manually to establish a coordinate system, and sampling points at the same position of different face images in this binocular coordinate system are taken as the same geometric corresponding points, so that illumination is detected according to the brightness of each geometric corresponding point in different face images. However, this method needs to use the three-dimensional face model to determine the reference light source image; since the shapes of different face images change accordingly, the three-dimensional face model is limited by the face shapes of different face images, the geometric corresponding points determined according to the reference light source image carry a certain error, and the illumination detection therefore has a large error.
The embodiments of the present application address the problem that, because the three-dimensional face model is limited by the face shapes in different face images, the key feature points determined according to the reference light source image carry a certain error in different face images, so that illumination detection has large errors. A face mesh model corresponding to the face image is reconstructed through a deformable three-dimensional face mesh template, and the brightness of the corresponding feature points in the face image is determined according to the deformation positions of the key feature points in the reconstructed face mesh model, so that the illumination information of the face image is obtained from this brightness and the preset brightness-illumination correspondence. This solves the problem in the related art that illumination detection through a three-dimensional face model has large errors, and improves the detection efficiency and accuracy of illumination detection.
In an embodiment, the method in the embodiments of the present application can be applied to actual AR products: according to the illumination information detected in the face image, the same illumination is added to the virtual objects in the corresponding AR product, so that the virtual objects can be seamlessly embedded into the real face image, giving users the feeling that the virtual objects are real and improving the user experience of AR products, for example AR products for virtually trying on hats, glasses and other virtual objects; it also helps to improve the recognition and tracking performance of face recognition and face tracking in face images.
Embodiment 1
FIG. 1A is a flowchart of an illumination detection method for a face image according to an embodiment of the present application. This embodiment is applicable to any illumination information processing device that detects a face image by reconstructing a face mesh model. The solution of the embodiment of the present application may be applied to the case of how to detect the illumination information in a face image. The method provided in this embodiment may be executed by the illumination detection apparatus for a face image provided in the embodiments of the present application; the apparatus may be implemented by software and/or hardware and integrated in the device that executes the method, and the device may be any kind of processing device with image processing capability.
In an embodiment, referring to FIG. 1A, the method may include S110 to S140.
S110: Acquire a face image to be detected and a three-dimensional face mesh template.
In an embodiment, the face image is collected image data containing a face picture. Since this embodiment is applied to illumination detection of face images collected in AR products, so that the virtual objects in the AR product can be seamlessly embedded into the real face image, the current face image can be collected through an image collector, such as a camera, configured in the AR product. In an embodiment, when processing the face image to be detected, in order to speed up processing, the pixel information contained in the face image needs to be reduced, and the face image is therefore required to be a grayscale image; if the initially collected face image is a color image, grayscale processing is performed on the color image to obtain the face image to be detected in this embodiment.
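As an illustration only (not part of the original disclosure), a minimal preprocessing sketch assuming OpenCV and a BGR color frame; the function name is ours:

```python
import cv2

def to_detection_image(frame_bgr):
    """Convert a captured color frame (BGR, as returned by OpenCV
    capture devices) to the grayscale face image whose pixel gray
    values serve as feature-point brightness."""
    return cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
```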
In addition, the three-dimensional face mesh template is a mesh model of a standard face shape. In an embodiment, the three-dimensional face mesh template is a three-dimensional face model created in advance from a standard face shape with a mesh distribution added to it; in the creation process, templates of different accuracy or different mesh shapes can be created depending on the selected model pixel resolution and mesh shape, so that the mesh distribution density or mesh distribution shape of the created templates differ. The three-dimensional face mesh template shown in FIG. 1B is composed of quadrilateral meshes; a template composed of triangular meshes can also be created as required. Before detecting the illumination information of the face image, this embodiment stores a pre-created three-dimensional face mesh template in the processing device that executes the illumination detection method for the face image. Because a three-dimensional face model is limited by the face shapes in different face images, illumination detection has large errors; in this embodiment, by matching the three-dimensional face mesh template with face images containing different face shapes, the template can be deformed correspondingly according to the face shape, thereby avoiding the problem of extraction errors of geometric corresponding points caused by different face shapes in different face images. For different face images to be detected, this template can subsequently be matched with the face image to obtain a reconstructed face mesh model.
Optionally, when detecting the illumination information in the face image, this embodiment acquires the currently collected face image that needs illumination detection (that is, the face image to be detected) and the pre-stored three-dimensional face mesh template, that is, the mesh model of the standard face; by matching this template with the current face image to be detected, the reconstructed face mesh model can be obtained. In this embodiment, processing is performed on the same three-dimensional face mesh template, which reduces noise and defect interference in reconstructing the corresponding face mesh models from different face images and solves the problem that mesh models directly generated from different face images do not have a consistent topology.
S120: Deform the three-dimensional face mesh template according to the face image to obtain a reconstructed face mesh model.
Optionally, when the face image to be detected and the pre-stored three-dimensional face mesh template are obtained, in order to avoid the problem of extraction errors of geometric corresponding points caused by different face shapes in different face images, the template can be deformed according to the face shape in the acquired face image, so that the face pose in the deformed template is approximately the same as that in the face image; the deformed template is then used as the face mesh model reconstructed from this face image. For different face images, a unique face mesh model obtained by correspondingly deforming the meshes of the three-dimensional face mesh template can be reconstructed, and the reconstructed face mesh model has the same mesh points as the template.
In an embodiment, local feature points in the three-dimensional face mesh template can be extracted hierarchically, and the template is deformed in turn according to the pose matching positions, in the face image, of the hierarchically extracted local feature points, to obtain a target face mesh model; then the global feature points in the target face mesh model are acquired, and the target face mesh model is deformed according to the pose matching positions of the global feature points in the face image, to obtain the reconstructed face mesh model.
The feature points are pixel points in the model that can effectively represent the skin characteristics of the human face. Since the skin points of the face show relatively obvious brightness changes under different illumination, the feature points in this embodiment refer to the pixel points corresponding to the skin points in the three-dimensional face mesh template, that is, the multiple mesh vertices or mesh patch center points distributed in the template, such as the center points of quadrilateral meshes. Local feature points are some of the feature points contained in the template. The pose matching position is the matching position in the face image of the face target part represented by multiple feature points in the template; for example, the pose matching position in the face image of the feature points representing the cheeks in the template is the position of the cheeks in the face image. Global feature points are all mesh vertices or mesh patch center points distributed in the target face mesh model. In an embodiment, this embodiment extracts the local feature points of the template hierarchically and determines, in turn, the pose matching positions in the face image of the multiple local feature points extracted at each level; the template is deformed accordingly, level by level, to obtain the target face mesh model matching the face image; the global feature points of the target face mesh model are then acquired, and the target face mesh model is deformed, with reference to the deformation process mentioned above, according to the pose matching positions of the multiple global feature points in the face image. This ensures that the reconstructed face mesh model is sufficiently smooth while reconstructing as many detail features as possible, adjusting toward a face mesh model more similar to the face image and thus obtaining the reconstructed face mesh model. In addition, during the deformation of the template, feature points other than the local feature points, when moving along with the deformation of the template, satisfy certain constraints to ensure the consistency of the movement of the multiple mesh vertices during deformation.
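A high-level sketch (ours, not the patent's implementation) of the coarse-to-fine deformation just described; the matching and smoothing callbacks are placeholders for the registration machinery the patent does not spell out:

```python
import numpy as np

def deform_template(template_vertices, levels, match_in_image, smooth):
    """Coarse-to-fine sketch of the template deformation described above.

    template_vertices: (N, 3) array of mesh vertex positions.
    levels: list of index arrays, from sparse local feature points to
        denser ones; the final level may be all vertices (the 'global
        feature points').
    match_in_image: callback returning, for the given feature points,
        target positions that align them with the face in the image
        (their pose matching positions).
    smooth: callback enforcing the movement-consistency constraint on
        the remaining vertices (e.g. a Laplacian-style regularizer).
    """
    v = template_vertices.copy()
    for idx in levels:                      # hierarchical passes
        targets = match_in_image(v[idx])    # pose matching positions
        v[idx] = targets                    # move the feature points
        v = smooth(v, idx)                  # keep deformation consistent
    return v                                # reconstructed face mesh model
```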
S130: Determine, according to the deformation positions of the key feature points of the three-dimensional face mesh template in the reconstructed face mesh model, the brightness of the feature points corresponding to the key feature points in the face image.
In an embodiment, the key feature points are local feature points pre-selected in the three-dimensional face mesh template that can represent the part information of the face image; they may be the same as or different from the local feature points selected for the deformation process above, which is not limited. The key feature points in this embodiment are, optionally, feature points among all mesh vertices or mesh patch center points that can clearly express, with a certain geometric-correspondence meaning, the positions of skin points such as the cheeks or chin in the template, other than facial feature points such as the lips and eyes. The brightness of a feature point refers to the tone information that can represent the position of the feature point in the face image, that is, the lightness/darkness parameter of the image at that position; the face image in this embodiment is a grayscale image, so the brightness of the feature point corresponding to a key feature point in the face image is the pixel gray value of that feature point.
Optionally, when the three-dimensional face mesh template is deformed according to the face image to obtain the reconstructed face mesh model, the pre-selected key feature points can be determined in the template; during the deformation of the template, the key feature points also move correspondingly with the deformation and satisfy certain constraints, ensuring the consistency of the movement of multiple feature points during the deformation. When detecting the illumination information in the face image, the brightness of the corresponding feature points in the face image is acquired in advance. With the key feature points determined in the template and the template deformed into the reconstructed face mesh model, the corresponding deformation positions of the key feature points in the reconstructed model, after they have moved with the deformation, can be determined. Since the face pose of the reconstructed face mesh model is approximately the same as that in the face image to be detected, the feature points at the same positions in the face image can be determined, according to the corresponding deformation positions of the key feature points in the reconstructed model, as the corresponding feature points, and their brightness is acquired, so that the illumination information of the face image can subsequently be determined according to this brightness and the preset brightness-illumination correspondence of the key feature points.
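A sketch of the brightness sampling described above, assuming a `project` callback that maps a deformed 3D key feature point to pixel coordinates (the camera mapping is not detailed in the patent):

```python
import numpy as np

def key_point_brightness(key_points_3d, project, gray_image):
    """Read the brightness of the feature points corresponding to the
    key feature points: project each deformed key point into the face
    image and sample the pixel gray value there."""
    h, w = gray_image.shape
    brightness = []
    for p in key_points_3d:
        u, v = project(p)                       # 3D point -> pixel (u, v)
        u = int(np.clip(round(u), 0, w - 1))    # clamp to image bounds
        v = int(np.clip(round(v), 0, h - 1))
        brightness.append(float(gray_image[v, u]))
    return np.array(brightness)
```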
S140: Determine the illumination information of the face image according to the preset brightness-illumination correspondence of the key feature points and the brightness of the feature points corresponding to the key feature points in the face image.
In an embodiment, the preset brightness-illumination correspondence refers to the transformation relationship between the brightness of feature points in a face image and the illumination information of that face image. In this embodiment, the preset brightness-illumination correspondences of multiple key feature points can be obtained by training with a large number of historical face images whose illumination information has already been determined, together with the brightness of the feature points corresponding to the key feature points of the three-dimensional face mesh template in the multiple historical face images. The illumination information refers to the illumination condition of the real ambient light at the moment the face image to be detected is collected, and may include the real light direction corresponding to the collection of the face image.
Optionally, when the brightness of the feature points corresponding to the key feature points of the template in the face image is obtained, this embodiment can acquire the preset brightness-illumination correspondences predetermined for the multiple key feature points and, according to these correspondences, transform the brightness of the corresponding feature points, thereby obtaining the illumination information of the real environment at the time the face image was collected. This illumination information can subsequently be used to add the same illumination to the virtual objects in the corresponding AR product, that is, to perform the same lighting rendering on the virtual objects, so that the virtual objects can be seamlessly embedded into the real face image, giving users the feeling that the virtual objects are real and improving the user experience of AR products.
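As an illustrative sketch of such lighting rendering (the Lambertian model is our assumption, not the patent's; `sh_basis` stands for an SH basis evaluator such as the one sketched in Embodiment 2 below):

```python
import numpy as np

def shade_vertex(normal, sh_coeffs, sh_basis, albedo=1.0):
    """Shade one vertex of a virtual object with the illumination
    recovered from the face image: evaluate the SH basis in the
    direction of the vertex normal and weight it by the recovered
    SH coefficients (Lambertian assumption of this sketch)."""
    return albedo * float(np.dot(sh_coeffs, sh_basis(normal)))
```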
In the technical solution provided in this embodiment, the three-dimensional face mesh template is deformed according to the face image to be detected to obtain a reconstructed face mesh model; the brightness of the feature points corresponding to the key feature points in the face image is determined according to the deformation positions of the key feature points in the reconstructed face mesh model; and the illumination information of the face image is then determined according to this brightness and the preset brightness-illumination correspondence. There is no need to perform a heavy image acquisition process, and the position matching between the face model and the face image is high. This solves the problem in the related art that illumination detection through a three-dimensional face model has large errors due to the limitation of the face shapes of different face images, simplifies the illumination detection operation in the face image, and improves the detection efficiency and accuracy of illumination detection.
Embodiment 2
FIG. 2 is a schematic diagram of the principle of the process of predetermining the preset brightness-illumination correspondence of key feature points in the method according to an embodiment of the present application. This embodiment is an optional embodiment based on the above embodiment. In an embodiment, the training process of the preset brightness-illumination correspondences of multiple key feature points of the three-dimensional face mesh template, using a large number of historical face images and the template, is explained.
Optionally, this embodiment may include S210 to S240.
S210: Acquire historical face images and the three-dimensional face mesh template, and determine the historical illumination information of the historical face images.
Historical face images are images containing face pictures whose illumination information has already been determined. In this embodiment, before the preset brightness-illumination correspondence of the key feature points of the template is determined, a large number of historical face images with known illumination information are collected to construct a corresponding training set.
In an embodiment, when executing the training process of the preset brightness-illumination correspondences of the multiple key feature points, a large number of historical face images and the pre-stored three-dimensional face mesh template can be acquired from the training set, and the historical illumination information of the multiple historical face images is determined; the historical illumination information includes the light direction of the historical face image.
S220: Deform the three-dimensional face mesh template according to the historical face images to obtain reconstructed historical face mesh models.
Optionally, extraction errors of geometric corresponding points caused by different face shapes in different historical face images would make the face parts represented by the feature points corresponding to the key feature points erroneous across images, so that the illumination detection result carries large errors; moreover, the brightness of the feature points corresponding to the key feature points in the historical face images needs to be determined subsequently. To avoid this, this embodiment can, through the face mesh model reconstruction process described in Embodiment 1, deform the same pre-stored three-dimensional face mesh template according to the multiple historical face images acquired from the training set, obtaining multiple reconstructed historical face mesh models respectively corresponding to the multiple historical face images. The multiple historical face mesh models therefore have a consistent mesh topology, which also ensures the consistency of the face parts represented by the feature points corresponding to the key feature points in different historical face images.
S230: Determine, according to the deformation positions of the key feature points of the template in the reconstructed historical face mesh models, the brightness of the feature points corresponding to the key feature points in the historical face images.
Optionally, when the different historical face mesh models respectively reconstructed for the multiple historical face images are obtained, since the key feature points of the template move correspondingly with the deformation of the template, the deformation positions of the key feature points in the different reconstructed historical face mesh models can be determined. According to these corresponding deformation positions, the feature points at the same positions in the multiple historical face images are determined as the corresponding feature points, and their brightness in the multiple historical face images is acquired, so that the preset brightness-illumination correspondence of the key feature points can subsequently be determined according to this brightness and the previously acquired illumination information of the multiple historical face images.
S240: Determine the preset brightness-illumination correspondence of the key feature points according to the historical illumination information of the historical face images and the brightness of the feature points corresponding to the key feature points in the historical face images.
Optionally, when the brightness of the feature points corresponding to a key feature point in the historical face images is obtained and the illumination information of the multiple historical face images has been determined, for each key feature point, the preset brightness-illumination correspondence of that key feature point can be determined according to the brightness of its corresponding feature points in the multiple historical face images and the illumination information of the corresponding historical face images; in the same way, the preset brightness-illumination correspondences respectively corresponding to the multiple key feature points of the three-dimensional face mesh template are determined.
Optionally, in this embodiment, determining the preset brightness-illumination correspondence of the key feature points according to the historical illumination information of the historical face images and the brightness of the corresponding feature points may include:
S241: For each historical face image, determine the initial brightness-illumination correspondence of the key feature points according to the historical illumination information of the historical face image and the brightness of the feature points corresponding to the key feature points in the historical face image.
In an embodiment, since the training set includes a large number of historical face images, the same processing operation is performed on each historical face image when the preset brightness-illumination correspondence of the key feature points is determined. For each historical face image, when the brightness of the feature points corresponding to the multiple key feature points of the template in the historical face image is obtained, the initial brightness-illumination correspondences of the multiple key feature points corresponding to the historical face image can be determined according to the illumination information of the historical face image and the brightness of the corresponding feature points.
Illustratively, the illumination information in this embodiment can be represented by spherical harmonic (SH) lighting, which is a vector composed of illumination feature values in the corresponding dimensions and capable of representing the corresponding illumination information. Let $x_j$ be a key feature point of the three-dimensional face mesh template, with $j$ the index of the key feature point, and let $I_{j,i}$ be the brightness of the feature point corresponding to $x_j$ in historical face image $i$. The initial brightness-illumination correspondence of this historical face image is then expressed through SH lighting as

$$I_{j,i} = T_{j,i}^{\top}\, Y(\theta_i, \varphi_i),$$

where $Y(\theta_i, \varphi_i) = \big(y_l^m(\theta_i, \varphi_i)\big)_{0 \le l \le L,\; -l \le m \le l}$ is the SH lighting representation for each key feature point corresponding to this historical face image, and $(\theta_i, \varphi_i)$ are the spherical coordinates of the light direction in the illumination information of this historical face image. Here

$$y_l^m(\theta, \varphi) = \begin{cases} \sqrt{2}\, K_l^m \cos(m\varphi)\, P_l^m(\cos\theta), & m > 0, \\ K_l^0\, P_l^0(\cos\theta), & m = 0, \\ \sqrt{2}\, K_l^m \sin(-m\varphi)\, P_l^{-m}(\cos\theta), & m < 0, \end{cases} \qquad K_l^m = \sqrt{\frac{2l+1}{4\pi}\,\frac{(l-|m|)!}{(l+|m|)!}}.$$
In addition, $P_l^m(x)$ is the associated Legendre polynomial, computed recursively as follows:
1) $P_m^m(x) = (-1)^m\,(2m-1)!!\,(1-x^2)^{m/2}$, where $x!!$ is the product of all positive odd numbers not exceeding $x$;
2) $P_{m+1}^m(x) = x\,(2m+1)\,P_m^m(x)$;
3) $P_l^m(x) = \dfrac{(2l-1)\,x\,P_{l-1}^m(x) - (l+m-1)\,P_{l-2}^m(x)}{l-m}.$
Here $L$ is the degree of the SH lighting representation; any integer value may be selected for it, which is not limited in this embodiment.
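The recurrences and basis above can be transcribed directly; the following sketch (ours) evaluates $P_l^m$ and the real SH basis $y_l^m$:

```python
import math

def assoc_legendre(l, m, x):
    """P_l^m(x) evaluated via the recurrences 1)-3) above."""
    # 1) seed: P_m^m(x) = (-1)^m (2m-1)!! (1 - x^2)^(m/2)
    pmm = ((-1.0) ** m) * math.prod(range(1, 2 * m, 2)) * (1.0 - x * x) ** (m / 2.0)
    if l == m:
        return pmm
    # 2) lift: P_{m+1}^m(x) = x (2m+1) P_m^m(x)
    pmmp1 = x * (2 * m + 1) * pmm
    if l == m + 1:
        return pmmp1
    # 3) general recurrence up to degree l
    for ll in range(m + 2, l + 1):
        pll = ((2 * ll - 1) * x * pmmp1 - (ll + m - 1) * pmm) / (ll - m)
        pmm, pmmp1 = pmmp1, pll
    return pmmp1

def sh_basis(L, theta, phi):
    """Real SH basis values y_l^m(theta, phi) for l = 0..L, m = -l..l,
    flattened into a list of (L + 1) ** 2 values."""
    values = []
    for l in range(L + 1):
        for m in range(-l, l + 1):
            k = math.sqrt((2 * l + 1) / (4 * math.pi)
                          * math.factorial(l - abs(m)) / math.factorial(l + abs(m)))
            if m == 0:
                values.append(k * assoc_legendre(l, 0, math.cos(theta)))
            elif m > 0:
                values.append(math.sqrt(2.0) * k * math.cos(m * phi)
                              * assoc_legendre(l, m, math.cos(theta)))
            else:
                values.append(math.sqrt(2.0) * k * math.sin(-m * phi)
                              * assoc_legendre(l, -m, math.cos(theta)))
    return values
```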
At this point, after the spherical coordinates $(\theta_i, \varphi_i)$ of the light direction in the illumination information of each historical face image are obtained, the SH lighting representations of the multiple key feature points in that historical face image can be obtained according to the above process. Then, according to the following formula:

$$I_{j,i} = T_j^{\top}\, Y(\theta_i, \varphi_i),$$

the correspondence between the initial brightness of the key feature points and the illumination can be obtained for each historical face image, where $T_j$ is the preset brightness-illumination correspondence of the key feature points.
S242: Process the initial brightness-illumination correspondences of the key feature points respectively corresponding to the multiple historical face images by the least squares method to obtain the preset brightness-illumination correspondence of the key feature points.
Optionally, when the initial brightness-illumination correspondences of each key feature point respectively corresponding to the multiple historical face images, that is, the relations $I_{j,i} = T_{j,i}^{\top} Y(\theta_i, \varphi_i)$ in the above formula, are obtained, the initial correspondences respectively corresponding to the multiple historical face images can be processed by the least squares method; that is, the relations obtained for each key feature point in the multiple historical face images are processed to find, by minimizing the sum of squared errors, the best match between the brightness and the illumination of the multiple key feature points over the multiple historical face images, thereby obtaining the preset brightness-illumination correspondence of each key feature point, such that the sum of squared errors between the initial brightness-illumination correspondences obtained for different historical face images and the preset brightness-illumination correspondence is minimized.
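A least-squares fit of this kind can be sketched with NumPy (illustrative only; matrix names are ours):

```python
import numpy as np

def fit_preset_correspondence(Y, I_j):
    """Least-squares fit of the preset brightness-illumination
    correspondence T_j of one key feature point.

    Y:   (num_images, num_sh) matrix whose row i is the SH lighting
         representation Y(theta_i, phi_i) of historical image i.
    I_j: (num_images,) brightness of this point's corresponding feature
         point in each historical image.
    Minimizes sum_i (I_{j,i} - T_j . Y_i)^2 over T_j.
    """
    T_j, *_ = np.linalg.lstsq(Y, I_j, rcond=None)
    return T_j
```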
In the technical solution provided in this embodiment, the three-dimensional face mesh template is deformed according to a large number of historical face images to obtain the historical face mesh models respectively reconstructed for the multiple historical face images; the preset brightness-illumination correspondence of each key feature point is then obtained from the historical illumination information of the multiple historical face images and the brightness of the feature points corresponding to the multiple key feature points in the multiple historical face images. Acquiring a large number of historical face images in advance in this data-driven way improves the accuracy of the preset brightness-illumination correspondence of the key feature points.
Embodiment 3
FIG. 3 is a schematic diagram of the principle of the illumination detection process for a face image according to an embodiment of the present application. This embodiment is an optional embodiment based on the above embodiments. In this embodiment, the illumination detection process in the face image is explained.
Optionally, this embodiment may include S310 to S370.
S310: Acquire a face image to be detected and a three-dimensional face mesh template.
S320: Deform the three-dimensional face mesh template according to the face image to obtain a reconstructed face mesh model.
S330: Acquire the feature mark points in the three-dimensional face mesh template as the key feature points.
In an embodiment, since preset key feature points exist in the three-dimensional face mesh template, when the template is deformed according to the face image to obtain the reconstructed face mesh model, the preset key feature points are determined in the template. Since in this embodiment the corresponding feature points are manually marked in the template, the corresponding feature mark points can be acquired from the template as the preset key feature points.
S340: Determine the mapping positions of the key feature points in the face image according to the deformation positions of the key feature points of the three-dimensional face mesh template in the reconstructed face mesh model.
Optionally, when the key feature points are determined in the template, during the deformation of the template the key feature points also move correspondingly with the deformation and satisfy certain constraints, ensuring the consistency of the movement of multiple feature points during the deformation; the deformation positions of the key feature points in the reconstructed face mesh model, after they have moved with the deformation, can then be determined. Since the face pose of the reconstructed face mesh model is approximately the same as that in the face image to be detected, the mapping positions of the moved key feature points in the face image can be determined according to these deformation positions.
S350: Acquire the brightness of the feature points at the mapping positions in the face image as the brightness of the feature points corresponding to the key feature points in the face image.
In an embodiment, after the key feature points have moved with the deformation of the template and their mapping positions in the face image are determined, the feature points at the mapping positions in the face image can be directly acquired as the feature points corresponding to the key feature points in the face image; the brightness of the feature points at the mapping positions, acquired directly from the face image, can then be taken as the brightness of the feature points corresponding to the key feature points in the face image.
S360: Determine the spherical harmonic illumination coefficient of the face image according to the preset brightness-illumination correspondence of the key feature points and the brightness of the feature points corresponding to the key feature points in the face image.
Optionally, in this embodiment the illumination information of the face image can be represented by SH lighting. When the brightness of the feature points corresponding to the multiple key feature points in the face image is obtained, the predetermined preset brightness-illumination correspondences of the multiple key feature points can be acquired, so that the SH illumination coefficient of the face image is determined according to the following formula:

$$I_j = T_j^{\top}\, \lambda, \quad j = 1, 2, \dots,$$

where $T_j$ is the preset brightness-illumination correspondence of the key feature points, $I_j$ is the brightness of the feature point corresponding to key feature point $x_j$ in the face image, and $\lambda$ is the SH illumination coefficient of the face image.
S370: Determine the illumination information of the face image according to the spherical harmonic illumination coefficient.
Optionally, when the SH illumination coefficient $\lambda$ of the face image is obtained, the illumination information of the face image can be determined through this coefficient; for example, components can be selected from $\lambda$ as the light direction of the face image.
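An illustrative NumPy sketch of this solve; reading the light direction from the first-order coefficients is our assumption of one possible choice, not a statement of the patent's exact selection rule:

```python
import numpy as np

def detect_illumination(T, I):
    """Recover the SH illumination coefficients lambda of a face image
    from I_j = T_j . lambda.

    T: (num_key_points, num_sh) stacked preset correspondences T_j.
    I: (num_key_points,) brightness sampled at the mapped positions.
    """
    lam, *_ = np.linalg.lstsq(T, I, rcond=None)
    # Assumption of this sketch: with the usual SH ordering, the three
    # first-order coefficients (indices 1..3) behave like a direction
    # vector of a dominant distant light.
    direction = lam[1:4] / (np.linalg.norm(lam[1:4]) + 1e-12)
    return lam, direction
```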
In the technical solution provided in this embodiment, the three-dimensional face mesh template is deformed according to the face image to be detected to obtain a reconstructed face mesh model; the brightness of the feature points corresponding to the key feature points in the face image is determined according to the deformation positions of the key feature points in the reconstructed face mesh model; and the illumination information of the face image is then determined according to this brightness and the preset brightness-illumination correspondence. There is no need to perform a heavy image acquisition process, and the position matching between the face model and the face image is high. This solves the problem in the related art that illumination detection through a three-dimensional face model has large errors due to the limitation of the face shapes of different face images, simplifies the illumination detection operation in the face image, and improves the detection efficiency and accuracy of illumination detection.
Embodiment 4
FIG. 4 is a schematic structural diagram of an illumination detection apparatus for a face image according to an embodiment of the present application. As shown in FIG. 4, the apparatus may include an image acquisition module 410, a face reconstruction module 420, a brightness determination module 430 and an illumination information determination module 440.
The image acquisition module 410 is configured to acquire a face image to be detected and a three-dimensional face mesh template.
The face reconstruction module 420 is configured to deform the three-dimensional face mesh template according to the face image to obtain a reconstructed face mesh model.
The brightness determination module 430 is configured to determine, according to the deformation positions of the key feature points of the three-dimensional face mesh template in the reconstructed face mesh model, the brightness of the feature points corresponding to the key feature points in the face image.
The illumination information determination module 440 is configured to determine the illumination information of the face image according to the preset brightness-illumination correspondence of the key feature points and the brightness of the feature points corresponding to the key feature points in the face image.
In the technical solution provided in this embodiment, the three-dimensional face mesh template is deformed according to the face image to be detected to obtain a reconstructed face mesh model; the brightness of the feature points corresponding to the key feature points in the face image is determined according to the deformation positions of the key feature points in the reconstructed face mesh model; and the illumination information of the face image is then determined according to this brightness and the preset brightness-illumination correspondence, without performing a heavy image acquisition process and with a high position matching between the face model and the face image, which solves the problem in the related art that illumination detection through a three-dimensional face model has large errors due to the limitation of the face shapes of different face images, simplifies the illumination detection operation in the face image, and improves the detection efficiency and accuracy of illumination detection.
The illumination detection apparatus for a face image provided in this embodiment is applicable to the illumination detection method for a face image provided in any of the above embodiments, and has corresponding functions and beneficial effects.
Embodiment 5
FIG. 5 is a schematic structural diagram of a device according to an embodiment of the present application. As shown in FIG. 5, the device includes a processor 50, a storage apparatus 51 and a communication apparatus 52; the number of processors 50 in the device may be one or more, and one processor 50 is taken as an example in FIG. 5; the processor 50, the storage apparatus 51 and the communication apparatus 52 in the device may be connected through a bus or in other ways, and connection through a bus is taken as an example in FIG. 5.
As a computer-readable storage medium, the storage apparatus 51 may be configured to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the methods provided in the embodiments of the present application. The processor 50 runs the software programs, instructions and modules stored in the storage apparatus 51, thereby executing various functional applications and data processing of the device, that is, implementing the above method.
The communication apparatus 52 may be configured to implement network connection or mobile data connection between devices.
The device provided in this embodiment may be configured to execute the method provided in any of the above embodiments, and has corresponding functions and effects.
Embodiment 6
An embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the method in any of the above embodiments can be implemented. The method may include:
acquiring a face image to be detected and a three-dimensional face mesh template;
deforming the three-dimensional face mesh template according to the face image to obtain a reconstructed face mesh model;
determining, according to the deformation positions of the key feature points of the three-dimensional face mesh template in the reconstructed face mesh model, the brightness of the feature points corresponding to the key feature points in the face image;
determining the illumination information of the face image according to the preset brightness-illumination correspondence of the key feature points and the brightness of the feature points corresponding to the key feature points in the face image.
For the storage medium containing computer-executable instructions provided in the embodiments of the present application, the computer-executable instructions are not limited to the method operations described above and can also execute the method provided by any embodiment of the present application.
In the embodiments of the above apparatus, the included units and modules are only divided according to functional logic, but the division is not limited to the above, as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for the convenience of distinguishing them from each other.

Claims (10)

  1. 一种人脸图像的光照检测方法,包括:
    获取待检测的人脸图像以及三维人脸网格模板;
    根据所述人脸图像对所述三维人脸网格模板进行形变,得到重建的人脸网格模型;
    根据所述三维人脸网格模板中的关键特征点在所述重建的人脸网格模型中的形变位置,确定所述关键特征点在所述人脸图像中对应的特征点的亮度;
    根据所述关键特征点的预设亮度与光照的对应关系,以及,所述关键特征点在所述人脸图像中对应的特征点的亮度,确定所述人脸图像的光照信息。
  2. 根据权利要求1所述的方法,其中,在所述获取待检测的人脸图像以及三维人脸网格模板之前,还包括:
    获取历史人脸图像以及所述三维人脸网格模板,并确定所述历史人脸图像的历史光照信息;
    根据所述历史人脸图像对所述三维人脸网格模板进行形变,得到重建的历史人脸网格模型;
    根据所述三维人脸网格模板中的关键特征点在所述重建的历史人脸网格模型中的形变位置,确定所述关键特征点在所述历史人脸图像中对应的特征点的亮度;
    根据所述历史人脸图像的历史光照信息,以及所述关键特征点在所述历史人脸图像中对应的特征点的亮度,确定所述关键特征点的预设亮度与光照的对应关系。
  3. 根据权利要求2所述的方法,其中,所述根据所述历史人脸图像的历史光照信息,以及所述关键特征点在所述历史人脸图像中对应的特征点的亮度,确定所述关键特征点的预设亮度与光照的对应关系,包括:
    针对每一幅历史人脸图像,根据所述历史人脸图像的历史光照信息,以及所述关键特征点在所述历史人脸图像中对应的特征点的亮度,确定所述关键特征点的初始亮度与光照的对应关系;
    采用最小二乘法对多幅所述历史人脸图像分别对应的所述关键特征点的初始亮度与光照的对应关系进行处理,得到所述关键特征点的预设亮度与光照的对应关系。
  4. 根据权利要求1所述的方法,其中,所述根据所述三维人脸网格模板中的关键特征点在所述重建的人脸网格模型中的形变位置,确定所述关键特征点在所述人脸图像中对应的特征点的亮度,包括:
    根据所述三维人脸网格模板中的关键特征点在所述重建的人脸网格模型中的形变位置,确定所述关键特征点在所述人脸图像中的映射位置;
    获取所述人脸图像中的所述映射位置处对应的特征点的亮度,作为所述关键特征点在所述人脸图像中对应的特征点的亮度。
  5. 根据权利要求4所述的方法,在所述根据所述三维人脸网格模板中的关键特征点在所述重建的人脸网格模型中的形变位置,确定所述关键特征点在所述人脸图像中的映射位置之前,还包括:
    获取所述三维人脸网格模板中的特征标记点,作为所述关键特征点。
  6. 根据权利要求1至5任一项所述的方法,其中,所述人脸图像为灰度图像,所述关键特征点在所述人脸图像中对应的特征点的亮度为所述特征点的像素灰度值。
  7. 根据权利要求1至5任一项所述的方法,其中,所述根据所述关键特征点的预设亮度与光照的对应关系,以及,所述关键特征点在所述人脸图像中对应的特征点的亮度,确定所述人脸图像的光照信息,包括:
    根据所述关键特征点的预设亮度与光照的对应关系,以及,所述关键特征点在所述人脸图像中对应的特征点的亮度,确定所述人脸图像的球谐光照系数;
    根据所述球谐光照系数确定所述人脸图像的光照信息。
  8. An illumination detection apparatus for a face image, comprising:
    an image acquisition module configured to acquire a face image to be detected and a three-dimensional face mesh template;
    a face reconstruction module configured to deform the three-dimensional face mesh template according to the face image to obtain a reconstructed face mesh model;
    a brightness determination module configured to determine, according to deformed positions of key feature points of the three-dimensional face mesh template in the reconstructed face mesh model, brightness of feature points in the face image corresponding to the key feature points; and
    an illumination information determination module configured to determine illumination information of the face image according to a preset brightness-to-illumination correspondence of the key feature points and the brightness of the feature points in the face image corresponding to the key feature points.
  9. A device, comprising:
    one or more processors; and
    a storage apparatus configured to store one or more programs,
    wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1 to 7.
  10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 7.
PCT/CN2019/123006 2018-12-28 2019-12-04 Illumination detection method and apparatus for face image, device and storage medium WO2020134925A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
SG11202106040PA SG11202106040PA (en) 2018-12-28 2019-12-04 Illumination detection method and apparatus for facial image, and device and storage medium
US17/417,003 US11908236B2 (en) 2018-12-28 2019-12-04 Illumination detection method and apparatus for face image, and device and storage medium
EP19905268.9A EP3905103A4 (en) 2018-12-28 2019-12-04 METHOD AND DEVICE FOR ILLUMINATION DETECTION FOR FACE IMAGES, AND DEVICE AND STORAGE MEDIUM

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811627209.5A CN111382618B (zh) 2018-12-28 2018-12-28 Illumination detection method and apparatus for a face image, device and storage medium
CN201811627209.5 2018-12-28

Publications (1)

Publication Number Publication Date
WO2020134925A1 true WO2020134925A1 (zh) 2020-07-02

Family

ID=71125693

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/123006 WO2020134925A1 (zh) 2018-12-28 2019-12-04 Illumination detection method and apparatus for face image, device and storage medium

Country Status (5)

Country Link
US (1) US11908236B2 (zh)
EP (1) EP3905103A4 (zh)
CN (1) CN111382618B (zh)
SG (1) SG11202106040PA (zh)
WO (1) WO2020134925A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112581518A (zh) * 2020-12-25 2021-03-30 百果园技术(新加坡)有限公司 Eyeball registration method and apparatus based on a three-dimensional cartoon model, server, and medium
CN113205005B (zh) * 2021-04-12 2022-07-19 武汉大学 Face image hallucination method for low illumination and low resolution
CN113362221A (zh) * 2021-04-29 2021-09-07 南京甄视智能科技有限公司 Face recognition system and face recognition method for access control
CN113298948B (zh) * 2021-05-07 2022-08-02 中国科学院深圳先进技术研究院 Three-dimensional mesh reconstruction method, apparatus, device, and storage medium

Family Cites Families (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6072496A (en) * 1998-06-08 2000-06-06 Microsoft Corporation Method and system for capturing and representing 3D geometry, color and shading of facial expressions and other animated objects
US7755619B2 (en) * 2005-10-13 2010-07-13 Microsoft Corporation Automatic 3D face-modeling from video
KR100914847B1 (ko) * 2007-12-15 2009-09-02 한국전자통신연구원 Method and apparatus for generating a three-dimensional face model using multi-view image information
TW201023092A (en) * 2008-12-02 2010-06-16 Nat Univ Tsing Hua 3D face model construction method
US8861800B2 (en) * 2010-07-19 2014-10-14 Carnegie Mellon University Rapid 3D face reconstruction from a 2D image and methods using such rapid 3D face reconstruction
CN102509345B (zh) * 2011-09-30 2014-06-25 北京航空航天大学 Method for generating portrait-art light and shadow effects based on painter knowledge
US9317954B2 (en) * 2013-09-23 2016-04-19 Lucasfilm Entertainment Company Ltd. Real-time performance capture with on-the-fly correctives
US10417824B2 (en) * 2014-03-25 2019-09-17 Apple Inc. Method and system for representing a virtual object in a view of a real environment
KR20150113751A (ko) * 2014-03-31 2015-10-08 (주)트라이큐빅스 Method and apparatus for acquiring a three-dimensional face model using a portable camera
CN104915641B (zh) 2015-05-27 2018-02-02 上海交通大学 Method for obtaining the light source orientation of a face image based on the Android platform
US10796480B2 (en) * 2015-08-14 2020-10-06 Metail Limited Methods of generating personalized 3D head models or 3D body models
US9639737B2 (en) * 2015-09-29 2017-05-02 Eth Zürich (Eidgenöessische Technische Hochschule Zürich) Methods and systems of performing performance capture using an anatomically-constrained local model
US9959661B2 (en) * 2015-12-02 2018-05-01 Samsung Electronics Co., Ltd. Method and device for processing graphics data in graphics processing unit
CN105719326A (zh) * 2016-01-19 2016-06-29 华中师范大学 Photorealistic face generation method based on a single photograph
US9858296B2 (en) * 2016-03-31 2018-01-02 Adobe Systems Incorporated Representative image selection for image management using face recognition
CN106157372B (zh) * 2016-07-25 2019-01-25 深圳市唯特视科技有限公司 3D face mesh reconstruction method based on video images
US10930086B2 (en) * 2016-11-01 2021-02-23 Dg Holdings, Inc. Comparative virtual asset adjustment systems and methods
US10497172B2 (en) * 2016-12-01 2019-12-03 Pinscreen, Inc. Photorealistic facial texture inference using deep neural networks
CN107506714B (zh) * 2017-08-16 2021-04-02 成都品果科技有限公司 Face image relighting method
CN107680158A (zh) * 2017-11-01 2018-02-09 长沙学院 Three-dimensional face reconstruction method based on a convolutional neural network model
CN107909640B (zh) * 2017-11-06 2020-07-28 清华大学 Deep-learning-based face relighting method and apparatus
CN108492373B (zh) * 2018-03-13 2019-03-08 齐鲁工业大学 Geometric modeling method for face bas-relief
CN108898665A (zh) * 2018-06-15 2018-11-27 上饶市中科院云计算中心大数据研究院 Three-dimensional face reconstruction method, apparatus and device, and computer-readable storage medium
CN109035388B (zh) * 2018-06-28 2023-12-05 合肥的卢深视科技有限公司 Three-dimensional face model reconstruction method and apparatus
CN109377563A (zh) * 2018-11-29 2019-02-22 广州市百果园信息技术有限公司 Face mesh model reconstruction method, apparatus, device and storage medium
CN109919876B (zh) * 2019-03-11 2020-09-01 四川川大智胜软件股份有限公司 Three-dimensional real-face modeling method and three-dimensional real-face photographing system
US11765332B2 (en) * 2021-03-02 2023-09-19 True Meeting Inc. Virtual 3D communications with participant viewpoint adjustment
US11663798B2 (en) * 2021-10-13 2023-05-30 Mitsubishi Electric Research Laboratories, Inc. System and method for manipulating two-dimensional (2D) images of three-dimensional (3D) objects
US11562597B1 (en) * 2022-06-22 2023-01-24 Flawless Holdings Limited Visual dubbing using synthetic models

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070127787A1 (en) * 2005-10-24 2007-06-07 Castleman Kenneth R Face recognition system and method
CN101159015A (zh) * 2007-11-08 2008-04-09 清华大学 Method for recognizing a two-dimensional face image
CN101320484A (zh) * 2008-07-17 2008-12-10 清华大学 Three-dimensional face recognition method based on fully automatic face positioning
CN108898068A (zh) * 2018-06-06 2018-11-27 腾讯科技(深圳)有限公司 Face image processing method and apparatus, and computer-readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
See also references of EP3905103A4 *
XIU-JUAN CHAI: "Pose and Illumination Invariant Face Recognition Based on 3D Face Reconstruction", Journal of Software (Ruanjian Xuebao), Beijing, CN, vol. 17, no. 3, 31 March 2006 (2006-03-31), pages 525-534, XP055509101 *

Also Published As

Publication number Publication date
US20220075992A1 (en) 2022-03-10
EP3905103A4 (en) 2022-03-02
US11908236B2 (en) 2024-02-20
SG11202106040PA (en) 2021-07-29
EP3905103A1 (en) 2021-11-03
CN111382618A (zh) 2020-07-07
CN111382618B (zh) 2021-02-05

Similar Documents

Publication Publication Date Title
WO2021077720A1 (zh) Method and apparatus for acquiring a three-dimensional model of an object, electronic device, and system
JP7249390B2 (ja) Method and system for real-time 3D capture and live feedback using a monocular camera
WO2020134925A1 (zh) Illumination detection method and apparatus for face image, device and storage medium
WO2022121645A1 (zh) Method for generating realistic virtual objects in a teaching scene
US10977818B2 (en) Machine learning based model localization system
US9058661B2 (en) Method for the real-time-capable, computer-assisted analysis of an image sequence containing a variable pose
CN107484428B (zh) Method for displaying an object
US20150206003A1 (en) Method for the Real-Time-Capable, Computer-Assisted Analysis of an Image Sequence Containing a Variable Pose
JP2019057248A (ja) Image processing system, image processing apparatus, image processing method, and program
JP7015152B2 (ja) Processing apparatus, method, and program for keypoint data
CN104794722A (zh) Method for computing a three-dimensional naked-body model of a clothed human body using a single Kinect
CN111243093A (zh) Method, apparatus, device and storage medium for generating a three-dimensional face mesh
JP2002133446A (ja) Face image processing method and system
CN108564619B (zh) Photorealistic three-dimensional face reconstruction method based on two photographs
US11928778B2 (en) Method for human body model reconstruction and reconstruction system
CN113628327A (zh) Three-dimensional head reconstruction method and device
CN112784621A (zh) Image display method and device
WO2023066120A1 (zh) Image processing method and apparatus, electronic device, and storage medium
US11080920B2 (en) Method of displaying an object
WO2019042028A1 (zh) Omnidirectional spherical light field rendering method
US20220277586A1 (en) Modeling method, device, and system for three-dimensional head model, and storage medium
Jian et al. Realistic face animation generation from videos
RU2778288C1 (ru) Method and apparatus for determining the illumination of a face image, device and data storage medium
US20240127539A1 (en) Mechanical weight index maps for mesh rigging
US20220309733A1 (en) Surface texturing from multiple cameras

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19905268

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019905268

Country of ref document: EP

Effective date: 20210728