WO2016045711A1 - A face pose rectification method and apparatus - Google Patents

A face pose rectification method and apparatus

Info

Publication number
WO2016045711A1
Authority
WO
WIPO (PCT)
Prior art keywords
pose
depth map
image data
model
face
Prior art date
Application number
PCT/EP2014/070282
Other languages
French (fr)
Inventor
Yann Rodriguez
François Moulin
Sébastien PICCAND
Original Assignee
Keylemon Sa
Priority date
Filing date
Publication date
Application filed by Keylemon Sa filed Critical Keylemon Sa
Priority to EP14772329.0A priority Critical patent/EP3198522A1/en
Priority to KR1020177010873A priority patent/KR20170092533A/en
Priority to PCT/EP2014/070282 priority patent/WO2016045711A1/en
Publication of WO2016045711A1 publication Critical patent/WO2016045711A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • G06V20/653Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20068Projection on vertical or horizontal image axis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20182Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • A face pose rectification method and apparatus
  • the present invention relates to a face pose rectification method and apparatus
  • Face recognition involves the analysis of data representing a test image (or a set of test images) of a human face and its comparison with a database of reference images.
  • the test image is usually a 2D image captured with a common 2D camera while the reference image is a 2D image or sometimes a 3D image captured with a depth camera for example.
  • US8055028 concerns a method of normalizing a non-frontal 2D facial image to a frontal facial image.
  • the method comprises: determining a pose of a non-frontal image of an object; performing smoothing transformation on the non-frontal image of the object, thereby generating a smoothed object image; and synthesizing a frontal image of the object by using the pose determination result and the smoothed object image.
  • the pose is determined by obtaining a first mean distance between object feature points existing in both sides of the centre line of the non-frontal image.
  • US8199979 discloses a method for classifying and archiving images including face regions that are acquired with a 2D image
  • a face detection module identifies a group of pixels corresponding to a face region. The face orientation and pose are then determined. In case of a half-profile face, a normalization module transforms the candidate face region into a 3D space, and then rotates it until the appropriate pose correction is made. Alternatively, the texture, colour and feature regions of the 2D face region are mapped onto a 3D model which is then rotated to correct the pose. This results however in a massive deformation when the 2D image is mapped onto a 3D model of a different face.
  • US7929775 discloses a method for matching portions of a 2D image to a 3D class model.
  • the method comprises identifying image features in the 2D image; computing an aligning transformation between the class model and the image; and comparing, under the aligning transformation, class parts of the class model with the image features.
  • US7289648 discloses a method for automatically modelling a three dimensional object, such as a face, from a single image.
  • the system and method according to the invention constructs one or more three dimensional (3D) face models using a single image. It can also be used as a tool to generate a database of faces with various poses which are needed to train most face recognition systems.
  • US2013156262 suggests an estimation of the pose of an object by defining a set of pair features as pairs of geometric primitives, wherein the geometric primitives include oriented surface points, oriented boundary points, and boundary line segments.
  • Model pair features are determined based on the set of pair features for a model of the object.
  • Scene pair features are determined based on the set of pair features from data acquired by a 3D sensor, and then the model pair features are matched with the scene pair features to estimate the pose of the object.
  • US8660306 discloses a method for the correction of a human pose determined from depth data.
  • the method comprises receiving depth image data, obtaining an initial estimated skeleton of an articulated object from depth image data, applying a random forest subspace regression function (a function that utilizes a plurality of random splitting/projection decision trees) to the initial estimated skeleton, and determining the representation of the pose based upon a result of applying the random forest subspace regression to the initial estimated skeleton.
  • the method is in particular adapted to the estimation of pose of the whole body.
  • US6381346 describes a system for generating facial images, indexing those images by composite codes and for searching for similar two-dimensional facial images.
  • 3D images of human faces are generated from a data repository of 3D facial feature surface shapes. These shapes are organized by facial feature parts. By assembling a shape for each facial part, a 3D facial image is formed.
  • US8406484 concerns a facial recognition apparatus, comprising: a two-dimensional information acquisition unit to acquire two-dimensional image information of a subject; a three-dimensional information
  • acquisition unit to acquire three-dimensional image information of the subject;
  • a user information database to store an elliptical model
  • control unit to perform facial recognition using the two-dimensional image information of the subject, to determine whether a recognized face is the user's face, to match the elliptical model of the user to the three-dimensional image
  • US7756325 discloses an algorithm for estimating the 3D shape of a 3-dimensional object, such as a human face, based on information retrieved from a single photograph. Beside the pixel intensity, the invention uses various image features in a multi-features fitting algorithm (MFF).
  • MFF multi-features fitting algorithm
  • EP1039417 concerns a method of processing an image of a three- dimensional object, including the steps of providing a morphable object model derived from a plurality of 3D images, matching the morphable object model to at least one 2D object image, and providing the matched morphable object model as a 3D representation of the object.
  • US8553973 concerns other methods and systems for modelling 3D objects (for example, human faces).
  • a popular example of a depth sensor that produces RGB-D datasets is the Kinect input device proposed by Microsoft for Xbox 360, Xbox One and Windows PC (all trademarks of Microsoft, Inc).
  • depth sensors produce 2.5D image datasets that include an indication of depth (or distance to the light source) for each pixel of the image, but no indication about hidden or occluded elements, such as the back of the head for example.
  • test images are captured with depth sensors, and then compared with existing 2D images.
  • Such a method would have the advantage of using modern depth sensors for the acquisition of RGB-D image data, and to compare them with widely available 2D reference images.
  • a pose rectification method for rectifying a pose in data representing face images comprising the steps of:
  • NIR near infrared
  • the head pose is estimated by fitting the depth map with an existing 3D model, so as to estimate the orientation of the depth map. This pose estimation is particularly robust.
  • the 3D model could be a generic model, i.e., user-independent model.
  • the 3D model could be a user dependent model, for example a 3D model of the head of the user whose identity needs to be verified.
  • the 3D model could be a gender-specific model, or an ethnic-specific model, or an age-specific model, and selected based on an a priori knowledge of the gender, ethnicity and/or age.
  • the existing 3D model is used for estimating the pose of the face.
  • the 2D image data is not mapped onto this 3D model, but on the depth map.
  • the method may comprise a further step of classifying the image, for example classifying the 2D projected image.
  • the classification may include face authentication, face identification, for example.
  • the method may comprise a step of further processing the 2D projected image.
  • the acquisition step may comprise a temporal and/or spatial smoothing of points in the depth map, in order to remove noise generated by the depth sensor.
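The temporal and spatial smoothing mentioned above could be sketched as follows. This is an illustrative implementation, not the patent's own code: a box-filter low-pass in the spatial domain, and an exponential moving average across successive frames (the kernel size `k` and blending factor `alpha` are assumptions).

```python
import numpy as np

def smooth_spatial(depth, k=3):
    """Box-filter low-pass smoothing of a depth map (spatial denoising)."""
    pad = k // 2
    padded = np.pad(depth, pad, mode="edge")
    out = np.zeros_like(depth, dtype=float)
    h, w = depth.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def smooth_temporal(prev, current, alpha=0.5):
    """Exponential moving average across successive frames (temporal low-pass)."""
    return alpha * current + (1.0 - alpha) * prev
```

In practice a separable or integral-image filter would replace the explicit loops; the sketch favours clarity over speed.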
  • the step of estimating the pose may include a first step of performing a rough pose estimation, for example based on a random regression forest method.
  • the step of estimating the pose may include a second step of fine pose estimation.
  • the fine pose estimation may be based on the result of the rough pose estimation.
  • the fine pose estimation may be based for example on an alignment of the depth map with a 3D model, using rigid Iterative Closest Point (ICP) methods.
  • the method may further include a step of basic face detection before said pose estimation, in order to eliminate at least some portions of the 2D near-infrared image, and/or of said 2D visible light image, and/or of said depth map which do not belong to the face.
  • the method may further include a step of foreground extraction in order to eliminate portions of said 2D near-infrared image, and/or of said 2D visible light image, and/or of said depth map which do not belong to the foreground.
  • the step of aligning the depth map with an existing 3D model of a head may comprise scaling the depth map, so that some of its dimensions (for example the maximal height) match corresponding dimensions of the 3D model.
  • the step of fitting the depth map with an existing 3D model of a head may comprise warping the depth map and/or the 3D model.
  • the method may comprise a further step of correcting the illumination of the 2D visible light image dataset based on the 2D near- infrared image dataset. Shadows or bright zones in the 2D visible light image dataset that do not appear in the 2D near-infrared image dataset may thus be corrected.
  • the method may comprise a further step of flagging portions of said pose-rectified 2D projected image data which correspond to portions not visible on the depth map and/or on the 2D image.
  • the method may comprise a further step of reconstructing portions of the pose-rectified 2D projected image data which correspond to unknown portions of the depth map.
  • Fig. 1 is a flowchart of the method of the invention.
  • Fig. 2 is a schematic view of an apparatus according to the invention.
  • Fig. 3 illustrates an example of a 2D visible light image of a face.
  • Fig. 4 illustrates an example of a 2D near-infrared image of the face of figure 3.
  • Fig. 5 illustrates an example of a representation of a depth map of the face of figure 3.
  • Fig. 6 illustrates an example of a representation of a textured depth map of the face of figure 3.
  • Fig. 7 illustrates an example of a generic head model used for the head pose estimation.
  • Fig. 8 illustrates the step of fine pose estimation by aligning the depth map within the 3D model.
  • Fig. 9 illustrates a pose rectified 2D projection of the depth map, wherein the missing portions corresponding to portions of the head not present in the depth map are flagged.
  • Fig. 10 illustrates a pose rectified 2D projection of the 2.5D dataset, wherein the missing portions corresponding to portions of the head not present in the depth map are reconstructed.
  • Fig. 11 illustrates a pose rectified 2D projection of the 2.5D dataset, wherein the missing portions corresponding to portions of the head not present in the 2.5D dataset are reconstructed and flagged.
  • FIG. 1 is a flowchart that schematically illustrates the main steps of an example of pose rectification method according to the invention.
  • the apparatus comprises a camera 101 for capturing an image of a user 100.
  • the camera 101 might be a depth camera, such as without restriction a Kinect camera (trademark of Microsoft), a time-of-flight camera, or any other camera able to generate a RGB-D data stream.
  • the camera 101 is connected to a processor 102 accessing a memory 104 and connected to a network over a network interface 103.
  • the memory 104 may include a permanent memory portion for storing computer code causing the processor to carry out at least some steps of the method of Figure 1.
  • the apparatus 101+102+104 may take the form of a mobile phone, personal navigation device, personal information manager (PIM), car equipment, gaming device, personal digital assistant (PDA), laptop, tablet, notebook and/or handheld computer, smart glasses, smart watch, smart TV, other wearable device, etc.
  • PIM personal information manager
  • PDA personal digital assistant
  • a test video stream is produced by the depth camera 101 in order to capture test images of the user 100 to be identified or otherwise classified.
  • the term "test images" designates images of a face whose pose needs to be rectified, typically during test (for identification or authentication), but also during enrolment.
  • Each frame of the test video stream preferably includes three temporally and spatially aligned datasets: i) a first (optional) dataset corresponding to a two dimensional (2D) visible light image of the face of the user 100 (such as a grayscale or RGB image for example).
  • a second (optional) dataset representing a 2D near-infrared (NIR) image of the face of the user 100 is illustrated on Figure 4.
  • a representation of such a depth map is illustrated on Figure 5.
  • Figure 6 is another representation 201 of a frame, in which the first RGB dataset is projected onto the depth map.
  • many low-cost depth sensors generate noisy depth maps, i.e. datasets where the depth value assigned to each point includes noise.
  • the detrimental influence of noise can be reduced by smoothing the depth map.
  • the smoothing may include a low-pass filtering of the depth map in the spatial and/or in the temporal (across successive frames) domain.
  • the portion of each dataset that represents the user's face is detected and isolated from the background and other elements of the image. This detection might be performed on any or all of the three datasets, and applied to any or all of those datasets.
  • this basic face detection is based, at least in part, on a thresholding of the depth map, in order to exclude pixels which are not in a predefined depth range, for example between 20 cm and 100 cm.
  • Other known algorithms could be used for extracting the foreground that represents the user face, and excluding the background, including for example algorithms based on colour detection.
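The depth-range thresholding described above can be sketched in a few lines. This is an illustrative reading of the text, not the patent's code; the 20 cm - 100 cm range is the example range given above, and sensor dropouts (depth 0) fall outside it and are excluded automatically.

```python
import numpy as np

def foreground_mask(depth, near=0.20, far=1.00):
    """Keep only pixels whose depth (in metres) lies in the expected face
    range [near, far]; background and invalid (zero-depth) pixels are
    excluded, mirroring the 20 cm - 100 cm example range in the text."""
    return (depth >= near) & (depth <= far)
```

The resulting boolean mask can then be applied to the visible-light, NIR and depth datasets alike, since the three datasets are spatially aligned.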
  • the head pose estimation step C of Figure 1 the pose of the user in the frame is estimated. In one embodiment, this estimation is performed in two successive steps:
  • a rough estimate of the head pose is determined.
  • the computation of this rough estimate uses a random forest algorithm, in order to quickly determine the head pose with a precision of a few degrees.
  • a method of rough head pose estimation with random regression forest is disclosed in G. Fanelli et al., "Real Time Head Pose Estimation with Random Regression Forests", Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, 617-624.
  • CVPR Computer Vision and Pattern Recognition
  • the location of the nose and/or of other key features of the face is also determined during this step.
  • a finer estimate of the head pose is determined.
  • This fine estimate could start from the previously determined rough estimate of the orientation and of the location of one point, such as nose location, in order to improve the speed and robustness.
  • the fine estimate could be computed by determining the orientation of a 3D model of a head (Figure 7) that best corresponds to the depth map.
  • the matching of the 3D model to the test depth map may be performed by minimizing a distance function between points of the depth map and corresponding points of the 3D model, for example using an Iterative Closest Point (ICP) method.
  • ICP Iterative Closest Point
  • in Figure 8, the test depth map (textured in this illustration) is aligned with the 3D model 200 (represented as a mesh in this illustration).
  • This alignment step preferably includes a scaling of the 3D model so that at least some of its dimensions correspond to the test depth map.
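The rigid ICP alignment described above can be illustrated with a minimal point-to-point implementation: brute-force nearest-neighbour correspondences and a Kabsch (SVD) solve for the best rigid transform. This is a sketch of the general technique, not the patent's implementation; point counts, iteration budget and the absence of scaling are simplifying assumptions.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping paired points
    src onto dst (Kabsch algorithm)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # guard against reflections
    t = cd - R @ cs
    return R, t

def icp(source, target, iterations=20):
    """Rigid point-to-point ICP: pair each source point with its nearest
    target point, solve for the best rigid transform, and repeat."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        # brute-force nearest-neighbour correspondences
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(axis=2)
        matched = target[d2.argmin(axis=1)]
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

A production system would use a k-d tree for the correspondences and a convergence test instead of a fixed iteration count; the rough pose estimate from the random forest step would serve as the initial transform.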
  • the 3D model 200 might be generic, i.e., user-independent.
  • the 3D model might be user-dependent, and retrieved from a database of user-dependent 3D models based for example on the assumed identity of the user 100.
  • a plurality of user-independent 3D models might also be stored and selected according to the assumed gender, age, or ethnicity of the user for example.
  • a personal 3D model is generated using a non-rigid Iterative Closest Point (ICP) method.
  • the 3D model may then comprise a mesh with some constraints on the position and/or relations between nodes, so as to allow some realistic and limited deformation in deformable portions of the head, for example in the lower part of the face.
  • the ICP method may try some deformations or morph of the model, in order to find the most likely orientation given all the possible deformations.
  • the output of the head pose estimation step may include a set of angles phi, theta, psi describing three rotations with regard to a given coordinate system.
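The three angles phi, theta, psi can be composed into a single rotation matrix describing the head pose. The Z-Y-X composition order below is an assumption for illustration; the patent does not fix a rotation convention.

```python
import numpy as np

def head_rotation(phi, theta, psi):
    """Compose three elementary rotations (Z-Y-X order, an assumed
    convention) into one head-pose rotation matrix."""
    cf, sf = np.cos(phi), np.sin(phi)
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(psi), np.sin(psi)
    Rz = np.array([[cf, -sf, 0.0], [sf, cf, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[ct, 0.0, st], [0.0, 1.0, 0.0], [-st, 0.0, ct]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    return Rz @ Ry @ Rx
```

The inverse of this matrix (its transpose, since it is orthonormal) rectifies the pose, i.e. rotates the textured depth map back to the frontal orientation.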
  • step D of Figure 1 the 2D textures (in the visible and NIR ranges) of the frame are mapped onto the depth map (UV mapping). It is possible to use only the greyscale value of the visible dataset. This mapping may be performed before or after the pose correction, and generates a textured depth map with a known orientation. Hidden parts, for example portions of the user's face which were hidden or occluded in the depth map, are either flagged as invalid, or reconstructed, for example by supposing a symmetry of the user's face.
  • This step might also comprise a correction of the illumination in the visible light and/or in the NIR image datasets.
  • the correction of illumination may include a correction of brightness, contrast, and/or white balance in the case of colour images.
  • the NIR dataset is used to remove or attenuate shadows and/or reflections in the visible light dataset, by compensating brightness variations that appear in portions of the visible light dataset but not in corresponding portions of the NIR dataset.
  • the textured 3D image is projected in 2D so as to generate at least one dataset representing a pose-rectified 2D projected image, in the visible and/or in the NIR range. Various projections could be considered.
  • the projection generates a frontal facial image, i.e. a 2D image as seen from a viewer in front of the user 100. It is also possible to generate a non-frontal facial image, or a plurality of 2D projections, such as for example one frontal facial projection and another profile projection. Other projections could be considered, including cartographic projections, or projections that introduce deformations in order to magnify discriminative parts of the face (in particular the eyes and the upper half of the face) and to reduce the size of more deformable parts, such as the mouth. It is also possible to morph the head to a generic model before comparison, in order to facilitate comparison. Purely mathematical projections, such as projections onto spaces that are not easily representable, could also be considered.
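A frontal projection of the pose-rectified textured point cloud can be sketched with a simple orthographic projection and a z-buffer. This is an illustrative sketch only: the rotation matrix `R` (assumed to undo the estimated pose), the orthographic model and the fixed grid size are all assumptions, not the patent's specification.

```python
import numpy as np

def frontal_projection(points, texture, R, size=64):
    """Rotate a textured point cloud by R (assumed to undo the estimated
    head pose), then orthographically project it onto a 2D pixel grid,
    keeping the nearest point per pixel (z-buffer)."""
    frontal = points @ R.T
    xy = frontal[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    scale = (size - 1) / np.maximum(hi - lo, 1e-9)
    pix = np.floor((xy - lo) * scale).astype(int)
    image = np.zeros((size, size))
    zbuf = np.full((size, size), np.inf)
    for (u, v), z, t in zip(pix, frontal[:, 2], texture):
        if z < zbuf[v, u]:          # keep the point closest to the viewer
            zbuf[v, u] = z
            image[v, u] = t
    return image
```

Pixels never written by any point remain at zero; in the patent's terms these are the portions to be flagged as invalid or reconstructed (Figures 9 to 11).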
  • Figure 9 illustrates an example of a 2D textured projection 202 generated during step E. Portions 203 of the projections which are not available in the depth map, for example hidden or occluded portions, are flagged as such.
  • Figure 10 illustrates another example of a 2D textured projection 202 generated during step E.
  • portions 204 of the projections which are not available in the depth map, for example hidden or occluded portions, are reconstructed as a non textured image.
  • the reconstruction may be based on available portions of the image, for example by assuming that the user's face is symmetric. Alternatively, or in addition, the reconstruction may use a generic model of a head.
  • Figure 11 illustrates another example of a 2D textured projection 202 generated during step E.
  • portions 205 of the projections which are not available in the depth map, for example hidden or occluded portions, are reconstructed as a textured image.
  • reconstruction may be based on available portions of the image, for example by assuming that the user's face is symmetric. Alternatively, or in addition, the reconstruction may use a generic model of a head.
  • the reconstruction may use image portion data available from other frames in the same video sequence.
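The symmetry-based reconstruction mentioned above can be sketched as follows: invalid pixels are filled from their horizontal mirror counterpart when that counterpart is valid. This assumes a centred, roughly symmetric face, as the text does; it is an illustration, not the patent's algorithm.

```python
import numpy as np

def fill_by_symmetry(image, valid):
    """Reconstruct flagged (invalid) pixels from their left/right mirror
    counterpart, assuming a centred, symmetric face. Pixels invalid on
    both sides remain flagged."""
    mirrored = image[:, ::-1]
    mirror_valid = valid[:, ::-1]
    out = image.copy()
    filled = valid.copy()
    fixable = (~valid) & mirror_valid   # invalid here, valid on the mirror side
    out[fixable] = mirrored[fixable]
    filled |= fixable
    return out, filled
```

Pixels that remain unfilled could then fall back to a generic head model, or to image data from other frames of the same video sequence, as the text suggests.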
  • the above described method thus generates a pose corrected 2D test image dataset of the user, based on a 2.5D test view acquired with a depth camera.
  • this dataset can then be used by a classifying module, such as a user identification or authentication module, or a gender estimation module, an age estimation module, etc.
  • the classification may be based on a single frame, for example a frame which can be classified with the highest reliability, or with the first frame which can be classified with a reliability higher than a given threshold, or on a plurality of successive frames of the same video stream. Additionally, or alternatively, the classification could also be based on the oriented textured 3D image. Other face processing could be applied during step F.
  • the various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions.
  • the means may include various hardware and/or software component(s) and/or module(s), including, but not limited to, a circuit, an application-specific integrated circuit (ASIC), a processor, a field programmable gate array (FPGA) or a programmable logic device (PLD).
  • FPGA field programmable gate array
  • PLD programmable logic device
  • the terms "determining" and "estimating" encompass a wide variety of actions. For example, "determining" and "estimating" may include calculating, computing, processing, deriving, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, "determining" and "estimating" may include receiving, accessing (e.g., accessing data in a memory) and the like.
  • a software module may reside in any form of storage medium that is known in the art. Some examples of storage media that may be used include random access memory (RAM), read only memory (ROM), flash memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM and so forth.
  • a software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media.
  • a storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.
  • certain aspects may comprise a computer program product for performing the operations presented herein.
  • a computer program product may comprise a computer readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

A pose rectification method for rectifying a pose in data representing face (100) images, comprising the steps of: A-acquiring at least one test frame including 2D near infrared image data, 2D visible light image data, and a depth map; C-estimating the pose of a face in said test frame by aligning said depth map with a 3D model of a head of known orientation; D-mapping at least one of said 2D images onto the depth map, so as to generate textured image data; E-projecting the textured image data in 2D so as to generate data representing a pose-rectified 2D projected image.

Description

A face pose rectification method and apparatus
Field of the invention
[0001] The present invention relates to a face pose rectification method and apparatus.
Description of related art
[0002] Face recognition involves the analysis of data representing a test image (or a set of test images) of a human face and its comparison with a database of reference images. The test image is usually a 2D image captured with a common 2D camera while the reference image is a 2D image or sometimes a 3D image captured with a depth camera for example.
[0003] Prior approaches to face recognition usually assume that the test image and the reference images are all captured in a full frontal view. Therefore, various well-performing algorithms have been developed for classifying frontal test images and for matching them with corresponding frontal reference images.
[0004] The reliability of the recognition however quickly decreases if the test image is captured from a non-frontal point of view. In order to solve this problem, it has already been suggested in the prior art to rectify the pose, i.e., to warp the test image in order to generate, from a non-frontal test image, a synthetic frontal test image.
[0005] Therefore, fast and robust methods for estimating and then correcting the head pose are essential in many applications such as face recognition or, more generally, face classification and face processing.
[0006] US8055028 concerns a method of normalizing a non-frontal 2D facial image to a frontal facial image. The method comprises: determining a pose of a non-frontal image of an object; performing smoothing transformation on the non-frontal image of the object, thereby generating a smoothed object image; and synthesizing a frontal image of the object by using the pose determination result and the smoothed object image. The pose is determined by obtaining a first mean distance between object feature points existing in both sides of the centre line of the non-frontal image.
[0007] US8199979 discloses a method for classifying and archiving images including face regions that are acquired with a 2D image
acquisition device. A face detection module identifies a group of pixels corresponding to a face region. The face orientation and pose are then determined. In case of a half-profile face, a normalization module transforms the candidate face region into a 3D space, and then rotates it until the appropriate pose correction is made. Alternatively, the texture, colour and feature regions of the 2D face region are mapped onto a 3D model which is then rotated to correct the pose. This results however in a massive deformation when the 2D image is mapped onto a 3D model of a different face.
[0008] US7929775 discloses a method for matching portions of a 2D image to a 3D class model. The method comprises identifying image features in the 2D image; computing an aligning transformation between the class model and the image; and comparing, under the aligning transformation, class parts of the class model with the image features.
[0009] US7289648 discloses a method for automatically modelling a three dimensional object, such as a face, from a single image. The system and method according to the invention constructs one or more three dimensional (3D) face models using a single image. It can also be used as a tool to generate a database of faces with various poses which are needed to train most face recognition systems.
[0010] US2013156262 suggests an estimation of the pose of an object by defining a set of pair features as pairs of geometric primitives, wherein the geometric primitives include oriented surface points, oriented boundary points, and boundary line segments. Model pair features are determined based on the set of pair features for a model of the object. Scene pair features are determined based on the set of pair features from data acquired by a 3D sensor, and then the model pair features are matched with the scene pair features to estimate the pose of the object.
[0011] The pose rectification from a 2D image is however a difficult task, and the quality of the rectification is usually not robust since it depends on the amplitude of the correction to be achieved, on occlusions, on illumination, and so on.

[0012] In order to overcome some of the problems inherent in head pose estimation based on 2D data, it has already been suggested to use the additional depth information provided by depth sensors, which are becoming increasingly available and affordable.
[0013] US8660306 discloses a method for the correction of a human pose determined from depth data. The method comprises receiving depth image data, obtaining an initial estimated skeleton of an articulated object from depth image data, applying a random forest subspace regression function (a function that utilizes a plurality of random splitting/projection decision trees) to the initial estimated skeleton, and determining the representation of the pose based upon a result of applying the random forest subspace regression to the initial estimated skeleton. The method is in particular adapted to the estimation of pose of the whole body.
[0014] US6381346 describes a system for generating facial images, indexing those images by composite codes and for searching for similar two-dimensional facial images. 3D images of human faces are generated from a data repository of 3D facial feature surface shapes. These shapes are organized by facial feature parts. By assembling a shape for each facial part, a 3D facial image is formed.
[0015] US8406484 concerns a facial recognition apparatus, comprising: a two-dimensional information acquisition unit to acquire two-dimensional image information of a subject; a three-dimensional information
acquisition unit to acquire three-dimensional image information of the subject; a user information database to store an elliptical model
corresponding to three-dimensional face information of a user and two-dimensional face information of the user; and a control unit to perform facial recognition using the two-dimensional image information of the subject, to determine whether a recognized face is the user's face, to match the elliptical model of the user to the three-dimensional image
information, to calculate an error upon determining that the recognized face is the user's face, and to determine whether the user's face is improperly used based on the error.
[0016] US7756325 discloses an algorithm for estimating the 3D shape of a three-dimensional object, such as a human face, based on information retrieved from a single photograph. Besides the pixel intensity, the invention uses various image features in a multi-features fitting algorithm (MFF).
[0017] EP1039417 concerns a method of processing an image of a three-dimensional object, including the steps of providing a morphable object model derived from a plurality of 3D images, matching the morphable object model to at least one 2D object image, and providing the matched morphable object model as a 3D representation of the object.
[0018] US8553973 concerns other methods and systems for modelling 3D objects (for example, human faces).
[0019] With the emergence of depth sensors in a variety of consumer devices, such as game consoles, laptops, tablets, smartphones, and cars for example, more and more images of faces will include a depth map. A popular example of a depth sensor that produces RGB-D datasets is the Kinect input device proposed by Microsoft for Xbox 360, Xbox One and Windows PC (all trademarks of Microsoft, Inc). Depth sensors produce 2.5D image datasets that include an indication of depth (or distance to the light source) for each pixel of the image, but no indication about hidden or occluded elements, such as the back of the head for example.
[0020] It would be desirable to provide new face recognition methods where test images are captured with depth sensors, and then compared with existing 2D images. Such a method would have the advantage of using modern depth sensors for the acquisition of RGB-D image data, and to compare them with widely available 2D reference images.
[0021] It would also be desirable to provide new methods using the capabilities of depth sensors and RGB-D datasets in order to improve the task of pose rectification.
[0022] It would also be desirable to provide a new method for evaluating a head pose which is faster than existing methods.
[0023] It would also be desirable to provide a new method for evaluating a head pose which is more precise than existing methods.

[0024] It would also be desirable to provide a new method for evaluating a head pose which can handle large pose variations.
Brief summary of the invention

[0026] It is therefore an aim of the present invention to propose a new method for rectifying a pose in 2.5D datasets representing face images.
[0027] According to the invention, these aims are achieved by means of a pose rectification method for rectifying a pose in data representing face images, comprising the steps of:
A-acquiring at least one test frame including 2D near infrared image data, 2D visible light image data, and a depth map;
C-estimating the pose of a face in said test frame by aligning said depth map with a 3D model of a head of known orientation;
D-mapping at least one of said 2D images onto the depth map, so as to generate textured image data;
E-projecting the textured image data in 2D so as to generate data representing a pose-rectified 2D projected image.
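As an illustration of how steps C and E combine, the depth map can be back-projected to 3D points, the estimated head rotation undone, and the points re-projected frontally. The following NumPy sketch is an illustration only; the pinhole intrinsics and the function names are assumptions, not part of the claimed method:

```python
import numpy as np

def depth_to_points(depth, fx=500.0, fy=500.0):
    """Back-project a depth map to 3D camera-space points (pinhole model;
    the focal lengths fx, fy are illustrative assumptions)."""
    h, w = depth.shape
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

def rectify_points(points, R):
    """Undo the head rotation R estimated in step C, so that step E can
    project the textured points as a frontal image."""
    return points @ R  # row-vector convention: applies R.T to each point
```

Once rectified, the points can be projected orthographically (drop the z coordinate) to obtain the pose-rectified 2D image of step E.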
[0028] The use of a depth map (i.e. a 2.5D dataset) is advantageous since it improves the precision and robustness of the pose estimation step.
[0029] The use of visible light data is advantageous since it allows a detection of features that depends on skin colour or texture for example.
[0030] The use of near infrared (NIR) image data is advantageous since it is less dependent on illumination conditions than visible light data.
[0031] The head pose is estimated by fitting the depth map with an existing 3D model, so as to estimate the orientation of the depth map. This pose estimation is particularly robust.
[0032] The 3D model could be a generic model, i.e., a user-independent model.

[0033] The 3D model could be a user-dependent model, for example a 3D model of the head of the user whose identity needs to be verified.

[0034] The 3D model could be a gender-specific model, an ethnic-specific model, or an age-specific model, selected based on a priori knowledge of the gender, ethnicity and/or age.
[0035] The existing 3D model is used for estimating the pose of the face. However, the 2D image data is not mapped onto this 3D model, but onto the depth map.

[0036] The method may comprise a further step of classifying the image, for example classifying the 2D projected image.
[0037] The classification may include face authentication, face
identification, gender estimation, age estimation, and/or detection of other facial features.
[0038] The method may comprise a step of further processing the 2D projected image.
[0039] The acquisition step may comprise a temporal and/or spatial smoothing of points in the depth map, in order to remove noise generated by the depth sensor.
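One way to realise this smoothing is a spatial box filter per frame combined with an exponential moving average across successive frames; the kernel size and blending weight below are illustrative assumptions, not values from the description:

```python
import numpy as np

def smooth_depth(frames, spatial_k=3, temporal_alpha=0.5):
    """Denoise a sequence of depth maps: spatial low-pass (box filter)
    plus temporal low-pass (exponential moving average across frames)."""
    smoothed = None
    out = []
    pad = spatial_k // 2
    for d in frames:
        # spatial low-pass: mean over a k x k neighbourhood, edge-padded
        padded = np.pad(d, pad, mode='edge')
        box = np.zeros(d.shape, dtype=float)
        for dy in range(spatial_k):
            for dx in range(spatial_k):
                box += padded[dy:dy + d.shape[0], dx:dx + d.shape[1]]
        box /= spatial_k ** 2
        # temporal low-pass: blend with the previously smoothed frame
        smoothed = box if smoothed is None else temporal_alpha * box + (1 - temporal_alpha) * smoothed
        out.append(smoothed)
    return out
```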
[0040] The step of estimating the pose may include a first step of performing a rough pose estimation, for example based on a random regression forest method.
[0041] The step of estimating the pose may include a second step of fine pose estimation. The fine pose estimation may be based on the result of the rough pose estimation.
[0042] The fine pose estimation may be based for example on an alignment of the depth map with a 3D model, using rigid Iterative Closest Point (ICP) methods.

[0043] The method may further include a step of basic face detection before said pose estimation, in order to eliminate at least some portions of the 2D near-infrared image, and/or of said 2D visible light image, and/or of said depth map which do not belong to the face.
[0044] The method may further include a step of foreground extraction in order to eliminate portions of said 2D near-infrared image, and/or of said 2D visible light image, and/or of said depth map which do not belong to the foreground.

[0045] The step of aligning the depth map with an existing 3D model of a head may comprise scaling the depth map, so that some of its dimensions (for example the maximal height) match corresponding dimensions of the 3D model.

[0046] The step of fitting the depth map with an existing 3D model of a head may comprise warping the depth map and/or the 3D model.
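The foreground extraction of paragraph [0044] can be sketched by thresholding the depth map; the 20–100 cm range is the example given later in the detailed description, and the same mask can then be applied to the NIR and visible images:

```python
import numpy as np

def foreground_mask(depth, near=0.20, far=1.00):
    """Keep only pixels whose depth (in metres) lies within a plausible
    user range; pixels outside the range belong to the background."""
    return (depth >= near) & (depth <= far)
```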
[0047] The method may comprise a further step of correcting the illumination of the 2D visible light image dataset based on the 2D near-infrared image dataset. Shadows or bright zones in the 2D visible light image dataset that do not appear in the 2D near-infrared image dataset may thus be corrected.
[0048] The method may comprise a further step of flagging portions of said pose-rectified 2D projected image data which correspond to portions not visible on the depth map and/or on the 2D image.

[0049] The method may comprise a further step of reconstructing portions of the pose-rectified 2D projected image data which correspond to unknown portions of the depth map.
Brief Description of the Drawings
[0050] The invention will be better understood with the aid of the description of an embodiment given by way of example and illustrated by the figures, in which:
Fig. 1 is a flowchart of the method of the invention.
Fig. 2 is a schematic view of an apparatus according to the invention.
Fig. 3 illustrates an example of a 2D visible light image of a face.
Fig. 4 illustrates an example of a 2D near-infrared image of the face of Figure 3.
Fig. 5 illustrates an example of a representation of a depth map of the face of Figure 3.
Fig. 6 illustrates an example of a representation of a textured depth map of the face of Figure 3.
Fig. 7 illustrates an example of a generic head model used for the head pose estimation.
Fig. 8 illustrates the step of fine pose estimation by aligning the depth map within the 3D model.
Fig. 9 illustrates a pose rectified 2D projection of the depth map, wherein the missing portions corresponding to portions of the head not present in the depth map are flagged.
Fig. 10 illustrates a pose rectified 2D projection of the 2.5D dataset, wherein the missing portions corresponding to portions of the head not present in the depth map are reconstructed.
Fig. 11 illustrates a pose rectified 2D projection of the 2.5D dataset, wherein the missing portions corresponding to portions of the head not present in the 2.5D dataset are reconstructed and flagged.

Detailed Description of possible embodiments of the Invention
[0051] The detailed description set forth below in connection with the appended drawings is intended as a description of various embodiments of the present invention and is not intended to represent the only aspects in which the present disclosure may be practiced. Each aspect described in this disclosure is provided merely as an example or illustration of the present invention, and should not necessarily be construed as preferred or essential.

[0052] Figure 1 is a flowchart that schematically illustrates the main steps of an example of pose rectification method according to the
invention. This method could be carried out with the apparatus or system schematically illustrated as a block diagram on Figure 2. In this example, the apparatus comprises a camera 101 for capturing an image of a user 100. The camera 101 might be a depth camera, such as without restriction a Kinect camera (trademark of Microsoft), a time-of-flight camera, or any other camera able to generate a RGB-D data stream.
[0053] The camera 101 is connected to a processor 102 accessing a memory 104 and connected to a network over a network interface 103. The memory 104 may include a permanent memory portion for storing computer code causing the processor to carry out at least some steps of the method of Figure 1. The memory 104, or portions of the memory 104, might be removable.

[0054] As used herein, the apparatus 101+102+104 may take the form of a mobile phone, personal navigation device, personal information manager (PIM), car equipment, gaming device, personal digital assistant (PDA), laptop, tablet, notebook and/or handheld computer, smart glass, smart watch, smart TV, other wearable device, etc.

[0055] In acquisition step A of Figure 1, a test video stream is produced by the depth camera 101 in order to capture test images of the user 100 to be identified or otherwise classified.
[0056] In the present description, the expression "test images" designates images of a face whose pose needs to be rectified, typically during test (for identification or authentication), but also during enrolment.
[0057] Each frame of the test video stream preferably includes three temporally and spatially aligned datasets:
i) a first (optional) dataset corresponding to a two-dimensional (2D) visible light image of the face of the user 100 (such as a grayscale or RGB image for example). One example is illustrated on Figure 3.
ii) a second (optional) dataset representing a 2D near-infrared (NIR) image of the face of the user 100. One example is illustrated on Figure 4.
iii) a depth map (i.e. a 2.5D dataset) where the value associated with each pixel depends on the depth of the light emitting source, i.e. its distance to the depth sensor in the camera 101. A representation of such a depth map is illustrated on Figure 5.
[0058] Figure 6 is another representation 201 of a frame, in which the first RGB dataset is projected onto the depth map.
[0059] As can be seen on Figure 5, many low-cost depth sensors generate noisy depth maps, i.e. datasets where the depth value assigned to each point includes noise. The detrimental influence of noise can be reduced by smoothing the depth map. The smoothing may include a low-pass filtering of the depth map in the spatial and/or in the temporal (across successive frames) domain.

[0060] In the basic face detection step B of Figure 1, the portion of each dataset that represents the user's face is detected and isolated from the background and other elements of the image. This detection might be performed on any or all of the three datasets, and applied to any or all of those datasets.

[0061] In one embodiment, this basic face detection is based, at least in part, on a thresholding of the depth map, in order to exclude pixels which are not in a predefined depth range, for example between 20 cm and 100 cm. Other known algorithms could be used for extracting the foreground that represents the user's face, and excluding the background, including for example algorithms based on colour detection.

[0062] In the head pose estimation step C of Figure 1, the pose of the user in the frame is estimated. In one embodiment, this estimation is performed in two successive steps:
[0063] During a first part of the head pose estimation step, a rough estimate of the head pose is determined. In one embodiment, the computation of this rough estimate uses a random forest algorithm, in order to quickly determine the head pose with a precision of a few degrees. A method of rough head pose estimation with random regression forests is disclosed in G. Fanelli et al., "Real Time Head Pose Estimation with Random Regression Forests", Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pp. 617-624. Preferably, the location of the nose and/or of other key features of the face is also determined during this step.
[0064] During a second part of the head pose estimation, a finer estimate of the head pose is determined. This fine estimate could start from the previously determined rough estimate of the orientation and of the location of one point, such as nose location, in order to improve the speed and robustness. The fine estimate could be computed by determining the orientation of a 3D model of a head (Figure 7) that best corresponds to the depth map. In one example, the matching of the 3D model to the test depth map may be performed by minimizing a distance function between points of the depth map and corresponding points of the 3D model, for example using an Iterative Closest Point (ICP) method. Figure 8
schematically illustrates this step of fitting the test depth map (textured in this illustration) 201 within a 3D model 200 (represented as a mesh in this illustration).
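A minimal rigid ICP of the kind referenced above can be sketched as follows: alternate brute-force nearest-neighbour pairing with the closed-form (Kabsch/SVD) rigid alignment. This is an illustration only; practical systems use spatial indexing, outlier rejection and the rough pose as initialisation:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form rotation R and translation t minimising the mean squared
    distance between paired points src and dst (Kabsch/SVD)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # correct an improper rotation (reflection) if one appears
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(src, dst, iters=20):
    """Rigid ICP sketch: pair each point with its nearest neighbour,
    solve for the best rigid transform, apply it, and repeat."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        paired = dst[d2.argmin(1)]
        R, t = best_rigid_transform(cur, paired)
        cur = cur @ R.T + t
    return cur
```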
[0065] This alignment step preferably includes a scaling of the 3D model so that at least some of its dimensions correspond to the test depth map.
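The scaling mentioned in [0065] can be illustrated by matching a single dimension, for example the vertical extent; the choice of dimension and the uniform scale factor about the centroid are assumptions for the sketch:

```python
import numpy as np

def scale_to_match_height(points, target_points):
    """Uniformly scale a point cloud about its centroid so that its vertical
    (y) extent matches that of the target cloud."""
    extent = lambda p: p[:, 1].max() - p[:, 1].min()
    s = extent(target_points) / extent(points)
    centroid = points.mean(axis=0)
    return (points - centroid) * s + centroid
```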
[0066] The 3D model 200 might be generic, i.e., user-independent.
Alternatively, the 3D model might be user-dependent, and retrieved from a database of user-dependent 3D models based for example on the assumed identity of the user 100. A plurality of user-independent 3D models might also be stored and selected according to the assumed gender, age, or ethnicity of the user for example.
[0067] In one embodiment, a personal 3D model is generated using a non-rigid Iterative Closest Point (ICP) method. The 3D model may then comprise a mesh with some constraints on the position and/or relations between nodes, so as to allow some realistic and limited deformation in deformable portions of the head, for example in the lower part of the face. In this case, the ICP method may try some deformations or morph of the model, in order to find the most likely orientation given all the possible deformations.
[0068] The output of the head pose estimation step may include a set of angles phi, theta, psi describing three rotations with regard to a given coordinate system.

[0069] In step D of Figure 1, the 2D textures (in the visible and NIR ranges) of the frame are mapped onto the depth map (UV mapping). It is possible to use only the greyscale value of the visible dataset. This mapping may be performed before or after the pose correction, and generates a textured depth map with a known orientation. Hidden parts, for example portions of the user's face which were hidden or occluded in the depth map, are either flagged as invalid, or reconstructed, for example by supposing a symmetry of the user's face.
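For example, with a Z-Y-X (yaw-pitch-roll) convention, the three angles can be read off a rotation matrix; the convention itself is an assumption here, since the description does not fix one:

```python
import numpy as np

def euler_zyx(R):
    """Extract (phi, theta, psi) = (roll, pitch, yaw) from a rotation matrix
    R = Rz(psi) @ Ry(theta) @ Rx(phi), away from the gimbal-lock case."""
    theta = -np.arcsin(np.clip(R[2, 0], -1.0, 1.0))
    phi = np.arctan2(R[2, 1], R[2, 2])
    psi = np.arctan2(R[1, 0], R[0, 0])
    return phi, theta, psi
```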
[0070] This step might also comprise a correction of the illumination in the visible light and/or in the NIR image datasets. The correction of illumination may include a correction of brightness, contrast, and/or white balance in the case of colour images. In one preferred embodiment, the NIR dataset is used to remove or attenuate shadows and/or reflections in the visible light dataset, by compensating brightness variations that appear in portions of the visible light dataset but not in corresponding portions of the NIR dataset.

[0071] In step E of Figure 1, the textured 3D image is projected in 2D so as to generate at least one dataset representing a pose-rectified 2D projected image, in the visible and/or in the NIR range. Various projections could be considered. First, deformations caused by the camera and/or by perspective are preferably compensated. Then, in one embodiment, the projection generates a frontal facial image, i.e. a 2D image as seen from a viewer in front of the user 100. It is also possible to generate a non-frontal facial image, or a plurality of 2D projections, such as for example one frontal facial projection and another profile projection. Other projections could be considered, including cartographic projections, or projections that introduce deformations in order to magnify discriminative parts of the face, in particular the eyes and the upper half of the face, and reduce the size of more deformable parts of the face, such as the mouth. It is also possible to morph the head to a generic model before comparison, in order to facilitate comparison. Purely mathematical projections, such as projections onto a not easily representable space, could also be considered.
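The NIR-based shadow attenuation of [0070] can be sketched by transferring low-frequency brightness from the NIR image to the visible image; the box-filter gain formulation below is an illustrative assumption, not the claimed algorithm:

```python
import numpy as np

def box_mean(img, k=7):
    """Local mean over a k x k neighbourhood (edge-padded box filter)."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def attenuate_shadows(gray, nir, k=7, eps=1e-6):
    """Rescale the visible intensity by the ratio of NIR to visible local
    brightness: regions dark in the visible image but not in the NIR image
    (shadows) are boosted, while local texture is preserved."""
    gain = (box_mean(nir, k) + eps) / (box_mean(gray, k) + eps)
    return np.clip(gray * gain, 0.0, 1.0)
```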
[0072] Figure 9 illustrates an example of a 2D textured projection 202 generated during step E. Portions 203 of the projections which are not available in the depth map, for example hidden or occluded portions, are flagged as such.
[0073] Figure 10 illustrates another example of a 2D textured projection 202 generated during step E. In this embodiment, portions 204 of the projections which are not available in the depth map, for example hidden or occluded portions, are reconstructed as a non textured image. The reconstruction may be based on available portions of the image, for example by assuming that the user's face is symmetric. Alternatively, or in addition, the reconstruction may use a generic model of a head.
Alternatively, or in addition, the reconstruction may use image portion data available from other frames in the same video sequence.

[0074] Figure 11 illustrates another example of a 2D textured projection 202 generated during step E. In this embodiment, portions 205 of the projections which are not available in the depth map, for example hidden or occluded portions, are reconstructed as a textured image. The reconstruction may be based on available portions of the image, for example by assuming that the user's face is symmetric. Alternatively, or in addition, the reconstruction may use a generic model of a head. Alternatively, or in addition, the reconstruction may use image portion data available from other frames in the same video sequence.
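The symmetry-based reconstruction mentioned above can be sketched as mirroring valid pixels across the vertical image axis into the flagged regions, assuming the face has already been centred and frontalised:

```python
import numpy as np

def fill_by_symmetry(image, valid):
    """Fill invalid (flagged) pixels from their horizontal mirror, assuming
    a roughly symmetric, centred face. Returns the filled image and the
    updated validity mask."""
    mirrored = image[:, ::-1]
    mirrored_valid = valid[:, ::-1]
    out = image.copy()
    fill = (~valid) & mirrored_valid  # invalid here, but valid on the mirror side
    out[fill] = mirrored[fill]
    return out, valid | mirrored_valid
```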
[0075] The above described method thus generates a pose-corrected 2D test image dataset of the user, based on a 2.5D test view acquired with a depth camera. During face processing step F, this dataset can then be used by a classifying module, such as a user identification or authentication module, a gender estimation module, an age estimation module, etc. The classification may be based on a single frame, for example the frame which can be classified with the highest reliability, or the first frame which can be classified with a reliability higher than a given threshold, or on a plurality of successive frames of the same video stream. Additionally, or alternatively, the classification could also be based on the oriented textured 3D image. Other face processing could be applied during step F.
[0076] The methods disclosed herein comprise one or more steps or actions for achieving the described method. Unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.
[0077] It is to be recognized that depending on the embodiment, certain acts or events or steps of any of the methods described herein can be performed in a different sequence, may be added, merged, or left out all together (e.g., not all described acts or events are necessary for the practice of the method). Moreover, in certain embodiments, acts or events may be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors, rather than sequentially.
[0078] The various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to, a circuit, an application specific integrated circuit (ASIC), a processor, a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components or any combination thereof designed to carry out the method steps described herein.
[0079] As used herein, the terms "determining" and "estimating" encompass a wide variety of actions. For example, "determining" and "estimating" may include calculating, computing, processing, deriving, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, "determining" and "estimating" may include receiving, accessing (e.g., accessing data in a memory) and the like.
[0080] The steps of a method or algorithm described in connection with the present disclosure may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in any form of storage medium that is known in the art. Some examples of storage media that may be used include random access memory (RAM), read only memory (ROM), flash memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM and so forth. A software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media. A storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.
[0081] Thus, certain aspects may comprise a computer program product for performing the operations presented herein. For example, such a computer program product may comprise a computer readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein.
[0082] Various modifications and variations to the described
embodiments of the invention will be apparent to those skilled in the art without departing from the scope of the invention as defined in the appended claims. Although the invention has been described in connection with specific preferred embodiments, it should be understood that the invention as claimed should not be unduly limited to such specific embodiment.

Claims
1. A pose rectification method for rectifying a pose in data representing face (100) images, comprising the steps of:
A-acquiring at least one test frame including 2D near infrared image data, 2D visible light image data, and a depth map;
C-estimating the pose of a face in said test frame by aligning said depth map with a 3D model of a head of known orientation;
D-mapping at least one of said 2D images onto the depth map, so as to generate textured image data;
E-projecting the textured image data in 2D so as to generate data representing a pose-rectified 2D projected image.
2. The method of claim 1, comprising a step of temporal and/or spatial smoothing of points in said depth map.
3. The method of one of the claims 1 to 2, said step (C) of estimating the pose including a step of performing a rough pose estimation, for example based on random forest, and a further step of determining a more precise estimation of the pose.
4. The method of one of the claims 1 to 3, said step (C) of aligning said depth map with a 3D model of a head of known orientation using an Iterative Closest Point (ICP) method.
5. The method of one of the claims 1 to 4, further including a step (B) of basic face detection before said estimation (C) of the pose, in order to eliminate at least some portions of said 2D near infrared image data, and/or of said 2D visible light image data, and/or of said depth map which do not belong to the face.
6. The method of one of the claims 1 to 5, wherein said 3D model is user-independent.
7. The method of one of the claims 1 to 5, wherein said 3D model is user-dependent.
8. The method of one of the claims 1 to 7, wherein said 3D model is warped to adapt it to the user.
9. The method of one of the claims 1 to 8, wherein said step (C) of aligning said depth map with an existing 3D model of a head comprises warping said 3D model.
10. The method of one of the claims 1 to 9, further comprising a step of correcting the illumination of portions of said 2D visible light image data based on said 2D near infrared image data.
11. The method of one of the claims 1 to 10, further comprising a step of flagging portions of said pose-rectified 2D projected image data which correspond to portions not visible on said depth map.
12. The method of one of the claims 1 to 11, further comprising a step of reconstructing portions of said pose-rectified 2D projected image data which correspond to unknown portions of said depth map.
13. The method of one of the claims 1 to 12, further comprising a step (F) of classifying said 2D projected image.
14. An apparatus comprising a depth map camera (101) arranged for acquiring at least one test frame including 2D near infrared image data,
2D visible light image data, and a depth map, as well as a processor with a memory storing a program that causes said processor to carry out following steps when the program is executed:
C-estimating the pose of a face in said test frame by aligning said depth map with a 3D model of a head of known orientation;
D-mapping at least one of said 2D images onto the depth map, so as to generate textured image data;
E-projecting the textured image data in 2D so as to generate data representing a pose-rectified 2D projected image.
15. An apparatus comprising means for performing the method of one of the claims 1 to 14.
16. A computer-program product, comprising a computer readable medium comprising instructions executable to:
A-acquire at least one test frame including 2D near infrared image data, 2D visible light image data, and a depth map;
C-estimate the pose of a face in said test frame by aligning said depth map with a 3D model of a head of known orientation;
D-map at least one of said 2D images onto the depth map, so as to generate textured image data;
E-project the textured image data in 2D so as to generate data representing a pose-rectified 2D projected image.
PCT/EP2014/070282 2014-09-23 2014-09-23 A face pose rectification method and apparatus WO2016045711A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP14772329.0A EP3198522A1 (en) 2014-09-23 2014-09-23 A face pose rectification method and apparatus
KR1020177010873A KR20170092533A (en) 2014-09-23 2014-09-23 A face pose rectification method and apparatus
PCT/EP2014/070282 WO2016045711A1 (en) 2014-09-23 2014-09-23 A face pose rectification method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2014/070282 WO2016045711A1 (en) 2014-09-23 2014-09-23 A face pose rectification method and apparatus

Publications (1)

Publication Number Publication Date
WO2016045711A1 true WO2016045711A1 (en) 2016-03-31

Family

ID=51619169

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2014/070282 WO2016045711A1 (en) 2014-09-23 2014-09-23 A face pose rectification method and apparatus

Country Status (3)

Country Link
EP (1) EP3198522A1 (en)
KR (1) KR20170092533A (en)
WO (1) WO2016045711A1 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10740912B2 (en) 2016-05-19 2020-08-11 Intel Corporation Detection of humans in images using depth information
WO2017206144A1 (en) * 2016-06-02 2017-12-07 Intel Corporation Estimation of human orientation in images using depth information
US11164327B2 (en) 2016-06-02 2021-11-02 Intel Corporation Estimation of human orientation in images using depth information from a depth camera
CN106709568A (en) * 2016-12-16 2017-05-24 北京工业大学 RGB-D image object detection and semantic segmentation method based on a deep convolutional network
CN106709568B (en) * 2016-12-16 2019-03-22 北京工业大学 Object detection and semantic segmentation of RGB-D images based on a deep convolutional network
CN106667496A (en) * 2017-02-10 2017-05-17 广州帕克西软件开发有限公司 Face data measuring method and device
CN107220995A (en) * 2017-04-21 2017-09-29 西安交通大学 Improved fast ICP point cloud registration algorithm based on ORB image features
CN107977650A (en) * 2017-12-21 2018-05-01 北京华捷艾米科技有限公司 Face detection method and device
CN108347516A (en) * 2018-01-16 2018-07-31 宁波金晟芯影像技术有限公司 3D face recognition system and method
CN108549873A (en) * 2018-04-19 2018-09-18 北京华捷艾米科技有限公司 Three-dimensional face recognition method and three-dimensional face recognition system
CN109903378A (en) * 2019-03-05 2019-06-18 盎锐(上海)信息科技有限公司 Artificial-intelligence-based hair 3D modeling device and method
CN109949412A (en) * 2019-03-26 2019-06-28 腾讯科技(深圳)有限公司 Three-dimensional object reconstruction method and device

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108052912A (en) * 2017-12-20 2018-05-18 安徽信息工程学院 Three-dimensional face image recognition method based on square Fourier descriptors
KR102358854B1 (en) * 2020-05-29 2022-02-04 연세대학교 산학협력단 Apparatus and method for color synthesis of face images

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
GABRIELE FANELLI ET AL: "Real Time Head Pose Estimation from Consumer Depth Cameras", 31 August 2011, PATTERN RECOGNITION, SPRINGER BERLIN HEIDELBERG, BERLIN, HEIDELBERG, PAGE(S) 101 - 110, ISBN: 978-3-642-23122-3, XP019163424 *
IOANNIS A KAKADIARIS ET AL: "Three-Dimensional Face Recognition in the Presence of Facial Expressions: An Annotated Deformable Model Approach", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, IEEE COMPUTER SOCIETY, USA, vol. 29, no. 4, 1 April 2007 (2007-04-01), pages 640 - 649, XP011168503, ISSN: 0162-8828, DOI: 10.1109/TPAMI.2007.1017 *
ROBERT NIESE ET AL: "A Novel Method for 3D Face Detection and Normalization", JOURNAL OF MULTIMEDIA, 5 September 2007 (2007-09-05), XP055191108 *
See also references of EP3198522A1 *
ZHENYUE CHEN ET AL: "RGB-NIR multispectral camera", OPTICS EXPRESS, vol. 22, no. 5, 24 February 2014 (2014-02-24), pages 4985, XP055191777, DOI: 10.1364/OE.22.004985 *

Also Published As

Publication number Publication date
KR20170092533A (en) 2017-08-11
EP3198522A1 (en) 2017-08-02

Similar Documents

Publication Publication Date Title
US9747493B2 (en) Face pose rectification method and apparatus
EP3198522A1 (en) A face pose rectification method and apparatus
US9818023B2 (en) Enhanced face detection using depth information
WO2016107638A1 (en) An image face processing method and apparatus
Martin et al. Real time head model creation and head pose estimation on consumer depth cameras
JP5873442B2 (en) Object detection apparatus and object detection method
JP4653606B2 (en) Image recognition apparatus, method and program
US9224060B1 (en) Object tracking using depth information
US20180018805A1 (en) Three dimensional scene reconstruction based on contextual analysis
JP5715833B2 (en) Posture state estimation apparatus and posture state estimation method
US20160162673A1 (en) Technologies for learning body part geometry for use in biometric authentication
US9727776B2 (en) Object orientation estimation
JP2017506379A5 (en)
US20180075291A1 (en) Biometrics authentication based on a normalized image of an object
JP6397354B2 (en) Human area detection apparatus, method and program
JP4774818B2 (en) Image processing apparatus and image processing method
JP2009525543A (en) 3D face reconstruction from 2D images
JP2009020761A (en) Image processing apparatus and method thereof
JP6351243B2 (en) Image processing apparatus and image processing method
US20170249503A1 (en) Method for processing image with depth information and computer program product thereof
CN109858433B (en) Method and device for identifying two-dimensional face picture based on three-dimensional face model
Bondi et al. Reconstructing high-resolution face models from kinect depth sequences
Jiménez et al. Face tracking and pose estimation with automatic three-dimensional model construction
CN112016495A (en) Face recognition method and device and electronic equipment
CN113837053B (en) Biological face alignment model training method, biological face alignment method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14772329

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2014772329

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2014772329

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 20177010873

Country of ref document: KR

Kind code of ref document: A