EP2923302A1 - Verfahren zur erzeugung eines dreidimensionalen gesichtsmodells - Google Patents

Verfahren zur erzeugung eines dreidimensionalen gesichtsmodells

Info

Publication number
EP2923302A1
EP2923302A1 (application EP13792400.7A)
Authority
EP
European Patent Office
Prior art keywords
face
template
individual
shape
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP13792400.7A
Other languages
English (en)
French (fr)
Inventor
Sami Romdhani
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Idemia Identity and Security France SAS
Original Assignee
Morpho SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Morpho SA filed Critical Morpho SA
Publication of EP2923302A1 publication Critical patent/EP2923302A1/de
Withdrawn legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/169Holistic features and representations, i.e. based on the facial image taken as a whole
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/06Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • G06T3/073Transforming surfaces of revolution to planar images, e.g. cylindrical surfaces to planar images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2021Shape modification

Definitions

  • the field of the invention is that of the image processing of faces of individuals, to generate a front view of an individual from a non-frontal image thereof.
  • the invention is particularly applicable to the identification of individuals by face recognition.
  • the identification of individuals by face recognition is implemented by comparing two images of faces, and deducing from this comparison a score evaluating the similarity between the faces on the images.
  • the optimal recognition efficiency is therefore achieved not only when two faces have the same pose on the compared images, but also when the faces are seen from the front, because this view provides the most information on the shape of the face.
  • image processing methods have been developed for generating, from an image of a face, an image of the same face, seen from the front.
  • the acquired image is processed to determine the three-dimensional shape of the face of the individual on the image, its pose, that is to say its position relative to a front view, as well as a representation of the texture of the face, that is, the physical appearance of the surface of the face superimposed on the three-dimensional structure of the shape of the face.
  • the determination of the three-dimensional shape of the face of the individual is carried out by deforming a deformable three-dimensional model of the human face, to minimize the difference between the characteristic points of the model (position of the eyes, nostrils, tip of the nose, commissures of the lips, etc.) and the corresponding points of the face on the image.
  • the purpose of the invention is to propose a method of processing a face image of an individual that does not have the aforementioned drawback, and in particular that makes it possible to determine the shape of any human face appearing on a face image.
  • the subject of the invention is a method for generating a deformable three-dimensional face model from a plurality of images of faces of individuals, the method being characterized in that it comprises the steps of:
  • generating a face template; acquiring example shapes of individuals' faces; for each individual face example, iteratively deforming the template so that the shape of the deformed template matches the shape of the face example, and determining the deformation between the initial template and the deformed template, said iterative deformation of the template including minimizing the derivative of the gap between the initial template and the deformed template, to constrain the deformed template to retain a human face shape.
  • the generation of the face model as a linear combination of the shape of the template and deformations between the initial template and the deformed template for each example of an individual's face.
  • the method according to the invention also has at least one of the following characteristics:
  • the acquisition of example shapes of faces of individuals comprises the detection of characteristic points of each example of faces of individuals, and the matching of the corresponding characteristic points between the examples of faces.
  • the iterative deformation of the template comprises, for each example of an individual's face, the modification of the positions of the characteristic points of the template in order to minimize a difference of position between said characteristic points and the corresponding points of the example of an individual's face.
  • the iterative deformation of the template further comprises minimizing a difference in position between the points of the template and the surface of the face example.
  • the step of iterative deformation of the template comprises the iterative minimization of a linear combination of:
  • the invention also proposes a method for processing at least one face image of an individual, comprising the steps of generating, from the image, a three-dimensional representation of the individual's face, said representation comprising the steps of:
  • the deformation of the three-dimensional model being carried out by modifying the coefficients of the linear combination of the model,
  • the method being characterized in that the changes of the coefficients of the linear combination are constrained to ensure that the deformed model corresponds to a human face.
  • the method of processing a face image further comprises at least one of the following features: the modifications of the coefficients of the linear combination are constrained by minimizing the norm of the derivative of the difference between the initial model and the deformed model.
  • the pose and the shape of the face of the individual on the image are estimated simultaneously, by iterative modification of the pose and the shape of the three-dimensional model to minimize the difference between the characteristic points of the face of the individual on the image and corresponding points of the model.
  • the modification of the pose of the model comprises at least one transformation from the following group: translation, rotation, change of scale.
  • the modification of the shape of the three-dimensional model comprises the determination of the coefficients of the linear combination between the face template and the deformations applied to the template to obtain each face example.
  • the method further comprises the steps of:
  • the method is implemented on a plurality of face images of individuals, and:
  • the step of determining a draft of the pose of the face of the individual is implemented on each face image of the individual
  • the step of determining the shape and the pose of the face of the individual is implemented on all the face images by iteratively deforming the three-dimensional model so that the shape of the deformed model corresponds to the shape of the individual's face on the images.
  • the invention finally proposes a system for identifying individuals comprising at least one control server for an individual to be identified, and at least one management server for a database of N reference images of listed individuals, the control server comprising acquisition means adapted to acquire an image of the face of the individual,
  • the system for identifying individuals being characterized in that one of the control server and the management server comprises processing means adapted to implement the processing method according to the invention and, from the front view of the face of an individual thus obtained, to implement face recognition processing by comparison with the reference images of the database, in order to identify the individual.

DESCRIPTION OF THE FIGURES
  • FIG. 1 represents an exemplary identification system adapted to implement an image processing method.
  • FIG. 2a represents the main steps of the method for generating a three-dimensional model of faces
  • FIG. 2b represents the main steps of the image processing method according to the invention.
  • Figure 3 illustrates the characteristic points of a face.
  • Figure 4 illustrates notations used for calculating a differentiation matrix.
  • FIG. 5a is an image of a face to be processed in order to identify the individual on the image
  • FIGS. 5b and 5c are respectively the restitution of the shape of the face of the individual and a representation of the texture of said face
  • Figure 5d is a front view of the face of the individual reconstructed from the image of Figure 5a.
  • Figures 6a and 6b are respectively input images of the same face and a frontal image of the face obtained from these input images.
  • an identification system 1 adapted to implement an image processing method.
  • a control server SC, provided with means 11 for acquiring appropriate images, proceeds to the acquisition of an image of the face of the individual. This image may be non-frontal.
  • the control server SC can also acquire a face image of the individual, this time frontal, which is stored in an identity document.
  • the control server then advantageously comprises processing means adapted to implement, on the first image of the face of the individual, a treatment aimed at "frontalising" this image, that is, to generate a frontal image from this image.
  • the control server can advantageously compare the two front images it has, to determine if the faces on the images correspond to the same person.
  • the second face image can be stored, among other images, in a database of a management server SG.
  • the control server transmits the first image that it has acquired to the management server, and the latter implements the method of processing the first image and comparison to identify the individual I.
  • the comparison can take place between the "frontalised" image of the individual and each face image recorded in the database.
  • the shape of the object which is composed of a set of 3D vertices, each vertex being a point of the object defined by coordinates along three orthogonal directions.
  • the surface of the object is materialized by connecting vertices to one another to form triangles.
  • a list of triangles is thus defined for each object, each triangle being indicated by the three indexes of the corresponding columns of the matrix S.
  • a representation of the texture of the object is an image used to color the three-dimensional object obtained from its shape and its surface. The surface of the object defined by the triangle list is used to match the vertices of the object to a particular texture.
  • the method includes a first step 100 of generating a three-dimensional model of a human face shape, which can be deformed to obtain any type of human face shape.
  • This model is mathematically formulated as a linear combination of examples of individuals' faces, noted
  • S0 is a template of human face shape, constituting the basis of the model
  • S0 + Sj represents the shape of the face of a particular example of a real individual. Therefore, Sj represents the gap between one of the face examples and the template.
  • the coefficients αj are later determined to deform the model S to match the face of an individual that we want to identify.
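The linear shape model S = S0 + Σj αj·Sj described above can be sketched in a few lines of NumPy (an illustrative sketch; the function and variable names are ours, not the patent's):

```python
import numpy as np

def build_face(S0, deviations, alphas):
    """Deformable model: S = S0 + sum_j alpha_j * S_j, with S0 the
    template (3 x N vertex matrix) and S_j the deviation of face
    example j from the template."""
    S = S0.copy()
    for a, Sj in zip(alphas, deviations):
        S = S + a * Sj
    return S

# Toy 4-vertex template with two deviation shapes.
S0 = np.zeros((3, 4))
face = build_face(S0, [np.ones((3, 4)), 2 * np.ones((3, 4))], [0.5, 0.25])
```

Any choice of the coefficients yields a new 3 × N vertex matrix; the constraint that the result remains a plausible human face is what the rest of the method enforces.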
  • the template S0 of a human face is generated: it may be the face shape of a particular individual, or an average of the faces of a plurality of people.
  • the face shape or shapes are defined by a series of vertices corresponding to points of the face. These points comprise, inter alia, a number Ns of characteristic points of a face, represented in FIG. 3, typically 22, which are the corners of the eyes, the ends of the mouth, the nostrils, the tip of the nose, the ears, etc.
  • These characteristic points can be manually marked by an operator on a frontal face image, or they can be automatically detected by a server.
  • the human face template also includes on the order of a few thousand other vertices acquired by a 3D scanner.
  • In a step 120, the shapes of examples of faces of real individuals are acquired. This acquisition is implemented in the same way as before, by identifying the characteristic points of the faces of the individuals to generate a list of vertices.
  • the shapes of faces thus acquired each correspond to a shape S0 + Sj.
  • the deviation Sj between the face example and the template is determined from the vertex lists of each face shape.
  • the peculiarity of the template is that it is a face shape for which vertex indexing has already been done. Therefore, the vertex indexing of each example face shape is performed by mapping, in step 130, the vertices of each example shape to the vertices of the template.
  • In a step 131, the template is deformed iteratively to minimize the gap between the template shape and that of the face example; the deformed template must always correspond to a human face shape.
  • the mathematical function to be minimized includes three terms.
  • the first term serves to minimize the distance between the characteristic points of an example of a face and the corresponding points of the template. It is written:
  • i is the index of a characteristic point
  • vki is the vertex of the deformed template corresponding to the same characteristic point i
  • N s is the number of characteristic points in a face, for example 22.
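The first term, the sum over the Ns characteristic points of the squared distance between a point of the face example and the corresponding template vertex vki, can be sketched as follows (array shapes and names are our assumption):

```python
import numpy as np

def landmark_term(example_points, template_vertices, correspondences):
    """First fitting term: sum over the Ns characteristic points of the
    squared distance between point i of the face example and the vertex
    v_ki of the deformed template."""
    diffs = example_points - template_vertices[correspondences]
    return float(np.sum(diffs ** 2))

# Toy check: two characteristic points, matched template vertices offset
# by 1 along x, so each point contributes a squared distance of 1.
example = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
template = np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [5.0, 5.0, 5.0]])
cost = landmark_term(example, template, np.array([0, 1]))
```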
  • the second term is used to match the surface of the face shape of the template with the surface of the shape of the face example.
  • the function to be minimized represents the difference between the points of the template and the points of the surface of the face example that are closest to them. It is noted:
  • pvi is a point on the surface of the face example, that is to say the point corresponding to the projection onto the surface of the face of the vertex vi. It is possible that the surface of the face example is incomplete, for example if it is obtained from a non-frontal image, so that some points of the template do not correspond to any point of the face example. In this case, these points of the template are not taken into account.
  • the third term constrains the deformed template to remain a real human face, even if the face example used for deformation of the template is incomplete or contains noise.
  • This term makes the deformed template as "smooth" as possible, that is to say as continuous as possible, by minimizing the norm of the derivative of the transformation of the template at each iteration. This norm is expressed as follows:
  • v is the concatenation of the 3D vertices of the deformed template
  • vec(S0) is the same term for the template before transformation
  • v and vec(S0) are vectors of size 3N × 1.
  • A is a differentiation matrix for the vector v − vec(S0), of dimension 3T × 3N, where T is the number of triangles of the surface of the template.
  • the derivative is calculated for each triangle t of the surface of the template, the derivative of the deformation of a triangle t being calculated with respect to the triangles q neighbors of the triangle t, by approximation of the finite difference of the triangle t with the neighboring triangles q as follows:
  • N t is the set of triangles q adjacent to the triangle t
  • w qt is a weighting factor that depends on the surfaces of the triangles t and q
  • d t is the deformation of the triangle t at its centroid
  • b t is the position of the centroid of the triangle t.
  • the distance between the centers of gravity and the weighting factor are calculated on the undeformed template S0.
  • the weighting factor wqt is the sum of the surfaces of the two triangles whose base is the edge connecting the triangles t and q, and whose vertex opposite this base is respectively the barycenter bt of the triangle t and the barycenter bq of the triangle q.
  • the shape deformation (v − vec(S0)) is multiplied by a matrix Bt of dimension 3 × 3N which is zero everywhere except for the elements associated with the vertices of the triangle t. These elements are then equal to 1/3.
  • the matrix A, of dimension 3T × 3N, is obtained by vertically concatenating the set of matrices Bt associated with each triangle t, whose coefficients corresponding to the vertices of a triangle t are multiplied by the weighting factors wqt and divided by the distances between the centers of gravity ‖bq − bt‖
  • the differentiation matrix A depends solely on the surface of the undeformed template (the list of triangles of S0), and not on the shape of the deformed template v. It is therefore constant.
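Because A is constant, evaluating the smoothness term ‖A(v − vec(S0))‖ reduces to one matrix-vector product per iteration. A toy illustration (the tiny 2 × 3 matrix below is ours, standing in for a real template's A):

```python
import numpy as np

def smoothness_norm(A, v, v0):
    """Third fitting term: the norm of the derivative of the template
    deformation, ||A (v - v0)||, where v and v0 are the concatenated
    vertex coordinates of the deformed and original templates."""
    return float(np.linalg.norm(A @ (v - v0)))

# Toy finite-difference matrix: each row differentiates between two
# adjacent "centroids" (not a real template's differentiation matrix).
A = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
v0 = np.zeros(3)
n_rigid = smoothness_norm(A, v0 + 2.0, v0)                  # uniform translation
n_bent = smoothness_norm(A, np.array([0.0, 1.0, 3.0]), v0)  # uneven deformation
```

Note that a uniform translation costs nothing (n_rigid is zero) while a non-uniform deformation is penalized: the term keeps the deformed template smooth without forbidding rigid motion.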
  • at the first iterations, the template may be arbitrarily far from the individual's face example, so that the points pv of the face surface closest to the points v of the template are not well defined.
  • a large value is fixed for the weighting coefficient of this term to ensure that the transformation is almost rigid, that is to say that the shape of the face of the template is distorted as little as possible.
  • the value of the coefficient is increased.
  • the points pv are searched for on the surface of the individual face example, as the points closest to the points v of the template deformed at that iteration.
  • these points pv become more and more reliable, and the value of the coefficient is decreased to make the comparison more flexible.
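The iterative mapping of step 131 can be sketched as a closest-point loop with an annealed rigidity weight (a simplified stand-in: the patent minimizes the three weighted terms above with a proper solver, whereas the gradient step and names below are our assumptions):

```python
import numpy as np

def register_template(template, example_surface, n_iters=5,
                      beta_start=10.0, beta_decay=0.5):
    """Sketch of iterative template registration: at each iteration the
    closest surface points p_v are recomputed and the rigidity weight
    beta is decreased, so the fit starts almost rigid and becomes
    progressively more flexible.

    template        : (N, 3) template vertices
    example_surface : (P, 3) sampled points of the face example
    """
    v = template.copy()
    beta = beta_start
    history = []
    for _ in range(n_iters):
        # Closest-point correspondences p_v on the example surface.
        d = np.linalg.norm(v[:, None, :] - example_surface[None, :, :], axis=2)
        p_v = example_surface[np.argmin(d, axis=1)]
        # Damped step toward p_v; beta plays the role of the rigidity
        # weight (large beta = almost rigid, small beta = flexible).
        v = v + (p_v - v) / (1.0 + beta)
        history.append(beta)
        beta *= beta_decay  # correspondences grow more reliable
    return v, history

# Toy run: a single template vertex pulled toward a single surface point.
v_fit, betas = register_template(np.zeros((1, 3)), np.array([[1.0, 1.0, 1.0]]))
```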
  • This iterative mapping step is performed for each individual face example. It results in a deformed template, which corresponds to an example of a face, from which the value of Sj, the deviation between the template and the face example, can be deduced.
  • a deformable three-dimensional face model is obtained, comprising the template S0 and the deviations Sj, which can be linearly combined to obtain any individual face.
  • this model can be used to generate, from a face image of an individual, a three-dimensional shape of his face.
  • an image of the face of an individual that one wishes to identify is acquired, for example by means of a control server of FIG. 1.
  • An example of such an image is appended in FIG. 5a.
  • the pose is defined relative to a reference using six parameters: three rotation angles, two translation parameters and a scale factor and is defined as follows:
  • p is a two-dimensional vector, comprising the X and Y coordinates of the projection of each vertex v in three dimensions
  • s is the scale parameter
  • R is a 2 × 3 matrix whose two rows are the first two rows of a rotation matrix
  • t is a translation vector in X and Y.
  • the rotation matrix is written according to the Euler angles ax, ay and az as follows:
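The pose model p = s·R·v + t, with R the first two rows of an Euler-angle rotation matrix, can be sketched as follows (the Rz·Ry·Rx composition is one common convention; the patent's exact composition is not reproduced in this text):

```python
import numpy as np

def rotation_matrix(ax, ay, az):
    """3D rotation from the Euler angles a_x, a_y, a_z (radians),
    composed as Rz @ Ry @ Rx (one common convention; an assumption)."""
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(v, s, angles, t):
    """Weak-perspective projection p = s * R * v + t: R is the first two
    rows of the rotation matrix, t a 2D translation, s a scale factor.

    v : (3, N) vertices; returns (2, N) image coordinates."""
    R = rotation_matrix(*angles)[:2, :]  # keep only the X and Y rows
    return s * (R @ v) + t.reshape(2, 1)

# Identity pose at unit scale leaves the X and Y coordinates unchanged.
verts = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 4.0]])
p = project(verts, 1.0, (0.0, 0.0, 0.0), np.zeros(2))
```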
  • the positions of the characteristic points of the face of the individual on the image are acquired in the same way as previously, for example by the pointing of an operator or by an automatic detection.
  • pi is the position of a characteristic point i on the image and vi is the vertex of the corresponding point i of the template.
  • Each characteristic point i is therefore assigned a weighting coefficient ci representative of the "confidence" in the position of the point. If a point is invisible on the image, its confidence coefficient is zero.
  • the pose obtained for the template at the end of the minimization constitutes the pose of the face of the individual on the image.
  • This optimization problem is solved with a two-step procedure, the first step 310 being the linear search for a solution, and the second step being the non-linear minimization 320 to refine the estimate of the pose obtained with the first step.
  • This first step of linear resolution 310 provides a good initial estimate of the pose, but since the assumption adopted for the linear resolution does not hold in practice, the estimate needs to be refined by the non-linear estimation step 320.
  • the result of the linear step is refined by implementing a non-linear iterative step 320, for which a preferred method is the minimization of
  • This step finally makes it possible to obtain a first estimate of the pose of the face of the individual on the image, this pose then being refined during the step 400 of "flexible" estimation of the pose and the shape of the face. It is therefore considered that at this stage a "draft" of the pose of the face has been determined.
  • step 400 of flexible estimation of the pose and the shape is implemented using the three-dimensional face model obtained in step 100.
  • this model is written in the form of a linear combination of the template S0 and deviations of this template compared to examples of individuals:
  • the shape of any face can be obtained by choosing the coefficients αj of the linear combination.
  • the flexible estimation of the shape and pose of the face of the individual on the image is thus achieved by minimizing the difference between the characteristic points pi of the face of the individual on the image and the projections of the corresponding points of the model.
  • The shape of the face obtained from the model (via the coefficients αj) and the pose parameters of the face are modified iteratively.
  • the coefficients αj are then constrained to guarantee a realistic human face.
  • the norm of the derivative of the deformation of the three-dimensional model is minimized, the deformed vertices of the model being here defined as a function of the vector α comprising the αj for j between 1 and M.
  • the derivative of the deformation of the three-dimensional model is obtained by multiplying the deformed model by a matrix A' constructed in the same way as the previous differentiation matrix A.
  • This minimization step corresponds to a hypothesis of continuity of the face, which is verified regardless of the individual and therefore allows the process to be as general as possible, that is to say applicable for any individual.
  • This equation is solved, analogously to the nonlinear minimization step, using the Levenberg-Marquardt minimization algorithm.
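A minimal Levenberg-Marquardt loop of the kind named here might look like this (a generic textbook sketch, not the patent's implementation, demonstrated on a toy linear least-squares problem):

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, n_iters=50, lam=1e-3):
    """Minimal Levenberg-Marquardt: damped Gauss-Newton steps, with the
    damping factor lam decreased on success and increased on failure."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        r = residual(x)
        J = jacobian(x)
        H = J.T @ J + lam * np.eye(x.size)   # damped normal equations
        step = np.linalg.solve(H, -J.T @ r)
        if np.linalg.norm(residual(x + step)) < np.linalg.norm(r):
            x = x + step
            lam *= 0.5   # success: behave more like Gauss-Newton
        else:
            lam *= 10.0  # failure: behave more like gradient descent
    return x

# Fit y = a*x + b to noiseless data; the loop should recover a=2, b=1.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = 2.0 * xs + 1.0
res = lambda p: p[0] * xs + p[1] - ys
jac = lambda p: np.stack([xs, np.ones_like(xs)], axis=1)
params = levenberg_marquardt(res, jac, np.zeros(2))
```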
  • the initialization of the pose is provided by the pose obtained at the end of the rigid estimation step.
  • the initial shape used for the minimization is that of the original template S0, i.e. the initial values of the coefficients αj are zero.
  • the deformed three-dimensional model thus corresponds to the three-dimensional shape of the individual's face in the image, represented in FIG. 5b.
  • This three-dimensional shape can be manipulated simply to obtain a frontal representation.
  • a representation of the texture of the face of the individual is generated, represented in FIG. 5c.
  • a two-dimensional face image of the individual is generated. This image is illustrated in Figure 5d. It can serve as a basis for a conventional identification method by face recognition.
  • this method can be implemented for a plurality of input images of the same individual, to obtain a unique three-dimensional shape of the individual's face and a unique representation of the texture of the individual's face.
  • a set of pose parameters must be estimated for each input image.
  • step 400 of flexible estimation of the pose and the shape is carried out on all the K images by searching for the following minimum:
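The joint minimum over the K images (a single shared shape vector, one set of pose parameters per image) can be sketched as follows (the names and the toy projection function are assumptions; the patent's exact functional is not reproduced in this text):

```python
import numpy as np

def multi_image_cost(alpha, poses, project_fn, landmarks_per_image):
    """Joint cost over K input images: one shared shape vector alpha,
    one pose per image; sums the squared landmark residuals."""
    total = 0.0
    for pose, lm in zip(poses, landmarks_per_image):
        pred = project_fn(alpha, pose)       # predicted landmark positions
        total += float(np.sum((pred - lm) ** 2))
    return total

# Toy stand-in model: the "shape" is a 2D point, each "pose" a 2D offset.
project_fn = lambda alpha, pose: alpha + pose
alpha = np.array([1.0, 1.0])
poses = [np.zeros(2), np.array([1.0, 0.0])]
landmarks = [np.array([1.0, 1.0]), np.array([2.0, 1.0])]
cost = multi_image_cost(alpha, poses, project_fn, landmarks)
```

Minimizing this total cost over alpha and all the poses simultaneously is what ties the K images to a single three-dimensional face shape.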
  • FIG. 6a shows two input images of the same individual, and FIG. 6b a frontal image of the individual generated with this method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)
  • Image Generation (AREA)
  • Image Analysis (AREA)
EP13792400.7A 2012-11-20 2013-11-20 Verfahren zur erzeugung eines dreidimensionalen gesichtsmodells Withdrawn EP2923302A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR1261025A FR2998402B1 (fr) 2012-11-20 2012-11-20 Procede de generation d'un modele de visage en trois dimensions
PCT/EP2013/074310 WO2014079897A1 (fr) 2012-11-20 2013-11-20 Procede de generation d'un modele de visage en trois dimensions

Publications (1)

Publication Number Publication Date
EP2923302A1 true EP2923302A1 (de) 2015-09-30

Family

ID=47878193

Family Applications (1)

Application Number Title Priority Date Filing Date
EP13792400.7A Withdrawn EP2923302A1 (de) 2012-11-20 2013-11-20 Verfahren zur erzeugung eines dreidimensionalen gesichtsmodells

Country Status (5)

Country Link
US (1) US10235814B2 (de)
EP (1) EP2923302A1 (de)
JP (1) JP6318162B2 (de)
FR (1) FR2998402B1 (de)
WO (1) WO2014079897A1 (de)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3018937B1 (fr) * 2014-03-19 2017-07-07 Morpho Procede de modelisation amelioree d'un visage a partir d'une image
FR3028064B1 (fr) 2014-11-05 2016-11-04 Morpho Procede de comparaison de donnees ameliore
JP6541334B2 (ja) 2014-11-05 2019-07-10 キヤノン株式会社 画像処理装置、画像処理方法、およびプログラム
JP6439634B2 (ja) 2015-09-04 2018-12-19 富士通株式会社 生体認証装置、生体認証方法および生体認証プログラム
CA2933799A1 (en) * 2016-06-21 2017-12-21 John G. Robertson System and method for automatically generating a facial remediation design and application protocol to address observable facial deviations
WO2018053703A1 (en) * 2016-09-21 2018-03-29 Intel Corporation Estimating accurate face shape and texture from an image
JP6930091B2 (ja) * 2016-11-15 2021-09-01 富士フイルムビジネスイノベーション株式会社 画像処理装置、画像処理方法、画像処理システムおよびプログラム
US10572720B2 (en) 2017-03-01 2020-02-25 Sony Corporation Virtual reality-based apparatus and method to generate a three dimensional (3D) human face model using image and depth data
US10621788B1 (en) 2018-09-25 2020-04-14 Sony Corporation Reconstructing three-dimensional (3D) human body model based on depth points-to-3D human body model surface distance
CN109376698B (zh) * 2018-11-29 2022-02-01 北京市商汤科技开发有限公司 人脸建模方法和装置、电子设备、存储介质、产品
JP6675564B1 (ja) 2019-05-13 2020-04-01 株式会社マイクロネット 顔認識システム、顔認識方法及び顔認識プログラム
JP7321772B2 (ja) * 2019-05-22 2023-08-07 キヤノン株式会社 画像処理装置、画像処理方法、およびプログラム
CN111508069B (zh) * 2020-05-22 2023-03-21 南京大学 一种基于单张手绘草图的三维人脸重建方法
CN113643412B (zh) * 2021-07-14 2022-07-22 北京百度网讯科技有限公司 虚拟形象的生成方法、装置、电子设备及存储介质

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6072496A (en) * 1998-06-08 2000-06-06 Microsoft Corporation Method and system for capturing and representing 3D geometry, color and shading of facial expressions and other animated objects
DE69934478T2 (de) * 1999-03-19 2007-09-27 MAX-PLANCK-Gesellschaft zur Förderung der Wissenschaften e.V. Verfahren und Gerät zur Bildverarbeitung auf Basis von Metamorphosemodellen
US6807290B2 (en) * 2000-03-09 2004-10-19 Microsoft Corporation Rapid computer modeling of faces for animation
US9400921B2 (en) * 2001-05-09 2016-07-26 Intel Corporation Method and system using a data-driven model for monocular face tracking
US8553949B2 (en) * 2004-01-22 2013-10-08 DigitalOptics Corporation Europe Limited Classification and organization of consumer digital images using workflow, and face detection and recognition
US7218774B2 (en) * 2003-08-08 2007-05-15 Microsoft Corp. System and method for modeling three dimensional objects from a single image
US8571272B2 (en) * 2006-03-12 2013-10-29 Google Inc. Techniques for enabling or establishing the use of face recognition algorithms
US8295614B2 (en) * 2006-04-14 2012-10-23 Nec Corporation Collation apparatus and collation method
US20140043329A1 (en) * 2011-03-21 2014-02-13 Peng Wang Method of augmented makeover with 3d face modeling and landmark alignment
WO2013020248A1 (en) * 2011-08-09 2013-02-14 Intel Corporation Image-based multi-view 3d face generation
JP5842541B2 (ja) * 2011-11-01 2016-01-13 大日本印刷株式会社 三次元ポートレートの作成装置

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIU ZICHENG ET AL: "Rapid Modeling of Animated Faces From Video", 28 February 2000 (2000-02-28), XP055775253, Retrieved from the Internet <URL:https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.37.420&rep=rep1&type=pdf> [retrieved on 20210211] *
See also references of WO2014079897A1 *
ZICHENG LIU ET AL: "Rapid modeling of animated faces from video images", PROCEEDINGS ACM MULTIMEDIA, ACM, 2 PENN PLAZA, SUITE 701 NEW YORK NY 10121-0701 USA, 30 October 2000 (2000-10-30), pages 475 - 476, XP058235929, ISBN: 978-1-58113-198-7, DOI: 10.1145/354384.376389 *

Also Published As

Publication number Publication date
US10235814B2 (en) 2019-03-19
FR2998402B1 (fr) 2014-11-14
US20150310673A1 (en) 2015-10-29
JP2016501396A (ja) 2016-01-18
FR2998402A1 (fr) 2014-05-23
JP6318162B2 (ja) 2018-04-25
WO2014079897A1 (fr) 2014-05-30

Similar Documents

Publication Publication Date Title
WO2014079897A1 (fr) Method for generating a three-dimensional face model
JP4353246B2 (ja) Normal information estimation device, registered image group creation device, image collation device, and normal information estimation method
US10554957B2 (en) Learning-based matching for active stereo systems
EP3707676A1 (de) Method for estimating the placement of a camera in the reference frame of a three-dimensional scene, and associated device, augmented reality system and computer program
EP3582141B1 (de) Method for learning the parameters of a convolutional neural network
Ratyal et al. Deeply learned pose invariant image analysis with applications in 3D face recognition
EP1864242A1 (de) Method for identifying faces from facial images, and corresponding device and computer program
JP2005149507A (ja) Object recognition method and device using textons
CN105590020B (zh) Improved data comparison method
FR2781906A1 (fr) Electronic device for automatic image registration
FR3088467A1 (fr) Method for classifying an input image representing a biometric trait by means of a convolutional neural network
Alqahtani et al. 3D face tracking using stereo cameras: A review
Jamil et al. Illumination-invariant ear authentication
EP3145405B1 (de) Method for determining at least one behavioural parameter
FR3103045A1 (fr) Method for augmenting a training image database representing a print on a background by means of a generative adversarial network
CN116778533A (zh) Method, device, equipment and medium for extracting a full region-of-interest palm print image
US9786030B1 (en) Providing focal length adjustments
FR3018937A1 (fr) Method for improved modelling of a face from an image
EP3929809A1 (de) Method for detecting at least one visible biometric feature on an input image by means of a convolutional neural network
Ruiz Matarán Bayesian Modeling and Inference in Image Recovery and Classification Problems
Babanin et al. Performance evaluation of face alignment algorithms on "in-the-wild" selfies
JP2023038885A (ja) Image processing method and image processing system
CN117351246A (zh) Mismatched pair removal method and system, and readable medium
EP1095358A1 (de) Method for modelling objects or three-dimensional scenes
Pollefeys et al. Calibration and Shape Recovery from Videos of Dynamic Scenes

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20150616

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAX Request for extension of the european patent (deleted)
RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: IDEMIA IDENTITY & SECURITY FRANCE

REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Free format text: PREVIOUS MAIN CLASS: G06K0009000000

Ipc: G06T0015040000

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

RIC1 Information provided on ipc code assigned before grant

Ipc: G06T 17/10 20060101ALI20190628BHEP

Ipc: G06T 7/33 20170101ALI20190628BHEP

Ipc: G06T 19/20 20110101ALI20190628BHEP

Ipc: G06T 3/00 20060101ALI20190628BHEP

Ipc: G06T 15/04 20110101AFI20190628BHEP

Ipc: G06K 9/00 20060101ALI20190628BHEP

17Q First examination report despatched

Effective date: 20190711

REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Free format text: PREVIOUS MAIN CLASS: G06T0015040000

Ipc: G06V0040160000

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230919

RIC1 Information provided on ipc code assigned before grant

Ipc: G06V 40/16 20220101AFI20231005BHEP

INTG Intention to grant announced

Effective date: 20231020

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20240223