EP2923302A1 - Method for generating a three-dimensional facial model - Google Patents
Method for generating a three-dimensional facial model
- Publication number
- EP2923302A1 (application EP13792400.7A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- face
- template
- individual
- shape
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/169—Holistic features and representations, i.e. based on the facial image taken as a whole
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/10—Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/06—Topological mapping of higher dimensional structures onto lower dimensional surfaces
- G06T3/073—Transforming surfaces of revolution to planar images, e.g. cylindrical surfaces to planar images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/337—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2021—Shape modification
Definitions
- the field of the invention is that of the image processing of individuals' faces, in order to generate a frontal view of an individual from a non-frontal image thereof.
- the invention is particularly applicable to the identification of individuals by face recognition.
- the identification of individuals by face recognition is implemented by comparing two images of faces, and deducing from this comparison a score evaluating the similarity between the faces on the images.
- the optimal recognition efficiency is therefore achieved not only when two faces have the same pose on the compared images, but also when the faces are seen from the front, because this view provides the most information on the shape of the face.
- image processing methods have been developed for generating, from an image of a face, an image of the same face, seen from the front.
- the acquired image is processed to determine the three-dimensional shape of the face of the individual on the image, its pose, that is to say its position relative to a front view, as well as a representation of the texture of the face, that is, the physical appearance of the surface of the face superimposed on the three-dimensional structure of the shape of the face.
- the determination of the three-dimensional shape of the face of the individual is carried out by deforming a deformable three-dimensional model of the human face, to minimize the difference between the characteristic points of the model (position of the eyes, nostrils, tip of the nose, commissures of the lips, etc.) and the corresponding points of the face on the image.
- the purpose of the invention is to propose a method of processing a face image of an individual not having the aforementioned drawback, and in particular making it possible to determine the shape of any human face appearing in a face image.
- the subject of the invention is a method for generating a deformable three-dimensional face model from a plurality of images of faces of individuals, the method being characterized in that it comprises the steps of:
- acquiring a face template and example shapes of individuals' faces; for each face example, iteratively deforming the template so that the shape of the deformed template matches the shape of the face example, and determining the deformation between the initial template and the deformed template, said iterative deformation of the template including minimizing the derivative of the gap between the initial template and the deformed template, to constrain the deformed template to retain a human face shape.
- generating the face model as a linear combination of the shape of the template and of the deformations between the initial template and the deformed template for each example of an individual's face.
- the method according to the invention also has at least one of the following characteristics:
- the acquisition of example shapes of faces of individuals comprises the detection of characteristic points of each example of faces of individuals, and the matching of the corresponding characteristic points between the examples of faces.
- the iterative deformation of the template comprises, for each example of an individual's face, the modification of the positions of the characteristic points of the template in order to minimize a difference in position between said characteristic points and the corresponding points of the example of an individual's face.
- the iterative deformation of the template further comprises minimizing a difference in position between the points of the template and the surface of the face example.
- the step of iterative deformation of the template comprises the iterative minimization of a linear combination of:
- the invention also proposes a method for processing at least one face image of an individual, comprising the generation, from the image, of a three-dimensional representation of the individual's face, said generation comprising the steps of:
- the deformation of the three-dimensional model being carried out by modifying the coefficients of the linear combination of the model
- the method being characterized in that the changes of the coefficients of the linear combination are constrained to ensure that the deformed model corresponds to a human face.
- the method of processing a face image further comprises at least one of the following features: the modifications of the coefficients of the linear combination are constrained by minimizing the norm of the derivative of the difference between the initial model and the deformed model.
- the pose and the shape of the face of the individual on the image are estimated simultaneously, by iterative modification of the pose and the shape of the three-dimensional model to minimize the difference between the characteristic points of the face of the individual on the image and corresponding points of the model.
- the modification of the pose of the model comprises at least one transformation from the following group: translation, rotation, change of scale.
- the modification of the shape of the three-dimensional model comprises the determination of the coefficients of the linear combination between the face template and the deformations applied to the template to obtain each face example.
- the method further comprises the steps of:
- the method is implemented on a plurality of face images of individuals, and:
- the step of determining a draft of the pose of the face of the individual is implemented on each face image of the individual
- the step of determining the shape and the pose of the face of the individual is implemented on all the face images by iteratively deforming the three-dimensional model so that the shape of the deformed model corresponds to the shape of the individual's face on the images.
- the invention finally proposes a system for identifying individuals comprising at least one control server for an individual to be identified, and at least one management server of a database of N reference images of listed individuals, the control server comprising acquisition means adapted to acquire an image of the face of the individual,
- the system for identifying individuals being characterized in that one of the control server and the management server comprises processing means adapted to implement the processing method according to the invention and, from the front view of the face of an individual thus obtained, to carry out face recognition processing by comparison with the reference images of the database, in order to identify the individual.

DESCRIPTION OF THE FIGURES
- FIG. 1 represents an exemplary identification system adapted to implement an image processing method.
- FIG. 2a represents the main steps of the method for generating a three-dimensional model of faces
- FIG. 2b represents the main steps of the image processing method according to the invention.
- Figure 3 illustrates the characteristic points of a face.
- Figure 4 illustrates notations used for calculating a differentiation matrix.
- FIG. 5a is an image of a face to be processed in order to identify the individual in the image
- FIGS. 5b and 5c show respectively the reconstructed shape of the face of the individual and a representation of the texture of said face
- Figure 5d is a front view of the face of the individual reconstructed from the image of Figure 5a.
- Figures 6a and 6b are respectively input images of the same face and a frontal image of the face obtained from these input images.
- an identification system 1 adapted to implement an image processing method.
- a control server SC, provided with suitable image acquisition means 11, proceeds to the acquisition of an image of the face of the individual. This image may be non-frontal.
- the control server SC can also acquire a face image of the individual, this time frontal, which is stored in an identity document.
- the control server then advantageously comprises processing means adapted to implement, on the first image of the face of the individual, a processing aimed at "frontalising" this image, that is, to generate a frontal image from this image.
- the control server can advantageously compare the two front images it has, to determine if the faces on the images correspond to the same person.
- the second face image can be stored, among other images, in a database of a management server SG.
- the control server transmits the first image that it has acquired to the management server, and the latter implements the processing of the first image and the comparison to identify the individual I.
- the comparison can take place between the "frontalised" image of the individual and each face image recorded in the database.
- the shape of the object is composed of a set of 3D vertices, each vertex being a point of the object defined by coordinates along three orthogonal directions. The shape is stored as a matrix S, with one column of coordinates per vertex.
- the surface of the object is materialized by connecting vertices to one another to form triangles.
- a list of triangles is thus defined for each object, each triangle being indicated by the three indices of the corresponding columns of the matrix S.
- a representation of the texture of the object is an image used to color the three-dimensional object obtained from its shape and its surface. The surface of the object defined by the triangle list is used to match the vertices of the object to a particular texture.
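As an illustration of the representation just described, the sketch below holds the shape as a 3 x N matrix S, the surface as a list of triangles indexing the columns of S, and the texture as an image with per-vertex texture coordinates. The class and field names are illustrative; the patent does not prescribe an implementation.

```python
import numpy as np

class Mesh:
    """Shape, surface and texture of a 3D object, as described above."""
    def __init__(self, S, triangles, texture=None, uv=None):
        self.S = np.asarray(S, dtype=float)           # (3, N): one column of coordinates per vertex
        self.triangles = np.asarray(triangles, int)   # (T, 3): three column indices of S per triangle
        self.texture = texture                        # (H, W, 3) image, optional
        self.uv = uv                                  # (N, 2): texture coordinates per vertex, optional

# example: a single triangle in 3D
mesh = Mesh(S=[[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [0.0, 0.0, 0.0]],
            triangles=[[0, 1, 2]])
assert mesh.S.shape == (3, 3) and mesh.triangles.shape == (1, 3)
```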
- the method includes a first step 100 of generating a three-dimensional model of a human face shape, which can be deformed to obtain any type of human face shape.
- This model is mathematically formulated as a linear combination of examples of individuals' faces, noted: S = S^0 + Σ_j α_j S^j, in which:
- S^0 is a template of human face shape, constituting the basis of the model
- S^0 + S^j represents the shape of the face of a particular example of a real individual; S^j therefore represents the gap between one of the face examples and the template.
- the coefficients α_j are later determined to deform the model S so that it matches the face of an individual that we want to identify.
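A minimal sketch of evaluating this linear combination, assuming the shapes are stored as (3, N) arrays; function and variable names are illustrative:

```python
import numpy as np

def deform_model(S0, deviations, alpha):
    """Return S = S0 + sum_j alpha_j * S_j for shapes stored as (3, N) arrays."""
    S = S0.copy()
    for a_j, S_j in zip(alpha, deviations):
        S = S + a_j * S_j
    return S

# with all coefficients alpha_j at zero, the model reproduces the template S0
rng = np.random.default_rng(0)                        # placeholder data for the sketch
S0 = rng.standard_normal((3, 100))
deviations = [rng.standard_normal((3, 100)) for _ in range(5)]
assert np.allclose(deform_model(S0, deviations, np.zeros(5)), S0)
```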
- the template S^0 of a human face is generated: it may be the face shape of a particular individual, or an average of the faces of a plurality of people.
- the face shape or shapes are defined by a series of vertices corresponding to points of the face. These points comprise, inter alia, a number N_s of characteristic points of a face, represented in FIG. 3, typically 22, which are the corners of the eyes, the corners of the mouth, the nostrils, the tip of the nose, the ears, etc.
- These characteristic points can be manually marked by an operator from a frontal face image, or they can be automatically detected by a server.
- the human face template also includes on the order of a few thousand other vertices acquired by a 3D scanner.
- in a step 120, the shapes of examples of faces of real individuals are acquired. This acquisition is implemented in the same way as before, by identifying the characteristic points of the faces of the individuals to generate a list of vertices.
- the shapes of faces thus acquired each correspond to a shape S^0 + S^j.
- the deviation S^j between the face and the template is determined from the vertex lists of each face shape.
- the peculiarity of the template is that it is a face shape for which the vertex indexing is already done. Therefore, the vertex indexing of each example face shape is performed by mapping, in step 130, the vertices of each example shape to the vertices of the template.
- to this end, in a step 131, the template is deformed iteratively so as to minimize the gap between the shape of the template and that of the face example; the deformed template must at all times correspond to a human face shape.
- the mathematical function to be minimized includes three terms.
- the first term serves to minimize the distance between the characteristic points of an example of a face and the corresponding points of the template. Denoting p_i the position of the characteristic point i of the face example, it is written: Σ_{i=1..N_s} ‖v_ki − p_i‖², where:
- i is the index of a characteristic point
- v_ki is the vertex of the template, after deformation, corresponding to the same characteristic point i
- N s is the number of characteristic points in a face, for example 22.
- the second term is used to match the surface of the face shape of the template with the surface of the shape of the face example.
- the function to be minimized represents the difference between the points of the template and the points of the surface of the face example that are closest to them. It is noted: Σ_v ‖v − p_v‖², where:
- p_v is a point on the surface of the face example, that is to say the point corresponding to the projection of the vertex v of the template onto the surface of the face. It is possible that the surface of the face example is incomplete, for example if it is obtained from a non-frontal image, so that some points of the template do not correspond to any point of the face example. In this case, these points of the template are not taken into account.
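A sketch of this second term follows, in which the projection p_v is approximated by the nearest vertex of the face example (a common simplification; the patent projects onto the surface itself), with a distance threshold to discard template points that have no counterpart on an incomplete example. All names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_term(template_pts, example_pts, max_dist=np.inf):
    """Sum over template points v of ||v - p_v||^2, approximating p_v by the
    nearest vertex of the face example; points farther than max_dist are
    treated as having no counterpart and are ignored."""
    dist, idx = cKDTree(example_pts).query(template_pts)   # both (n, 3) arrays
    valid = dist <= max_dist
    return np.sum(dist[valid] ** 2), idx, valid
```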
- the third term constrains the deformed template to remain a real human face, even if the face example used for the deformation of the template is incomplete or contains noise.
- This term makes the deformed template as "smooth" as possible, that is to say as continuous as possible, by minimizing the norm of the derivative of the transformation of the template at each iteration. This norm is expressed as follows: ‖A (v − vec(S^0))‖, where:
- v is the concatenation of the 3D vertices of the deformed template
- vec(S^0) is the same concatenation for the template before transformation
- v and vec(S^0) are vectors of size 3N × 1, where N is the number of vertices of the template.
- A is a differentiation matrix applied to the vector v − vec(S^0); it has dimension 3T × 3N, where T is the number of triangles of the surface of the template.
- the derivative is calculated for each triangle t of the surface of the template, the derivative of the deformation of a triangle t being calculated with respect to the triangles q neighbouring t, by a finite-difference approximation between the triangle t and its neighbouring triangles q, as follows: (∇d)_t = Σ_{q ∈ N_t} w_qt (d_q − d_t) / ‖b_q − b_t‖, where:
- N_t is the set of triangles q adjacent to the triangle t
- w_qt is a weighting factor that depends on the areas of the triangles t and q
- d_t is the deformation of the triangle t at its centroid
- b_t is the position of the centroid of the triangle t.
- the distances between the centroids and the weighting factors are calculated on the undeformed template S^0.
- the weighting factor w_qt is the sum of the areas of two triangles whose common base is the edge connecting the triangles t and q, and whose opposite vertex is respectively the centroid b_t of the triangle t and the centroid b_q of the triangle q.
- to obtain the deformation d_t, the deformation of the shape (v − vec(S^0)) is multiplied by a matrix B_t of dimension 3 × 3N which is zero everywhere except for the elements associated with the vertices of the triangle t. These elements are then equal to 1/3.
- the matrix A, of dimension 3T × 3N, is obtained by vertically concatenating the set of matrices B_t associated with each triangle t, whose coefficients corresponding to the vertices of a triangle t are multiplied by the weighting factors w_qt and divided by the distances between the centroids ‖b_q − b_t‖.
- the differentiation matrix A depends solely on the surface of the undeformed template (the list of triangles of S^0), and not on the shape of the deformed template v. It is therefore constant.
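The following sketch assembles such a matrix A from the triangle list, under the reading given above: d_t averages the displacements of the three vertices of t (coefficients 1/3), and neighbours are triangles sharing an edge. For brevity the weights w_qt are set to 1 here, whereas the patent builds them from the areas of the two triangles based on the shared edge and the centroids b_t and b_q; all names are illustrative.

```python
import numpy as np
from scipy.sparse import lil_matrix, kron, identity

def differentiation_matrix(vertices, triangles):
    """Constant differentiation matrix A of dimension (3T, 3N)."""
    vertices = np.asarray(vertices, float)            # (N, 3) template vertex positions
    triangles = np.asarray(triangles, int)            # (T, 3) vertex indices
    T, N = len(triangles), len(vertices)
    centroids = vertices[triangles].mean(axis=1)      # (T, 3) centroids b_t

    # adjacency: triangles sharing an edge are neighbours
    edge_to_tris = {}
    for t, tri in enumerate(triangles):
        for e in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            edge_to_tris.setdefault(frozenset((int(e[0]), int(e[1]))), []).append(t)

    C = lil_matrix((T, N))
    for tris in edge_to_tris.values():
        if len(tris) != 2:
            continue                                  # boundary edge: no neighbour on one side
        for t, q in ((tris[0], tris[1]), (tris[1], tris[0])):
            w_qt = 1.0                                # placeholder for the area-based weight
            coeff = w_qt / np.linalg.norm(centroids[q] - centroids[t])
            for i in triangles[q]:
                C[t, i] += coeff / 3.0                # + d_q, averaging the vertices of q
            for i in triangles[t]:
                C[t, i] -= coeff / 3.0                # - d_t, averaging the vertices of t

    # v is the concatenation [x1, y1, z1, x2, ...]: expand each coefficient to 3 coordinates
    return kron(C.tocsr(), identity(3), format="csr")
```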
- at the first iterations, the template may be arbitrarily far from the individual's face example, and thus the points p_v of the face surface closest to the points v of the template are not well defined.
- a large value is therefore fixed for the coefficient weighting the third term, to ensure that the transformation is almost rigid, that is to say that the face shape of the template is distorted as little as possible.
- as the iterations proceed, the weight given to the fit between the template and the face example is increased.
- at each iteration, the points p_v are searched for on the surface of the example of the individual face, as the points closest to the points v of the template deformed at this iteration.
- these points p_v become more and more reliable, and the value of the coefficient weighting the third term is decreased to make the matching more flexible.
- This iterative mapping step is performed for each individual face example. It results in a deformed template, corresponding to the face example, from which we can deduce the value of S^j, the deviation between the template and the face example.
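Since the landmark term, the surface term and the smoothness term are all linear in v once the closest points p_v are fixed, each iteration can be carried out as one sparse linear least-squares solve, as in this sketch. The weights, the decay schedule and all names are illustrative, p_v is again approximated by the nearest vertex, and A is the matrix from the previous sketch.

```python
import numpy as np
from scipy.sparse import identity, vstack
from scipy.sparse.linalg import lsqr
from scipy.spatial import cKDTree

def fit_template(S0_vec, A, landmark_idx, landmark_targets, example_pts,
                 n_iters=10, lam=100.0, lam_decay=0.7):
    """Iteratively deform the template (step 131, under the reading above):
    refresh the closest points p_v, then solve for v the stacked system
    [landmark term; surface term; sqrt(lam) * smoothness term]."""
    N3 = S0_vec.size
    I = identity(N3, format="csr")
    rows = np.ravel([[3 * i, 3 * i + 1, 3 * i + 2] for i in landmark_idx])
    L = I[rows]                                       # selects the landmark coordinates of v
    tree = cKDTree(example_pts)
    v = S0_vec.copy()
    for _ in range(n_iters):
        _, nearest = tree.query(v.reshape(-1, 3))
        p_v = example_pts[nearest].ravel()            # closest points (nearest-vertex approx.)
        M = vstack([L, I, np.sqrt(lam) * A]).tocsr()
        b = np.concatenate([landmark_targets.ravel(), p_v,
                            np.sqrt(lam) * (A @ S0_vec)])
        v = lsqr(M, b)[0]                             # one sparse least-squares solve
        lam *= lam_decay                              # relax the rigidity over the iterations
    return v                                          # deformed template; S_j = v - S0_vec
```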
- a deformable three-dimensional face model is thus obtained, comprising the template S^0 and the deviations S^j, which can be linearly combined to obtain any individual face shape.
- this model can be used to generate, from a face image of an individual, a three-dimensional shape of his face.
- an image of the face of an individual that one wishes to identify is acquired, for example by means of the control server of FIG. 1.
- An example of such an image is appended in FIG. 5a.
- the pose is defined relative to a reference using six parameters: three rotation angles, two translation parameters and a scale factor. It is defined as follows: p = s R v + t, where:
- p is a two-dimensional vector comprising the X and Y coordinates of the projection of a three-dimensional vertex v
- s is the scale parameter
- R is a 2 × 3 matrix whose two rows are the first two rows of a 3 × 3 rotation matrix
- t is a translation vector in X and Y.
- the rotation matrix is written according to the Euler angles α_x, α_y and α_z, as the product of the three elementary rotations about the X, Y and Z axes.
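A sketch of this projection follows, with R built from the three elementary rotations. The composition order chosen here (R_z R_y R_x) is an assumption of the sketch, as the patent fixes its own convention; all names are illustrative.

```python
import numpy as np

def rotation_matrix(ax, ay, az):
    """3D rotation from Euler angles; composition order R_z @ R_y @ R_x is assumed."""
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(V, s, angles, t):
    """p = s * R[:2] @ v + t for every vertex: V is (3, N), t is (2,);
    returns the (2, N) image coordinates."""
    R2 = rotation_matrix(*angles)[:2]     # first two rows of the rotation matrix
    return s * (R2 @ V) + t.reshape(2, 1)
```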
- the positions of the characteristic points of the face of the individual on the image are acquired in the same way as previously, for example by the pointing of an operator or by automatic detection.
- the pose is estimated by minimizing the weighted difference Σ_i c_i ‖p_i − (s R v_i + t)‖², where p_i is the position of a characteristic point i on the image and v_i is the vertex of the corresponding point i of the template.
- Each characteristic point i is therefore assigned a weighting coefficient c_i, representative of the "confidence" in the position of the point. If a point is invisible on the image, then its confidence coefficient is zero.
- the pose obtained for the template at the end of the minimization constitutes the pose of the face of the individual on the image.
- This optimization problem is solved with a two-step procedure, the first step 310 being the linear search for a solution, and the second step being the non-linear minimization 320 to refine the estimate of the pose obtained with the first step.
- This first step of linear resolution 310 provides a good initial estimate of the pose; however, since the assumption adopted for the linear resolution does not hold exactly in practice, the estimate needs to be refined by the non-linear estimation step 320.
- the result of the linear step is refined by implementing an iterative non-linear step 320, for which a preferred method is the minimization of the same weighted difference between the characteristic points on the image and the projections of the corresponding points of the template.
- This step finally makes it possible to obtain a first estimate of the pose of the face of the individual on the image, this pose then being refined during the step 400 of "flexible" estimation of the pose and the shape of the face. It is therefore considered that at this stage a "draft" of the pose of the face has been determined.
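The non-linear refinement can be sketched as a weighted least-squares problem over the six pose parameters, solved with Levenberg-Marquardt; the patent names this algorithm for the analogous flexible step, and its use here, as well as all names, are assumptions of the sketch. `project` and `rotation_matrix` are the helpers sketched earlier.

```python
import numpy as np
from scipy.optimize import least_squares

def refine_pose(params0, landmarks_2d, model_pts, conf):
    """Minimize the c_i-weighted reprojection error of the characteristic
    points over the pose parameters (ax, ay, az, tx, ty, s), starting from
    the linear estimate params0; landmarks_2d is (2, N_s), conf is (N_s,)."""
    def residuals(params):
        ax, ay, az, tx, ty, s = params
        p = project(model_pts, s, (ax, ay, az), np.array([tx, ty]))
        return (np.sqrt(conf) * (p - landmarks_2d)).ravel()  # invisible points (c_i = 0) drop out
    return least_squares(residuals, params0, method="lm").x  # Levenberg-Marquardt
```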
- step 400 of flexible estimation of the pose and the shape is implemented using the three-dimensional face model obtained in step 100.
- this model is written in the form of a linear combination of the template S^0 and of the deviations of this template with respect to the face examples: S = S^0 + Σ_j α_j S^j.
- the shape of any face can be obtained by choosing the coefficients α_j of the linear combination.
- the flexible estimation of the shape and pose of the face of the individual on the image is thus achieved by minimizing the difference between the characteristic points p_i of the face of the individual on the image and the projections of the corresponding points of the model.
- to do so, the shape of the face obtained from the model (via the coefficients α_j) and the pose parameters of the face are iteratively modified.
- the α_j coefficients are then constrained to guarantee that the result remains a realistic human face.
- to that end, the norm of the derivative of the deformation of the three-dimensional model is minimized, the deformed vertices of the model being here defined as a function of the vector α comprising the α_j for j between 1 and M.
- the derivative of the deformation of the three-dimensional model is obtained by multiplying the deformed model by a matrix A′ constructed in the same way as the differentiation matrix A above.
- This minimization step corresponds to a hypothesis of continuity of the face, which is verified regardless of the individual and therefore allows the process to be as general as possible, that is to say applicable for any individual.
- This equation is solved, analogously to the nonlinear minimization step, using the Levenberg-Marquardt minimization algorithm.
- the initialization of the pose is provided by the pose obtained at the end of the rigid estimation step.
- the initial shape used for the minimization is that of the original template S^0, i.e. the initial values of the coefficients α_j are zero.
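Putting the pieces together, a sketch of the flexible estimation: the parameter vector stacks the six pose parameters and the coefficients α, the residuals stack the confidence-weighted landmark errors and the smoothness term ‖A′ (v(α) − vec(S^0))‖, and the minimization starts from the draft pose with α = 0. The weight mu and all names are illustrative; `project` and `deform_model` are the helpers sketched earlier.

```python
import numpy as np
from scipy.optimize import least_squares

def flexible_fit(S0, deviations, A2, landmarks_2d, landmark_idx, conf,
                 draft_pose, mu=1.0):
    """Jointly refine the pose (ax, ay, az, tx, ty, s) and the shape
    coefficients alpha; A2 plays the role of the matrix A' above."""
    M = len(deviations)

    def residuals(params):
        ax, ay, az, tx, ty, s = params[:6]
        alpha = params[6:]
        S = deform_model(S0, deviations, alpha)                 # (3, N) deformed model
        p = project(S[:, landmark_idx], s, (ax, ay, az), np.array([tx, ty]))
        r_fit = (np.sqrt(conf) * (p - landmarks_2d)).ravel()    # c_i-weighted landmark errors
        r_smooth = np.sqrt(mu) * (A2 @ (S.T.ravel() - S0.T.ravel()))
        return np.concatenate([r_fit, r_smooth])

    x0 = np.concatenate([draft_pose, np.zeros(M)])              # draft pose, alpha = 0
    return least_squares(residuals, x0, method="lm").x          # refined pose and alpha
```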
- the deformed three-dimensional model thus corresponds to the three-dimensional shape of the individual's face in the image, represented in FIG. 5b.
- This three-dimensional shape can be manipulated simply to obtain a frontal representation.
- a representation of the texture of the face of the individual is generated, represented in FIG. 5c.
- a two-dimensional frontal image of the face of the individual is generated. This image is illustrated in Figure 5d. It can serve as the basis for a conventional identification method by face recognition.
- this method can be implemented for a plurality of input images of the same individual, to obtain a single three-dimensional shape of the individual's face and a single representation of the texture of the individual's face.
- in that case, a set of pose parameters must be estimated for each input image.
- step 400 of flexible estimation of the pose and the shape is then carried out on all the K images, by searching for the following minimum over the shape coefficients α_j and the K sets of pose parameters: min Σ_{k=1..K} Σ_i c_i^k ‖p_i^k − (s_k R_k v_i + t_k)‖², where p_i^k is the characteristic point i on image k and (s_k, R_k, t_k) is the pose for image k.
- FIG. 6a shows two input images of the same individual, and FIG. 6b a frontal image of the individual generated with this method.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR1261025A FR2998402B1 (en) | 2012-11-20 | 2012-11-20 | METHOD FOR GENERATING A FACE MODEL IN THREE DIMENSIONS |
PCT/EP2013/074310 WO2014079897A1 (en) | 2012-11-20 | 2013-11-20 | Method for generating a three-dimensional facial model |
Publications (1)
Publication Number | Publication Date |
---|---|
EP2923302A1 true EP2923302A1 (en) | 2015-09-30 |
Family
ID=47878193
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP13792400.7A Withdrawn EP2923302A1 (en) | 2012-11-20 | 2013-11-20 | Method for generating a three-dimensional facial model |
Country Status (5)
Country | Link |
---|---|
US (1) | US10235814B2 (en) |
EP (1) | EP2923302A1 (en) |
JP (1) | JP6318162B2 (en) |
FR (1) | FR2998402B1 (en) |
WO (1) | WO2014079897A1 (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR3018937B1 (en) * | 2014-03-19 | 2017-07-07 | Morpho | METHOD FOR IMPROVED FACE MODELING FROM AN IMAGE |
JP6541334B2 (en) * | 2014-11-05 | 2019-07-10 | キヤノン株式会社 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM |
FR3028064B1 (en) | 2014-11-05 | 2016-11-04 | Morpho | IMPROVED DATA COMPARISON METHOD |
JP6439634B2 (en) | 2015-09-04 | 2018-12-19 | 富士通株式会社 | Biometric authentication device, biometric authentication method, and biometric authentication program |
CA2933799A1 (en) * | 2016-06-21 | 2017-12-21 | John G. Robertson | System and method for automatically generating a facial remediation design and application protocol to address observable facial deviations |
WO2018053703A1 (en) * | 2016-09-21 | 2018-03-29 | Intel Corporation | Estimating accurate face shape and texture from an image |
JP6930091B2 (en) * | 2016-11-15 | 2021-09-01 | 富士フイルムビジネスイノベーション株式会社 | Image processing equipment, image processing methods, image processing systems and programs |
US10572720B2 (en) | 2017-03-01 | 2020-02-25 | Sony Corporation | Virtual reality-based apparatus and method to generate a three dimensional (3D) human face model using image and depth data |
US10621788B1 (en) | 2018-09-25 | 2020-04-14 | Sony Corporation | Reconstructing three-dimensional (3D) human body model based on depth points-to-3D human body model surface distance |
CN109376698B (en) * | 2018-11-29 | 2022-02-01 | 北京市商汤科技开发有限公司 | Face modeling method and device, electronic equipment, storage medium and product |
JP6675564B1 (en) | 2019-05-13 | 2020-04-01 | 株式会社マイクロネット | Face recognition system, face recognition method, and face recognition program |
JP7321772B2 (en) * | 2019-05-22 | 2023-08-07 | キヤノン株式会社 | Image processing device, image processing method, and program |
CN111508069B (en) * | 2020-05-22 | 2023-03-21 | 南京大学 | Three-dimensional face reconstruction method based on single hand-drawn sketch |
CN113643412B (en) * | 2021-07-14 | 2022-07-22 | 北京百度网讯科技有限公司 | Virtual image generation method and device, electronic equipment and storage medium |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6072496A (en) * | 1998-06-08 | 2000-06-06 | Microsoft Corporation | Method and system for capturing and representing 3D geometry, color and shading of facial expressions and other animated objects |
DE69934478T2 (en) * | 1999-03-19 | 2007-09-27 | MAX-PLANCK-Gesellschaft zur Förderung der Wissenschaften e.V. | Method and apparatus for image processing based on metamorphosis models |
US6807290B2 (en) * | 2000-03-09 | 2004-10-19 | Microsoft Corporation | Rapid computer modeling of faces for animation |
US9400921B2 (en) * | 2001-05-09 | 2016-07-26 | Intel Corporation | Method and system using a data-driven model for monocular face tracking |
US8553949B2 (en) * | 2004-01-22 | 2013-10-08 | DigitalOptics Corporation Europe Limited | Classification and organization of consumer digital images using workflow, and face detection and recognition |
US7218774B2 (en) * | 2003-08-08 | 2007-05-15 | Microsoft Corp. | System and method for modeling three dimensional objects from a single image |
US8571272B2 (en) * | 2006-03-12 | 2013-10-29 | Google Inc. | Techniques for enabling or establishing the use of face recognition algorithms |
WO2007119870A1 (en) * | 2006-04-14 | 2007-10-25 | Nec Corporation | Checking device and checking method |
US20140043329A1 (en) * | 2011-03-21 | 2014-02-13 | Peng Wang | Method of augmented makeover with 3d face modeling and landmark alignment |
US20130201187A1 (en) * | 2011-08-09 | 2013-08-08 | Xiaofeng Tong | Image-based multi-view 3d face generation |
JP5842541B2 (en) * | 2011-11-01 | 2016-01-13 | 大日本印刷株式会社 | 3D portrait creation device |
- 2012-11-20 FR FR1261025A patent/FR2998402B1/en active Active
- 2013-11-20 WO PCT/EP2013/074310 patent/WO2014079897A1/en active Application Filing
- 2013-11-20 EP EP13792400.7A patent/EP2923302A1/en not_active Withdrawn
- 2013-11-20 US US14/646,009 patent/US10235814B2/en active Active
- 2013-11-20 JP JP2015542315A patent/JP6318162B2/en active Active
Non-Patent Citations (3)
LIU ZICHENG ET AL: "Rapid Modeling of Animated Faces From Video", 28 February 2000 (2000-02-28), XP055775253, Retrieved from the Internet <URL:https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.37.420&rep=rep1&type=pdf> [retrieved on 20210211] * |
See also references of WO2014079897A1 * |
ZICHENG LIU ET AL: "Rapid modeling of animated faces from video images", PROCEEDINGS ACM MULTIMEDIA, ACM, 2 PENN PLAZA, SUITE 701 NEW YORK NY 10121-0701 USA, 30 October 2000 (2000-10-30), pages 475 - 476, XP058235929, ISBN: 978-1-58113-198-7, DOI: 10.1145/354384.376389 * |
Also Published As
Publication number | Publication date |
---|---|
FR2998402B1 (en) | 2014-11-14 |
US10235814B2 (en) | 2019-03-19 |
US20150310673A1 (en) | 2015-10-29 |
JP2016501396A (en) | 2016-01-18 |
FR2998402A1 (en) | 2014-05-23 |
WO2014079897A1 (en) | 2014-05-30 |
JP6318162B2 (en) | 2018-04-25 |
Legal Events
- PUAI | Public reference made under article 153(3) EPC to a published international application that has entered the European phase | Free format text: ORIGINAL CODE: 0009012
- 17P | Request for examination filed | Effective date: 20150616
- AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
- AX | Request for extension of the European patent | Extension state: BA ME
- DAX | Request for extension of the European patent (deleted)
- RAP1 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: IDEMIA IDENTITY & SECURITY FRANCE
- REG | Reference to a national code | Ref country code: DE; Ref legal event code: R079; Free format text: PREVIOUS MAIN CLASS: G06K0009000000; Ipc: G06T0015040000
- STAA | Information on the status of an EP patent application or granted EP patent | STATUS: EXAMINATION IS IN PROGRESS
- RIC1 | Information provided on IPC code assigned before grant | Ipc: G06T 17/10 20060101ALI20190628BHEP; G06T 7/33 20170101ALI20190628BHEP; G06T 19/20 20110101ALI20190628BHEP; G06T 3/00 20060101ALI20190628BHEP; G06T 15/04 20110101AFI20190628BHEP; G06K 9/00 20060101ALI20190628BHEP
- 17Q | First examination report despatched | Effective date: 20190711
- STAA | Information on the status of an EP patent application or granted EP patent | STATUS: EXAMINATION IS IN PROGRESS
- STAA | Information on the status of an EP patent application or granted EP patent | STATUS: EXAMINATION IS IN PROGRESS
- REG | Reference to a national code | Ref country code: DE; Ref legal event code: R079; Free format text: PREVIOUS MAIN CLASS: G06T0015040000; Ipc: G06V0040160000
- GRAP | Despatch of communication of intention to grant a patent | Free format text: ORIGINAL CODE: EPIDOSNIGR1
- STAA | Information on the status of an EP patent application or granted EP patent | STATUS: GRANT OF PATENT IS INTENDED
- P01 | Opt-out of the competence of the unified patent court (UPC) registered | Effective date: 20230919
- RIC1 | Information provided on IPC code assigned before grant | Ipc: G06V 40/16 20220101AFI20231005BHEP
- INTG | Intention to grant announced | Effective date: 20231020
- STAA | Information on the status of an EP patent application or granted EP patent | STATUS: THE APPLICATION HAS BEEN WITHDRAWN
- 18W | Application withdrawn | Effective date: 20240223