US9076209B2 - Augmented reality method applied to the integration of a pair of spectacles into an image of a face - Google Patents

Augmented reality method applied to the integration of a pair of spectacles into an image of a face Download PDF

Info

Publication number
US9076209B2
US9076209B2 US13/522,599 US201113522599A
Authority
US
United States
Prior art keywords
spectacles
lens
image
eyes
overlay
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US13/522,599
Other languages
English (en)
Other versions
US20120313955A1 (en
Inventor
Ariel Choukroun
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FITTINGBOX
Original Assignee
FITTINGBOX
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed litigation Critical https://patents.darts-ip.com/?family=42629539&utm_source=google_patent&utm_medium=platform_link&utm_campaign=public_patent_search&patent=US9076209(B2) "Global patent litigation dataset” by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.
Application filed by FITTINGBOX filed Critical FITTINGBOX
Assigned to FITTINGBOX reassignment FITTINGBOX ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CHOUKROUN, ARIEL
Publication of US20120313955A1 publication Critical patent/US20120313955A1/en
Application granted granted Critical
Publication of US9076209B2 publication Critical patent/US9076209B2/en
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T7/004
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0172Head mounted characterised by optical features
    • GPHYSICS
    • G02OPTICS
    • G02CSPECTACLES; SUNGLASSES OR GOGGLES INSOFAR AS THEY HAVE THE SAME FEATURES AS SPECTACLES; CONTACT LENSES
    • G02C13/00Assembling; Repairing; Cleaning
    • G02C13/003Measuring during assembly or fitting of spectacles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205Re-meshing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/149Segmentation; Edge detection involving deformable models, e.g. active contour models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B2027/0178Eyeglass type
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20116Active contour; Active surface; Snakes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Definitions

  • This invention relates to the field of image processing and image synthesis. It relates more specifically to the real-time integration of a virtual object into photographs or videos.
  • the context of the invention is the real-time virtual trying on of an object in the most realistic way possible; typically the object is a pair of spectacles to be integrated into a photograph or a video representing the face of a person oriented substantially facing the camera.
  • the objective of this invention is to propose a method for modeling virtual spectacles representative of real spectacles and a method of integrating in real time these said virtual spectacles into a photograph or a video representing the face of a person, limiting the number of necessary data.
  • “Integration” means a positioning and realistic rendering of these virtual spectacles on a photo or a video representing a person without spectacles, thus generating a new photo or video equivalent to the photo or video of the individual that would have been obtained by photographing or filming the same person wearing the real spectacles corresponding to these virtual spectacles.
  • the invention envisages in the first place a method of creating a real-time photorealistic final image of a virtual object, corresponding to a real object, arranged on an original photo of a person in a realistic orientation linked to the position of said user, characterized in that it comprises the following steps:
  • the object is a pair of spectacles and the placement area is the user's face.
  • step 510 uses a first boosting algorithm AD 1 trained to determine whether the original photo contains a face.
  • step 530 consists of:
  • step 530 advantageously uses an iterative algorithm that makes it possible to refine the value of the similarity β and the positions of the characteristic points:
  • step 530 uses a second boosting algorithm trained with an eyes learning database, comprising a set of positive examples of eyes and a set of negative examples of eyes.
  • step 550 consists of:
  • the simplified geometric model of a real pair of spectacles consisting of a frame and lenses, is obtained in a phase 100 in which:
  • the number N of surfaces of the simplified geometric model is a value close to twenty.
  • phase 100 also comprises a step 110 consisting of obtaining images of the real pair of spectacles; the lens must match the lens intended for the trying-on phase 500 , and in this step 110 :
  • the number V of reference orientations is equal to nine, and if an orthogonal reference space with axes x, y, z is defined, where the y-axis corresponds to the vertical axis, φ to the angle of rotation around the x-axis and ψ to the angle of rotation around the y-axis, the V positions Orientation i selected are such that the angle φ substantially takes the respective values −16°, 0° or 16°, and the angle ψ takes the respective values −16°, 0° or 16°.
  • phase 100 comprises a step 120 of creating a texture overlay of the frame Frame i , for each of the V reference orientations.
  • the shape of the lenses needed to generate the lens silhouette Lens i binary is extracted using an active contours algorithm based on the assumption that the frame and the lenses have different transparencies.
  • step 120 in step 120 :
  • this lens overlay Lens i overlay is a high-definition cropped image of the lens using, for cropping the original high-definition image, the lens silhouette Lens i binary .
  • in step 550 , the texture calculation is performed using overlays associated to the reference orientation closest to angles φ and ψ, by the following sub-steps:
  • step 560 consists of generating an oriented textured model, oriented according to angles φ and ψ and according to the scale and orientation of the original photo, from a textured reference model, oriented according to the reference orientation closest to angles φ and ψ, and from the parameters of similarity β; this step comprises the following sub-steps:
  • step 560 also comprises a sub-step of geometrically varying the arms of the virtual spectacles according to the morphology of the face of the original photo, so as to obtain a spectacles overlay Spectacles overlay of the virtual pair of spectacles and a binary overlay Spectacles overlay — binary , oriented as the original photo, and which can therefore be superimposed on it.
  • step 570 consists of taking into account the light interactions due to wearing virtual spectacles, particularly the shadows cast onto the face, the visibility of the skin through the lens of the spectacles, the reflection of the environment on the spectacles.
  • step 570 comprises the following sub-steps:
  • the method as described further comprises a phase 200 of creating a database of models of eyes DB models — eyes , comprising a plurality of photographs of faces referred to as learning photographs App eyes k
  • phase 200 advantageously comprises the following steps:
  • the fixed distance Δ is chosen so that no texture exterior to the face is included in patch P, and the width w and height h of patches P l k , P r k are constant and predefined, so that patch P contains the eye corresponding to this patch P in full, and contains no texture that is exterior to the face, irrespective of the learning photograph App eyes k
  • the invention also envisages in another aspect a computer program product comprising program code instructions for executing steps of a method as described when said program is run on a computer.
  • FIG. 1 a represents a pair of wraparound sports spectacles
  • FIG. 1 b represents an initial mesh used to represent a real pair of spectacles
  • FIG. 1 c illustrates the definition of the normal to the surface in a segment V i + V i ,
  • FIG. 1 d represents a simplified model for a pair of wraparound sports spectacles
  • FIG. 2 illustrates the principle for photographing a real pair of spectacles for modeling
  • FIG. 3 is a schematic of the step for obtaining a simplified geometric model
  • FIG. 4 represents the nine shots of a pair of spectacles
  • FIG. 5 is a schematic of the step for obtaining images of the real pair of spectacles
  • FIG. 6 is a schematic of the step for generating overlays of spectacles
  • FIGS. 7 a and 7 b illustrate the creation of a shadow map on an average face
  • FIG. 8 is a schematic of the transition between a learning photograph and a gray-scale normalized learning photograph
  • FIG. 9 is a schematic of the construction of the final image.
  • the method here comprises five phases:
  • the first phase 100 is a method of modeling real pairs of spectacles allowing a spectacles database DB models — spectacles of virtual models of pairs of spectacles to be populated,
  • the second phase 200 is a method of creating a database of models of eyes DB models — eyes ,
  • the third phase 300 is a method of searching for criteria for recognizing a face in a photo.
  • the fourth phase 400 is a method of searching for criteria for recognizing characteristic points in a face.
  • the fifth phase 500 is a method of generating a final image 5 , from a virtual model 3 of a pair of spectacles, and an original photo 1 of a subject taken, in this example, by a camera and representing the face 2 of the subject.
  • the first four phases, 100 , 200 , 300 and 400 are performed on a preliminary basis, while phase 500 of trying on virtual spectacles is utilized many times, on different subjects and different virtual pairs of spectacles, based on the results from the four preliminary phases.
  • This phase of modeling pairs of spectacles is to model a real pair of spectacles 4 geometrically and texturally.
  • the data calculated by this spectacles modeling algorithm, for each pair of spectacles made available during the trying-on phase 500 are stored in a database DB models — spectacles so as to be available during this trying-on phase.
  • This spectacles modeling phase 100 is divided into four steps.
  • Step 110 Obtaining Images of the Real Pair of Spectacles 4
  • the procedure for constructing a simplified geometric model 6 of a real pair of spectacles 4 uses a device for taking photographs 50 .
  • This device for taking photographs 50 is, in this example, represented in FIG. 2 and consists of:
  • the device for taking photographs 50 is controlled by a unit associated to a software system 61 .
  • This control consists of managing the position and orientation of digital cameras 55 , relative to the object to be photographed, assumed to be fixed, for managing the background color 59 of the screen 58 and its position, and managing the rotation of the turntable 52 .
  • the device for taking photographs 50 is calibrated by conventional calibration procedures in order to accurately know the geometric position of each of the cameras 55 and the position of the vertical axis of rotation Z.
  • calibrating the device for taking photographs 50 consists of:
  • the first step 110 of the spectacles modeling phase consists of obtaining images of the real pair of spectacles 4 from a number of orientations (preferably keeping a constant distance between the camera and the object to be photographed), and under a number of lighting conditions.
  • the lens 4 b must match the lens intended for the trying-on phase 500 .
  • the real pair of spectacles 4 is photographed with a camera, at high resolution (typically a resolution higher than 1000×1000) in nine (more generally V) different orientations and in N light configurations showing the transmission and reflection of the spectacle lens 4 b.
  • V orientations are called reference orientations and in the rest of the description are designated by Orientation i .
  • V reference orientations Orientation i are selected by discretizing a spectrum of orientations corresponding to possible orientations when spectacles are tried on.
  • V*N high-resolution images of the real pair of spectacles 4 are thus obtained, designated Image-spectacles i,j (1 ≦ i ≦ V, 1 ≦ j ≦ N).
  • the number V of reference orientations Orientation i is equal to nine, i.e. a relatively small number of orientations from which to derive a 3D geometry of the model.
  • other numbers of orientations may be envisaged with no substantial change to the method according to the invention.
  • FIG. 4 represents a real pair of spectacles 4 and the nine orientations Orientation i of the shots.
  • nine camera positions corresponding to the reference orientations Orientation i
  • eighteen high-resolution images Image-spectacles i,j representing a real pair of spectacles 4 are obtained; these eighteen high-resolution images Image-spectacles i,j correspond to the nine orientations Orientation i in the two light configurations.
  • the first light configuration respects the colors and materials of the real pair of spectacles 4 .
  • Neutral conditions of luminosity are used for this first light configuration.
  • the nine (and more generally V) images Image-spectacles i,1 created in this light configuration allow the maximum transmission of light through the lenses 4 b to be revealed (there is no reflection on the lens and the spectacle arms can be seen through the lenses). They are called high-resolution transmission images and in the rest of the description are designated by Transmission i ; the exponent i is used to characterize the i th view, where i varies from 1 to V.
  • the second light configuration highlights the special geometric features of the real pair of spectacles 4 , such as, for example, the chamfers. This second light configuration is taken in conditions of intense reflection.
  • the high-resolution images Image-spectacles i,2 obtained in this second light configuration reveal the physical reflection properties of the lens 4 b (the arms are not seen behind the lenses, but reflections of the environment on the lens are; transmission is minimal).
  • the nine (or V) high-resolution images of the real pair of spectacles 4 , created in this second light configuration are called high-resolution reflection images and in the rest of the description are designated by Reflection i ; the exponent i is used to characterize the i th view, where i varies from 1 to V.
  • the set of high-resolution images Image-spectacles i,j of real pairs of spectacles comprises, by definition, both the high-resolution transmission images Transmission i and the high-resolution reflection images Reflection i .
  • Obtaining the set of high-resolution images Image-spectacles i,j by this step 110 is illustrated in FIG. 5 .
  • Step 120 Generating Overlays of Spectacles
  • the second step 120 of spectacles modeling phase 100 consists of generating overlays for each of the nine reference orientations Orientation i .
  • a schematic of this second step 120 is shown in FIG. 6 . It is understood that an overlay is defined here in the sense known to the expert in the field of image processing. An overlay is a raster image with the same dimensions as the image from which it is derived.
  • the high-resolution reflection image Reflection i is taken.
  • a binary image is then generated with the same resolution as the high-resolution reflection image of the reference orientations.
  • This binary image actually shows the “outline” shape of the lenses 4 b of the real pair of spectacles 4 .
  • This binary image is called a lens silhouette and is designated Lens i binary .
  • Extraction of the shape of the lenses needed to generate the lens silhouette is performed by an active contours algorithm (e.g. of a type known to those skilled in the art under the name “2D snake”) based on the assumption that the frame 4 a and the lenses 4 b have different transparencies.
  • the principle of this algorithm known per se, is to deform a curve having several deformation constraints. At the end of the deformation, the optimized curve follows the shape of the lens 4 b.
  • the curve to be deformed is defined as a set of 2D points placed on a line.
  • the k th point of the curve associated with the coordinate xk in the high-resolution reflection image Reflection i associated to a current reference orientation has an energy E(k).
  • This energy E(k) is the sum of an internal energy E internal (k) and an external energy E external (k).
  • the external energy E external (k) depends on the high-resolution reflection image Reflection i associated to a current reference orientation, whereas the internal energy E internal (k) depends on the shape of the curve. This gives E external (k) = ∇(x k ), where ∇ is the gradient of the high-resolution reflection image Reflection i associated to the current reference orientation.
  • the balloon energies E balloon (k) and the curvature energies E curvature (k) are calculated using standard formulas in the field of active contour methods, such as the method known as the Snake method.
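  • By way of illustration, a minimal greedy-update sketch of such a 2D active contour is given below (Python with numpy); the 4-neighbour search, the energy weights and the use of the gradient magnitude as external term are simplifying assumptions made for the example, not the exact formulation used here.

```python
import numpy as np

def snake_step(points, grad_mag, w_ext=1.0, w_curv=0.5):
    """One greedy update of a 2D snake.

    points   : (K, 2) array of (x, y) contour points.
    grad_mag : 2D array, gradient magnitude of the reflection image.
    Each point moves to the 4-neighbour (or stays put) that minimises
    an energy attracting the curve to strong lens/frame edges while
    keeping it smooth.
    """
    moves = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]])
    new_points = points.copy()
    for k in range(len(points)):
        prev_pt = points[(k - 1) % len(points)]
        next_pt = points[(k + 1) % len(points)]
        best_e, best_pt = np.inf, points[k]
        for m in moves:
            cand = points[k] + m
            x, y = int(cand[0]), int(cand[1])
            if not (0 <= y < grad_mag.shape[0] and 0 <= x < grad_mag.shape[1]):
                continue
            e_ext = -grad_mag[y, x]                              # attracted to edges
            e_curv = np.sum((prev_pt - 2 * cand + next_pt) ** 2)  # smoothness term
            e = w_ext * e_ext + w_curv * e_curv
            if e < best_e:
                best_e, best_pt = e, cand
        new_points[k] = best_pt
    return new_points
```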
  • the value of the pixel is equal to one if the pixel represents the lenses 4 b , and zero if not (which, in other words, forms an outline image).
  • gray scales: values between 0 and 1
  • binary levels: values equal to 0 or 1
  • a lens overlay is then generated for each of the nine (V) reference orientations by copying, for each pixel with a value equal to one in the lens silhouette Lens i binary , the information contained in the high-resolution reflection image Reflection i and assigning zero to the other pixels.
  • the exponent i of variables Lens i binary and Lens i overlay varies from 1 to V, where V is the number of reference orientations.
  • This lens overlay Lens i overlay is, to some extent, a high-definition cropped image of the lens using, for cropping the original high-definition image, the lens silhouette Lens i binary (outline shape) created previously.
  • Lens i overlay = Lens i binary × Reflection i (Eq 1)
  • for each reference orientation, the associated high-resolution reflection image Reflection i is chosen, and a binary background image Background i binary is generated by automatically extracting the background, using a standard image background extraction algorithm.
  • An overlay referred to as the texture overlay of the frame behind the lens Frame i behind — lens is then generated for each of the nine (V) reference orientations: it corresponds to the portion of the frame located behind the lenses 4 b (for example, a portion of the arms may be visible behind the lens 4 b depending on the orientation), and is obtained by copying, for each pixel with a value equal to one in the lens silhouette Lens i binary , the information contained in the high-resolution transmission image Transmission i , and assigning zero to the other pixels.
  • an overlay referred to as the texture overlay of the frame outside the lens Frame i exterior — lens is generated for each of the nine (V) reference orientations by copying, for each pixel with a value equal to one in the binary frame overlay Frame i binary , the information contained in the high-resolution reflection image Reflection i and assigning zero to the other pixels.
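  • By way of illustration, the masking operations behind these overlays (Eq 1 and its frame counterparts) can be sketched as follows; the array names, shapes and the use of 0/1 masks are assumptions made for the example.

```python
import numpy as np

def make_overlays(reflection, transmission, lens_binary, frame_binary):
    """Compose the texture overlays of step 120 by masking.

    reflection, transmission : (H, W, 3) high-resolution images of one
                               reference orientation.
    lens_binary, frame_binary: (H, W) masks with values 0 or 1.
    Returns the lens overlay, the frame-behind-lens overlay and the
    frame-outside-lens overlay; pixels outside the mask are set to zero.
    """
    lens_overlay = reflection * lens_binary[..., None]
    frame_behind_lens = transmission * lens_binary[..., None]
    frame_exterior_lens = reflection * frame_binary[..., None]
    return lens_overlay, frame_behind_lens, frame_exterior_lens
```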
  • Step 130 Geometric Model
  • the third step 130 , of the spectacles modeling phase 100 consists of obtaining a simplified geometric model 6 of a real pair of spectacles 4 .
  • a real pair of spectacles 4 comprises a frame 4 a and lenses 4 b (the notion of lenses 4 b comprises the two lenses mounted in the frame 4 a ).
  • the real pair of spectacles 4 is represented in FIG. 1 a.
  • This step 130 does not involve the reflection characteristics of the lenses 4 b mounted in the frame 4 a ; the real pair of spectacles 4 may be replaced by a pair of spectacles comprising the same frame 4 a with any lenses 4 b having the same thickness and curvature.
  • There are several possible ways to construct a geometric model suitable for the rendering method described in step 120 .
  • One possible method is to generate a dense 3D mesh that faithfully describes the shape of the pair and is extracted either by automatic reconstruction methods [C. Hernández, F. Schmitt and R. Cipolla, Silhouette Coherence for Camera Calibration under Circular Motion, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 2, pp. 343-349, February, 2007] or by exploiting existing 3D models from manual modeling by CAD (Computer Aided Design) systems.
  • a second method consists of modeling the real pair of spectacles 4 by a 3D active contour linked to a surface mesh. An optimization algorithm deforms the model so that the projections of its silhouette in each of the views best match the silhouettes detected in the images (using a procedure as described).
  • the real pair of spectacles 4 is modeled by a surface mesh that is dense or has a low number of facets (traditionally known by the name “low polygon number” or “LowPoly”). This last method is the one used here.
  • the initial shape is used to introduce a weak shape prior; it can be generic or chosen from a database of models according to the pair to be reconstructed.
  • a simplified geometric model, i.e. of the “low polygon” type, is used;
  • the mesh comprises N summits (vertices), designated V i .
  • the mesh has the shape of a triangle strip, as shown in FIG. 1 b . Furthermore it is assumed that the number of summits on the upper contour of the mesh is equal to the number of summits on the lower contour of the mesh, and that the sampling of these two contours is similar. Thus, an “opposite” summit, V i + can be defined for each summit V i .
  • the neighborhood of a summit V i is defined as N i = { V i+1 ; V i−1 ; V i + }
  • the summits V i+1 and V i−1 are the neighbors of V i along the contour of the mesh.
  • the summit V i + corresponds to the summit opposite to V i , as defined earlier.
  • This neighborhood also allows two triangles T i 1 and T i 2 to be constructed (see FIG. 1 c ).
  • the normal to the surface along segment V i + V i (whether or not it is a topological edge) is defined by:
  • n = (n 1 + n 2 ) / ‖n 1 + n 2 ‖ (Eq 6)
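  • A minimal sketch of this normal computation (Eq 6) is given below; the exact choice of the two adjacent triangles T i 1 and T i 2 is an assumption made for the example.

```python
import numpy as np

def segment_normal(v_prev, v_i, v_next, v_opp):
    """Normal to the surface along segment [V_i, V_i+] (Eq 6 style).

    The two triangles built on the neighbourhood {V_{i+1}, V_{i-1}, V_i+}
    give two face normals n1 and n2; the segment normal is their
    normalised sum: n = (n1 + n2) / ||n1 + n2||.
    """
    n1 = np.cross(v_next - v_i, v_opp - v_i)   # normal of triangle T_i^1 (assumed)
    n2 = np.cross(v_opp - v_i, v_prev - v_i)   # normal of triangle T_i^2 (assumed)
    n1 = n1 / np.linalg.norm(n1)
    n2 = n2 / np.linalg.norm(n2)
    n = n1 + n2
    return n / np.linalg.norm(n)
```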
  • an energy is associated to the current 3D model: the closer the projected silhouettes of the model are to the contours in the images, the lower this energy is.
  • Each summit is then displaced iteratively so as to minimize this energy until convergence (i.e. until the energy is no longer reduced by a displacement).
  • E d,i is the linking term to the image data, i.e. to the contours calculated in the different views.
  • the three other terms are smoothing terms, which do not depend on images.
  • E r,i is a repulsion term that tends to distribute the summits uniformly.
  • E c,i is a curvature term that tends to make the surface smooth.
  • E o,i is an obliquity term aimed at minimizing the gap in the (x; y) plane between Vi and V i +
  • the weights λ d , λ r , λ c , λ o are common to all the summits.
  • the linking term to data E d,i characterizes the proximity of the silhouette of the current active contour with the contours detected in the images (by an active contour procedure as described in step 120 above).
  • an automatic cropping phase of a type known per se (“difference matting”), provides an opacity map for each view.
  • the contours are obtained by thresholding the gradient of this opacity map.
  • the contour information is propagated to the entire image by calculating, for each view k, a map of distances to the contours, designated D k .
  • the projection model of the 3D model in the images is a model of pinhole camera, of a type known per se, defined by the following elements:
  • the linking energy to the data is thus expressed by:
  • the repulsion term E r,i tends to minimize the difference in length of the two edges of the contour joining at V i . It is expressed by:
  • the curvature term E c,i tends to reduce the curvature perpendicular to segment V i + V i
  • E o,i = (d i ᵀ a)² (Eq 12), where d i designates segment V i + V i
  • the initial non-linear minimization problem is replaced by a succession of linear problems.
  • the step size δ k is either optimized (by a standard method referred to as “line-search”), or determined beforehand and left constant throughout the procedure.
  • the iterative procedure described above is stopped when the norm of the step falls below a threshold, when more than k max iterations have been performed, or when the energy E i does not decrease sufficiently from one iteration to the next.
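  • The iterative displacement and its stopping criteria can be sketched as follows; the gradient-descent form, the callable names energy_grad and energy, and the default thresholds are assumptions, not the implementation described above.

```python
import numpy as np

def minimize_contour(vertices, energy_grad, step=0.1, k_max=200,
                     step_tol=1e-4, energy_tol=1e-6, energy=None):
    """Skeleton of the iterative displacement of the mesh summits.

    energy_grad(V) -> gradient of the total energy with respect to the summits.
    energy(V)      -> scalar energy (optional, for the third stopping test).
    Stops when the displacement norm is below step_tol, when k_max
    iterations are reached, or when the energy no longer decreases enough.
    """
    v = vertices.copy()
    e_prev = energy(v) if energy else np.inf
    for _ in range(k_max):
        delta = -step * energy_grad(v)        # fixed (non line-searched) step
        v = v + delta
        if np.linalg.norm(delta) < step_tol:  # step-norm stopping criterion
            break
        if energy:
            e = energy(v)
            if e_prev - e < energy_tol:       # insufficient energy decrease
                break
            e_prev = e
    return v
```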
  • 3D modeling software is used to model the geometry of the real pair of spectacles 4 .
  • a model of the database of models DB models — spectacles is used and it is adapted manually.
  • the simplified geometric model 6 is formed of a number N of polygons and their normals, the normals being oriented towards the exterior of the convex envelope of the real pair of spectacles 4 .
  • the number N is a value close to twenty.
  • FIG. 1 d represents a simplified model for a pair of wraparound sports spectacles.
  • these polygons of the simplified geometric model 6 are called the surfaces of the modeled pair of spectacles designated by surface j .
  • the normal to a surface of the modeled pair of spectacles surface j is designated by n⃗ j ; j is a numbering index of the surfaces surface j which varies from 1 to N.
  • A schematic of step 130 is shown in FIG. 3 .
  • Step 140 Creating a Shadow Map
  • a shadow map designated Visibility i , is created for each of the reference orientations Orientation i .
  • the goal is to calculate the shadow produced by the pair of spectacles on a face, modeled here by an average face 20 , a 3D model constructed in the form of a mesh of polygons (see FIG. 7 a ).
  • the modeling of the face in question corresponds to an average face 20 , which makes it possible to calculate a shadow suitable for any person.
  • the method calculates the light occlusion produced by the pair of spectacles on each area of the average face 20 .
  • the technique envisaged allows very faithful shadows to be calculated while requiring only a simplified geometric model 6 of the real pair of spectacles 4 .
  • This procedure is applied to calculate the shadow produced by the pair of spectacles, for each image of said pair of spectacles.
  • the final result obtained is a set of nine shadow maps Visibility i corresponding to the nine reference orientations Orientation i used, in this example, during the creation of the image-based rendering.
  • this shadow map Visibility i is calculated using the simplified geometric model 6 of the real pair of spectacles 4 (“low polygons” surface simplified model, see step 130 ), a textured reference model 9 (superimposition of texture overlays of the pair of spectacles corresponding to a reference orientation) oriented according to the reference orientation Orientation i , a modeling of an average face 20 , a modeling of a light source 21 and a modeling 22 of a camera.
  • the shadow map Visibility i is obtained by calculating the light occlusion produced by each elementary triangle forming the simplified geometric model 6 of the real pair of spectacles 4 , on each area of the average face 20 , when everything is lit by the light source 21 .
  • the light source 21 is modeled by a set of point sources emitting in all directions, located at regular intervals in a rectangle, for example as a 3×3 matrix of point sources.
  • the modeling 22 of a camera is standard modeling of a type known as pinhole, i.e. modeling without a lens and with a very small and simple opening.
  • the shadow map Visibility i obtained is an image comprising values between 0 and 1.
  • Let K designate the operator that associates, with a vertex V(x,y,z), its projection P(X,Y) in the image.
  • the set of these 3D points forms a ray. Subsequently, when reference is made to a 3D ray associated with a pixel, the 3D ray corresponds to the set of 3D points projected onto that pixel.
  • the value O(i,j) of the shadow image is calculated.
  • V is defined as the intersection of the 3D ray defined by the pixel and the 3D model of the face 20 (see FIG. 7 b ).
  • the light occlusion produced by the pair of spectacles on this vertex is calculated.
  • the light occlusion produced by each triangle of the low-resolution geometric model 6 is calculated.
  • A(m), B(m), C(m) designate the three summits of the mth triangle of the low-resolution geometric model 6 .
  • the intersection tn of the light ray passing through V with this triangle is calculated.
  • Tn is the 2D projection of vertex tn on the texture image (textured reference model 9 of the pair of spectacles).
  • the transparency of the texture is known from the difference-matting cropping of step 120 ; therefore, the pixel Tn has a transparency, designated by a(Tn).
  • the Coefficient term allows the opacity of the shadow Visibility i to be adjusted according to the visual rendering wanted.
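  • A condensed sketch of this per-pixel occlusion computation is given below; reducing the light source to a single point, the Möller-Trumbore intersection routine and the alpha_lookup helper returning the transparency a(Tn) read in the textured reference model 9 are assumptions made for the example.

```python
import numpy as np

def ray_triangle(origin, direction, a, b, c, eps=1e-9):
    """Moller-Trumbore intersection; returns the 3D hit point or None."""
    e1, e2 = b - a, c - a
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None
    inv = 1.0 / det
    t_vec = origin - a
    u = np.dot(t_vec, p) * inv
    if u < 0 or u > 1:
        return None
    q = np.cross(t_vec, e1)
    v = np.dot(direction, q) * inv
    if v < 0 or u + v > 1:
        return None
    t = np.dot(e2, q) * inv
    return origin + t * direction if t > 0 else None

def pixel_occlusion(face_point, light_pos, triangles, alpha_lookup, coeff=0.6):
    """Light occlusion of one face vertex V by the simplified spectacles model.

    face_point  : 3D intersection V of the pixel's ray with the average face.
    triangles   : list of (A, B, C) summits of the low-polygon model.
    alpha_lookup: callable mapping a 3D hit point tn to the transparency a(Tn)
                  read in the textured reference model (assumed helper).
    coeff       : the Coefficient term adjusting the shadow opacity (value assumed).
    Returns the value written into the shadow map (0 = dark, 1 = fully lit).
    """
    direction = light_pos - face_point
    direction = direction / np.linalg.norm(direction)
    occlusion = 0.0
    for a, b, c in triangles:
        tn = ray_triangle(face_point, direction, a, b, c)
        if tn is not None:
            occlusion += 1.0 - alpha_lookup(tn)   # opacity of the texture hit
    return float(np.clip(1.0 - coeff * occlusion, 0.0, 1.0))
```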
  • the data obtained in phase 100 are stored in a spectacles database DB models — spectacles that contains, for each pair of spectacles modeled, the simplified geometric model 6 of this real pair of spectacles 4 , the lens overlays Lens i overlay , the overlays of the frame behind the lens Frame i behind — lens and the overlays of the frame outside the lens Frame i exterior — lens , for each of the V reference orientations.
  • data specific to the lenses 4 b of the real pair of spectacles 4 are added to the previously mentioned data in the spectacles database DB models — spectacles , such as the coefficient of opacity α of the lens, known from the manufacturer, and possibly supplied for each reference orientation.
  • the second phase 200 makes it possible to create a database of models of eyes, DB models — eyes . To simplify its description, it is subdivided into ten steps ( 210 , 220 , 230 to 236 and 240 ). The database of models of eyes, DB models — eyes , thus obtained is used, in the trying-on phase 500 , to characterize the eyes of a person photographed.
  • This eyes database DB models — eyes can be created, for example, from at least two thousand photographs of faces, referred to as learning photographs App eyes k (1 ≦ k ≦ 2000). These learning photographs are advantageously, but not obligatorily, the same size as the images of models of spectacles and of the face of the user in the trying-on method.
  • Step 210 When this eyes database DB models — eyes is created, first of all a reference face 7 shape is defined by setting a reference interpupillary distance di 0 , by centering the interpupillary segment on the center of the image and orienting the interpupillary segment parallel to the horizontal axis of the image (face not tilted). The reference face 7 is therefore centered on the image, with the face orientation and magnification depending on the reference interpupillary distance di 0 .
  • Step 220 In a second step a correlation threshold is defined.
  • steps 230 to 236 are applied.
  • Step 230 The precise position of the characteristic points (corners of the eyes) is determined, manually in this example, i.e. the position of the exterior point B l k , B r k of each eye (left and right respectively with these notations) and the position of the interior point A l k , A r k , as defined in FIG. 8 . Each position is determined by its two coordinates within the image.
  • the respective geometric centers G l k , G r k of these eyes are determined, calculated as the barycenter of the exterior point B k of the corresponding eye and the interior point A k of this eye, and the interpupillary distance di k is calculated.
  • Step 231 This k th learning photograph App eyes k is transformed into a gray-scale image App eyes-gray k , by an algorithm known per se, and the gray-scale image is normalized by applying a similarity S k (tx, ty, s, Θ) so as to establish the orientation (straightened front view) and the scale (reference interpupillary distance di 0 ) of the reference face 7 .
  • This similarity S k (tx, ty, s, Θ) is determined as the mathematical operation to be applied to the pixels of the learning photograph App eyes k so as to center the face (midpoint of the eyes at the center of the photograph) and to set the face orientation and magnification according to the reference interpupillary distance di 0 .
  • the terms tx and ty designate the translations to be applied on the two axes of the image so as to establish the centering of the reference face 7 .
  • the term s designates the magnification factor to be applied to this image
  • the term Θ designates the rotation to be applied to the image so as to establish the orientation of the reference face 7 .
  • a k th gray-scale normalized learning photograph App eyes — gray — norm k is thus obtained.
  • the interpupillary distance is equal to the reference interpupillary distance di 0 .
  • the interpupillary segment is centered on the center of the k th gray-scale normalized learning photograph App eyes — gray — norm k .
  • the interpupillary segment is parallel to the horizontal axis of the gray-scale normalized learning photograph App eyes — gray — norm k .
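  • A sketch of how such a normalizing similarity S k can be estimated from the geometric centres of the eyes is given below; the function name, the pixel value chosen for di 0 and the convention p′ = s·R(Θ)·p + t are assumptions made for the example.

```python
import numpy as np

def normalizing_similarity(eye_left_center, eye_right_center, image_shape, di0=100.0):
    """Estimate S_k(tx, ty, s, theta) bringing a face to the reference pose.

    eye_*_center : (x, y) geometric centres G_l, G_r of the eyes in the photo.
    image_shape  : (height, width) of the learning photograph.
    di0          : reference interpupillary distance in pixels (assumed value).
    Returns (tx, ty, s, theta): the translation centring the interpupillary
    segment, the magnification giving distance di0, and the rotation making
    the segment horizontal.
    """
    g_l = np.asarray(eye_left_center, dtype=float)
    g_r = np.asarray(eye_right_center, dtype=float)
    d = g_r - g_l
    di_k = np.linalg.norm(d)                  # current interpupillary distance
    s = di0 / di_k                            # magnification factor
    theta = -np.arctan2(d[1], d[0])           # rotation making the eyes level
    mid = (g_l + g_r) / 2.0                   # centre of the interpupillary segment
    h, w = image_shape[:2]
    c, si = np.cos(theta), np.sin(theta)
    # translation placing the scaled, rotated segment centre at the image centre
    rotated = s * np.array([c * mid[0] - si * mid[1], si * mid[0] + c * mid[1]])
    tx, ty = np.array([w / 2.0, h / 2.0]) - rotated
    return tx, ty, s, theta
```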
  • Step 232 A window, rectangular in this example, with a fixed size (width w and height h) is defined for each of the eyes, in the k th gray-scale normalized learning photograph App eyes — gray — norm k .
  • These two windows are called the left patch P l k and right patch P r k in the remainder of this description, according to a standard usage in this field.
  • the term patch P will be used to denote either one of these patches P l k , P r k .
  • Each patch P is a sub-raster image extracted from an initial raster image of a face. It is clear that, in a variant, a shape other than rectangular may be used for the patch, for example polygonal, elliptical or circular.
  • the position of the patch P corresponding to an eye is defined by the fixed distance Δ between the exterior point of the eye B and the edge of the patch P closest to this exterior point of the eye B (see FIG. 7 ).
  • This fixed distance Δ is chosen so that no texture exterior to the face is included in the patch P.
  • the width w and height h of patches P l k , P r k are constant and predefined, so patch P contains the eye corresponding to this patch P in full, and contains no texture that is external to the face, irrespective of the learning photograph App eyes k .
  • Step 233 For each of the two patches P l k , P r k associated to the k th gray-scale normalized learning photograph APP eyes — gray — norm k (each corresponding to one eye), the gray-scales are normalized.
  • a texture column-vector T (called the original texture column-vector) is defined, comprising the gray-scales of patch P, stored in this example in row order; the size of the texture column-vector T is equal to the number of rows (h) multiplied by the number of columns (w). A column-vector I of unit values, of the same size as the texture column-vector T, is also defined.
  • the mathematical operation therefore consists of calculating the mean of the gray-scales of patch P, designated μ T , and their standard deviation, designated σ T , and of applying the formula:
  • T 0 = (T − μ T · I) / σ T (Eq 17)
  • where T 0 is the normalized texture column-vector (gray-scale) and T is the original texture column-vector.
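  • A direct transcription of this normalization (Eq 17) could look as follows; the function name is illustrative.

```python
import numpy as np

def normalize_patch(patch):
    """Gray-scale normalisation of a patch (Eq 17).

    patch : (h, w) gray-scale sub-image of one eye.
    Returns the normalised texture column-vector
    T0 = (T - mu_T * I) / sigma_T, where T stacks the pixels in row order.
    """
    t = patch.astype(float).reshape(-1, 1)   # original texture column-vector T
    mu = t.mean()                            # mean gray level mu_T
    sigma = t.std()                          # standard deviation sigma_T
    return (t - mu) / sigma
```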
  • Step 234 This step 234 is only performed for the first learning photograph App eyes 1 .
  • the eyes database DB models eyes is therefore empty.
  • each of the patches P l 1 , P r 1 is added to the eyes database DB models — eyes ; with the following data stored:
  • Patches P l 1 , P r 1 stored in the eyes database DB models — eyes in this step 234 and in step 236 are called descriptor patches.
  • Step 235 For each of the patches P associated to the k th gray-scale normalized learning photograph App eyes — gray — norm k (each corresponding to one eye), the corresponding normalized texture column-vector T 0 is correlated with each of the normalized texture column-vectors T 0 of the corresponding descriptor patches.
  • t T 0 designates the transposed vector of the normalized texture column-vector T 0 .
  • Step 236 For each of the patches P l k , P r k , this correlation measurement Z ncc is compared against the previously defined correlation threshold. If correlation Z ncc is below the threshold, i.e. Z ncc (T 0 k , T 0 i ) ⁇ threshold, patch P is added to the eyes database DB models — eyes , with the following data stored:
  • a new learning photograph App eyes k+1 can now be processed by returning to step 230 .
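  • The correlation test of steps 235 and 236 can be sketched as follows; taking the maximum correlation over the existing descriptor patches, the dot-product form of Z ncc for already-normalized vectors and the example threshold value are assumptions made for the example.

```python
import numpy as np

def zncc(t0_a, t0_b):
    """Normalised correlation between two normalised texture vectors.

    With zero-mean, unit-variance vectors (Eq 17), the zero-mean normalised
    cross-correlation reduces to a scaled dot product of the two vectors.
    """
    return float(np.vdot(t0_a, t0_b)) / t0_a.size

def maybe_add_to_database(t0_new, descriptor_vectors, threshold=0.9):
    """Steps 235-236: add a patch only if it is not already well represented.

    descriptor_vectors : normalised vectors of the descriptor patches already
                         stored in the eyes database.
    threshold          : correlation threshold of step 220 (value assumed here).
    Returns True if the patch should be stored as a new descriptor patch.
    """
    if not descriptor_vectors:
        return True                           # empty database: always add (step 234)
    best = max(zncc(t0_new, t0) for t0 in descriptor_vectors)
    return best < threshold
```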
  • Step 240 A statistical operation is performed on all the similarities S k (tx, ty, s, Θ) stored in the database DB models — eyes .
  • the mean value of the translation tx and the mean value of the translation ty are calculated; these values are stored in a two-dimensional vector μ⃗ .
  • the standard deviation σ of the position parameters tx, ty relative to their mean μ⃗ is calculated.
  • the precise positions of the characteristic points of the eyes (these precise positions here are non-normalized), determined beforehand in the k th learning photograph App eyes k , are stored.
  • the similarity S k (tx, ty, s, Θ), or the values of all the parameters allowing these precise positions to be re-calculated, are also stored.
  • Phase 300 Method of Searching for Criteria for Recognizing a Face in a Photo.
  • phase 300 is to detect the possible presence of a face in a photo.
  • a boosting algorithm is used, of a type known per se, for example the one described by P. Viola and M. Jones in “Rapid object detection using a boosted cascade of simple features” and improved by R. Lienhart in “A detector tree of boosted classifiers for real-time object detection and tracking”.
  • classifier refers to a family of statistical classification algorithms. In this definition, a classifier groups together in the same class elements presenting similar properties.
  • Strong classifier refers to a very precise classifier (low error rate), as opposed to weak classifiers, which are not very precise (slightly better than a random classification).
  • the principle of boosting algorithms is to use a sufficient number of weak classifiers to make a strong classifier, achieving a desired classification success rate, emerge by selection or combination.
  • This face learning database DBA faces consists of a set of images referred to as positive examples of faces Face positive (type of example that one wants to detect) and a set of images referred to as negative examples of faces Face negative (type of example that one does not want to detect). These images are advantageously, but not obligatorily, the same size as the images of models of spectacles and of the face of the user in the trying-on method.
  • first of all, reference face images Face reference are selected such that:
  • the set of these reference face images Face reference must comprise several lighting conditions.
  • modified images Face modified are constructed by applying variations in scale, rotation and translation within bounds determined by normal trying on of a pair of spectacles (e.g. it is unnecessary to create an upside-down face).
  • the set of images referred to as the positive examples of faces Face positive consists of reference face images Face reference and modified images Face modified based on these reference face images Face reference .
  • the number of examples referred to as positive examples of faces Face positive is greater than or equal to five thousand.
  • the set of images of negative examples of faces Face negative consists of images that cannot be included in the images referred to as positive examples of faces Face positive .
  • these are, therefore, images that do not represent faces, images representing only parts of faces, or faces that have undergone aberrant variations.
  • a group of pertinent images is taken for each level of the cascade of strong classifiers. For example, five thousand images of negative examples of faces Face negative are selected for each level of cascade. If, as in this example, one chooses to use twenty levels in the cascade, this gives one hundred thousand images of negative examples of faces Face negative in the face learning database DBA faces .
  • Phase 300 uses this face learning database DBA faces to train the first boosting algorithm AD 1 , designed to be used in step 510 of phase 500 .
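  • As a neutral illustration of how such a trained cascade is applied at detection time, the sketch below uses OpenCV's bundled frontal-face Haar cascade; this is not the trained classifier AD 1 of the method, only an example of cascade-based face detection.

```python
import cv2

def detect_face(image_bgr):
    """Return the bounding box (x, y, w, h) of the most prominent face, or None.

    Uses OpenCV's pre-trained frontal-face Haar cascade as a stand-in for the
    boosting algorithm applied in step 510 (illustration only, not the
    classifier trained in phase 300).
    """
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    cascade = cv2.CascadeClassifier(cascade_path)
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # keep the largest detection, assumed to be the user facing the camera
    return max(faces, key=lambda r: r[2] * r[3])
```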
  • Phase 400 Method of Searching for Criteria for Recognizing Characteristic Points in a Face
  • phase 400 is to provide a method for detecting the position of the eyes in a face in a photo.
  • the position of the eyes is detected with a second, Adaboost-type, detection algorithm AD 2 , trained with an eyes learning database DBA eyes described below.
  • the eyes learning database DBA eyes consists of a set of positive examples of eyes Eyes positive (positive examples of eyes are examples of what one wants to detect) and a set of negative examples of eyes Eyes negative (negative examples of eyes are examples of what one does not want to detect).
  • first of all reference eye images Eyes reference are selected such that the eyes are of the same size, straight (aligned horizontally) and centered, under different lighting conditions and in different states (closed, open, half-closed, etc.),
  • modified eye images Eyes modified are constructed by applying variations in scale, rotation and translation within limited bounds.
  • the set of images referred to as the positive examples of eyes Eyes positive will therefore consist of reference eye images Eyes reference and modified eye images Eyes modified based on these reference eye images Eyes reference .
  • the number of examples referred to as positive examples of eyes Eyes positive is greater than or equal to five thousand.
  • the negative examples of eyes Eyes negative must consist of images of parts of the face that are not eyes (nose, mouth, cheek, forehead, etc.) or of partial eyes (portions of an eye).
  • additional negative images are constructed based on reference eye images Eyes reference by applying sufficiently great variations in scale, rotation and translation so that these images thus created are not interesting in the context of images of positive examples of eyes Eyes positive .
  • a group of pertinent images is selected for each level of the cascade of strong classifiers. For example, five thousand images of negative examples of eyes Eyes negative can be selected for each level of cascade. If there are twenty levels in the cascade, this gives one hundred thousand images of negative examples of eyes Eyes negative in the eyes learning database DBA eyes .
  • Phase 400 may use this eyes learning database DBA eyes to train a second boosting algorithm AD 2 , which is used in a variant of the method involving a step 520 .
  • in phase 500 , trying on virtual spectacles, the method of generating a final image 5 from the original photo 1 is divided into seven steps:
  • Step 510 uses the first boosting algorithm AD 1 trained in phase 300 to determine whether the original photo 1 contains a face 2 . If this is the case one goes to step 520 , otherwise the user is warned that no face has been detected.
  • Step 520 its purpose is to detect the position of the eyes in the face 2 in the original photo 1 .
  • Step 520 here uses the second boosting algorithm AD 2 trained in phase 400 .
  • the position of the eyes, determined in this step 520 is expressed by the position of characteristic points.
  • This step 520 thus provides a first approximation, which is refined in the next step 530 .
  • Step 530 it consists of determining a similarity β, to be applied to the original photo 1 to obtain a face similar to the reference face 7 in magnification and orientation, and of determining the position of the precise interior corner A and the precise exterior corner B for each eye in the face 2 in the original photo 1 .
  • the position of the eyes, determined in this step 530 is expressed by the position of characteristic points.
  • these characteristic points comprise two points per eye; the first point, A, is the innermost corner of the eye (the one nearest the nose), and the second point, B, is the outermost corner of the eye (the one furthest from the nose).
  • the first point, A is called the interior point of the eye, and the second point, B, is called the exterior point of the eye.
  • This step 530 uses the database of models of eyes DB models — eyes .
  • this step 530 provides information characterizing the offset from center, distance to the camera and 2D orientation of the face 2 in the original photo 1 .
  • This step 530 uses an iterative algorithm that makes it possible to refine the value of the similarity β and the positions of the characteristic points.
  • Step 520 has provided, for each eye, a first approximate interior point A 0 and a first approximate exterior point B 0 ; these points are used for initializing the characteristic points.
  • the initialization values of the similarity β are deduced from them.
  • the similarity β is defined by a translation tx, ty along the two axes x, y, a scale parameter s and a rotation parameter Θ in the image plane.
  • β 0 = (x 0 , y 0 , Θ 0 , s 0 ) is the initial value of β.
  • the characteristic points are used to create the two patches P l , P r containing the two eyes. These patches P l , P r are created as follows;
  • the original photo 1 is transformed into a gray-scale image 8 , by an algorithm known per se, and the two patches P l , P r are constructed with the information about the exterior B and interior A points.
  • the position of a patch P l , P r is defined by the fixed distance Δ, used prior to this in step 232 and the following steps, between the exterior point B of the eye and the edge of the patch closest to this point B.
  • the sizing of the patch P l , P r was defined in step 232 and the following steps. If the patches P l , P r are not horizontal (exterior and interior points of the eye not aligned horizontally), a bilinear interpolation of a type known per se is used to align them.
  • the information about the texture of each of the two patches P l , P r is stored in a vector (T l and T r respectively), then these two vectors are normalized by subtracting their respective mean and dividing by their standard deviation. This gives two normalized vectors, designated T 0 r and T 0 l .
  • the search for the optimal similarity β is considered in terms of probability.
  • the realizations of the parameters of position tx, ty, orientation Θ, and scale s, are considered to be independent and, in addition, the distributions of Θ and s are considered to follow a uniform distribution.
  • D r are random variable data representing the right patch P r , consisting of the texture of the right patch,
  • D l are random variable data representing the left patch P l , consisting of the texture of the left patch,
  • the realizations of D r and D l are considered to be independent,
  • id represents a descriptor patch (patches stored in the eyes database DB models — eyes ).
  • the set of descriptor patches in the eyes database DB models — eyes are then scanned.
  • the correlation term represents the correlation Z ncc (between 0 and 1), formulated in step 235 and the following steps, between the patch P r of the right eye (respectively P l of the left eye) and a descriptor patch transformed according to the similarity β.
  • The optimization criterion defined above (Equation 19) thus makes it possible to define an optimal similarity β and an optimal patch from the descriptor patches for each of the two patches P l , P r , which provides new estimates of the positions of the interior corner A and exterior corner B of each eye, i.e. of the characteristic points.
  • if this new similarity value β is sufficiently far from the previous value, i.e. if ‖β i−1 − β i ‖ > ε for a given threshold ε, a further iteration is performed.
  • β i represents the value of β found at the end of the current iteration
  • β i−1 is the value of similarity β found at the end of the previous iteration, i.e. also the initial value of similarity β for the current iteration.
  • the constant K allows the right compromise to be achieved between the correlation measurements Zncc and a mean position from which one does not want to depart too far.
  • This constant K is calculated, using the method just described, on a set of test images, different from the images used to create the database, and by varying K.
  • the constant K is chosen so as to minimize the distance between the characteristic points of the eyes, manually positioned on the training images, and those found in step 530 .
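  • The role of the constant K can be sketched as a score trading off the correlation against a penalty on the distance of (tx, ty) from the mean position μ⃗ of step 240 ; the quadratic form used below is an assumption made for the example, and Equation 19 itself is not reproduced.

```python
import numpy as np

def candidate_score(zncc_value, tx, ty, mu, sigma, K):
    """Score of one (descriptor patch, similarity) candidate in step 530.

    zncc_value : correlation between the eye patch and the descriptor patch.
    mu, sigma  : mean position vector and standard deviation from step 240.
    K          : constant balancing correlation against the position penalty.
    Assumed form: correlation minus a Gaussian-style penalty on how far the
    position (tx, ty) strays from the mean position mu.
    """
    pos = np.array([tx, ty], dtype=float)
    penalty = np.sum((pos - mu) ** 2) / (2.0 * sigma ** 2)
    return zncc_value - K * penalty
```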
  • Step 540 its purpose is to estimate the 3D orientation of the face, i.e. to provide the angle φ and the angle ψ of the camera that took the original photo 1 , relative to the principal plane of the face. These angles are calculated from the precise position 38 of the characteristic points determined in step 530 , by a geometric transformation known per se.
  • Step 550 this step consists of:
  • the simplified geometric model 6 is divided into N surfaces surface j , each having a normal n⃗ j .
  • This texture calculation is performed as follows, using the texture, i.e. the different overlays, of the reference orientation Orientation i closest to angles φ and ψ:
  • Step 560 consists of generating an oriented textured model 11 , oriented according to the angles φ and ψ and according to the scale and orientation of the original photo 1 (which can take any values, not necessarily equal to the angles of the reference orientations), from the textured reference model 9 , oriented according to the reference orientation Orientation i closest to angles φ and ψ, and from the parameters Θ and s of the similarity β (determined in step 530 ).
  • a bilinear affine interpolation is used to orient an interpolated textured model 10 according to the angles φ and ψ (determined in step 540 ) based on the textured reference model 9 (determined in step 550 ) oriented according to the reference orientation Orientation i closest to these angles φ and ψ.
  • the arms of the virtual spectacles 3 are varied geometrically according to the morphology of the face of the original photo 1
  • a spectacles overlay Spectacles overlay of the virtual pair of spectacles 3 is obtained and a binary overlay Spectacles overlay — binary (outline shape of this spectacles overlay) is deduced, oriented as the original photo 1 , and which can therefore be superimposed on it.
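  • A sketch of orienting an overlay with the similarity parameters (s, Θ, tx, ty) using OpenCV's affine warp is given below; applying the rotation and scale about the overlay centre before translating is an assumed convention, not a prescribed one.

```python
import cv2

def orient_overlay(overlay, s, theta_deg, tx, ty, out_size):
    """Warp a spectacles overlay with the similarity (s, theta, tx, ty).

    overlay  : (H, W, 3 or 4) texture overlay in the reference orientation.
    out_size : (width, height) of the original photo 1.
    The rotation/scale is applied about the overlay centre, then the
    translation places it on the photo (assumed ordering).
    """
    h, w = overlay.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), theta_deg, s)
    m[:, 2] += (tx, ty)                       # add the translation component
    return cv2.warpAffine(overlay, m, out_size,
                          flags=cv2.INTER_LINEAR, borderValue=0)
```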
  • Step 570 consists of taking into account the light interactions due to wearing virtual spectacles, i.e. taking into account, for example, the shadows cast onto the face 2 , the visibility of the skin through the lens of the spectacles, the reflection of the environment on the spectacles. It is described in FIG. 9 . It consists of:
  • the result of this function is an image of the original photo 1 on which is superimposed an image of the model of spectacles chosen, oriented as the original photo 1 , and given shadow properties.
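  • A simplified sketch of this final composition is given below; the purely multiplicative shadow and the hard binary mask are simplifying assumptions, whereas the full method also models lens transparency and reflections.

```python
import numpy as np

def compose_final_image(photo, spectacles_overlay, spectacles_binary, visibility):
    """Step 570-style composition of the final image 5.

    photo              : (H, W, 3) original photo 1, float values in [0, 1].
    spectacles_overlay : (H, W, 3) spectacles overlay oriented as the photo.
    spectacles_binary  : (H, W) mask, 1 where the spectacles are drawn.
    visibility         : (H, W) shadow map in [0, 1] (1 = fully lit).
    """
    shaded = photo * visibility[..., None]            # cast shadow on the face
    mask = spectacles_binary[..., None].astype(float)
    return shaded * (1.0 - mask) + spectacles_overlay * mask
```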
  • the construction procedure allowing the simplified geometrical model 6 of a new shape of a real pair of spectacles 4 , i.e. of a shape not found in the models database DB models — spectacles to be constructed, is here as follows:
  • step 540 whose purpose is to estimate the 3D orientation of the face, proceeds by detecting, if possible, the two points on the image representing the temples, called temple image points 63 .
  • the visual characteristic of a temple point is the visual meeting of the cheek and ear.
  • detection of the temple image points 63 may fail in cases where, for example, the face is turned sufficiently (by more than fifteen degrees), or there is hair in front of the temples, etc.
  • the failure to detect a temple image point 63 can be classified into two causes:
  • Step 540 uses segmentation tools that also, if detection of a temple image point 63 fails, allow the class of failure cause to which the image belongs to be determined.
  • Step 540 comprises a method for deciding whether or not to use the temples image point or points 63 , according to a previously stored decision criterion.
  • if the temple image points 63 are not used, angle φ and angle ψ are considered to be zero. Otherwise, angle φ and angle ψ are calculated from the position of the temple image point or points 63 detected, and from the precise position 38 of the characteristic points determined in step 530 .

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Hardware Design (AREA)
  • Optics & Photonics (AREA)
  • Data Mining & Analysis (AREA)
  • Ophthalmology & Optometry (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Processing Or Creating Images (AREA)
  • Eyeglasses (AREA)
US13/522,599 2010-01-18 2011-01-18 Augmented reality method applied to the integration of a pair of spectacles into an image of a face Active 2032-01-24 US9076209B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
FR1050305 2010-01-18
FR1050305A FR2955409B1 (fr) 2010-01-18 2010-01-18 Procede d'integration d'un objet virtuel dans des photographies ou video en temps reel
PCT/EP2011/050596 WO2011086199A1 (fr) 2010-01-18 2011-01-18 Procede de realite augmentee appliquee a l'integration d'une paire de lunettes dans une image de visage

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2011/050596 A-371-Of-International WO2011086199A1 (fr) 2010-01-18 2011-01-18 Procede de realite augmentee appliquee a l'integration d'une paire de lunettes dans une image de visage

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/791,731 Division US9317973B2 (en) 2010-01-18 2015-07-06 Augmented reality method applied to the integration of a pair of spectacles into an image of a face

Publications (2)

Publication Number Publication Date
US20120313955A1 US20120313955A1 (en) 2012-12-13
US9076209B2 true US9076209B2 (en) 2015-07-07

Family

ID=42629539

Family Applications (3)

Application Number Title Priority Date Filing Date
US13/522,599 Active 2032-01-24 US9076209B2 (en) 2010-01-18 2011-01-18 Augmented reality method applied to the integration of a pair of spectacles into an image of a face
US14/791,731 Active US9317973B2 (en) 2010-01-18 2015-07-06 Augmented reality method applied to the integration of a pair of spectacles into an image of a face
US15/132,185 Active US9569890B2 (en) 2010-01-18 2016-04-18 Method and device for generating a simplified model of a real pair of spectacles

Family Applications After (2)

Application Number Title Priority Date Filing Date
US14/791,731 Active US9317973B2 (en) 2010-01-18 2015-07-06 Augmented reality method applied to the integration of a pair of spectacles into an image of a face
US15/132,185 Active US9569890B2 (en) 2010-01-18 2016-04-18 Method and device for generating a simplified model of a real pair of spectacles

Country Status (4)

Country Link
US (3) US9076209B2 (fr)
EP (2) EP3367307A3 (fr)
FR (1) FR2955409B1 (fr)
WO (1) WO2011086199A1 (fr)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160217609A1 (en) * 2015-01-22 2016-07-28 Ditto Technologies, Inc. Rendering glasses shadows
US10120194B2 (en) 2016-01-22 2018-11-06 Corning Incorporated Wide field personal display
US10571721B2 (en) * 2017-01-27 2020-02-25 Carl Zeiss Vision International Gmbh Computer-implemented method for determining a representation of a rim of a spectacles frame or a representation of the edges of the spectacle lenses
US10976551B2 (en) 2017-08-30 2021-04-13 Corning Incorporated Wide field personal display device
US20210110141A1 (en) * 2017-12-12 2021-04-15 Seiko Epson Corporation Methods and systems for training an object detection algorithm using synthetic images
DE102020131580B3 (de) 2020-11-27 2022-04-14 Fielmann Ventures GmbH Computerimplementiertes Verfahren zum Bereitstellen und Platzieren einer Brille sowie zur Zentrierung von Gläsern der Brille
EP4006628A1 (fr) 2020-11-27 2022-06-01 Fielmann Ventures GmbH Procédé mis en oeuvre par ordinateur pour fournir et positionner des lunettes et pour centrer les verres des lunettes

Families Citing this family (75)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8639020B1 (en) 2010-06-16 2014-01-28 Intel Corporation Method and system for modeling subjects from a depth map
US11048333B2 (en) 2011-06-23 2021-06-29 Intel Corporation System and method for close-range movement tracking
JP6074170B2 (ja) 2011-06-23 2017-02-01 インテル・コーポレーション 近距離動作のトラッキングのシステムおよび方法
US20130113879A1 (en) * 2011-11-04 2013-05-09 Comcast Cable Communications, Llc Multi-Depth Adaptation For Video Content
FR2986893B1 (fr) * 2012-02-13 2014-10-24 Total Immersion Systeme de creation de representations tridimensionnelles a partir de modeles reels ayant des caracteristiques similaires et predeterminees
FR2986892B1 (fr) * 2012-02-13 2014-12-26 Total Immersion Procede, dispositif et systeme de generation d'une representation texturee d'un objet reel
WO2013139814A2 (fr) * 2012-03-19 2013-09-26 Fittingbox Modèle et procédé de production de modèle 3d photo-réalistes
US9477303B2 (en) 2012-04-09 2016-10-25 Intel Corporation System and method for combining three-dimensional tracking with a three-dimensional display for a user interface
US9378584B2 (en) * 2012-05-23 2016-06-28 Glasses.Com Inc. Systems and methods for rendering virtual try-on products
US9501140B2 (en) * 2012-11-05 2016-11-22 Onysus Software Ltd Method and apparatus for developing and playing natural user interface applications
US20140178029A1 (en) * 2012-12-26 2014-06-26 Ali Fazal Raheman Novel Augmented Reality Kiosks
US8994652B2 (en) * 2013-02-15 2015-03-31 Intel Corporation Model-based multi-hypothesis target tracker
CN104021590A (zh) * 2013-02-28 2014-09-03 北京三星通信技术研究有限公司 虚拟试穿试戴系统和虚拟试穿试戴方法
US20140240354A1 (en) * 2013-02-28 2014-08-28 Samsung Electronics Co., Ltd. Augmented reality apparatus and method
WO2014169238A1 (fr) 2013-04-11 2014-10-16 Digimarc Corporation Procédés de reconnaissance d'objet et agencements associés
KR101821284B1 (ko) 2013-08-22 2018-01-23 비스포크, 인코포레이티드 커스텀 제품을 생성하기 위한 방법 및 시스템
US10373018B2 (en) * 2013-10-08 2019-08-06 Apple Inc. Method of determining a similarity transformation between first and second coordinates of 3D features
PT107289A (pt) * 2013-11-11 2015-05-11 César Augusto Dos Santos Silva Sistema multi-ocular para visualização de óculos virtuais num rosto real
US9489765B2 (en) * 2013-11-18 2016-11-08 Nant Holdings Ip, Llc Silhouette-based object and texture alignment, systems and methods
US9807373B1 (en) * 2013-12-27 2017-10-31 Google Inc. Systems and devices for acquiring imagery and three-dimensional (3D) models of objects
US10121178B2 (en) * 2014-06-13 2018-11-06 Ebay Inc. Three-dimensional eyeglasses modeling from two-dimensional images
US10198865B2 (en) 2014-07-10 2019-02-05 Seiko Epson Corporation HMD calibration with direct geometric modeling
US9086582B1 (en) 2014-08-20 2015-07-21 David Kind, Inc. System and method of providing custom-fitted and styled eyewear based on user-provided images and preferences
WO2016063166A1 (fr) * 2014-10-21 2016-04-28 Koninklijke Philips N.V. Appareil d'ajustement de dispositif d'interface patient de réalité augmentée
CN107408315B (zh) * 2015-02-23 2021-12-07 Fittingbox公司 用于实时、物理准确且逼真的眼镜试戴的流程和方法
WO2016203770A1 (fr) * 2015-06-17 2016-12-22 凸版印刷株式会社 Système, procédé et programme de traitement d'image
JP6511980B2 (ja) * 2015-06-19 2019-05-15 カシオ計算機株式会社 画像処理装置、画像処理方法及びプログラム
US10192133B2 (en) 2015-06-22 2019-01-29 Seiko Epson Corporation Marker, method of detecting position and pose of marker, and computer program
US10192361B2 (en) 2015-07-06 2019-01-29 Seiko Epson Corporation Head-mounted display device and computer program
CA2901477C (fr) 2015-08-25 2023-07-18 Evolution Optiks Limited Systeme de correction de la vision, methode et interface utilisateur graphique destinee a la mise en place de dispositifs electroniques ayant un afficheur graphique
US10347048B2 (en) * 2015-12-02 2019-07-09 Seiko Epson Corporation Controlling a display of a head-mounted display device
US10701999B1 (en) 2015-12-17 2020-07-07 A9.Com, Inc. Accurate size selection
WO2017134275A1 (fr) * 2016-02-05 2017-08-10 Eidgenossische Technische Hochschule Zurich Procédés et systèmes permettant de déterminer un axe optique et/ou des propriétés physiques d'une lentille, et leur utilisation dans l'imagerie virtuelle et des visiocasques
US9875546B1 (en) * 2016-03-29 2018-01-23 A9.Com, Inc. Computer vision techniques for generating and comparing three-dimensional point clouds
GB201607639D0 (en) * 2016-05-02 2016-06-15 Univ Leuven Kath Sensing method
WO2018002533A1 (fr) 2016-06-30 2018-01-04 Fittingbox Procédé d'occultation d'un objet dans une image ou une vidéo et procédé de réalité augmentée associé
FR3053509B1 (fr) * 2016-06-30 2019-08-16 Fittingbox Procede d’occultation d’un objet dans une image ou une video et procede de realite augmentee associe
EP3355214A1 (fr) * 2017-01-27 2018-08-01 Carl Zeiss Vision International GmbH Procédé, ordinateur et programme informatique destinés à préparer un modèle de bord de monture
US10242294B2 (en) * 2017-05-01 2019-03-26 Intel Corporation Target object classification using three-dimensional geometric filtering
KR101886754B1 (ko) * 2017-05-04 2018-09-10 국방과학연구소 머신 러닝을 위한 학습 이미지 생성 장치 및 방법
FR3067151B1 (fr) * 2017-05-30 2019-07-26 Fittingbox Procede d'essayage virtuel realiste d'une paire de lunettes par un individu
US20180357819A1 (en) * 2017-06-13 2018-12-13 Fotonation Limited Method for generating a set of annotated images
CN107948499A (zh) * 2017-10-31 2018-04-20 维沃移动通信有限公司 一种图像拍摄方法及移动终端
CN107943527A (zh) * 2017-11-30 2018-04-20 西安科锐盛创新科技有限公司 睡眠自动关闭电子设备的方法及其系统
CN109862343A (zh) * 2017-11-30 2019-06-07 宏达国际电子股份有限公司 虚拟现实装置、影像处理方法以及非暂态电脑可读取记录媒体
CN108593556A (zh) * 2017-12-26 2018-09-28 中国科学院电子学研究所 基于矢量特征的卫星成像几何精化模型的构建方法
KR102450948B1 (ko) * 2018-02-23 2022-10-05 삼성전자주식회사 전자 장치 및 그의 증강 현실 객체 제공 방법
US10673939B2 (en) 2018-05-04 2020-06-02 Citrix Systems, Inc. WebRTC API redirection with window monitoring/overlay detection
US10777012B2 (en) 2018-09-27 2020-09-15 Universal City Studios Llc Display systems in an entertainment environment
US11966507B2 (en) 2018-10-22 2024-04-23 Evolution Optiks Limited Light field vision testing device, adjusted pixel rendering method therefor, and vision testing system and method using same
US11327563B2 (en) 2018-10-22 2022-05-10 Evolution Optiks Limited Light field vision-based testing device, adjusted pixel rendering method therefor, and online vision-based testing management system and method using same
US11500460B2 (en) 2018-10-22 2022-11-15 Evolution Optiks Limited Light field device, optical aberration compensation or simulation rendering
US11433546B1 (en) * 2018-10-24 2022-09-06 Amazon Technologies, Inc. Non-verbal cuing by autonomous mobile device
US10685457B2 (en) 2018-11-15 2020-06-16 Vision Service Plan Systems and methods for visualizing eyewear on a user
CN109377563A (zh) * 2018-11-29 2019-02-22 广州市百果园信息技术有限公司 一种人脸网格模型的重建方法、装置、设备和存储介质
US11789531B2 (en) 2019-01-28 2023-10-17 Evolution Optiks Limited Light field vision-based testing device, system and method
US11500461B2 (en) 2019-11-01 2022-11-15 Evolution Optiks Limited Light field vision-based testing device, system and method
US11635617B2 (en) 2019-04-23 2023-04-25 Evolution Optiks Limited Digital display device comprising a complementary light field display or display portion, and vision correction system and method using same
US11902498B2 (en) 2019-08-26 2024-02-13 Evolution Optiks Limited Binocular light field display, adjusted pixel rendering method therefor, and vision correction system and method using same
CN110544314B (zh) * 2019-09-05 2023-06-02 上海电气集团股份有限公司 虚拟现实与仿真模型的融合方法、系统、介质及设备
US11823598B2 (en) 2019-11-01 2023-11-21 Evolution Optiks Limited Light field device, variable perception pixel rendering method therefor, and variable perception system and method using same
US11487361B1 (en) 2019-11-01 2022-11-01 Evolution Optiks Limited Light field device and vision testing system using same
WO2021122387A1 (fr) * 2019-12-19 2021-06-24 Essilor International Appareil, procédé et support de stockage lisible par ordinateur permettant d'étendre une base de données d'images afin d'évaluer la compatibilité de lunettes
EP3843043B1 (fr) * 2019-12-23 2022-07-13 Essilor International Appareil, procédé et support d'informations lisible sur ordinateur permettant d'agrandir une base de données d'images pour évaluer la compatibilité des lunettes
EP3846123B1 (fr) * 2019-12-31 2024-05-29 Dassault Systèmes Reconstruction 3d avec des cartes lisses
EP4106984A4 (fr) 2020-02-21 2024-03-20 Ditto Tech Inc Raccord de montures de lunettes comprenant un raccord en direct
CN111882660A (zh) * 2020-07-23 2020-11-03 广联达科技股份有限公司 基于cad图纸的三维显示方法和三维显示装置
FR3116363A1 (fr) * 2020-11-14 2022-05-20 Bleu Ebene Procédé d’essayage virtuel réaliste de bijoux ou autre objet de même nature par une personne dans une photographie ou un flux vidéo en temps réel
US11341698B1 (en) * 2020-12-18 2022-05-24 Tiliter Pty Ltd. Methods and apparatus for simulating images of produce with markings from images of produce and images of markings
FR3118821B1 (fr) 2021-01-13 2024-03-01 Fittingbox Procédé de détection et de suivi dans un flux vidéo d’un visage d’un individu portant une paire de lunettes
FR3119694B1 (fr) 2021-02-11 2024-03-29 Fittingbox Procédé d’apprentissage d’un système d’apprentissage automatique pour la détection et la modélisation d’un objet dans une image, produit programme d’ordinateur et dispositif correspondant.
FR3124069A1 (fr) 2021-06-18 2022-12-23 Acep France Procédé d’essayage de lunettes virtuelles
CN113568996B (zh) * 2021-07-29 2023-05-16 西安恒歌数码科技有限责任公司 一种基于osgEarth的多图层掉帧优化方法及系统
US20230252655A1 (en) * 2022-02-09 2023-08-10 Google Llc Validation of modeling and simulation of wearable device
CN115033998B (zh) * 2022-07-13 2023-02-21 北京航空航天大学 一种面向机械零部件的个性化2d数据集构建方法


Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6504546B1 (en) 2000-02-08 2003-01-07 At&T Corp. Method of modeling objects to synthesize three-dimensional, photo-realistic animations
US6807290B2 (en) 2000-03-09 2004-10-19 Microsoft Corporation Rapid computer modeling of faces for animation
GB2381429B (en) * 2001-09-28 2005-07-27 Canon Europa Nv 3D computer model processing apparatus
EP1495447A1 (fr) 2002-03-26 2005-01-12 KIM, So-Woon Systeme et procede de simulation tridimensionnelle de port de lunettes
KR100914845B1 (ko) * 2007-12-15 2009-09-02 한국전자통신연구원 다시점 영상 정보를 이용한 물체의 삼차원 형상복원 방법및 장치
HUP0800163A2 (en) * 2008-03-14 2009-09-28 Dezsoe Szepesi Computer network based method for virtually trying on and displaying glasses and sunglasses

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001032074A1 (fr) 1999-11-04 2001-05-10 Stefano Soatto Systeme de selection et de conception de montures de lunettes
US7023454B1 (en) * 2003-07-07 2006-04-04 Knight Andrew F Method and apparatus for creating a virtual video of an object

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Hamouz M. et al: "Face detection by learned affine correspondences". Structural, Syntactic, and Statistical Pattern Recognition, Joint IAPR International Workshops SSPR 2002 and SPR 2002 (Lecture Notes in Computer Science vol. 2396) Springer-Verlag Berlin, Germany, 2002, pp. 566-575. XP002622546.
International Search Report dated Mar. 7, 2011, in corresponding PCT application.
Ma Yong et al: "Robust precise eye location under probabilistic framework". Automatic Face and Gesture Recognition, 2004. Proceedings. Sixth IEEE International Conference on, IEEE, Piscataway, NJ, USA. May 17, 2004, pp. 339-344. XP010949456.
Viola P. et al: "Robust Real-Time Face Detection". International Journal of Computer Vision, Dordrecht, NL, LNKD-DOI: 10.1023/B:VISI.0000013087.49260.FB, vol. 57, No. 2, Jan. 1, 2004, pp. 137-154. XP888835782.

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160217609A1 (en) * 2015-01-22 2016-07-28 Ditto Technologies, Inc. Rendering glasses shadows
US10013796B2 (en) * 2015-01-22 2018-07-03 Ditto Technologies, Inc. Rendering glasses shadows
US10403036B2 (en) * 2015-01-22 2019-09-03 Ditto Technologies, Inc. Rendering glasses shadows
US10120194B2 (en) 2016-01-22 2018-11-06 Corning Incorporated Wide field personal display
US10649210B2 (en) 2016-01-22 2020-05-12 Corning Incorporated Wide field personal display
US10571721B2 (en) * 2017-01-27 2020-02-25 Carl Zeiss Vision International Gmbh Computer-implemented method for determining a representation of a rim of a spectacles frame or a representation of the edges of the spectacle lenses
US10976551B2 (en) 2017-08-30 2021-04-13 Corning Incorporated Wide field personal display device
US20210110141A1 (en) * 2017-12-12 2021-04-15 Seiko Epson Corporation Methods and systems for training an object detection algorithm using synthetic images
US11557134B2 (en) * 2017-12-12 2023-01-17 Seiko Epson Corporation Methods and systems for training an object detection algorithm using synthetic images
DE102020131580B3 (de) 2020-11-27 2022-04-14 Fielmann Ventures GmbH Computerimplementiertes Verfahren zum Bereitstellen und Platzieren einer Brille sowie zur Zentrierung von Gläsern der Brille
EP4006628A1 (fr) 2020-11-27 2022-06-01 Fielmann Ventures GmbH Procédé mis en oeuvre par ordinateur pour fournir et positionner des lunettes et pour centrer les verres des lunettes

Also Published As

Publication number Publication date
EP2526510B1 (fr) 2018-01-24
FR2955409A1 (fr) 2011-07-22
US20160232712A1 (en) 2016-08-11
WO2011086199A1 (fr) 2011-07-21
EP2526510A1 (fr) 2012-11-28
FR2955409B1 (fr) 2015-07-03
EP3367307A3 (fr) 2018-12-26
US20120313955A1 (en) 2012-12-13
EP3367307A2 (fr) 2018-08-29
US9317973B2 (en) 2016-04-19
EP2526510B2 (fr) 2021-09-08
US9569890B2 (en) 2017-02-14
US20150310672A1 (en) 2015-10-29

Similar Documents

Publication Publication Date Title
US9569890B2 (en) Method and device for generating a simplified model of a real pair of spectacles
CN107408315B (zh) 用于实时、物理准确且逼真的眼镜试戴的流程和方法
JP4723834B2 (ja) 映像に基づいたフォトリアリスティックな3次元の顔モデリング方法及び装置
US7221809B2 (en) Face recognition system and method
US9357204B2 (en) Method for constructing images of a pair of glasses
KR100682889B1 (ko) 영상에 기반한 사실감 있는 3차원 얼굴 모델링 방법 및 장치
US7363201B2 (en) Facial image processing methods and systems
JP4284664B2 (ja) 三次元形状推定システム及び画像生成システム
Dimitrijevic et al. Accurate face models from uncalibrated and ill-lit video sequences
WO2022095721A1 (fr) Procédé et appareil de formation de modèle d'estimation de paramètre, dispositif, et support de stockage
US20070080967A1 (en) Generation of normalized 2D imagery and ID systems via 2D to 3D lifting of multifeatured objects
US20020106114A1 (en) System and method for face recognition using synthesized training images
US20060192785A1 (en) Methods and systems for animating facial features, and methods and systems for expression transformation
EP0907144A2 (fr) Procédé d'extraction d'un modèle tridimensionnel à partir d'une séquence d'images
US20080309662A1 (en) Example Based 3D Reconstruction
CN113269862A (zh) 场景自适应的精细三维人脸重建方法、系统、电子设备
CN114930142A (zh) 用于取得眼科镜片的光学参数的方法和系统
Tyle_ek et al. Refinement of surface mesh for accurate multi-view reconstruction
JP4623320B2 (ja) 三次元形状推定システム及び画像生成システム
US20220309733A1 (en) Surface texturing from multiple cameras
van Dam From image sequence to frontal image: reconstruction of the unknown face: a forensic case
Castelán Face shape recovery from a single image view
TW201320005A (zh) 用於三維影像模型調整之方法及配置
Rohith Structure from non-rigid motion using 3D models

Legal Events

Date Code Title Description
AS Assignment

Owner name: FITTINGBOX, FRANCE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CHOUKROUN, ARIEL;REEL/FRAME:028850/0774

Effective date: 20120824

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8