CA2690826C - Automatic 3d modeling system and method - Google Patents

Automatic 3d modeling system and method

Info

Publication number
CA2690826C
CA2690826C
Authority
CA
Canada
Prior art keywords
gesture
model
dimensional
change
change variable
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CA2690826A
Other languages
French (fr)
Other versions
CA2690826A1 (en)
Inventor
Young Harvill
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Callahan Cellular LLC
Original Assignee
Laastra Telecom GmbH LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US10/219,041 (external-priority patent US7123263B2)
Application filed by Laastra Telecom GmbH LLC filed Critical Laastra Telecom GmbH LLC
Publication of CA2690826A1 publication Critical patent/CA2690826A1/en
Application granted granted Critical
Publication of CA2690826C publication Critical patent/CA2690826C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/10: Geometric effects
    • G06T15/20: Perspective computation
    • G06T15/205: Image-based rendering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10: Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G06V40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Abstract

An automatic 3D modeling system and method are described in which a 3D model may be generated from a picture or other image. For example, a 3D model for a face of a person may be automatically generated. The system and method also permit gestures/behaviors associated with a 3D model to be automatically generated so that the gestures/behaviors may be applied to any 3D model.

Description

This is a divisional of Canadian National Phase Patent Application Serial No. 2,457,839 filed August 14, 2002.

Field Of The Invention

The present invention is related to 3D modeling systems and methods and, more particularly, to a system and method that merges automatic image-based model generation techniques with interactive real-time character orientation techniques to provide rapid creation of virtual 3D personalities.

Background Of The Invention

There are many different techniques for generating an animation of a three dimensional object on a computer display. Originally, the animated figures (for example, the faces) looked very much like wooden characters, since the animation was not very good. In particular, the user would typically see an animated face, yet its features and expressions would be static. Perhaps the mouth would open and close, and the eyes might blink, but the facial expressions, and the animation in general, resembled a wooden puppet. The problem was that these animations were typically created from scratch as drawings and were not rendered using an underlying 3D model to capture a more realistic appearance, so the animation looked unrealistic and not very life-like. More recently, the animations have improved so that a skin may cover the bones of the figure to provide a more realistic animated figure.

While such animations are now rendered over one or more deformation grids to capture a more realistic appearance for the animation, often the animations are still rendered by professional companies and redistributed to users. While this results in high-quality animations, it is limited in that the user does not have the capability to customize a particular animation, for example of him or herself, for use as a virtual personality. With the advanced features of the Internet or the World Wide Web, these virtual personas will extend the capabilities and interaction between users. It would thus be desirable to provide a 3D modeling system and method which enables the typical user to rapidly and easily create a 3D model from an image, such as a photograph, that is useful as a virtual personality.

Typical systems also required that, once a model was created by a skilled animator, the same animator animate the various gestures desired for the model. For example, the animator would create the animation of a smile, a hand wave or speaking, which would then be incorporated into the model to provide the model with the desired gestures. The process to generate the behavior/gesture data is slow and expensive and requires a skilled animator. It is desirable to provide an automatic mechanism for generating gestures and behaviors for models without the assistance of a skilled animator. It is to these ends that embodiments of the present invention are directed.

Summary Of The Invention

Broadly, embodiments of the invention utilize image processing techniques, statistical analysis and 3D geometry deformation to allow photo-realistic 3D models of objects, such as the human face, to be automatically generated from an image (or from multiple images). For example, for the human face, facial proportions and feature details from a photograph (or series of photographs) are identified and used to generate an appropriate 3D model. Image processing and texture mapping techniques also optimize how the photograph(s) is used as detailed, photo-realistic texture for the 3D model.

In accordance with another aspect of the invention, a gesture of the person may be captured and abstracted so that it can be applied to any other model. For example, the animated smile of a particular person may be captured.
The smile may then be converted into feature space to provide an abstraction of the gesture. The abstraction of the gesture (e.g., the movements of the different portions of the model) is captured as a gesture. The gesture may then be used for any other model. Thus, in accordance with some embodiments of the invention, the system permits the generation of a gesture model that may be used with other models.

In accordance with one specific aspect of the invention, there is provided a method for generating a gesture model, the method comprising:
receiving an image of a first object performing a gesture; determining at least one movement associated with the gesture from the at least one movement of the first object to generate a gesture object wherein the gesture object further comprises a coloration change variable storing a change of coloration that occurs during the gesture, a two dimensional change variable storing a change of a surface that occurs during the gesture, and a three dimensional change variable storing a change of at least one vertex associated with the object that occurs during the gesture; and applying the gesture object to a three dimensional model of a second object to animate the second object using the gesture object.

In accordance with another aspect of the invention, there is provided a computer implemented system for generating a gesture model, the computer implemented system comprising: a three dimensional model generation module configured to receive an image of an object and generate a three dimensional deformable model of the object, the three dimensional model having a surface and a plurality of vertices; and a gesture generation module configured to generate a gesture object corresponding to a gesture of the object so that the gesture can be applied to a three dimensional deformable model of another object; wherein the gesture object further comprises a coloration change variable storing a change of coloration on the three dimensional deformable model that occurs during the gesture, a two dimensional change variable storing a change of the surface of the three dimensional deformable model that occurs during the gesture, and a three dimensional change variable storing a change of the vertices of the three dimensional deformable model that occurs during the gesture.

Brief Description Of The Drawings

Figure 1 is a flowchart describing a method for generating a 3D model of a human face;

Figure 2 is a diagram illustrating an example of a computer system which may be used to implement the 3D modeling method in accordance with the invention;

Figure 3 is a block diagram illustrating more details of the 3D model generation system in accordance with the invention;

Figure 4 is an exemplary image of a person's head that may be loaded into the memory of a computer during an image acquisition process;

Figure 5 illustrates the exemplary image of Figure 4 with an opaque background after having processed the image with a "seed fill" operation;

Figure 6 illustrates the exemplary image of Figure 5 having dashed lines indicating particular bound areas about the locations of the eyes;

Figure 7 illustrates the exemplary image of Figure 6 with the high contrast luminance portion of the eyes identified by dashed lines;

Figure 8 is an exemplary diagram illustrating various landmark location points for a human head;

Figure 9 illustrates an example of a human face 3D model in accordance with the invention;

Figures 10A-10D illustrate respective deformation grids that can be used to generate a 3D model of a human head;

Figure 10E illustrates the deformation grids overlaid upon one another;

Figure 11 is a flowchart illustrating the automatic gesture behavior generation method in accordance with the invention;

Figure 12 illustrates an example of a base 3D model for a first model, Kristen;
Figure 13 illustrates an example of a base 3D model for a second model, Ellie;
Figure 14 is an example of the first model in a neutral gesture;

Figure 15 is an example of the first model in a smile gesture;

Figure 16 is an example of a smile gesture map generated from the neutral gesture and the smile gesture of the first model;

Figure 17 is an example of the feature space with both the models overlaid over each other;

Figure 18 is an example of a neutral gesture for the second model; and

Figure 19 is an example of the smile gesture, generated from the first model, being applied to the second model to generate a smile gesture in the second model.

Detailed Description Of The Preferred Embodiment

While the invention has a greater utility, it will be described in the context of generating a 3D model of the human face and gestures associated with the human face. Those skilled in the art recognize that any other 3D models and gestures can be generated using the principles and techniques described herein, and that the following is merely exemplary of a particular application of the invention; the invention is not limited to the facial models described herein. To generate a 3D model of the human face, the invention preferably performs a series of complex image processing techniques to determine a set of landmark points 10 which serve as guides for generating the 3D model. Figure 1 is a flowchart describing a preferred algorithm for generating a 3D model of a human face. With reference to Figure 1, an image acquisition process (Step 1) is used to load a photograph(s) (or other image) of a human face (for example, a "head shot") into the memory of a computer. Preferably, images may be loaded as JPEG images; however, other image formats may be used without departing from the invention. Images can be loaded from a diskette, downloaded from the Internet, or otherwise loaded into memory using known techniques so that the image processing techniques of the invention can be performed on the image in order to generate a 3D model.

Since different images may have different orientations, the proper orientation of the image should be determined by locating and grading appropriate landmark points 10.
Determining the image orientation allows a more realistic rendering of the image onto the deformation grids. Locating the appropriate landmark points 10 will now be described in detail.

Referring to Figure 1, to locate landmark points 10 on an image, a "seed fill" operation may preferably be performed (Step 2) on the image to eliminate the variable background of the image so that the boundary of the head (in the case of a face) can be isolated on the image.

Figure 4 is an exemplary image 20 of a person's head that may be loaded into the memory of the computer during the image acquisition process (Step 1, Figure 1). A "seed fill" operation (Step 2, Figure 1) is a well-known recursive paintfill operation that is accomplished by identifying one or more points 22 in the background 24 of the image 20 based on, for example, the color and luminosity of the point(s) 22, and expanding a paintfill zone 26 outwardly from the point(s) 22 where the color and luminosity are similar. Preferably, the "seed fill" operation successfully replaces the colored and luminescent background 24 of the image with an opaque background so that the boundary of the head can be more easily determined.
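As a purely illustrative sketch of the kind of seed-fill operation described above (and not the patented implementation), the following Python routine flood-fills a background region outward from one or more seed points wherever color and luminosity remain similar; the tolerance value and the helper name flood_fill_background are assumptions introduced for this example.

from collections import deque
import numpy as np

def flood_fill_background(image, seeds, tol=18.0):
    # image : (H, W, 3) uint8 RGB array
    # seeds : list of (row, col) points known to lie in the background
    # tol   : maximum color distance treated as "similar" (assumed value)
    # Returns a boolean mask that is True where the background was filled, so the
    # caller can replace those pixels with an opaque color.
    h, w, _ = image.shape
    mask = np.zeros((h, w), dtype=bool)
    pixels = image.astype(np.float32)
    queue = deque()
    for r, c in seeds:
        mask[r, c] = True
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        ref = pixels[r, c]
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                # expand the fill zone only where color and luminosity stay close
                if np.linalg.norm(pixels[nr, nc] - ref) <= tol:
                    mask[nr, nc] = True
                    queue.append((nr, nc))
    return mask

The boolean mask returned by this sketch can then be painted with an opaque color, mirroring the replacement of the background described above.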

Referring again to Figure 1, the boundary of the head 30 can be determined (Step 3), for example, by locating the vertical center of the image (line 32) and integrating across a horizontal area 34 from the centerline 32 (using a non-fill operation) to determine the width of the head 30, and by locating the horizontal center of the image (line 36) and integrating across a vertical area 38 from the centerline 36 (using a non-fill operation) to determine the height of the head 30. In other words, statistically directed linear integration of a field of pixels whose values differ based on the presence of an object or the presence of a background is performed.
This is shown in Figure 5 which shows the exemplary image 20 of Figure 4 with an opaque background 24.
Returning again to Figure 1, upon determining the width and height of the head 30, the bounds of the head 30 can be determined by using statistical properties of the height of the head and the known properties of the integrated horizontal area 34 and the top of the head 30. Typically, the height of the head will be approximately 2/3 of the image height and the width of the head will be approximately 1/3 of the image width. The height of the head may also be taken as 1.5 times the width of the head, which is used as a first approximation.
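Under assumed parameter choices, the following sketch illustrates how the integration across horizontal and vertical strips and the 2/3-height, 1/3-width and 1.5:1 heuristics quoted above might be combined; the band width, the fallback logic and the (left, top, width, height) bound convention are assumptions, not taken from the specification.

import numpy as np

def estimate_head_bounds(background_mask, band=10):
    # background_mask: (H, W) bool array, True where the seed fill marked background
    # band: half-width of the strip integrated around each centerline (assumed value)
    head = ~background_mask                        # True where the subject (non-background) is
    h, w = head.shape
    cx, cy = w // 2, h // 2

    # integrate head pixels in a horizontal band around the vertical centerline -> width
    strip = head[max(cy - band, 0):cy + band, :]
    cols = np.where(strip.any(axis=0))[0]
    width = int(cols[-1] - cols[0] + 1) if cols.size else int(w / 3)

    # integrate head pixels in a vertical band around the horizontal centerline -> height
    strip = head[:, max(cx - band, 0):cx + band]
    rows = np.where(strip.any(axis=1))[0]
    height = int(rows[-1] - rows[0] + 1) if rows.size else int(2 * h / 3)

    # heuristics quoted in the text: height ~ 2/3 image height, width ~ 1/3 image width,
    # and height ~ 1.5 * width as a first approximation
    if not 1.2 <= height / max(width, 1) <= 1.8:
        height = int(1.5 * width)

    top = int(rows[0]) if rows.size else cy - height // 2
    left = int(cols[0]) if cols.size else cx - width // 2
    return left, top, width, height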

Once the bounds of the head 30 are determined, the location of the eyes 40 can be determined (Step 4). Since the eyes 40 are typically located on the upper half of the head 30, a statistical calculation can be used and the head bounds can be divided into an upper half 42 and a lower half 44 to isolate the eye bound areas 46a, 46b. The upper half of the head bounds 42 can be further divided into right and left portions 46a, 46b to isolate the left and right eyes 40a, 40b, respectively. This is shown in detail in Figure 6 which shows the exemplary image 20 of Figure 4 with dashed lines indicating the particular bound areas.
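A minimal sketch of the statistical subdivision described above, assuming head bounds are expressed as (left, top, width, height) in image coordinates; the box layout and helper name are assumptions for illustration only.

def eye_bound_areas(head_bounds):
    # head_bounds: (left, top, width, height) of the detected head
    # Returns two (left, top, width, height) boxes covering the upper-left and
    # upper-right quarters of the head, where the eyes are statistically expected.
    left, top, width, height = head_bounds
    upper_h = height // 2                      # eyes lie in the upper half of the head
    half_w = width // 2
    left_eye_box = (left, top, half_w, upper_h)
    right_eye_box = (left + half_w, top, width - half_w, upper_h)
    return left_eye_box, right_eye_box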

Referring yet again to Figure 1, the centermost region of each eye 40a, 40b can be located (Step 5) by identifying a circular region 48 of high contrast luminance within the respective eye bounds 46a, 46b. This operation can be recursively performed outwardly from the centermost point 48 over the bounded area 46a, 46b and the results can be graded to determine the proper bounds of the eyes 40a, 40b. Figure 7 shows the exemplary image of Figure 6 with the high contrast luminance portion of the eyes identified by dashed lines.
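The following is one possible, simplified scoring scheme for the high-contrast-luminance search described above; the inner and outer radii, the brute-force scan and the function name are assumptions chosen to keep the sketch short, and the box convention matches the earlier sketches.

import numpy as np

def locate_eye_center(gray, box, inner=2, outer=8):
    # gray  : (H, W) float array of luminance values
    # box   : (left, top, width, height) eye search region
    # inner : radius of the bright highlight region (assumed)
    # outer : radius of the surrounding darker ring (assumed)
    # Scores each candidate by the contrast between a small central patch and the
    # region around it, and returns the best-scoring (row, col) position.
    left, top, width, height = box
    best_score, best_pos = -np.inf, None
    for r in range(top + outer, top + height - outer):
        for c in range(left + outer, left + width - outer):
            centre = gray[r - inner:r + inner + 1, c - inner:c + inner + 1].mean()
            ring = gray[r - outer:r + outer + 1, c - outer:c + outer + 1].mean()
            score = centre - ring        # bright highlight against a darker surround
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos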

Referring again to Figure 1, once the eyes 40a, 40b have been identified, the scale and orientation of the head 30 can be determined (Step 6) by analyzing a line 50 connecting the eyes 40a, 40b to determine the angular offset of the line 50 from a horizontal axis of the screen. The scale of the head 30 can be derived from the width of the bounds according to the following formula: width of bound/width of model.
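A short sketch of the scale and orientation computation described above; the function name and the use of atan2 for the angular offset are illustrative assumptions, while the scale follows the formula quoted in the text (width of bound divided by width of model).

import math

def head_orientation_and_scale(left_eye, right_eye, head_width, model_width):
    # left_eye, right_eye : (row, col) eye centres in image coordinates
    # head_width          : width of the detected head bounds in pixels
    # model_width         : width of the reference 3D model in its own units
    dy = right_eye[0] - left_eye[0]
    dx = right_eye[1] - left_eye[1]
    # angular offset of the eye line from the horizontal axis of the screen
    roll_degrees = math.degrees(math.atan2(dy, dx))
    # scale of the head as stated in the text: width of bound / width of model
    scale = head_width / model_width
    return roll_degrees, scale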

After determining the above information, the approximate landmark points 10 on the head 30 can be properly identified. Preferred landmark points 10 include a) outer head bounds 60a, 60b, 60c; b) inner head bounds 62a, 62b, 62c, 62d; c) right and left eye bounds 64a-d, 64w-z, respectively; d) corners of nose 66a, 66b; and e) corners of mouth 68a, 68d (mouth line); however, those skilled in the art recognize that other landmark points may be used without departing from the invention. Figure 8 is an exemplary representation of the above landmark points shown for the image of Figure 4.

Having determined the appropriate landmark locations 10 on the head 30, the image can be properly aligned with one or more deformation grids (described below) that define the 3D model 70 of the head (Step 7). The following describes some of the deformation grids that may be used to define the 3D model 70; however, those skilled in the art recognize that these are merely exemplary of certain deformation grids that may be used to define the 3D model and that other deformation grids may be used without departing from the invention.
Figure 9 illustrates an example of a 3D model of a human face generated using the 3D model generation method in accordance with the invention. Now, more details of the 3D model generation system will be described.

Figure 2 illustrates an example of a computer system 70 in which the 3D model generation method and gesture model generation method may be implemented. In particular, the 3D model generation method and gesture model generation method may be implemented as one or more pieces of software code (or compiled software code) which are executed by a computer system. The methods in accordance with the invention may also be implemented on a hardware device into which the methods in accordance with the invention are programmed. Returning to Figure 2, the computer system 70 shown is a personal computer system.
The invention, however, may be implemented on a variety of different computer systems, such as client/server systems, server systems, workstations, etc... and the invention is not limited to implementation on any particular computer system. The illustrated computer system may include a display device 72, such as a cathode ray tube or LCD, a chassis 74 and one or more input/output devices, such as a keyboard 76 and a mouse 78 as shown, which permit the user to interact with the computer system. For example, the user may enter data or commands into the computer system using the keyboard or mouse and may receive output data from the computer system using the display device (visual data) or a printer (not shown), etc.
The chassis 74 may house the computing resources of the computer system and may include one or more central processing units (CPU) 80 which control the operation of the computer system as is well known, a persistent storage device 82, such as a hard disk drive, an optical disk drive, a tape drive and the like, that stores the data and instructions executed by the CPU even when the computer system is not supplied with power and a memory 84, such as DRAM, which temporarily stores data and instructions currently being executed by the CPU and loses its data when the computer system is not being powered as is well known. To implement the 3D model generation and gesture generation methods in accordance with the invention, the memory may store a 3D
modeler 86 which is a series of instructions and data being executed by the CPU 80 to implement
the 3D model and gesture generation methods described above. Now, more details of the 3D modeler will be described.

Figure 3 is a diagram illustrating more details of the 3D modeler 86 shown in Figure 2.
In particular, the 3D modeler includes a 3D model generation module 88 and a gesture generator module 90 which are each implemented using one or more computer program instructions. The pseudo-code that may be used to implement each of these modules is shown in Appendix 1 and Appendix 2. As shown in Figure 3, an image of an object, such as a human face, is input into the system as shown. The image is fed into the 3D model generation module as well as the gesture generation module as shown. The output from the 3D model generation module is a 3D model of the image which has been automatically generated as described above. The output from the gesture generation module is one or more gesture models which may then be applied to and used for any 3D model, including any model generated by the 3D model generation module. The gesture generator is described in more detail below with reference to Figure 11. In this manner, the system permits 3D models of any object to be rapidly generated and implemented. Furthermore, the gesture generator permits one or more gesture models (such as a smile gesture, a hand wave, etc.) to be automatically generated from a particular image. The advantage of the gesture generator is that the gesture models may then be applied to any 3D model. The gesture generator also eliminates the need for a skilled animator to implement a gesture. Now, the deformation grids for the 3D model generation will be described.

Figures 10A-10D illustrate exemplary deformation grids that may be used to define a 3D model 70 of a human head. Figure 10A illustrates a bounds space deformation grid 172, which is preferably the innermost deformation grid. Overlaying the bounds space deformation grid 172 is a feature space deformation grid 174 (shown in Figure 10B). An edge space deformation grid 176 (shown in Figure 10C) preferably overlays the feature space deformation grid 174. Figure 10D illustrates a detail deformation grid 178 that is preferably the outermost deformation grid.

The grids are preferably aligned in accordance with the landmark locations 10 (shown in Figure 10E) such that the head image 30 will be appropriately aligned with the deformation grids when its landmark locations 10 are aligned with the landmark locations 10 of the deformation grids. To properly align the head image 30 with the deformation grids, a user may manually refine the landmark location precision on the head image (Step 8), for example by using the mouse or other input device to "drag" a particular landmark to a different area on the image 30. Using the new landmark location information, the image 30 may be modified with respect to the deformation grids as appropriate (Step 9) in order to properly align the head image 30 with the deformation grids. A new model state can then be calculated, the detail grid 178 can then be detached (Step 10), behaviors can be scaled for the resulting 3D model (Step 11), and the model can be saved (Step 12) for use as a virtual personality. Now, the automatic gesture generation in accordance with the invention will be described in more detail.
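The passage above does not spell out how the image and grid landmark sets are brought into registration; the following sketch shows one conventional possibility, a least-squares similarity transform fitted between corresponding landmark sets (Umeyama/Procrustes style). The function name and the choice of a similarity transform are assumptions, not the patented alignment procedure.

import numpy as np

def fit_alignment_transform(grid_landmarks, image_landmarks):
    # grid_landmarks, image_landmarks: (N, 2) arrays of matching 2D points
    src = np.asarray(grid_landmarks, dtype=float)
    dst = np.asarray(image_landmarks, dtype=float)
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean

    # rotation and uniform scale from the 2x2 cross-covariance (Umeyama/Procrustes style)
    u, s, vt = np.linalg.svd(dst_c.T @ src_c)
    d = np.array([1.0, np.sign(np.linalg.det(u @ vt))])   # guard against reflections
    rotation = u @ np.diag(d) @ vt
    scale = (s * d).sum() / (src_c ** 2).sum()
    translation = dst_mean - scale * rotation @ src_mean

    def apply(points):
        # map grid-space points into image space
        return scale * (np.asarray(points, dtype=float) @ rotation.T) + translation
    return apply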

Figure 11 is a flowchart illustrating an automatic gesture generation method 100 in accordance with the invention. In general, the automatic gesture generation results in a gesture object which may then be applied to any 3D model so that a gesture behavior may be rapidly generated and reused with other models. Usually, there may need to be a separate gesture model for different types of 3D models. For example, a smile gesture may need to be automatically generated for a human male, a human female, a human male child and a human female child in order to make the gesture more realistic. The method begins in step 102 in which a common feature space is generated. The feature space is a common space that is used to store and represent an object image, such as a face, movements of the object during a gesture, and object scalars which capture the differences between different objects. The gesture object to be generated using this method also stores a scalar field variable that stores the mapping between a model space and the feature space that permits transformation of motion and geometry data. The automatic gesture generation method involves using a particular image of an object, such as a face, to generate an abstraction of a gesture of the object, such as a smile, which is then stored as a gesture object so that the gesture object may then be applied to any 3D model.
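By way of illustration only, the gesture object described above might be represented by a container such as the following; the field names echo the variables introduced later in the description, while the array shapes, the placement of the scalar field on the gesture object, and the Python dataclass representation are assumptions.

from dataclasses import dataclass
import numpy as np

@dataclass
class GestureObject:
    # Container mirroring the gesture data described in the text (shapes assumed).
    st_delta_change: np.ndarray      # (n_vertices, 2) texture-coordinate (surface) deltas
    vert_delta_change: np.ndarray    # (n_vertices, 3) 3D vertex deltas in feature space
    delta_map: np.ndarray            # (H, W, 3) coloration change of the texture map
    scaler_field: np.ndarray         # (n_vertices, 3) per-vertex mapping between model and feature space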

Returning to Figure 11, in step 104, the method determines the correlation between the feature space and the image space to determine the texture map changes which represent changes to the surface movements of the image during the gesture. In step 106, the method updates the texture map from the image (to check the correlation), applies the resultant texture map to the feature space and generates a variable "stDeltaChange" (as shown in the exemplary pseudo-code in Appendix 1) which stores the texture map changes. Appendix 1 illustrates exemplary pseudo-code for performing the image processing techniques of the invention. In step 108, the method determines the changes in the 3D vertices of the image model during the gesture, which captures the 3D movement that occurs during the gesture. In step 110, the vertex changes are applied to the feature space and are captured in the gesture object in a variable "VertDeltaChange" as shown in Appendix 1. In step 112, the method determines the texture coloration that occurs during the gesture and applies it to the feature space. The texture coloration is captured in the "DeltaMap" variable in the gesture object. In step 114, the gesture object is generated that includes the "stDeltaChange", "VertDeltaChange" and "DeltaMap" variables which contain the coloration, 2D and 3D movement that occurs during the gesture. The variables represent only the movement and color changes that occur during a gesture so that the gesture object may then be applied to any 3D model. In essence, the gesture object distills the gesture that exists in a particular image model into an abstract object that contains the essential elements of the gesture so that the gesture may then be applied to any 3D model.
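A simplified sketch of how the three delta variables could be computed from a neutral state and a gesture state of the same model, and then blended onto another model; the dict layout, the function names and the linear blending by t are assumptions, and the sketch omits the feature-space mapping and feature masking performed by the pseudo-code in the appendix.

import numpy as np

def build_gesture_deltas(neutral, gesture):
    # Each state is assumed to be a dict with 'st' (n,2 texture coordinates),
    # 'verts' (n,3 vertex positions) and 'texture' (H,W,3 texture map).
    # 2D change: how the texture coordinates (the surface) move during the gesture
    st_delta_change = gesture["st"] - neutral["st"]
    # 3D change: how the model vertices move during the gesture
    vert_delta_change = gesture["verts"] - neutral["verts"]
    # coloration change: per-pixel difference of the texture maps
    delta_map = gesture["texture"].astype(np.float32) - neutral["texture"].astype(np.float32)
    return st_delta_change, vert_delta_change, delta_map

def apply_gesture(target, deltas, t=1.0):
    # Blend the stored deltas onto another model's state; t in [0, 1] scales the gesture.
    st_delta, vert_delta, delta_map = deltas
    target["st"] = target["st"] + t * st_delta
    target["verts"] = target["verts"] + t * vert_delta
    target["texture"] = np.clip(target["texture"].astype(np.float32) + t * delta_map, 0, 255)
    return target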

The gesture object also includes a scalar field variable storing the mapping between a feature space of the gesture and a model space of a model to permit transformation of the geometry and motion data. The scalerArray has an entry for each geometry vertex in the Gesture object. Each entry is a 3 dimensional vector that holds the change in scale for that vertex of the Feature level from its undeformed state to the deformed state. The scale is computed per vertex in Feature space by evaluating the scalar change in distance from that vertex to connected vertices. The scalar for a given Gesture vertex is computed by weighted interpolation of that vertex's position when mapped to the UV space of a polygon in the Feature Level. The shape and size of polygons in the feature level are chosen to match areas of similarly scaled movement. This was determined by analyzing visual flow of typical facial gestures. The above method is shown in greater detail in the pseudo-code shown in Appendix 1.
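The following sketch illustrates the idea of deriving a per-vertex scale change from edge-length ratios to connected vertices; it uses a single isotropic ratio per vertex rather than the 3-component vector described above, and the neighbour-list representation and the helper name per_vertex_scale_change are assumptions.

import numpy as np

def per_vertex_scale_change(verts_before, verts_after, neighbours):
    # verts_before, verts_after : (n, 3) vertex positions in the undeformed and deformed states
    # neighbours                : list of index lists, neighbours[i] = vertices connected to i
    # Returns an (n,) array of scale ratios.
    n = len(verts_before)
    scales = np.ones(n)
    for i in range(n):
        if not neighbours[i]:
            continue
        before = np.linalg.norm(verts_before[neighbours[i]] - verts_before[i], axis=1)
        after = np.linalg.norm(verts_after[neighbours[i]] - verts_after[i], axis=1)
        valid = before > 1e-9
        if valid.any():
            # mean ratio of deformed to undeformed edge length around vertex i
            scales[i] = float(np.mean(after[valid] / before[valid]))
    return scales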

Appendix 2 and Appendix 3 contain an exemplary work flow process and a sample pseudo-code algorithm, respectively. Appendix 2 illustrates an exemplary work flow process for automatically generating a 3D model in accordance with the invention, and Appendix 3 illustrates exemplary pseudo-code for performing the automatic gesture behaviour model in accordance with the invention.
The automatically generated model can incorporate built-in behaviour animation and interactivity. For example, for the human face, such expressions include gestures, mouth positions for lip syncing (visemes), and head movements. Such behaviours can be integrated with technologies such as automatic lip syncing, text-to-speech, natural language processing, and speech recognition, and can trigger or be triggered by user or data driven events. For example, real-time lip syncing of automatically generated models may be associated with audio tracks. In addition, real-time analysis of the audio spoken by an intelligent agent can be provided and synchronized head and facial gestures initiated to provide automatic, lifelike movements to accompany speech delivery.

Thus, virtual personas can be deployed to serve as intelligent agents that may be used as an interactive, responsive front-end to information contained within knowledge bases, customer resource management systems, and learning management systems, as well as entertainment applications and communications via chat, instant messaging, and e-mail. Now, examples of a gesture being generated from an image of a 3D model and then applied to another model in accordance with the invention will be described.

Figure 12 illustrates an example of a base 3D model for a first model, Kristen. The 3D
model shown in Figure 12 has been previously generated as described above using the 3D model generation process. Figure 13 illustrates a second 3D model generated as described above. These two models will be used to illustrate the automatic generation of a smile gesture from an existing model to generate a gesture object and then the application of that generated gesture object to another 3D model. Figure 14 shows an example of the first model in a neutral gesture while Figure 15 shows an example of the first model in a smile gesture. The smile gesture of the first model is then captured as described above. Figure 16 illustrates an example of the smile gesture map (the graphical version of the gesture object described above) that is generated from the first model based on the neutral gesture and the smile gesture. As described above, the gesture map abstracts the gesture behavior of the first model into a series of coloration changes, texture map changes and 3D vertex changes which can then be applied to any other 3D model that has texture maps and 3D vertices. Then, using this gesture map (which includes the variables described above), the gesture object may be applied to another model in accordance with the invention. In this manner, the automatic gesture generation process permits various gestures for a 3D model to be abstracted and then applied to other 3D models.

Figure 17 is an example of the feature space with both models overlaid over each other to illustrate that the feature spaces of the first and second models are consistent with each other. Now, the application of the gesture map (and therefore the gesture object) to another model will be described in more detail. In particular, Figure 18 illustrates the neutral gesture of the second model. Figure 19 illustrates the smile gesture (from the gesture map generated from the first model) applied to the second model to provide a smile gesture to that second model even when the second model does not actually show a smile.

While the above has been described with reference to a particular method for locating the landmark location points on the image and a particular method for generating gestures, those skilled in the art will recognize that other techniques may be used without departing from the invention as defined by the appended claims. For example, techniques such as pyramid transforms, which use a frequency analysis of the image by down sampling each level and analyzing the frequency differences at each level, can be used. Additionally, other techniques such as side sampling and image pyramid techniques can be used to process the image.
Furthermore, quadrature (low pass) filtering techniques may be used to increase the signal strength of the facial features, and fuzzy logic techniques may be used to identify the overall location of a face. The location of the landmarks may then be determined by a known corner finding algorithm.
void AddNeutralFacialGesture(LandmarkPositions positions, VeeperModel model)
{
    // update the virtual personality from the new facial landmark positions
    UpdateVeeperFromLandmarkPositions(model, positions);
    // synthesize the background texture to fill in the virtual personality
    SynthesizeBackgroundTexture(model);
    // create a facial gesture that animates vertex positions, texture coordinate positions, and the texture map
    model->baseGesture = MakeNewGesture(model->detail->verts, model->detail->stCoords, model->detail->texture);
    // build a scaler field from the scale changes in feature space; this allows us to go from a given individual's space to a common space
    model->baseGesture->scalerField = BuildFeatureScalerField(model->baseGesture->verts, model->featureSpace);
    // scale the behaviors in common space to the individual's space
    ScaleBehaviors(model->behaviors, model->baseGesture->scalerField);
}

facialGesture MakeFacialGesture(LandmarkPositions positions, VeeperModel model, Texture featureMasks)
{
    facialGesture newGesture = null;
    // update the virtual personality from the new facial landmark positions
    UpdateVeeperFromLandmarkPositions(model, positions);
    // make an array of delta changes in texture coordinate space; this is done by correlating a grid of points
    // from the detail texture to the baseGesture's texture; grid points are then sampled to determine the delta change for a texture coordinate
    stDeltaChange = PerformFieldCorrelation(model->detail->stCoords, model->baseGesture->texture, model->detail->texture, featureMasks);
    // move the texture coordinates back so the two maps will be aligned
    ApplySTDeltaChange(model->detail->stCoords, stDeltaChange);
    // resample the detail object from the photo to generate a texture aligned in coordinate space
    UpdateMapFromPhoto(model->detail, model->inputPhoto, model->detail->texture);
    // calculate the delta change for the 3d vertices of the detail model; use the baseGesture's scalerField to transform
    // the delta into common feature space; additionally, filter the verts by the featureMask, so we can filter for only a mouth change, eye change or facial change
    vertDeltaChange = MakeVertDeltaChange(model->baseGesture->verts, model->detail->verts, model->baseGesture->scalerField, featureMasks);
    // calculate the delta change in the texture; filter the delta by the featureMask
    deltaMap = MakeTextureDeltaChange(model->detail->texture, model->baseGesture->texture, featureMasks);
    // make a gesture object which encodes the changes in vertex position, texture coordinate position, and texture map
    newGesture = MakeNewGesture(vertDeltaChange, stDeltaChange, deltaMap);
    return(newGesture);
}

PlayFacialGesture(VeeperModel model, facialGesture theGesture, real time, boolean firstGesture)
{
    // reset the model if this is the first gesture
    if (firstGesture) {
        model->detail->verts = model->baseGesture->verts;
    }
    // transform the delta verts into individual feature space using scalerField, scale the delta verts by time, add to the detail verts
    AddVertDeltaChange(model->detail->verts, theGesture->verts, model->baseGesture->scalerField, time);
    // scale the delta change for the stCoords by time, add to the detail stCoords
    AddSTDeltaChange(model->detail->stCoords, theGesture->stCoords, time);
    // scale the delta change for the texture by time and add to the detail texture
    AddTextureChange(model->detail->texture, theGesture->texture, time);
}

Head bounds are found by integrating opaque alpha pixels out from a likely location for the head.
Eyes are found by looking for a highlight surrounded by a dark area of a predetermined size, within likely areas in head bounds.
Pseudo code for possible outer loop of constructing a Virtual Personality with Veepers:

tolerance FindLandMarks(Photo, LandMarks, Orientation, weightThreshold)
{
    SamplePoints samplePoints; AverageCenterOfHead averageCenterOfHead;
    HeadBoundRange headBoundRange; HeadBounds headBounds;
    ClearAlphaChannel(Photo);
    InitAverageCenterOfHead(Orientation, averageCenterOfHead);
    InitSamplePoints(Orientation, samplePoints);
    InitValidHeadBoundRange(Orientation, headBoundRange);
    SeedFillAlpha(Photo, Orientation, samplePoints);
    headBounds = IntegrateHeadBoundsFromAlpha(Photo, Orientation, averageCenterOfHead);
    weight = CalcHeadBoundWeight(Orientation, headBoundRange);
    if (weight > weightThreshold) {
        LandMarks.eyes = FindEyes(Photo, Orientation, headBounds);
        weight = CalcEyeTolerance(headBounds, LandMarks.eyes);
        if (weight > weightThreshold) {
            // ... here we continue to find other landmarks
        }
    }
    return(tolerance);
}
Photo MakeVeeper(theFile)
{
    Photo originalPhoto = LoadRasterFile(theFile);
    Photo texturePhoto = ResampleToPowerOfTwo(originalPhoto);
    LandMarks theLandmarks;
    Orientation bestOrient;
    weight bestTolerance;
    VeeperModel theModel;

    CreateOrientAlias(texturePhoto);
    toleranceTop = FindLandMarks(texturePhoto, theLandmarks, Up_Top, VALID_LANDMARK);
    toleranceRight = FindLandMarks(texturePhoto, theLandmarks, Up_Right, VALID_LANDMARK);
    toleranceLeft = FindLandMarks(texturePhoto, theLandmarks, Up_Left, VALID_LANDMARK);
    toleranceBottom = FindLandMarks(texturePhoto, theLandmarks, Up_Bottom, VALID_LANDMARK);

    bestOrient = Up_Top;
    bestTolerance = toleranceTop;
    if (bestTolerance < toleranceRight) {
        bestTolerance = toleranceRight;
        bestOrient = Up_Right;
    }
    if (bestTolerance < toleranceLeft) {
        bestTolerance = toleranceLeft;
        bestOrient = Up_Left;
    }
    if (bestTolerance < toleranceBottom) {
        bestTolerance = toleranceBottom;
        bestOrient = Up_Bottom;
    }

    bestTolerance = FindLandMarks(texturePhoto, theLandmarks, bestOrient, VALID_LANDMARK);
    OrientPhoto(texturePhoto, bestOrient);
    if (bestTolerance < GOOD_LANDMARK) {
        GetUILandmarks(texturePhoto, theLandmarks);
    }
    theModel = FindBestFitModel(theLandmarks, ModelDataBase);
    FitModelBounds(theModel, theLandmarks.bounds);
    FitModelFeature(theModel, theLandmarks.feature);
    FitModelEdge(theModel, theLandmarks.edge);
    ScaleModelBehaviorsBySpace(theModel, theModel.feature);
    theModel = MakeTurnkeyModel(theModel);
    SaveModel(theModel);
}

VEEPERS WORK FLOW, END USER

1. USER LOADS THE IMAGE.
2. RESAMPLE OR PAD THE IMAGE TO POWER OF TWO.
3. CREATE ALPHA CHANNEL FOR SEGMENTATION OF BACKGROUND AND CHARACTER.
4. USER SUPPLIES THE FOLLOWING INFORMATION:
   a. NAME
   b. SEX
   c. AGE
   d. WEIGHT
   e. ETHNICITY (OPTIONAL BUT WILL NARROW SEARCH)
   f. HAIR COLOR
   g. LENGTH OF HAIR / HAIR STYLE (SURVEY POPULAR STYLES FOR AGE GROUP/SEX)
      i. MEN
         1. STYLE OF BEARD
            a. CLEAN SHAVEN
            b. SHORT BEARD
            c. GOATEE
            d. MEDIUM LENGTH BEARD
            e. FULL LENGTH BEARD
            f. ZZ TOP
         2. VERY SHORT/BUZZ
         3. SHORT
         4. BUSINESS CUT
         5. MEDIUM LENGTH
         6. LONG
      ii. WOMEN
         1. VERY SHORT
         2. SHORT
         3. MEDIUM LENGTH
         4. LONG
         5. VERY LONG
   h. GLASSES (NEED TO REMOVE FOR PHOTO AND CHOOSE GLASSES)
   i. JEWELRY/PIERCINGS (CHOOSE FACIAL ORNAMENTS)
   j. HAT OR HEADGEAR (NEED TO REMOVE FOR PHOTO AND CHOOSE HATS)
5. USER SETS FACIAL LANDMARKS:
   a. CORNER OF EYES
   b. CORNERS OF NOSE (WHERE IT MEETS THE CHEEK AND UPPER LIP)
   c. CORNERS OF LIPS (RIGHT AND LEFT EDGES OF MOUTH)
   d. CENTER OF LIPS
   e. CENTER OF PUPIL
   f. SIZE OF IRIS
   g. RIGHT, LEFT, AND CENTER OF EYEBROWS
   h. BOTTOM OF CHIN
   i. TOP OF HEAD
   j. SIDES OF HEAD IN LINE WITH EYES
   k. TOP, BOTTOM AND CENTER OF EAR (OPTIONAL)
6. USE FACIAL DATA TO SEARCH FOR AND LOAD A HEAD MODEL THAT BEST FITS.
   a. ALLOW USER TO CHOOSE BETWEEN CLOSE SETS OF IMAGES.
7. USE FACIAL LANDMARKS TO DETERMINE:
   a. ORIENTATION OF HEAD MODEL.
   b. SCALE OF HEAD MODEL.
   c. START POSITION OF POINTS OF THE FEATURE GRID.
8. USER SETS EDGE LANDMARKS:
   a. EDGES OF EYES.
   b. EDGES OF THE MOUTH.
   c. EDGES OF THE NOSE.
   d. EDGES OF THE CHIN.
9. MAP USER PHOTO ON TO HEAD MODEL COORDINATES.
10. USE FEATURE GRID TO SCALE FACIAL BEHAVIORS.
11. DISPLAY RESULT TO USER.
12. SAVE MODEL AS RUNTIME FILES.

Claims (11)

CLAIMS:
1. A method for generating a gesture model, the method comprising:
receiving an image of a first object performing a gesture;
determining at least one movement associated with the gesture from the at least one movement of the first object to generate a gesture object wherein the gesture object further comprises a coloration change variable storing a change of coloration that occurs during the gesture, a two dimensional change variable storing a change of a surface that occurs during the gesture, and a three dimensional change variable storing a change of at least one vertex associated with the object that occurs during the gesture; and applying the gesture object to a three dimensional model of a second object to animate the second object using the gesture object.
2. The method of claim 1, further comprises generating a feature space into which the gesture is mapped during the gesture generation process.
3. The method of claim 2, wherein the determining the at least one movement further comprise determining a correlation between the feature space and the image of the first object.
4. The method of claim 2, wherein applying the gesture object comprises transforming at least one geometric vector and motion vector to and from the feature space.
5. The method of claim 2, wherein applying the gesture object comprises applying the change of coloration, the change of the surface, and the change of the at least one vertices from one model to another model using the feature space.
6. The method of claim 1, wherein the gesture object comprises a data structure configured to store:

a scalar field variable storing a mapping between a feature space of the gesture and a model space of a three dimensional model to enable transformation of geometry and motion data;
the coloration change variable;
the two dimensional change variable; and the three dimensional change variable, wherein the coloration change variable, the two dimensional change variable, and the three dimensional change variable enable the gesture to be applied to a three dimensional model of another object having a texture and vertices.
7. A computer implemented system for generating a gesture model, the computer implemented system comprising:
a three dimensional model generation module configured to receive an image of an object and generate a three dimensional deformable model of the object, the three dimensional model having a surface and a plurality of vertices; and a gesture generation module configured to generate a gesture object corresponding to a gesture of the object so that the gesture can be applied to a three dimensional deformable model of another object;
wherein the gesture object further comprises a coloration change variable storing a change of coloration on the three dimensional deformable model that occurs during the gesture, a two dimensional change variable storing a change of the surface of the three dimensional deformable model that occurs during the gesture, and a three dimensional change variable storing a change of the vertices of the three dimensional deformable model that occurs during the gesture.
8. The system of claim 7, wherein the gesture generation module is further configured to generate a feature space into which the gesture is mapped.
9. The system of claim 8, wherein the gesture generation module is further configured to transform the coloration change variable, the two dimensional change variable, and the three dimensional change variable to and from the feature space.
10. The system of claim 8, wherein the gesture generation module is further configured to apply changes in the coloration change variable, the two dimensional change variable, and the three dimensional change variable from one model to another model using the feature space.
11. The system of claim 7, wherein the gesture object comprises a data structure configured to store:
a scalar field variable storing a mapping between a feature space of the gesture and a model space of a three dimensional model to enable transformation of geometry and motion data;
the coloration change variable;
the two dimensional change variable; and the three dimensional change variable, wherein coloration change variable, the two dimensional change variable, and the three dimensional change variable enable the gesture to be applied to a three dimensional model of another object having a texture and vertices.
CA2690826A 2001-08-14 2002-08-14 Automatic 3d modeling system and method Expired - Fee Related CA2690826C (en)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US31238401P 2001-08-14 2001-08-14
US60/312,384 2001-08-14
US21911902A 2002-08-13 2002-08-13
US10/219,041 US7123263B2 (en) 2001-08-14 2002-08-13 Automatic 3D modeling system and method
US10/219,041 2002-08-13
US10/219,119 2002-08-13
CA2457839A CA2457839C (en) 2001-08-14 2002-08-14 Automatic 3d modeling system and method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CA2457839A Division CA2457839C (en) 2001-08-14 2002-08-14 Automatic 3d modeling system and method

Publications (2)

Publication Number Publication Date
CA2690826A1 CA2690826A1 (en) 2003-02-27
CA2690826C true CA2690826C (en) 2012-07-17

Family

ID=27396614

Family Applications (2)

Application Number Title Priority Date Filing Date
CA2690826A Expired - Fee Related CA2690826C (en) 2001-08-14 2002-08-14 Automatic 3d modeling system and method
CA2457839A Expired - Fee Related CA2457839C (en) 2001-08-14 2002-08-14 Automatic 3d modeling system and method

Family Applications After (1)

Application Number Title Priority Date Filing Date
CA2457839A Expired - Fee Related CA2457839C (en) 2001-08-14 2002-08-14 Automatic 3d modeling system and method

Country Status (6)

Country Link
EP (1) EP1425720A1 (en)
JP (3) JP2005523488A (en)
CN (1) CN1628327B (en)
CA (2) CA2690826C (en)
MX (1) MXPA04001429A (en)
WO (1) WO2003017206A1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2875043B1 (en) 2004-09-06 2007-02-09 Innothera Sa Lab DEVICE FOR ESTABLISHING A COMPLETE THREE-DIMENSIONAL REPRESENTATION OF A MEMBER OF A PATIENT FROM A REDUCED NUMBER OF MEASUREMENTS TAKEN ON THIS MEMBER
ES2284391B1 (en) * 2006-04-19 2008-09-16 Emotique, S.L. PROCEDURE FOR THE GENERATION OF SYNTHETIC ANIMATION IMAGES.
US20110298799A1 (en) * 2008-06-03 2011-12-08 Xid Technologies Pte Ltd Method for replacing objects in images
CN101609564B (en) * 2009-07-09 2011-06-15 杭州力孚信息科技有限公司 Method for manufacturing three-dimensional grid model by draft input
CN102496184B (en) * 2011-12-12 2013-07-31 南京大学 Increment three-dimensional reconstruction method based on bayes and facial model
CN103207745B (en) * 2012-01-16 2016-04-13 上海那里信息科技有限公司 Avatar interactive system and method
CN105321147B (en) * 2014-06-25 2019-04-12 腾讯科技(深圳)有限公司 The method and device of image procossing
WO2019049298A1 (en) * 2017-09-08 2019-03-14 株式会社Vrc 3d data system and 3d data processing method
US10586368B2 (en) 2017-10-26 2020-03-10 Snap Inc. Joint audio-video facial animation system
CN108062785A (en) * 2018-02-12 2018-05-22 北京奇虎科技有限公司 The processing method and processing device of face-image, computing device
CN111553983A (en) * 2020-03-27 2020-08-18 中铁十九局集团第三工程有限公司 Three-dimensional space modeling method, device, equipment and medium for reducing explosion site

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09305798A (en) * 1996-05-10 1997-11-28 Oki Electric Ind Co Ltd Image display device
JP2915846B2 (en) * 1996-06-28 1999-07-05 株式会社エイ・ティ・アール通信システム研究所 3D video creation device
US5978519A (en) * 1996-08-06 1999-11-02 Xerox Corporation Automatic image cropping
US6222553B1 (en) * 1997-08-04 2001-04-24 Pixar Animation Studios Hybrid subdivision in computer graphics
JPH11175223A (en) * 1997-12-11 1999-07-02 Alpine Electron Inc Animation preparing method, its device and storage medium
JPH11219422A (en) * 1998-02-02 1999-08-10 Hitachi Ltd Personal identification communication method by face
US6072496A (en) * 1998-06-08 2000-06-06 Microsoft Corporation Method and system for capturing and representing 3D geometry, color and shading of facial expressions and other animated objects
JP3639475B2 (en) * 1999-10-04 2005-04-20 シャープ株式会社 3D model generation apparatus, 3D model generation method, and recording medium on which 3D model generation program is recorded

Also Published As

Publication number Publication date
JP2011159329A (en) 2011-08-18
WO2003017206A1 (en) 2003-02-27
CN1628327B (en) 2010-05-26
CA2457839C (en) 2010-04-27
JP2005523488A (en) 2005-08-04
MXPA04001429A (en) 2004-06-03
CN1628327A (en) 2005-06-15
JP2008102972A (en) 2008-05-01
EP1425720A1 (en) 2004-06-09
CA2457839A1 (en) 2003-02-27
WO2003017206A9 (en) 2003-10-30
CA2690826A1 (en) 2003-02-27

Similar Documents

Publication Publication Date Title
US7355607B2 (en) Automatic 3D modeling system and method
US10169905B2 (en) Systems and methods for animating models from audio data
JP4865093B2 (en) Method and system for animating facial features and method and system for facial expression transformation
Noh et al. A survey of facial modeling and animation techniques
Pighin et al. Modeling and animating realistic faces from images
US11868515B2 (en) Generating textured polygon strip hair from strand-based hair for a virtual character
JP4932951B2 (en) Facial image processing method and system
JP2008102972A (en) Automatic 3d modeling system and method
KR100900823B1 (en) An efficient real-time skin wrinkle rendering method and apparatus in character animation
JP4842242B2 (en) Method and apparatus for real-time expression of skin wrinkles during character animation
CN114529640B (en) Moving picture generation method, moving picture generation device, computer equipment and storage medium
CN116229008B (en) Image processing method and device
AU2002323162A1 (en) Automatic 3D modeling system and method
CN117830584A (en) Digital human model realization method and device, electronic equipment and medium
Frédéric Pighin, Modeling and Animating Realistic Faces from Images
Lewis Siggraph 2005 course notes-Digital Face Cloning Audience Perception of Clone Realism

Legal Events

Date Code Title Description
EEER Examination request
MKLA Lapsed

Effective date: 20220301

MKLA Lapsed

Effective date: 20200831