WO2001037218A1 - Method and apparatus to control responsive action by a computer-generated character - Google Patents

Method and apparatus to control responsive action by a computer-generated character

Info

Publication number
WO2001037218A1
Authority
WO
WIPO (PCT)
Prior art keywords
character
image
deltaset
morph
vision
Prior art date
Application number
PCT/US1999/016553
Other languages
French (fr)
Inventor
Christopher D. Shaw
Orion Wilson
Marlon Veal
Original Assignee
Haptek, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Haptek, Inc. filed Critical Haptek, Inc.
Priority to EP99937371A priority Critical patent/EP1177528A1/en
Publication of WO2001037218A1 publication Critical patent/WO2001037218A1/en

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2213/00 Indexing scheme for animation
    • G06T2213/12 Rule based animation

Definitions

  • the system 900 includes a library or database of deltasets 931.
  • the library of deltasets 931 can be stored in the system memory 912 or any other memory device, such as a hard disc drive 917 coupled to bus 914 via a Small Computer System Interface (SCSI) host bus adapter 918 (see Fig. 8).
  • deltasets are variable arrays of position change values that can be applied to the vertices of a starting image.
  • the deltaset information is composed and cached in device 932 (e.g., processor 811 and system memory 812 of Fig. 8).
  • Both the starting and second images can then be displayed at display 920 (or any other output device), stored in memory, or sent to a file export (e.g., over the Internet).
  • Inputs to the system 900 of Fig. 9 include a variety of user controls 937, autonomous behavior control 938, and face tracker data input 939 which will be further described below.
  • Other inputs can come from other systems such as the so-called World Wide Web (WWW).
  • audio data can be supplied by audio data input device 940 and provided to the deltaset caching and composing device 932.
  • the neutral geometry 933 is based on the image of a person's head that has been captured using any of a variety of known methods (e.g., video, scanner, etc.).
  • the image data of the person's head is placed onto a polygonal model 1051.
  • the polygonal model comprises a plurality of vertices 1052 and connections 1053 that extend between the vertices.
  • Each polygon 1054 of the polygonal model is defined by three or more vertices 1052.
  • an example is discussed below using simple polygons (e.g., a square, a triangle, a rectangle, and a circle).
  • Each polygon has an identifiable shape. For example, looking at Fig. 11a, a square polygon is shown having eight vertices (points 1100 to 1107) in two-dimensional space. By moving individual vertices, the square polygon can be converted into a number of other polygon shapes such as a rectangle (Fig. 11b), a circle (Fig. 11c) and a triangle (Fig. 11d; where vertices 1100, 1101, and 1107 all occupy the same point in two-dimensional space).
  • a deltaset is a set of steps that are taken to move each vertex (1100 to 1107) from a starting polygon to a target or destination polygon. For example, the steps taken from the square polygon of Fig. 11a to the rectangle polygon of Fig. 11b constitute one such deltaset.
  • the deltaset defines the path taken by each vertex in transforming the starting polygon to the destination polygon.
  • the deltaset defines the difference in position of corresponding vertices in the starting and target polygons.
  • deltasets can be created for the transformation of the square polygon of Fig. 11a to the circle polygon of Fig. 11c and of the square polygon of Fig. 11a to the triangle polygon of Fig. 11d.
  • the deltaset is created by transforming a starting polygon shape into another; however, one skilled in the art will appreciate that a deltaset can be created that is not based on specific starting and target shapes, but created in the abstract. Moreover, once a deltaset is created, it can be used on any starting shape to create a new shape. For example, the deltaset used to transform the square polygon of Fig. 11a to the rectangle polygon of Fig. 11b (for convenience, referred to as Deltaset1) can be used on the circle polygon of Fig. 11c. Thus, the circle polygon of Fig. 11c becomes the starting shape and, after applying Deltaset1, would become the ellipse polygon of Fig. 11e (i.e., the target shape).
  • Deltasets can also be combined (e.g., added together) to create new deltasets.
  • Deltaset1 and Deltaset2 (i.e., the transform from the square of Fig. 11a to the circle of Fig. 11c)
  • Deltaset3 (i.e., the transform from the square of Fig. 11a to the triangle of Fig. 11d)
  • Deltaset4: applying Deltaset4 to the starting square polygon of Fig. 11a, the target shape of Fig. 11f is achieved.
  • the starting polygon, destination polygon, and deltaset must have the same number of vertices. Additional algorithms would be necessary to transform between shapes or objects having a differing number of vertices.
  • An additional method for moving vertices can be derived from the deltaset method wherein the motion of the points of a deltaset is interpolated such that a continuous field of motion is created.
  • deltazones can be used to morph images irrespective of their particular triangle strip set because the one-to-one correspondence between movements and vertices upon which deltasets rely is replaced by a dynamical system of motion which operates on any number of vertices by moving them in accordance with their original locations, as sketched below.
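  • The patent text does not spell out the deltazone interpolation itself; the following C sketch illustrates one plausible scheme (inverse-distance weighting of control-point deltas), with all types and names being assumptions rather than the patent's own code.

        typedef struct { float x, y, z; } vec3;

        /* Move one vertex according to a continuous motion field defined by control
         * points (their original locations) and their deltas. Inverse-distance
         * weighting is an assumed interpolation, not taken from the patent. */
        vec3 deltaZone_MoveVertex(vec3 v, const vec3 *ctrlPos, const vec3 *ctrlDelta,
                                  int numCtrl, float amount)
        {
            vec3 d = {0.0f, 0.0f, 0.0f};
            float wsum = 0.0f;
            for (int i = 0; i < numCtrl; i++) {
                float dx = v.x - ctrlPos[i].x;
                float dy = v.y - ctrlPos[i].y;
                float dz = v.z - ctrlPos[i].z;
                float w = 1.0f / (dx * dx + dy * dy + dz * dz + 1e-6f);  /* nearer control points dominate */
                d.x += w * ctrlDelta[i].x;
                d.y += w * ctrlDelta[i].y;
                d.z += w * ctrlDelta[i].z;
                wsum += w;
            }
            v.x += amount * d.x / wsum;
            v.y += amount * d.y / wsum;
            v.z += amount * d.z / wsum;
            return v;
        }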
  • The datatype structure for a deltaset (deltaSet_Type) is similar to that for a basic shape object, and the pseudocode is shown in Table I.
  • the deltaSet_Type and shape_Type variables each include an array of [numpoints] values. Each value is the position of a vertex for the shape_Type variable and a delta value for the deltaSet_Type variable.
  • deltaSet_Calc(deltaSet_Type *dset, shape_Type *src, shape_Type *dest) { int i; int numpts; dataPoint_Type delta;
  • numpts = src->numPoints; deltaSet_SetNumPts(dset, numpts);
  • delta is used to temporarily store the difference in position between the source (src) and destination (dest) for each of the vertices in the shape. Each delta value is then stored in a deltaset array (dset). Once a deltaset array is created, it can be easily applied to any starting shape having an equal number of vertices to form a new target shape.
  • deltaSet_Apply(deltaSet_Type *dset, shape_Type *dest, float amount)
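  • For illustration, a minimal C sketch of the calculate and apply routines named above follows; the struct layouts and the use of realloc are assumptions, since the patent's Tables are not reproduced here in full.

        #include <stdlib.h>

        typedef struct { float x, y, z; } dataPoint_Type;
        typedef struct { int numPoints; dataPoint_Type *points; } shape_Type;
        typedef struct { int numPoints; dataPoint_Type *deltas; } deltaSet_Type;

        /* Resize a deltaset; assumes dset->deltas starts out NULL or previously allocated. */
        void deltaSet_SetNumPts(deltaSet_Type *dset, int numpts)
        {
            dset->numPoints = numpts;
            dset->deltas = realloc(dset->deltas, numpts * sizeof(dataPoint_Type));
        }

        /* Store the per-vertex difference between a source and a destination shape. */
        void deltaSet_Calc(deltaSet_Type *dset, shape_Type *src, shape_Type *dest)
        {
            int numpts = src->numPoints;
            deltaSet_SetNumPts(dset, numpts);
            for (int i = 0; i < numpts; i++) {
                dset->deltas[i].x = dest->points[i].x - src->points[i].x;
                dset->deltas[i].y = dest->points[i].y - src->points[i].y;
                dset->deltas[i].z = dest->points[i].z - src->points[i].z;
            }
        }

        /* Add a scaled deltaset to a shape in place; amount = 1.0 gives the full morph,
         * and repeated calls with different deltasets accumulate additively. */
        void deltaSet_Apply(deltaSet_Type *dset, shape_Type *dest, float amount)
        {
            for (int i = 0; i < dset->numPoints; i++) {
                dest->points[i].x += amount * dset->deltas[i].x;
                dest->points[i].y += amount * dset->deltas[i].y;
                dest->points[i].z += amount * dset->deltas[i].z;
            }
        }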
  • the pseudo-code of Table IV shows two utility routines that are used to create a new, blank deltaset and to set the number of datapoints.
  • deltaSet_Type rect_dset, circ_dset;
  • shape_SetNumPoints(&square, 8); shape_SetNumPoints(&rectangle, 8); shape_SetNumPoints(&circle, 8);
  • a datapoint is defined as a two-dimensional vector and the square, rectangle, and circle shapes are defined as eight points with abscissa and ordinate values.
  • Deltasets are then calculated for the transition from square to rectangle and from square to circle.
  • the resulting deltasets (rect_dset and circ_dset) represent differences between abscissa and ordinate values of the respective starting and target images.
  • the deltasets can then be applied to a starting shape (in this example, the starting image, newshape, is set to the square shape of Fig. 11a).
  • the rect_dset deltaset is applied to the square shape to form an intermediate shape, and then the circ_dset deltaset is applied to this intermediate shape to form the destination shape that is shown in Fig. 11e.
  • a deltaset representing a transformation from the square shape of Fig. 11a to the triangle shape of Fig. 11d is created and applied to the ellipse shape shown in Fig. 11e.
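  • A usage sketch of this square/rectangle/circle example, reusing the deltaSet_Calc and deltaSet_Apply sketch above, might look as follows; shape_Copy is an assumed helper, and the shapes are assumed to already hold the eight vertex positions described earlier.

        void shape_Copy(shape_Type *dst, const shape_Type *src);   /* assumed helper */

        deltaSet_Type rect_dset = {0}, circ_dset = {0};
        shape_Type square, rectangle, circle, newshape;             /* each with 8 vertices */

        void buildEllipse(void)
        {
            deltaSet_Calc(&rect_dset, &square, &rectangle);   /* square -> rectangle (Fig. 11a -> 11b) */
            deltaSet_Calc(&circ_dset, &square, &circle);      /* square -> circle    (Fig. 11a -> 11c) */

            shape_Copy(&newshape, &square);                   /* starting image: the square            */
            deltaSet_Apply(&rect_dset, &newshape, 1.0f);      /* intermediate shape                    */
            deltaSet_Apply(&circ_dset, &newshape, 1.0f);      /* additive result: the ellipse of Fig. 11e */
        }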
  • the deltasets example, above, can be easily extended to a three-dimensional representation.
  • the example can also be expanded to more intricate and complex applications such as in three-dimensional space and facial animation.
  • several additional features can be added. For example, certain motions of the face are limited to certain defined areas, such as blinking of the eyes. Accordingly, a deltaset for an entire face would be mostly 0's (indicating no change) except for the eyes and eyelids, thus isolating these areas for change.
  • the deltaset datatype can be changed so that only nonzero values are stored. Thus, during the execution of the deltaSet_Apply routine, only the points that change are acted upon, rather than every point in the graphical representation.
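  • A sketch of such a sparse variant is shown below; the field names are assumptions, and it reuses the dataPoint_Type and shape_Type declarations from the earlier sketch.

        /* Only vertices that actually move are stored, so a localized change such as a
         * blink touches only the eye and eyelid points. */
        typedef struct {
            int             numEntries;  /* number of moving vertices            */
            int            *index;       /* vertex index for each stored delta   */
            dataPoint_Type *deltas;      /* nonzero displacement for that vertex */
        } sparseDeltaSet_Type;

        void sparseDeltaSet_Apply(const sparseDeltaSet_Type *dset, shape_Type *dest, float amount)
        {
            for (int i = 0; i < dset->numEntries; i++) {
                int v = dset->index[i];
                dest->points[v].x += amount * dset->deltas[i].x;
                dest->points[v].y += amount * dset->deltas[i].y;
                dest->points[v].z += amount * dset->deltas[i].z;
            }
        }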
  • An embodiment of facial animation is described below with reference to the pseudocode example of Table VI.
  • shape_Type neutralFace, overallMorphFace, blinkFace, emoteFaces[], speakFaces[], newShapeFace; deltaSet_Type overall_dset, blink_dset, emote_dsets[], speak_dsets[];
  • the animated face image comprises three-dimensional datapoints.
  • "NeutralFace” is a starting image that will be changed based on one or more deltasets.
  • the neutralFace image is shown in Fig. 12a with eyes looking straight ahead and no expression.
  • "OverallMorphFace” is a different face from NeutralFace.
  • OverallMorphFace is in the image of a cat shown in Fig. 12b.
  • a face showing a completed facial movement is "blinkFace” which shows the same face as NeutralFace but with the eyes closed (see Fig. 12c).
  • "EmoteFaces” is an a ⁇ ay of the neutralFace augmented to show one or more emotions. For example, Fig.
  • Fig. 12d shows the neutralFace emoting happiness
  • Fig. 12e shows neutralFace emoting anger
  • "SpeakFaces” is an array of faces showing expressions of different phonemes, a phoneme, or viseme, is a speech syllable used to form spoken words (e.g., the "oo", “ae”, “1", and “m” sounds).
  • Fig. 12f shows neutralFace expressing the phoneme "oo”.
  • the amount of transformation or morphing can be controlled by multiplication or multiplier values: overallMorphAmount, blinkAmount, emoteAmounts[], and speakAmounts[].
  • if blinkAmount is set to 1.0, applying a deltaset for blinking to neutralFace of Fig. 12a will achieve the face of Fig. 12c (i.e., 100% of the blink is applied).
  • Numbers less than or greater than 1.0 can be selected for these variables.
  • Deltasets are then created for transforming the neutralFace image.
  • deltaset overaU_dset is created for the changes between neutralFace (Fig. 12a) and OverallMorphFace (Fig. 12b);
  • deltaset blink_dset is created for the changes between neutralFace (Fig. 12a) and blinkFace (Fig. 12c);
  • deltasets emote_dsets[] are created between neutralFace (Fig. 12a) and each emotion expression image (e.g., the "happy" emoteFace[] of Fig. 12d and the "angry" emoteFace[] of Fig. 12e);
  • deltasets speak_dsets[] are created between neutralFace (Fig. 12a) and each phoneme expression image (e.g., the "oo" speakFace[] of Fig. 12f).
  • the amounts for each deltaset transformation are calculated (e.g., the values for overallMorphAmount, blinkAmount, emoteAmounts[], and speakAmounts[]). For the emoteAmounts[] and speakAmounts[] arrays, these values are mostly zero.
  • the new facial image to be created is stored in newShapeFace and is originally set to the neutralFace image. Then the deltasets that were calculated above are applied to the newShapeFace in the amounts set in the transformation variables calculated above.
  • overallMorphAmount is set to 0.5 (i.e., halfway between neutralFace and OverallMorphFace); blinkAmount is set to 1.0 (i.e., full blink - eyes closed); emoteAmount[] for "happy" is set to 1.0, while all other emoteAmount[] values are set to 0; and speakAmount[] for the phoneme "oo" is set to 1.0 while all other speakAmount[] values are set to 0.
  • The resulting image based on these variables is shown in Fig. 12g.
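  • A sketch of composing the face of Fig. 12g from these variables follows; it assumes the variables of the pseudocode above are in scope and uses deltaSet_Apply from the earlier sketch, with shape_Copy, numEmotes, and numSpeaks as assumed helpers and counts.

        void shape_Copy(shape_Type *dst, const shape_Type *src);   /* assumed helper */

        void composeFace(void)
        {
            shape_Copy(&newShapeFace, &neutralFace);                           /* start from the neutral face */
            deltaSet_Apply(&overall_dset, &newShapeFace, overallMorphAmount);  /* 0.5: halfway to the cat     */
            deltaSet_Apply(&blink_dset, &newShapeFace, blinkAmount);           /* 1.0: eyes fully closed      */
            for (int i = 0; i < numEmotes; i++)                                /* only "happy" is 1.0         */
                deltaSet_Apply(&emote_dsets[i], &newShapeFace, emoteAmounts[i]);
            for (int i = 0; i < numSpeaks; i++)                                /* only the "oo" phoneme is 1.0 */
                deltaSet_Apply(&speak_dsets[i], &newShapeFace, speakAmounts[i]);
        }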
  • the deltasets that have been created can now be applied to another starting image (i.e., an image other than neutralFace shown in Fig. 12a) without recalculation.
  • a deltaset can be created between neutralFace and OverallMorphFace which signifies the changes between a male human face (shown in Fig. 12a) and the face of a cat (shown in Fig. 12b).
  • In Fig. 14a, a neutral, male human face is shown without application of this deltaset.
  • Fig. 14b shows the effects of the application of this deltaset
  • The underlying polygonal models for Figs. 14a and 14b are shown in Figs. 10a and 10b, respectively. As seen in Figs. 10a and 10b, vertices of the first image are shown to move to different positions in the destination image. Referring back to Figs. 14a and 14b, one skilled in the art will appreciate that the color of each pixel can also change in accordance with a deltaset storing the difference in color for each pixel in the human and cat images of these figures.
  • the deltaset described above can be applied to a neutral, female human face (see Fig. 14c) to form a new destination image (see Fig. 14d).
  • the variables can be input using graphical sliders shown in Figs. 13a-d.
  • a first deltaset represents the difference between a starting image with lips in a first position and a target image with lips in a second, higher position.
  • a second deltaset represents the difference between a starting image with the jaw in a first position and a target image with the jaw in a second, jutted-out position.
  • a third deltaset represents the difference between a starting image with relatively smooth skin and a target image with old (i.e., heavily textured) skin.
  • Referring to Figs. 13a-d, the amount each of these first, second, and third deltasets is applied to the neutral image of Fig. 13a is determined by the placement of one or more sliders 1301-1303.
  • when a slider is in the central position, the deltaset is not applied at all (i.e., the deltaset multiplied by 0.0 is applied to the image).
  • if the slider is placed to the right, the deltaset multiplied by 1.0 is applied to the image, and if it is placed to the left, the deltaset multiplied by -1.0 is applied to the image.
  • In Fig. 13a, sliders 1301-03 are in a central position.
  • slider 1301 is moved (e.g., with a mouse, not shown) to the right, causing the first deltaset (multiplied by 1.0) to be applied to the neutral image of Fig. 13a (thus, the lips are moved up some distance).
  • slider 1302 is moved to the left, and the second deltaset described above (multiplied by -1.0) is applied to the image of Fig. 13b (thus, the jaw is recessed).
  • slider 1303 is moved to the right, causing the third deltaset (multiplied by 1.0) to be applied to the image of Fig. 13c.
  • the sliders 1301-03 can have intermediate values between -1.0 and 1.0 or can have values beyond this range.
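  • A small sketch of mapping a slider position to a multiplication value is given below; the slider range handling and the function name are assumptions about the user-interface layer, not part of the patent.

        /* center -> 0.0, far right -> +1.0, far left -> -1.0 */
        float sliderToAmount(int sliderPos, int sliderMin, int sliderMax)
        {
            float mid = 0.5f * (float)(sliderMin + sliderMax);
            return ((float)sliderPos - mid) / (0.5f * (float)(sliderMax - sliderMin));
        }

        /* e.g., deltaSet_Apply(&lips_dset, &face, sliderToAmount(pos1301, 0, 100));
         * where lips_dset, face, and pos1301 are hypothetical names. */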
  • a communication system is shown.
  • a first component (such as server 1510) is coupled via a transmission medium 1509 to a second component (such as client 1511 coupled to a display 1512).
  • the transmission medium 1509 is the so-called Internet system that has a varying, but limited bandwidth.
  • the server 1510 and client 1511 are computer systems similar to system 810 of Fig. 8.
  • a first image (e.g., a person's face) is transmitted over the transmission medium 1509 from the server 1510 to the client, as well as any desired deltasets (as described above). Some code may also be sent, operating as described herein.
  • the image and deltasets can be stored at the client 1511 and the image can be displayed at display 1512.
  • For the server 1510 to change the image at the client 1511, an entire new image need not be sent. Rather, the multiplication values for the deltasets (e.g., the values controlled by sliders 1301-03 in Fig. 13) can be sent over the transmission medium 1509 to cause the desired change to the image at display 1512.
  • the system of Fig. 15 can be used as a video phone system where the original image that is sent is that of the speaking party at the server 1510 over the transmission medium 1509 (e.g., plain old telephone system (POTS)). Speech by the user at the server 1510 can be converted into phonemes that are then converted into multiplication values that are transmitted over the transmission medium 1509 with the voice signal to facilitate the "mouthing" of words at the client 1511.
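  • The per-frame payload suggested by this arrangement can be very small; the sketch below illustrates the idea with an assumed struct and an assumed transport callback (neither is specified in the patent).

        /* Instead of a new image, only the multiplication values for the deltasets
         * (plus the voice signal, not shown) cross the link each frame. */
        typedef struct {
            float overallMorphAmount;
            float blinkAmount;
            float emoteAmounts[8];    /* assumed fixed-size emotion set */
            float speakAmounts[16];   /* assumed fixed-size phoneme set */
        } frameUpdate_Type;           /* on the order of 100 bytes      */

        void sendFrameUpdate(const frameUpdate_Type *u,
                             void (*send_bytes)(const void *buf, int len))
        {
            send_bytes(u, (int)sizeof(*u));   /* send_bytes stands in for the actual transport */
        }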
  • a graphical image of a human can be made to express emotions by applying a deltaset to a neutral, starting image of the human. If the expression of emotions is autonomous, the computer graphical image of the human will seem more life-like. It could be concluded that humans fit into two categories or extremes: one that represents a person who is emotionally unpredictable (i.e., expresses emotions randomly), such as an infant, perhaps; and one that has preset reactions to every stimulation. According to an embodiment of the present invention, an "emotional state space" is created that includes a number of axes, each corresponding to one emotion.
  • element 937 provides input for changing the neutral image based on the expression of emotions.
  • An example of pseudo-code for the expression of emotions is shown in Table VIII. In this pseudo-code, two emotions are selected: one that is to be expressed and one that is currently fading from expression.
  • float emoteAmounts[]; int emoteNum; // emoteNum is the number of emotions in the library
  • nextAmount = select a new random amount of emotion;
  • objectReactionLevel[i] = metric which incorporates the object's visibility, speed, speed towards viewer, inherent emotional reactivity (how exciting it is), and distance to center of vision;
  • mainObject = index of largest value in objectReactionLevel;
  • currAmount = nextAmount;
  • emoteAmounts[nextAmount] = currAmount;
  • emoteAmounts[] is an array of values for the current expression of one of "emoteNum" emotions.
  • a value is set (e.g., between -1.0 and 1.0) to indicate the current state of the graphical image (e.g., Fig. 12d shows neutralFace emoting "happy" with a value of 1.0).
  • the nextEmote variable stores the level of the next emotion to be expressed.
  • the lastEmote variable stores the level of the emotion that is currently being expressed, and is also fading away.
  • the number of seconds for this emotion to fade to 0.0 is stored in the variable decaySecs.
  • the number of seconds before the next emotion is to be expressed after the current emotion amount goes to 0.0 is also stored.
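  • As a sketch, the fading of the current emotion toward 0.0 over decaySecs could be stepped per frame as below; linear decay is an assumption, since the text only states that the emotion fades.

        /* Move currAmount toward 0.0; a full-strength (1.0) emotion reaches zero
         * after decaySecs seconds of accumulated frame time dt. */
        float fadeEmotion(float currAmount, float decaySecs, float dt)
        {
            float step = dt / decaySecs;
            if (currAmount > 0.0f) {
                currAmount -= step;
                if (currAmount < 0.0f) currAmount = 0.0f;
            } else {
                currAmount += step;
                if (currAmount > 0.0f) currAmount = 0.0f;
            }
            return currAmount;
        }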
  • the objects that are around the graphic image of the person are analyzed to determine which object is of most interest (e.g., by assigning weighted values based on the object's visibility, speed, speed towards the graphical image of the person, the inherent emotional reactivity of the object, and its distance to center of vision for the graphic image of the person).
  • Each object has a data structure that includes a predefined emotion, a degree of reactivity, and a position. For example, a gun object would elicit a "fear" emotion with a high degree of reactivity depending on how close it is (i.e., distance) to the person.
  • a nextEmotion and a nextAmount are selected based on the object, and the random numbers referenced above determine whether that nextEmotion is to be expressed by the human image.
  • the human image expresses emotions that are more lifelike in that they are somewhat random, yet can occur in response to specific stimuli.
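  • The object-driven selection outlined above might be sketched as follows; the exact weighting of the reaction metric is an assumption, since the text only lists the factors that the metric incorporates.

        typedef struct {
            int   emotionIndex;       /* predefined emotion for this object (e.g., "fear" for a gun) */
            float reactivity;         /* inherent emotional reactivity                               */
            float visibility;
            float speed;
            float speedTowardViewer;
            float distToCenterOfVision;
        } envObject_Type;

        /* Return the index of the object with the largest objectReactionLevel. */
        int selectMainObject(const envObject_Type *obj, int numObjects, float *levelOut)
        {
            int mainObject = 0;
            float best = -1.0f;
            for (int i = 0; i < numObjects; i++) {
                /* combine the listed factors; the weights here are assumed */
                float level = obj[i].reactivity *
                              (obj[i].visibility + obj[i].speed + obj[i].speedTowardViewer) /
                              (1.0f + obj[i].distToCenterOfVision);
                if (level > best) { best = level; mainObject = i; }
            }
            if (levelOut) *levelOut = best;
            return mainObject;
        }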
  • an input device 830 is provided for the input of data for the creation of graphic images to be output to display 820.
  • the input device 830 can be a variety of components including a video camera, a magnetic tracker monitor, etc.
  • selected points are tracked on a person's face.
  • These devices output a stream of information that is commensurate with the coordinates of a number of select locations on a person's face as they move (see element 939 in Fig. 9). For example, six locations around the mouth, one on each eyelid, one on each eyebrow, and one on each cheek can all be tracked and output to the computer system of Fig. 8.
  • a neutral three-dimensional model of a person is created as described above.
  • a test subject (e.g., a person) wears a set of markers on his/her face (as described above).
  • for each marker, three three-dimensional model faces are created, one for each 3D axis (e.g., the x, y, and z axes).
  • Each of these models is the same as the neutral model except that the specific marker is moved a known distance (e.g., one inch or other unit) along one of the axes.
  • For each marker, there is a contorted version of the neutral image where the marker is moved one unit only along the x-axis; a second image where the marker is moved one unit only along the y-axis; and a third image where the marker is moved one unit only along the z-axis.
  • Deltasets are then created between the neutral image and each of the three contorted versions for each marker.
  • the input stream of marker positions is received from the input device 830.
  • the neutral image is then modified with the appropriate deltaset(s) rather than directly with the input positions. If marker data is only in two dimensions, then only two corresponding distorted models are needed (and only two deltasets are created for that marker). Movement of one marker can influence the movement of other points in the neutral model (to mimic real-life or as desired by the user). Also, the movement of a marker in one axis may distort the model in more than one axis (e.g., movement of the marker at the left eyebrow in a vertical direction may have vertical and horizontal effects on the model).
  • An example of pseudocode for implementing the input of marker positions is shown in Table IX.
  • DeltaSet markerDeltaSets[numMarkers][numDimensions];
  • // numMarkers is the number of discrete locations being tracked on the source face (typically 6-14).
  • // markerDisplacements is an array of vectors, with one vector per marker.
  • // markerDeltaSets is a 2D array of DeltaSets of size [numMarkers][numDimensions].
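  • Driving the neutral model from a frame of tracked marker data, as described above, can be sketched as follows; it reuses the deltaset sketch given earlier, with shape_Copy as an assumed helper and the marker count fixed only for illustration.

        void shape_Copy(shape_Type *dst, const shape_Type *src);   /* assumed helper */

        enum { numMarkers = 10, numDimensions = 3 };   /* "typically 6-14" markers; x, y, z axes */

        void applyMarkerFrame(shape_Type *newFace, const shape_Type *neutralFace,
                              deltaSet_Type markerDeltaSets[numMarkers][numDimensions],
                              float markerDisplacements[numMarkers][numDimensions])
        {
            shape_Copy(newFace, neutralFace);   /* start from the neutral model */
            for (int m = 0; m < numMarkers; m++)
                for (int d = 0; d < numDimensions; d++)
                    /* each unit-displacement deltaset is scaled by the measured
                     * displacement of marker m along axis d */
                    deltaSet_Apply(&markerDeltaSets[m][d], newFace,
                                   markerDisplacements[m][d]);
        }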
  • a computer-generated character (or characters) in a multidimensional, computer-generated environment can be automatically made to seem aware of objects in that environment.
  • This apparent "awareness" of the character includes the ability to "look around" for objects in the environment, to turn its eyes, head and the remainder of the body to "focus" on an object once it enters the character's "field of vision", and to create an apparent emotional response to that object. All of this behavior can be automatically created in a computer-generated character using the method described below.
  • the computer-generated character is assigned a list of other objects (or characters) in the environment that the character can be "aware" of.
  • the list could include such objects as a flower, a gun, a chair, etc.
  • This list can also contain information describing how the character is to react emotionally to the object and what factors are to be used for determining the importance of this object relative to the rest of the list. Virtually any variable can be used to determine an importance parameter, for example, proximity, velocity, likeability, etc.
  • the character's "field of vision" is specified to be a pyramid or cone-shaped region radiating from the character's eyes in a direction from the pupils. As used herein, this region will be referred to as the character's "view cone".
  • the environmental position of this view cone is calculated from the position of the character's feet (or base) through the use of user-defined rotation limits and a series of transformations applied to a hierarchical structure representing the person's skeleton.
  • the character is made to track the object with the objective being to modify the eye and body orientation so that the object is centered within the view cone.
  • the eye can initiate the tracking using a nonlinear velocity profile to change its orientation.
  • the velocity profile can have a "bell" shape where the eye can move toward the object slowly at first, increasing speed to a maximum, then reducing speed to zero (when the object is centered in the cone).
  • Additional joints are then gradually included in an additive process using feedback from the eye (difference between the eye's current position and the ideal position in the center of the view cone). For example, as the view cone moves closer to the object, the head of the character can begin to move toward the object (followed by the remainder of the body, if desired).
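  • The eye-tracking step with a bell-shaped velocity profile might be sketched as below; the sine-based profile, the small minimum speed, and the single-angle model are assumptions made for illustration.

        #include <math.h>

        #define PI_F 3.14159265f

        /* progress: 0.0 at the start of the turn, 1.0 when the object is centered. */
        static float bellVelocity(float progress, float peakSpeed)
        {
            float v = peakSpeed * sinf(PI_F * progress);   /* slow -> peak -> slow */
            float minSpeed = 0.05f * peakSpeed;            /* keep a small nonzero speed at the ends */
            return v > minSpeed ? v : minSpeed;
        }

        /* Advance the eye angle (radians) one frame toward targetAngle, using the
         * difference from the center of the view cone as feedback. */
        float trackStep(float eyeAngle, float targetAngle, float startAngle,
                        float dt, float peakSpeed)
        {
            float total = targetAngle - startAngle;
            if (fabsf(total) < 1e-6f)
                return targetAngle;
            float progress = (eyeAngle - startAngle) / total;
            float step = bellVelocity(progress, peakSpeed) * dt;
            if (fabsf(targetAngle - eyeAngle) <= step)
                return targetAngle;                        /* object centered in the view cone */
            return eyeAngle + (total > 0.0f ? step : -step);
        }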

Abstract

A computer-generated character is described having the capability to respond to one or more present object images (933). In one embodiment, the character has a field of vision (e.g., a cone emanating from the character's eyes). Thus, the character will respond only to object images that are in the field of vision. The character's responses can be, for example, movement (i.e., orienting the character's eyes or body toward the identified object image) or displaying emotion (e.g., showing surprise or sadness) (935). If several object images are in the field of vision, an importance parameter (937) can be assigned to each object image so that the character responds to the image having the highest such parameter. The importance parameter can be, for example, proportional to the proximity of the object image to the character or proportional to the velocity of the object image.

Description

Method And Apparatus To Control Responsive Action By A Computer-Generated Character
Related Applications
This application claims priority to the extent available under 35 U.S.C. § 119(e)(1) to U.S. Provisional Application No. 065,403, filed November 13, 1997. This application is also a continuation-in-part of U.S. Patent Application No. 08/882,721 filed on June 25, 1997.
Field of the Invention
The present invention pertains to automated methods and apparatuses for the controlling and transforming of two- and three-dimensional images. More particularly, the present invention relates to methods and apparatuses for changing the elements of image through the use of one or more sets of modification data in real time.
Copyright Notice
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
Background of the Invention
Referring to Fig. 8, a computer system that is known in the art is shown. The computer system 810 includes a system unit having a processor 811, such as a Pentium® processor manufactured by Intel Corporation, Santa Clara, California. The processor is coupled to system memory 812 (e.g., Random Access Memory (RAM)) via a bridge circuit 813. The bridge circuit 813 couples the processor 811 and system memory 812 to a bus 814, such as one operated according to the Peripheral Component Interconnect standard (Version 2.1, 1995, PCI Special Interest Group, Portland, Oregon). The system unit 810 also includes a graphics adapter 815 coupled to the bus 814 which converts data signals from the bus into information for output at a display 820, such as a cathode ray tube (CRT) display, active matrix display, etc. Using the computer system of Fig. 8, a graphical image can be displayed at display 820. The graphical image can be created internally to the computer system 810 or can be input via an input device 830 (such as a scanner, video camera, digital camera, etc.). As is known in the art, a graphical image is stored as a number of two-dimensional picture elements or "pixels," each of which can be displayed.
In the current art, graphical images (e.g., of a person's face) can be changed by allowing the user to modify a graphical image by "moving" (e.g., with a cursor movement device such as a mouse) the two-dimensional location of one or more pixels ( For example: Adobe Photoshop Version 3.0.5 (Adobe Systems, Inc., San Jose, California)). In doing so, the other pixels around the one that is being moved are filled in with new data or other pixel data from the graphical image. For example, the graphical image of the person's face can be modified using this product by making the person's nose larger or smaller. This two-dimensional phenomenon is analogous to stretching and warping a photograph printed on a "rubber sheet". In the Kai's Power Goo product by MetaTools, Inc. (Carpinteria, California), photographic distortions can be performed in "real time" by the operator's "clicking and dragging" with a mouse across the surface of a photo displayed on the computer screen. The operator can see the photograph stretch as the mouse is moved. This procedure covers only two-dimensional art and does not permit any sophisticated character animation such as speech or emotion. In the current art, the gradual change of the shape of one image into that of another as seen in film and video is called a "morph". Current morphs are created by an operator who instructs a computer to distort the shape of a specific starting image into the shape of a specific target image. Morphing programs typically work by allowing the operator to select points on the outline of the specific starting image and then to reassign each of these points to a new location, thereby defining the new outline of the desired target image. The computer then performs the morph by: (1) smoothly moving each of these points along a path from start to finish, and (2) interpolating the movement of all the other points within the image as the morph takes place.
There are two distinct disadvantages to this method described above. First, it requires that a custom morph be created for each desired transformation. Second, because this method requires the selection of a single image or frame upon which the morph is performed, the frame-by-frame progression of character action must stop during the period in which the morph is performed. This is why in current films, characters do not speak or move during the morph procedure. The reason morphs are currently performed relatively quickly (i.e., within a few seconds) is so that this freezing of action is not fully noticed by the audience.
In recent films, whenever a character morphs (e.g., when the villain robot in James Cameron's "Terminator 2" changes to its liquid metal form), the character ceases moving while the morph takes place. In the "Fifth Element" released in May 1997, characters are seen changing from alien to human form while they shake their heads back and forth. Although this gives the character the appearance of moving while the morph is taking place, the underlying 3D image of a character's head is actually frozen while it shakes. This method is merely the 3D equivalent of a "freeze frame". This method cannot enable a morphing character to speak, move or emote while a morph is taking place. The static morphing methods used in today's films are slow and considerably expensive.
Summary of the Invention
According to an embodiment of the present invention a first region of a first graphical image is identified and then it is modified based on a first set of predetermined modification data. Using this method to morph a graphical image, a variety of applications can be performed according to further embodiments of the present invention.
First, the morph (e.g., the application of modification data) for a first starting image can be readily applied to other starting images. In other words, the morphs automatically impart desired characteristics in a custom manner to a multiplicity of starting images. This is an improvement over the prior art, which requires a customized selection and reassignment of points on a specific starting image to create a morph. A method of the present invention described herein automates this process. Rather than requiring an artist or technician to custom create a morph of a specific image, for example, an adult into that of a child, the morphs of the present invention enable a wide variety of human, animal, or other characters to be rendered chimp-like using a single "chimp" morph. An example of this is shown in Fig. 1, where a single morph relating to "chimpification" is applied to three different starting images. In each case, the resulting image maintains recognizable features of the starting image while uniquely embodying the desired characteristics of the "chimp" morph. The morph has been described thus far as modification data. Examples of modification data include deltasets and deltazones, described in more detail below.
Briefly, deltasets or zones categorically identify regions, feature by feature within differing starting images so that these images are uniquely altered to preserve the automated morph's desired effect. Because a single morph can be applied to a number of different starting images, the morph exists as a qualitative entity independently from the images it acts upon. This independence creates an entirely new tool, a morph library, a collection of desired alterations or enhancements which can be generically used on any starting image to create specific desired effects as illustrated in the above "chimpification" example. Second, once an image has been morphed to add a particular characteristic or quality, the resulting image can be subjected to a different morph to add a second characteristic or quality. Fig. 2 illustrates a simple example of this additive property wherein a "chimp" morph is added to a "child" morph to create a child-like chimp (other examples will be described in further detail below). The additive property of the automated, additive morphing system can be used in a number of ways to bring new functionality and scope to the morphing process. Five distinct additive properties of automated, additive morphs will be described below along with their practical application.
Third, morphs can be provided that allow a graphical image character to speak, move, emote, etc. According to an embodiment of the invention, a moving morph can be created during which a character can continue speaking, moving, and emoting by cross-applying an automated additive morph to a "morph sequence". The morph sequence that is known in the art (such as what is shown in programs by Dr. Michael Cohen at the University of California at Santa Cruz and products of Protozoa, Inc. (San Francisco, California)) allows for computer-generated characters to move their mouths in a manner which approximates speech by running their characters through a sequence of "Viseme" morphs. (A Viseme is the visual equivalent of a phoneme, i.e., the face one makes when making a phonetic sound.) Such programs use a specific initial image of a character at rest, and a collection of target images. Each target image corresponds to a particular facial position or "Viseme" used in common speech. Fig. 3 shows how these target images can be strung together in a morph sequence to make an animated character approximate the lip movements of speech. This figure shows the sequence of Viseme endpoints which enable the character to mouth the word "pony". It is important to note that the creation and performance of this sequence does not require the special properties of the morphing system presented herein. The morphs within this sequence in the prior art are not generalized (all are distortions of one specific starting image), and they are not additive. Visemes used in the prior art follow one after the other and are not added to one another. According to an embodiment of the present invention, the use of morph sequences is extended to generate not only speech, but also emotive flow and the physical components of emotional reactions. For example, Fig. 4 shows the cross-addition of an automated, additive morph to the "pony" morph sequence described above in Fig. 3. In that figure, the four vertical columns of character pictures represent the progressive application of the "chimp" morph described earlier (from left to right multiplying the modification data by multiplication values of 0%, 33%, 66%, 100% prior to application to the starting image). Because the
"chimp" morph is nonspecific as to its starting point (as are all automated additive morphs according to the present invention), it is possible to increasingly apply the "chimp" morph while changing the starting point within the morph sequence, producing the progression shown in the darkened diagonal of squares. This diagonal progression, shown in horizontal fashion at the bottom of Fig. 4 yields a character which can speak while this character is morphing. This is the underlying structure of the moving morph. Traditional morphs (being specific rather than generic) cannot be cross-applied in this manner. Characters created using the methods of the present invention can be made to not only speak, but also emote, and react from morph sequences. Thus, characters can remain fully functional during an automated, additive morph rather than being required to "freeze frame" until the morph has been completed as do the morphs of the prior art. An additional benefit of this cross-additive procedure is that morphs can be stopped at any point to yield a fully functional, consistent new character which is a hybrid of the starting and final characters. Fourth, the methods of the present invention provide for parametric character creation in which newly-created characters automatically speak, move, and emote using modification data stored in a database or library. In the automated, additive morphing system, (1) morphs can exist as qualitative attributes independent of any particular starting image; and (2) morphs can be applied, one after the other, to produce a cumulative effect. When qualitative attributes are defined as appearance parameters (length or width of nose, prominence of jaw parameters, roundness of face, etc.) these attributes can be selectively applied in such a way as to create any desired face from one single starting image. As an example, a multiracial starting character is defined and a morph library of appearance parameters is created which can be used to adjust the characters features so as to create any desired character. Fig. 5 shows an example of this process. The parameter adjustments in this figure are coarse and cartoon-like so as to yield clearly visible variations. In realistic character generation, a much larger number of parameters can be more gradually applied. The first three morphs shown in this illustration are "shape morphs". In the final step of Fig. 5, the coloration or "skin" which is laid over the facial shape is changed rather than the shape itself. This step can be used to create the desired hair color, eye color, skin tone, facial hair, etc. in the resultant character.
Fifth, the parametric character creation described above can be combined with the moving morph, also described above, to create characters which automatically speak, emote and move. This dual application is illustrated in Fig. 6, wherein not only the underlying structure, but also the full speaking and emoting functionality of the original character are automatically transferred to the new character. The character shown in Fig. 6 not only contains a mutable physical appearance, but also a full set of Visemes, emotions, and computer-triggered autonomous and reactive behavior. All of these functions can be automatically transferred to a vast range of characters which can be created using parametric character creation. This represents an exponential savings in animation time and cost over existing procedures which require custom creation of not only the character itself, but every emotion, Viseme, blink, etc. that the new character makes.
Sixth, morph sequences can be used simultaneously to combine different behavioral sequences. Fig. 7 illustrates the simultaneous utilization of an emoting sequence and a speaking sequence. In this figure, the Viseme sequence required to make the character say "pony" (left column of pictures) has been added to an emotive morph sequence (center column) in such a manner that the timing of each sequence is preserved. The resultant sequence (right column) creates a character which can simultaneously speak and react with emotions. This procedure can also be used to combine autonomous emotive factors (a computer-generated cycling of deltas representing different emotions or "moods") with reactive factors (emotional deltas triggered by the proximity of elements within the character's environment which have assigned emotive influences on the character). Such procedures can be used to visualize the interplay between conscious and subconscious emotions.
As more fully described herein, according to an embodiment of the present invention, a computer-generated character (or characters) in a multidimensional, computer-generated environment, as described above, can be automatically made to seem aware of objects in that environment. This apparent "awareness" of the character includes the ability to "look around" for objects in the environment, to turn its eyes, head and the remainder of the body to "focus" on an object once it enters the character's "field of vision", and to create an apparent emotional response to that object. All of this behavior can be automatically created in a computer-generated character using the method described below. The foregoing examples and other examples of the present invention will be described in further detail below.
Brief Description of the Drawings
Fig. 1 shows an example of an automated morph according to an embodiment of the present invention.
Fig. 2 shows an example of an automated, additive morph according to an embodiment of the present invention.
Fig. 3 shows an example of a morph sequence that can be performed according to an embodiment of the present invention.
Fig. 4 shows an example of a moving morph according to an embodiment of the present invention.
Fig. 5 shows an example of parametric character creation according to an embodiment of the present invention.
Fig. 6 shows an example of automatic behavioral transference according to an embodiment of the present invention.
Fig. 7 shows an example of behavioral layering according to an embodiment of the present invention.
Fig. 8 is a computer system that is known in the art.
Fig. 9 is a general block diagram of an image transformation system of the present invention.
Figs. 10 a-d are polygonal models used for the presentation of a graphical image of a human head or the like.
Figs. 11 a-f are polygonal images showing the application of deltasets in accordance with an embodiment of the present invention.
Figs. 12 a-g are graphical images of a person's head that are generated in accordance with an embodiment of the present invention.
Fig. 13 shows an input device for controlling the amount of transformation that occurs when applying a deltaset to an image.
Figs. 14 a-d are graphical images of a person's head that are generated in accordance with an embodiment of the present invention.
Fig. 15 shows a communication system environment for an exemplary method of the present invention.
Detailed Description
According to an embodiment of the present invention, modification data is generated that can be applied to a starting image so as to form a destination image. For example, the modification data can be difference values that are generated by determining the differences between first and second images. Once these differences are determined, they can be stored and later applied to any starting image to create a new destination image without the extensive frame-by-frame steps described above with respect to morphing performed in the motion picture industry. These difference values can be created on a vertex-by-vertex basis to facilitate the morphing between shapes that have an identical number of vertices. Alternatively, difference values can be assigned spatially, so that the location of points within the starting image determines the motion within the automated morph. This eliminates the need for explicit identification of vertices and allows these methods and apparatuses to work regardless of a given image's polygonal structure. For simplicity's sake, we describe below an example vertex-based automated additive morphing system which uses deltasets as the modification data. A position- or spatially-based morphing system which can morph images regardless of polygonal structure, such as deltazones (another example of modification data), is created by interpolating the motion between vertices.
An example of the vertex-based embodiment of the present invention includes the generation of a first image (e.g., a neutral or starting image) comprising a first number of vertices, each vertex having a spatial location (e.g., in two- or three-dimensional space), and the generation of a second image (e.g., a target or destination image) having an equal number of vertices. A difference between a first one of the vertices of the first image and a corresponding vertex of the second image is determined, representing the difference in location between the two vertices. The difference is then stored in a memory device (e.g., RAM, hard disc drive, etc.). Difference values for all corresponding vertices of the first and second images can be created using these steps and stored as a variable array (referred to herein as a deltaset). The deltaset can then be applied to the first image to create the second image by moving the vertices in the first image to their corresponding locations in the second image. Alternatively, a multiplication or ratio value can be multiplied by the entries in the deltaset and applied to the first image so that an intermediate graphical image is created. According to a feature of the present invention, the deltaset can be applied to any starting image having an equal number of vertices. This allows the user to create new destination images without performing, again, the mathematical calculations used to create the original deltaset.
Referring to Fig. 9, a general block diagram of an image transformation system 900 of the present invention is shown. According to an embodiment of the present invention, the system 900 includes a library or database of deltasets 931. The library of deltasets 931 can be stored in the system memory 912 or any other memory device, such as a hard disc drive 917 coupled to bus 914 via a Small Computer Standard Interface (SCSI) host bus adapter 918 (see Fig. 8). As described in further detail below, deltasets are variable arrays of position change values that can be applied to the vertices of a starting image. Referring to Fig. 9, the deltaset information is composed and cached in device 932 (e.g., processor 811 and system memory 812 of Fig. 8), where it then can be used to transform a first or starting image having a neutral geometry 933 into a target image having a final geometry 935. Additional geometry manipulation, such as the addition of features (e.g., hair) or actions (e.g., looking around), can be performed by device 934. Both the starting and target images can then be displayed at display 920 or any other output device, stored in memory, or sent to file export (e.g., over the Internet).
Inputs to the system 900 of Fig. 9 include a variety of user controls 937, autonomous behavior control 938, and face tracker data input 939, which will be further described below. Other inputs can come from other systems such as the so-called World Wide Web (WWW). Also, audio data can be provided by audio data input device 940 and supplied to the deltaset caching and composing device 932.
In this embodiment, the neutral geometry 933 is based on the image of a person's head that has been captured using any of a variety of known methods (e.g., video, scanner, etc.). Referring to Fig. 10, the image data of the person's head is placed onto a polygonal model 1051. The polygonal model comprises a plurality of vertices 1052 and connections 1053 that extend between the vertices. Each polygon 1054 of the polygonal model is defined by three or more vertices 1052. To show the generation and application of deltasets to the polygonal model of Fig. 10a, an example is discussed below using simple polygons (e.g., a square, a triangle, a rectangle, and a circle). Each polygon has an identifiable shape. For example, looking at Fig. 11a, a square polygon is shown having eight vertices (points 1100 to 1107) in two-dimensional space. By moving individual vertices, the square polygon can be converted into a number of other polygon shapes such as a rectangle (Fig. 11b), a circle (Fig. 11c) and a triangle (Fig. 11d; where vertices 1100, 1101, and 1107 all occupy the same point in two-dimensional space). A deltaset is a set of steps that are taken to move each vertex (1100 to 1107) from a starting polygon to a target or destination polygon. For example, the steps that are taken from the square polygon of Fig. 11a to the rectangular polygon of Fig. 11b include vertices 1105, 1106, and 1107 moving to the left a certain distance "x"; points 1101, 1102, and 1103 moving to the right the same distance "x"; and vertices 1100 and 1104 staying in the same location. Thus, the deltaset defines the path taken by each vertex in transforming the starting polygon to the destination polygon. In other words, the deltaset defines the difference in position of corresponding vertices in the starting and target polygons. Similarly, deltasets can be created for the transformation of the square polygon of Fig. 11a to the circle polygon of Fig. 11c and of the square polygon of Fig. 11a to the triangle polygon of Fig. 11d.
In this embodiment, the deltaset is created by transforming a starting polygon shape into another; however, one skilled in the art will appreciate that a deltaset can be created that is not based on specific starting and target shapes, but created in the abstract. Moreover, once a deltaset is created, it can be used on any starting shape to create a new shape. For example, the deltaset used to transform the square polygon of Fig. 11a to the rectangle polygon of Fig. 11b (for convenience, referred to as Deltaset1) can be used on the circle polygon of Fig. 11c. Thus, the circle polygon of Fig. 11c becomes the starting shape and, after applying Deltaset1, would become the ellipse polygon of Fig. 11e (i.e., the target shape).
Deltasets can also be combined (e.g., added together) to create new deltasets. Thus, Deltaset1, Deltaset2 (i.e., the transform from the square of Fig. 11a to the circle of Fig. 11c), and Deltaset3 (i.e., the transform from the square of Fig. 11a to the triangle of Fig. 11d) can be combined to form a new deltaset (Deltaset4). Applying Deltaset4 to the starting square polygon of Fig. 11a, the target shape of Fig. 11f is achieved. In its simplest form, the starting polygon, destination polygon, and deltaset must have the same number of vertices. Additional algorithms would be necessary to transform between shapes or objects having a differing number of vertices.
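The combination of deltasets described in the preceding paragraph can itself be expressed as a small routine in the pseudocode style used in the tables below. The following is only an illustrative sketch; the routine name deltaSet_Add and its arguments are assumptions for illustration and do not appear in the tables that follow. It accumulates one deltaset into another so that the result, applied once, has the same effect as applying the source deltasets one after the other.

////////////////////////////////////////////////////////////
//
// Illustrative sketch: accumulate the delta values of "src"
// into "accum", producing a combined deltaset (e.g., Deltaset4
// above). Both deltasets are assumed to have the same number
// of points.
deltaSet_Add (deltaSet_Type *accum, deltaSet_Type *src)
{
    int i;
    for (i = 0; i < accum -> numPoints; i++) {
        accum -> dataPoints[i] += src -> dataPoints[i];
    }
}
// end of sketch
////////////////////////////////////////////////////////////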
An additional method for moving vertices can be derived from the deltaset method, wherein the motion of the points of a deltaset is interpolated such that a continuous field of motion is created. These fields, which we refer to as deltazones, can be used to morph images irrespective of their particular triangle strip set, because the one-to-one correspondence between movements and vertices upon which deltasets rely is replaced by a dynamical system of motion which operates on any number of vertices by moving them in accordance with their original locations.
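As a rough illustration of the deltazone idea (a sketch only, not a definitive implementation), the motion applied to an arbitrary vertex can be computed by interpolating the deltas defined at a set of reference points, weighted by the inverse of their distance to that vertex. The routine name deltaZone_Apply, the point3 vector type, the distance helper, and the inverse-distance weighting scheme below are all assumptions made for illustration.

////////////////////////////////////////////////////////////
//
// Illustrative sketch of a spatially-based morph: move an
// arbitrary vertex by an inverse-distance-weighted average of
// the deltas defined at a set of reference points.
// point3 is an assumed 3D vector type; distance() is an assumed
// helper returning the distance between two points.
deltaZone_Apply (point3 *refPoints, point3 *deltas, int numRef,
                 point3 *vertex, float amount)
{
    int i;
    float w, wsum = 0.0;
    point3 motion = {0.0, 0.0, 0.0};
    for (i = 0; i < numRef; i++) {
        w = 1.0 / (distance (*vertex, refPoints[i]) + 0.0001);
        motion.x += w * deltas[i].x;
        motion.y += w * deltas[i].y;
        motion.z += w * deltas[i].z;
        wsum += w;
    }
    vertex -> x += amount * motion.x / wsum;
    vertex -> y += amount * motion.y / wsum;
    vertex -> z += amount * motion.z / wsum;
}
// end of sketch
////////////////////////////////////////////////////////////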
Herein, an example of the implementation of deltasets and their operation on graphical images will be described with reference to pseudocode based on "C" and "C++" programming that is known in the art. The datatype structure for a deltaset (deltaSet_Type) is similar to that for a basic shape object, and the pseudocode is shown in Table I.
////////////////////////////////////////////////////////////
//
// Basic datatype structure of a deltaset.
typedef struct {
    dataPoint_Type *dataPoints;  // Array of delta values
    int            numPoints;    // Number of points in above
} deltaSet_Type, shape_Type;
// end of datatype structure
////////////////////////////////////////////////////////////
Table I
As seen from the above, the deltaSet_Type and shape_Type structures each include an array of [numPoints] values. Each value is the position of a vertex for the shape_Type variable and a delta value for the deltaSet_Type variable.
An example of a core routine for the creation of a deltaset through the calculation of the steps or difference values from a starting object to a destination object is shown in Table II.
////////////////////////////////////////////////////////////
//
// Core routine to calculate the steps from a source (neutral)
// object to a destination object and store those steps
// in a deltaset.
deltaSet_Calc (deltaSet_Type *dset, shape_Type *src, shape_Type *dest)
{
    int i;
    int numpts;
    dataPoint_Type delta;

    // Ensure that dset has a matching number of data points
    // as the shapes.
    numpts = src -> numPoints;
    deltaSet_SetNumPts (dset, numpts);

    // For each data point in the objects, calculate the
    // difference between the source and the destination
    // and store the result in the deltaset.
    for (i = 0; i < numpts; i++) {
        delta = dest -> dataPoints[i] - src -> dataPoints[i];
        dset -> dataPoints[i] = delta;
    }
}
// end of routine
////////////////////////////////////////////////////////////
Table II
As can be seen from the above pseudocode, the variable "delta" is used to temporarily store the difference in position between the source (src) and destination (dest) for each of the vertices in the shape. Each delta value is then stored in a deltaset array (dset). Once a deltaset array is created, it can be easily applied to any starting shape having an equal number of vertices to form a new target shape.
An example of pseudocode that can be used to apply a deltaset to a starting shape is shown in Table III.
////////////////////////////////////////////////////////////
//
// Core routine to apply the steps stored in a
// deltaset to a shape, with a percentage amount.
// Note that negative amounts can be used.
deltaSet_Apply (deltaSet_Type *dset, shape_Type *dest, float amount)
{
    int i;
    if (amount == 0.0) return;
    for (i = 0; i < dset -> numPoints; i++) {
        dest -> dataPoints[i] += (dset -> dataPoints[i] * amount);
    }
}
// end of routine
////////////////////////////////////////////////////////////
Table III
As seen from above, during the routine deltaSet_Apply, calculations for a single transition are performed based on a percentage amount passed in the "amount" parameter. Each data point in the destination shape is calculated based on the deltaset value for that point multiplied by the percentage value "amount" (which can have a negative value or a value greater than 1).
The pseudo-code of Table IV shows two utility routines that are used to create a new, blank deltaset and to set the number of datapoints.
////////////////////////////////////////////////////////////
//
// Utility routine to create a new, blank deltaset.
deltaSet_Type *NewDeltaSet ( )
{
    allocate a new deltaSet_Type and return a pointer to it;
}
// end of routine
////////////////////////////////////////////////////////////
//
// Utility routine to set the number of datapoints
// in a deltaset.
deltaSet_SetNumPts (deltaSet_Type *dset, int numPoints)
{
    de-allocate dset -> dataPoints, if not already empty;
    allocate an array of type dataPoint_Type and size numPoints,
        and put it in dset -> dataPoints;
    dset -> numPoints = numPoints;
}
// end of routine
////////////////////////////////////////////////////////////
Table IV
With reference to Table V, an example of pseudo-code is shown for the transformation from the square shape of Fig. 11a to the shape of Fig. 11e.
////////////////////////////////////////////////////////////
//
// The following pseudocode example shows how
// to use the above deltaset routines to morph from a
// square to a new shape which has features of both the
// rectangle and the circle.

// Define the basic dataPoint that makes shapes & deltasets.
typedef 2DVector dataPoint_Type;

// Declaration of basic shapes.
shape_Type square,
           rectangle,
           circle;

// Declaration of deltasets.
deltaSet_Type rect_dset, circ_dset;

// Declaration of a new shape to get the data put into it.
shape_Type newshape;

// Initialize shapes.
shape_SetNumPoints (&square, 8);
shape_SetNumPoints (&rectangle, 8);
shape_SetNumPoints (&circle, 8);

// Set data points of square, rectangle, and circle.
shape_SetPoints (&square,    ( 0.0,  1.0),
                             ( 1.0,  1.0),
                             ( 1.0,  0.0),
                             ( 1.0, -1.0),
                             ( 0.0, -1.0),
                             (-1.0, -1.0),
                             (-1.0,  0.0),
                             (-1.0,  1.0));
shape_SetPoints (&rectangle, ( 0.0,  1.0),
                             ( 2.0,  1.0),
                             ( 2.0,  0.0),
                             ( 2.0, -1.0),
                             ( 0.0, -1.0),
                             (-2.0, -1.0),
                             (-2.0,  0.0),
                             (-2.0,  1.0));
shape_SetPoints (&circle,    ( 0.0,  1.0),
                             ( 0.6,  0.6),
                             ( 1.0,  0.0),
                             ( 0.6, -0.6),
                             ( 0.0, -1.0),
                             (-0.6, -0.6),
                             (-1.0,  0.0),
                             (-0.6,  0.6));
// Calculate DeltaSets.
deltaSet_Calc (&rect_dset, &square, &rectangle);
deltaSet_Calc (&circ_dset, &square, &circle);

////////////////////////////////////////////////////////////
//
// The resulting DeltaSets now contain the values:
// rect_dset: ( 0.0,  0.0)
//            ( 1.0,  0.0)
//            ( 1.0,  0.0)
//            ( 1.0,  0.0)
//            ( 0.0,  0.0)
//            (-1.0,  0.0)
//            (-1.0,  0.0)
//            (-1.0,  0.0)
// circ_dset: ( 0.0,  0.0)
//            (-0.4, -0.4)
//            ( 0.0,  0.0)
//            (-0.4,  0.4)
//            ( 0.0,  0.0)
//            ( 0.4,  0.4)
//            ( 0.0,  0.0)
//            ( 0.4, -0.4)
////////////////////////////////////////////////////////////

// Apply the DeltaSets.
newshape = copy of square;
deltaSet_Apply (&rect_dset, &newshape, 1.0);
deltaSet_Apply (&circ_dset, &newshape, 1.0);

////////////////////////////////////////////////////////////
//
// newshape now contains values which
// look like the ellipse drawn above:
// ( 0.0,  1.0)
// ( 1.6,  0.6)
// ( 2.0,  0.0)
// ( 1.6, -0.6)
// ( 0.0, -1.0)
// (-1.6, -0.6)
// (-2.0,  0.0)
// (-1.6,  0.6)
////////////////////////////////////////////////////////////
//
// To create the egg-ish shape above,
// one would simply add a third DeltaSet
// based on a triangle shape.
// end of simple geometry example
////////////////////////////////////////////////////////////
Table V
As seen from the pseudo-code of Table V, a datapoint is defined as a two-dimensional vector and the square, rectangle, and circle shapes are defined as eight points with abscissa and ordinate values. Deltasets are then calculated for the transition from square to rectangle and from square to circle. As seen above, the resulting deltasets (rect_dset and circ_dset) represent differences between abscissa and ordinate values of the respective starting and target images. The deltasets can then be applied to a starting shape (in this example, the starting image, newshape, is set to the square shape of Fig. 11a). First, the rect_dset deltaset is applied to the square shape to form an intermediate shape, and then the circ_dset deltaset is applied to this intermediate shape to form the destination shape that is shown in Fig. 11e. To get to the shape of Fig. 11f, a deltaset representing a transformation between the square shape of Fig. 11a and the triangle shape of Fig. 11d is created and applied to the ellipse shape shown in Fig. 11e.
The deltaset example above can be easily extended to a three-dimensional representation. The example can also be expanded to more intricate and complex applications such as three-dimensional space and facial animation. As the application of deltasets becomes more complex in facial animation, several additional features can be added. For example, certain motions of the face are limited to certain defined areas, such as blinking of the eyes. Accordingly, a deltaset for an entire face would be mostly 0's (indicating no change) except for the eyes and eyelids, thus isolating these areas for change. To improve efficiency, the deltaset datatype can be changed so that only nonzero values are stored. Thus, during the execution of the deltaSet_Apply routine, only the points that change are acted upon, rather than every point in the graphical representation. An embodiment of facial animation is described below with reference to the pseudocode example of Table VI.
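Before turning to Table VI, the following sketch illustrates one possible way to store only the nonzero values mentioned above. It is an illustration only, not the structure used in the tables of this description; the names sparseDeltaSet_Type and sparseDeltaSet_Apply are assumptions, and the approach simply pairs each nonzero delta with the index of the vertex it affects.

////////////////////////////////////////////////////////////
//
// Illustrative sketch of a sparse deltaset: only vertices that
// actually move are stored, each with its index into the shape.
// The names below are assumptions for illustration only.
typedef struct {
    int            *indices;     // Indices of the affected vertices
    dataPoint_Type *dataPoints;  // Delta value for each affected vertex
    int            numEntries;   // Number of affected vertices
} sparseDeltaSet_Type;

// Apply only the stored (nonzero) deltas to the destination shape.
sparseDeltaSet_Apply (sparseDeltaSet_Type *dset, shape_Type *dest, float amount)
{
    int i;
    if (amount == 0.0) return;
    for (i = 0; i < dset -> numEntries; i++) {
        dest -> dataPoints[dset -> indices[i]] += (dset -> dataPoints[i] * amount);
    }
}
// end of sketch
////////////////////////////////////////////////////////////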
Facial Moving-Morphing example
////////////////////////////////////////////////////////////
// The following pseudocode example shows how deltaset
// morphing is used to fully animate and morph a face.
// Note that this achieves a "moving morph", wherein the
// overall structure of the face can smoothly change without
// interrupting the process of other facial animation such
// as blinking, emoting, and speaking.
////////////////////////////////////////////////////////////
//
// Setup
// Define the basic dataPoint that makes shapes & deltasets.
typedef 3DVector dataPoint_Type;

// Declaration of basic shapes & deltasets.
shape_Type    neutralFace, overallMorphFace, blinkFace,
              emoteFaces[], speakFaces[], newShapeFace;
deltaSet_Type overall_dset, blink_dset,
              emote_dsets[], speak_dsets[];

// neutralFace is the geometry of the basic 3D face,
// no expression, looking straight ahead.
// overallMorphFace is a radically different face,
// say a cat.
// blinkFace is the same as neutralFace but with eyes closed.
// emoteFaces is an array of faces with different emotions or
// expressions, i.e. happy, sad, angry, trustful, etc.
// speakFaces is an array of faces in different phoneme (or
// "viseme") positions, i.e. "OO", "AE", "L", "M", etc.
// newShapeFace is a shape which is the destination of the
// morphing.

// Declarations of amounts of morphs.
// These typically range from 0.0 to 1.0, but can be outside
// of this range.
float overallMorphAmount, blinkAmount,
      emoteAmounts[], speakAmounts[];

// Other declarations.
float time;      // a pseudo time variable.
int numEmotes;   // the number of emotion faces.
int numSpeaks;   // the number of viseme faces.

//
// Initialize the deltasets.
deltaSet_Calc (&overall_dset, &neutralFace, &overallMorphFace);
deltaSet_Calc (&blink_dset, &neutralFace, &blinkFace);
for (i = 0; i < numEmotes; i++)
    deltaSet_Calc (&emote_dsets[i], &neutralFace, &emoteFaces[i]);
for (i = 0; i < numSpeaks; i++)
    deltaSet_Calc (&speak_dsets[i], &neutralFace, &speakFaces[i]);

//
// Main animation loop.
while (KeepRunning)
{
    time += 0.1;

    // Calculate the amount each morph is to be applied.
    // For emoteAmounts and speakAmounts, this is an array
    // of values most of which are zero.
    // (Note that deltaSet_Apply() returns immediately if
    // amount == 0.0)
    CalcBlinkAmount (&blinkAmount);
    CalcEmoteAmounts (emoteAmounts);
    CalcSpeakAmounts (speakAmounts);
    overallMorphAmount = sin (time) * 0.5 + 0.5;

    // Reset the working copy of the face.
    newShapeFace = copy of neutralFace;

    // Apply the data sets controlling facial animation.
    deltaSet_Apply (&blink_dset, &newShapeFace, blinkAmount);
    for (i = 0; i < numEmotes; i++)
        deltaSet_Apply (&emote_dsets[i], &newShapeFace, emoteAmounts[i]);
    for (i = 0; i < numSpeaks; i++)
        deltaSet_Apply (&speak_dsets[i], &newShapeFace, speakAmounts[i]);

    // Apply the overall shape morph.
    deltaSet_Apply (&overall_dset, &newShapeFace, overallMorphAmount);
}
// End of animation loop.
////////////////////////////////////////////////////////////
// end of facial moving morph example
Table VI
As seen from the above, the animated face image comprises three-dimensional datapoints. "NeutralFace" is a starting image that will be changed based on one or more deltasets. The neutralFace image is shown in Fig. 12a with eyes looking straight ahead and no expression. "OverallMorphFace" is a different face from neutralFace. In this example, overallMorphFace is the image of a cat, shown in Fig. 12b. A face showing a completed facial movement is "blinkFace", which shows the same face as neutralFace but with the eyes closed (see Fig. 12c). "EmoteFaces" is an array of versions of neutralFace augmented to show one or more emotions. For example, Fig. 12d shows neutralFace emoting happiness, Fig. 12e shows neutralFace emoting anger, etc. "SpeakFaces" is an array of faces showing expressions of different phonemes. A phoneme, or viseme, is a speech syllable used to form spoken words (e.g., the "oo", "ae", "l", and "m" sounds). As an example, Fig. 12f shows neutralFace expressing the phoneme "oo". The amount of transformation or morphing can be controlled by multiplication or multiplier values: overallMorphAmount, blinkAmount, emoteAmounts[], and speakAmounts[]. As an example, if blinkAmount is set to 1.0, then applying a deltaset for blinking to the neutralFace of Fig. 12a will achieve the face of Fig. 12c (i.e., 100% of the blink is applied). Numbers less than or greater than 1.0 can be selected for these variables.
Deltasets are then created for transforming the neutralFace image. As can be seen from the pseudocode of Table VI, deltaset overall_dset is created for the changes between neutralFace (Fig. 12a) and overallMorphFace (Fig. 12b); deltaset blink_dset is created for the changes between neutralFace (Fig. 12a) and blinkFace (Fig. 12c); deltasets emote_dsets[] are created between neutralFace (Fig. 12a) and each emotion expression image (e.g., the "happy" emoteFace[] of Fig. 12d and the "angry" emoteFace[] of Fig. 12e); and deltasets speak_dsets[] are created between neutralFace (Fig. 12a) and each phoneme expression image (e.g., the "oo" speakFace[] of Fig. 12f).
In the main animation loop, the amounts for each deltaset transformation are calculated (e.g., the values for overallMorphAmount, blinkAmount, emoteAmounts[], and speakAmounts[]). For the emoteAmounts[] and speakAmounts[] arrays, these values are mostly zero. The new facial image to be created is stored in newShapeFace and is originally set to the neutralFace image. Then, the deltasets that were calculated above are applied to the newShapeFace in the amounts set in the transformation variables calculated above. In this example, overallMorphAmount is set to 0.5 (i.e., halfway between neutralFace and overallMorphFace); blinkAmount is set to 1.0 (i.e., full blink - eyes closed); emoteAmounts[] for "happy" is set to 1.0, while all other emoteAmounts[] values are set to 0; and speakAmounts[] for the phoneme "oo" is set to 1.0, while all other speakAmounts[] values are set to 0. The resulting image based on these variables is shown in Fig. 12g. As described above, the deltasets that have been created can now be applied to another starting image (i.e., an image other than the neutralFace shown in Fig. 12a) without recalculation. This is shown in the examples of Figs. 14a-d. Using the method set forth above, a deltaset can be created between neutralFace and overallMorphFace which signifies the changes between a male human face (shown in Fig. 12a) and the face of a cat (shown in Fig. 12b). As seen in Fig. 14a, a neutral, male human face is shown without application of this deltaset. Fig. 14b shows the effects of the application of this deltaset
(or a fractional value of this deltaset) in that the male human face now looks "cat-like". The underlying polygonal models for Figs. 14a and 14b are shown in Figs. 10a and 10b, respectively. As seen in Figs. 10a and 10b, vertices of the first image are shown to move to different positions in the destination image. Referring back to Figs. 14a and 14b, one skilled in the art will appreciate that the color of each pixel can also change in accordance with a deltaset storing the difference in color for each pixel in the human and cat images of these figures. The deltaset described above can be applied to a neutral, female human face (see Fig. 14c) to form a new destination image (see Fig. 14d).
Also, the variables (e.g., overallMorphAmount) can be input using graphical sliders, shown in Figs. 13a-d. In this example, several deltasets have been previously determined. A first deltaset represents the difference between a starting image with lips in a first position and a target image with lips in a second, higher position. A second deltaset represents the difference between a starting image with the jaw in a first position and a target image with the jaw in a second, jutted-out position. A third deltaset represents the difference between a starting image with relatively smooth skin and a target image with old (i.e., heavily textured) skin. Referring to Figs. 13a-d, the amount each of these first, second, and third deltasets is applied to the neutral image of Fig. 13a is determined by the placement of one or more sliders 1301-1303. In this example, if a slider is in a central position, then the corresponding deltaset is not applied at all (i.e., the deltaset multiplied by 0.0 is applied to the image). If the slider is placed to the right, the deltaset multiplied by 1.0 is applied to the image, and if it is placed to the left, the deltaset multiplied by -1.0 is applied to the image. Accordingly, in Fig. 13a, sliders 1301-03 are in a central position. In Fig. 13b, slider 1301 is moved (e.g., with a mouse, not shown) to the right, causing the first deltaset (multiplied by 1.0) to be applied to the neutral image of Fig. 13a (thus, the lips are moved up some distance). Likewise, in Fig. 13c, slider 1302 is moved to the left, and the second deltaset described above (multiplied by -1.0) is applied to the image of Fig. 13b (thus, the jaw is recessed). Also, in Fig. 13d, slider 1303 is moved to the right, causing the third deltaset (multiplied by 1.0) to be applied to the image of Fig. 13c. One skilled in the art will appreciate that the sliders 1301-03 can have intermediate values between -1.0 and 1.0 or can have values beyond this range.
As seen from the above, once one or more deltasets have been created, the multiplier values that are controlled by sliders 1301-03 (for example) of the embodiment of Fig. 13 would be the only input necessary to modify a starting image. This feature is advantageous in the area of communications. Referring to Fig. 15, a communication system is shown. In this system, a first component (such as server 1510) is coupled via a transmission medium 1509 to a second component (such as client 1511 coupled to a display 1512). In this example, the transmission medium 1509 is the so-called Internet system, which has a varying, but limited, bandwidth. The server 1510 and client 1511 are computer systems similar to system 810 of Fig. 8. A first image (e.g., a person's face) is transmitted over the transmission medium 1509 from the server 1510 to the client, as well as any desired deltasets (as described above). Some code may also be sent, operating as described herein. The image and deltasets can be stored at the client 1511, and the image can be displayed at display 1512. For the server 1510 to change the image at the client 1511, an entire, new image need not be sent. Rather, the multiplication values for the deltasets (e.g., the values controlled by sliders 1301-03 in Fig. 13) can be sent over the transmission medium 1509 to cause the desired change to the image at display 1512. Thus, a great savings in bandwidth is achieved, allowing greater animation and control of the image. In another example, the system of Fig. 15 can be used as a video phone system where the original image that is sent is that of the speaking party at the server 1510 over the transmission medium 1509 (e.g., the plain old telephone system (POTS)). Speech by the user at the server 1510 can be converted into phonemes that are then converted into multiplication values that are transmitted over the transmission medium 1509 with the voice signal to facilitate the "mouthing" of words at the client 1511.
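As a rough sketch of the bandwidth savings described above (the message layout and routine names below are assumptions for illustration, not a defined protocol of the invention), each animation frame can be transmitted as nothing more than a small array of floating-point multiplication values, one per previously transmitted deltaset, which the client applies to its stored copy of the neutral image.

////////////////////////////////////////////////////////////
//
// Illustrative sketch only: a per-frame update message for the
// client of Fig. 15. The deltasets themselves are sent once;
// afterwards each frame needs only one float per deltaset.
typedef struct {
    int   numAmounts;   // Number of deltasets previously sent
    float amounts[32];  // Multiplication value for each deltaset
} frameUpdate_Type;

// On the client, apply the received amounts to a copy of the
// stored neutral image using the deltaSet_Apply routine above.
ApplyFrameUpdate (frameUpdate_Type *msg, shape_Type *face,
                  shape_Type *neutralFace, deltaSet_Type *dsets)
{
    int i;
    *face = copy of *neutralFace;
    for (i = 0; i < msg -> numAmounts; i++) {
        deltaSet_Apply (&dsets[i], face, msg -> amounts[i]);
    }
}
// end of sketch
////////////////////////////////////////////////////////////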
As described above, a graphical image of a human, for example, can be made to express emotions by applying a deltaset to a neutral, starting image of the human. If the expression of emotions is autonomous, the computer graphical image of the human will seem more life-like. It could be concluded that humans fit into two categories or extremes: one that represents a person who is emotionally unpredictable (i.e., expresses emotions randomly), such as an infant, perhaps; and one that has preset reactions to every stimulation. According to an embodiment of the present invention, an "emotional state space" is created that includes a number of axes, each corresponding to one emotion. For example, assuming that there are only two emotions, "happy" and "alert", then at point (1.0, 0.0), the person is happy and not sleepy or excited; at point (0.0, 1.0), the person is neither happy nor sad, but is excited; and at point (-1.0, -1.0), the person is sad and sleepy. Though there are many more emotions that can be expressed, a person typically will be expressing no more than one or two emotions at a time. Referring back to Fig. 9, element 937 provides input for changing the neutral image based on the expression of emotions. An example of pseudo-code for the expression of emotions is shown in Table VIII. In this pseudo-code, two emotions are selected: one that is to be expressed and one that is currently fading from expression.
PSEUDOCODE example
////////////////////////////////////////////////////////////
// This is pseudocode based in part on "C".
// First is pseudocode for the random walk
// style of autonomous emoting, second
// is the reaction style.
// These routines determine the amount that each emotion
// in the emotion library is currently expressed
// in the artificial human. They do not
// actually express the emotions. One method
// of expressing the emotions is detailed above.

////////////////////////////////////////////////////////////
// These variables are the basic
// output.
// emoteAmounts is an array of floats that represents
// the degree to which each emotion in the emotion
// library is currently playing on the face.
// emoteNum is the number of emotions in the library.
float emoteAmounts[];
int emoteNum;

////////////////////////////////////////////////////////////
// These variables are the two
// emotions present at one moment.
// nextEmote & nextAmount are the
// current destination emotion &
// how much of it.
// lastEmote is the emotion currently
// fading away.
int nextEmote = 0;
float nextAmount = 0.0;
int lastEmote = 0;

// This variable is the number
// of seconds it will take the
// lastEmote to fade completely.
float decaySecs = 3.0;

////////////////////////////////////////////////////////////
// This variable is the number
// of seconds it will take to
// go from the current emotion amount
// to the next amount.
float changeSecs = 0.5;

////////////////////////////////////////////////////////////
// Routine to use a random walk to
// navigate an emotional state-space.
// This implementation uses only two
// emotions at one time, and calls them
// nextEmote and lastEmote. The
// dynamic model is basically that of a
// human baby, emoting at random.
// The routine basically chooses an emotion
// to go to, then increases its value while
// decreasing the value of the previous one.
// The input variable dt is the amount of
// time elapsed since the last call.
CalcEmoteAmountsRandom (float dt)
{
    // These variables are probabilities
    // of an event per second.
    float probabilityOfNewEmote = 0.01;
    float probabilityOfNewAmount = 0.2;

    // Decay old emotions, go towards new.
    DoDecayAndRamp (dt);

    ////////////////////////////////////
    // Now decide if we go to a new emotion.
    // Decide if we want to go to a new value
    // of the current emotion without changing
    // which emotion it is.
    if (unitRand() * dt <= probabilityOfNewAmount)
    {
        nextAmount = select a new random amount of emotion;
    }
    // Decide if we want to go to a new emotion.
    if (unitRand() * dt <= probabilityOfNewEmote)
    {
        nextEmote = a random integer >= zero and < emoteNum;
        nextAmount = select a new random amount of emotion;
    }
}
// End of routine.
////////////////////////////////////////////////////////////

////////////////////////////////////////////////////////////
// Routine to calculate the amount
// of each emotion based on reactions
// to objects in the scene.
// This routine relies on objects with data-
// structures that contain an emotion,
// a degree of reactivity, and position.
CalcEmoteAmountsReact (float dt)
{
    // Decay old emotions, go towards new.
    DoDecayAndRamp (dt);

    // Determine object of most interest.
    for (i = 0; i < numberOfObjects; i++) {
        objectReactionLevel[i] = metric which incorporates object's
            visibility, speed, speed towards viewer, inherent emotional
            reactivity (how exciting it is), and distance to center of vision;
    }
    mainObject = index of largest value in objectReactionLevel;

    // Set next emotion & amount.
    nextEmote = Object #mainObject -> reaction;
    nextAmount = objectReactionLevel[mainObject];
}
// End of routine.
// Note that mainObject is also used to move the artificial
// human's eyes and head towards the object, or to start
// walking towards the object, and other manifestations
// of being interested in something.
////////////////////////////////////////////////////////////

////////////////////////////////////////////////////////////
// Routine to decay the last emotion and
// ramp towards the next value of the
// new emotion.
DoDecayAndRamp (float dt)
{
    // Decrease value of all emotions besides current one.
    for (i = 0; i < emoteNum; i++) {
        if (i != nextEmote) {
            emoteAmounts[i] -= dt / decaySecs;
            if (emoteAmounts[i] < 0.0) emoteAmounts[i] = 0.0;
        }
    }

    // Change value of current emotion towards
    // next level.
    // First, calculate the direction of change.
    currAmount = emoteAmounts[nextEmote];
    diff = nextAmount - currAmount;
    if (diff > 0.0) direction = 1.0;
    else if (diff < 0.0) direction = -1.0;
    else direction = 0.0;

    // Now go in that direction at appropriate speed.
    currAmount += dt * direction / changeSecs;

    // Stop at ends.
    if ((direction == 1.0 AND currAmount > nextAmount) OR
        (direction == -1.0 AND currAmount < nextAmount))
        currAmount = nextAmount;
    emoteAmounts[nextEmote] = currAmount;
}
// End of decaying and ramping routine.
////////////////////////////////////////////////////////////

////////////////////////////////////////////////////////////
// Utility function unitRand.
float unitRand ()
{
    return a random number >= 0.0 and <= 1.0;
}
////////////////////////////////////////////////////////////
Table VIII
As seen from above, emoteAmounts[] is an array of values for the current expression of each of the "emoteNum" emotions. For example, for the emotion "happy", a value is set (e.g., between -1.0 and 1.0) to indicate the current state of the graphical image (e.g., Fig. 12d shows neutralFace emoting "happy" with a value of 1.0). The nextEmote variable identifies the next emotion to be expressed, and nextAmount stores the level to which it is to be expressed. The lastEmote variable identifies the emotion that is currently being expressed and is fading away. The number of seconds for this emotion to fade to 0.0 is stored in the variable decaySecs. The number of seconds to go from the current emotion amount to the next amount is stored in the variable changeSecs.
During the CalcEmoteAmountsRandom routine, probability values for going to a new emotion (probabilityOfNewEmote) and for changing to a new amount of the current emotion (probabilityOfNewAmount) are set. Then a random number is generated, and if that number is less than the corresponding probability value, a new random amount of emotion is assigned to the variable nextAmount. A second random number is generated, and if that number is less than the corresponding probability value, a next emotion is selected from the available ones and a random amount is assigned to the nextAmount variable.
During the routine CalcEmoteAmountsReact, the objects that are around the graphic image of the person are analyzed to determine which object is of most interest (e.g., by assigning weighted values based on the object's visibility, speed, speed towards the graphical image of the person, the inherent emotional reactivity of the object, and its distance to the center of vision for the graphic image of the person). Each object has a data structure that includes a predefined emotion, a degree of reactivity, and a position. For example, a gun object would elicit a "fear" emotion with a high degree of reactivity depending on how close it is (i.e., its distance) to the person. Accordingly, based on the object of most interest (and the relationship between the person and the object), a next emotion and a next amount are selected, and the random numbers referenced above determine whether that next emotion is to be expressed by the human image. Using the routines of Table VIII, the human image expresses emotions that are more lifelike in that they are somewhat random, yet can occur in response to specific stimuli.
Referring back to Fig. 8, an input device 830 is provided for the input of data for the creation of graphic images to be output to display 820. The input device 830 can be any of a variety of components, including a video camera, a magnetic tracker monitor, etc. In one such system, selected points are tracked on a person's face. These devices output a stream of information commensurate with the coordinates of a number of selected locations on a person's face as they move (see element 939 in Fig. 9). For example, six locations around the mouth, one on each eyelid, one on each eyebrow, and one on each cheek can all be tracked and output to the computer system of Fig. 8.
In the method of face tracking according to an embodiment of the present invention, a neutral three-dimensional model of a person is created as described above. A test subject (e.g., a person) is used having a set of markers on his/her face (as described above). For each marker, three three-dimensional model faces are created, one for each 3D axis (e.g., the x, y and z axes). Each of these models is the same as the neutral model except that the specific marker is moved a known distance (e.g., one inch or other unit) along one of the axes. Thus, for each marker, there is a contorted version of the neutral image where the marker is moved one unit only along the x-axis; a second image where the marker is moved one unit only along the y-axis; and a third image where the marker is moved one unit only along the z-axis. Deltasets are then created between the neutral image and each of the three contorted versions for each marker.
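This setup step can be sketched in the same pseudocode style, using the declarations of Table IX below. The sketch is for illustration only: distortedFaces is a hypothetical array assumed to already hold the per-marker, per-axis contorted models described above.

////////////////////////////////////////////////////////////
//
// Illustrative sketch of the setup step: for each marker and
// each axis, a deltaset is calculated between the neutral face
// and the model in which that marker was moved one unit along
// that axis. distortedFaces[m][d] is assumed to hold those models.
for (m = 0; m < numMarkers; m++)
{
    for (d = 0; d < numDimensions; d++)
    {
        deltaSet_Calc (&markerDeltaSets[m][d],
                       &neutralFace, &distortedFaces[m][d]);
    }
}
// end of sketch
////////////////////////////////////////////////////////////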
With the deltasets created, the input stream of marker positions is received from the input device 830. The neutral image is then modified with the appropriate deltaset(s) rather than directly with the input positions. If marker data is only in two dimensions, then only two corresponding distorted models are needed (and only two deltasets are created for that marker). Movement of one marker can influence the movement of other points in the neutral model (to mimic real life or as desired by the user). Also, the movement of a marker in one axis may distort the model in more than one axis (e.g., movement of the marker at the left eyebrow in a vertical direction may have vertical and horizontal effects on the model). An example of pseudocode for implementing the input of marker positions is shown in Table IX.
////////////////////////////////////////////////////////////
// This pseudocode is based in part on "C".
// It takes as input:
//   * An array of vector data representing the spatial
//     displacement of a set of facial markers.
//   * An array of DeltaSets set up as described above,
//     with numDimensions DeltaSets for each marker.
//   * A 3D model of a "neutral face", as described above.
// It outputs:
//   * A 3D model of a face which mimics the motion of the
//     actual face with which the markers are associated.
int numMarkers;
int numDimensions;
float markerDisplacements[numMarkers][numDimensions];
DeltaSet markerDeltaSets[numMarkers][numDimensions];
Shape neutralFace;
Shape outputFace;

// numMarkers is the number of discrete locations being
// tracked on the source face. Typically 6-14, but
// under no limitations.
// numDimensions is the number of dimensions reported by
// the markerDisplacements array.
// markerDisplacements is an array of vectors with one vector
// for each marker on the source face. These values should
// be updated once per frame.
// markerDeltaSets is a 2D array of DeltaSets of size
// numMarkers x numDimensions.
// neutralFace is the original, undistorted 3D face model.
// outputFace is a 3D model that will mimic the source face.

////////////////////////////////////////////////////////////
// The main animation loop. Runs once per frame.
MainAnimationLoop ()
{
    outputFace = copy of neutralFace;

    // Loop over each marker and each reported dimension.
    for (m = 0; m < numMarkers; m++)
    {
        for (d = 0; d < numDimensions; d++)
        {
            deltaSet_Apply (&markerDeltaSets[m][d],
                            &outputFace, markerDisplacements[m][d]);
        }
    }
}
// End of main animation loop.
////////////////////////////////////////////////////////////
Table IX
As seen from the above, the neutral face image that is input is modified with the created deltasets to mimic the resultant movements in the face caused by physically moving the attached markers. Without distortion, neutralFace is the original 3D face model and outputFace is a 3D model that mimics the movement of the subject's face. During the main animation loop, which can run once per frame, each marker is analyzed for its position. The resulting displacement of the marker is then applied to the outputFace (which starts as a copy of neutralFace) through the use of the deltaSet_Apply routine discussed above and the deltasets that have been previously created.
RESPONSIVENESS OF THE COMPUTER-GENERATED CHARACTER
According to an embodiment of the present invention, a computer-generated character (or characters) in a multidimensional, computer-generated environment, as described above, can be automatically made to seem aware of objects in that environment.
This apparent "awareness" of the character includes the ability to "look around" for objects in the environment, to turn its eyes, head and the remainder of the body to "focus" on an object once it enters the character's "field of vision", and to create an apparent emotional response to that object. All of this behavior can be automaticaUy created in a computer-generated character using the method described below.
1. ASSIGNMENT OF AN EMOTIONAL VALUE TO ENVIRONMENTAL OBJECTS
The computer-generated character is assigned a list of other objects (or characters) in the environment that the character can be "aware" of. For example, the list could include such objects as a flower, a gun, a chair, etc. This list can also contain information describing how the character is to react emotionally to the object and what factors are to be used for determining the importance of this object relative to the rest of the list. Virtually any variable can be used to determine an importance parameter, for example, proximity, velocity, likeability, etc.
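A minimal sketch of one possible data structure for an entry in this list is shown below. The field names are assumptions made only for illustration; they simply gather the information described above (the associated emotional reaction, a degree of reactivity, and attributes used to compute importance).

////////////////////////////////////////////////////////////
//
// Illustrative sketch only: one entry in a character's list of
// objects it can be "aware" of.
typedef struct {
    int   objectId;     // Which object in the environment
    int   reaction;     // Index of the emotion this object triggers
    float reactivity;   // Inherent emotional reactivity (how exciting it is)
    float likeability;  // Optional factor for importance calculations
    float importance;   // Computed each frame from proximity, velocity, etc.
} awarenessEntry_Type;
// end of sketch
////////////////////////////////////////////////////////////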
2. DETERMINATION OF SEGMENT OF ENVIRONMENT "VISIBLE" TO CHARACTER
In this embodiment of the present invention, the character's "field of vision" is specified to be a pyramid or cone-shaped region radiating from the character's eyes in a direction from the pupils. As used herein, this region will be referred to as the character's "view cone". The environmental position of this view cone is calculated from the position of the character's feet (or base) through the use of user-defined rotation limits and a series of transformations applied to a hierarchical structure representing the person's skeleton.
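One simple way to test whether an object lies inside such a view cone (a sketch only, under the assumption that the cone is described by an apex position, a unit gaze direction, and a half-angle) is to compare the angle between the gaze direction and the direction to the object against the half-angle. The point3 type is an assumed 3D vector type.

////////////////////////////////////////////////////////////
//
// Illustrative sketch only: returns 1 if objectPos lies inside a
// view cone with apex eyePos, unit direction gazeDir, and the
// given half-angle (in radians); 0 otherwise.
int InViewCone (point3 eyePos, point3 gazeDir, float halfAngle, point3 objectPos)
{
    point3 toObject;
    float len, cosAngle;
    toObject.x = objectPos.x - eyePos.x;
    toObject.y = objectPos.y - eyePos.y;
    toObject.z = objectPos.z - eyePos.z;
    len = sqrt (toObject.x*toObject.x + toObject.y*toObject.y + toObject.z*toObject.z);
    if (len == 0.0) return 1;   // Object at the eye position is trivially "seen".
    cosAngle = (toObject.x*gazeDir.x + toObject.y*gazeDir.y + toObject.z*gazeDir.z) / len;
    return (cosAngle >= cos (halfAngle)) ? 1 : 0;
}
// end of sketch
////////////////////////////////////////////////////////////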
3. DETERMINATION OF OBJECT(S) MOST "INTERESTING" TO CHARACTER
The position of each object in the person's list is compared to the view cone to determine its visibility (i.e., is the object in the view cone?). Those objects that lie within the cone are considered visible, and the individual importance parameter for each visible object can then be calculated. For example, if the importance factor is based solely on proximity, the object that is closest to the character will have the highest importance factor. In this example, the object with the highest importance takes precedence. If this value is above a predefined value, then the following actions are taken.
4. PHYSICAL RESPONSE OF CHARACTER TO OBJECT
First, the character is made to track the object, with the objective being to modify the eye and body orientation so that the object is centered within the view cone. The eye can initiate the tracking using a nonlinear velocity profile to change its orientation. For example, the velocity profile can have a "bell" shape, where the eye moves toward the object slowly at first, increases speed to a maximum, then reduces speed to zero (when the object is centered in the cone). Additional joints are then gradually included in an additive process using feedback from the eye (the difference between the eye's current position and the ideal position in the center of the view cone). For example, as the view cone moves closer to the object, the head of the character can begin to move toward the object (followed by the remainder of the body, if desired). During this process, the person's emotional state is modified using the emotional reaction associated with the selected object as defined in the object list (discussed in more detail above). An embodiment of the method of the present invention is shown below in the form of pseudo-code:
For each person in the world {
    For each object in the person's list {
        Check visibility status
        If visible, set object's importance value based on a user-definable
            attribute (proximity, velocity, likeability, etc.)
    }
    If the highest importance value exceeds a threshold value {
        Set eye to track object
        Use current eye position to set head/body
        Set emotional response for person based on tracked object
    }
}
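The nonlinear, "bell"-shaped velocity profile described above can be approximated in many ways; the following sketch is one assumption made purely for illustration, not the profile required by the invention. It eases the eye's orientation toward the target with a smoothstep curve, whose rate of change starts slowly, peaks midway, and falls back to zero at the end.

////////////////////////////////////////////////////////////
//
// Illustrative sketch only: given the fraction t (0.0 to 1.0) of
// the tracking motion completed so far, return the eased fraction
// of the total rotation to apply. The derivative of this curve is
// bell-shaped: slow start, fastest in the middle, slow finish.
float TrackingEase (float t)
{
    if (t <= 0.0) return 0.0;
    if (t >= 1.0) return 1.0;
    return t * t * (3.0 - 2.0 * t);   // smoothstep
}
// end of sketch
////////////////////////////////////////////////////////////

In such a sketch, the eye's orientation at time t would be interpolated between its starting orientation and the orientation that centers the object in the view cone, using TrackingEase(t) as the blend fraction.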

Claims

What is claimed is:
1. A method for transforming a graphical image, comprising:
(a) providing a computer-generated character, said character having a field of vision;
(b) providing an object image proximate to said character; and
(c) determining whether said object image is within the field of vision of said character.
2. The method of claim 1 further comprising: modifying the character so that said object image is centered in said field of vision when said object image is within the field of vision of said character.
3. The method of claim 2 wherein said character is a person including eyes and the character is modified by changing an orientation of the eyes.
4. The method of claim 2 wherein said character is a person including a body and the character is modified by changing an orientation of the body.
5. The method of claim 1 wherein said character is capable of expressing a plurality of emotions, the method further comprising:
(d) retrieving a first set of modification data from a library, said modification data being related to an expression of a first emotion;
(e) generating a multiplication value based on said object image;
(f) multiplying said first set of modification data by said multiplication value; and
(g) modifying said character based on said first set of modification data.
6. A method of modifying a graphical image comprising:
(a) providing a computer-generated character, said character having a field of vision;
(b) providing a plurality of object images proximate to said character; and
(c) determining whether each of said object images is within the field of vision of said character.
7. The method of claim 6 wherein each object image is assigned an importance parameter, the method further comprising:
(d) identifying one of said object images having the highest importance parameter.
8. The method of claim 7 wherein each importance factor is proportional to a proximity of one of said object images to said character.
9. The method of claim 7 wherein each importance factor is proportional to a velocity of one of said object images.
10. The method of claim 7 further comprising:
(d) modifying the character so that said identified object image is centered in said field of vision when said object image is within the field of vision of said character.
11. The method of claim 10 wherein said character is a person including eyes and the character is modified by changing an orientation of the eyes.
12. The method of claim 10 wherein said character is a person including a body and the character is modified by changing an orientation of the body.
13. The method of claim 7 wherein said character is capable of expressing a plurality of emotions, the method further comprising:
(d) retrieving a first set of modification data from a library, said modification data being related to an expression of a first emotion;
(e) generating a multiplication value based on said object image;
(f) multiplying said first set of modification data by said multiplication value; and
(g) modifying said character based on said first set of modification data.
PCT/US1999/016553 1998-07-21 1999-07-21 Method and apparatus to control responsive action by a computer-generated character WO2001037218A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP99937371A EP1177528A1 (en) 1998-07-21 1999-07-21 Method and apparatus to control responsive action by a computer-generated character

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US9352398P 1998-07-21 1998-07-21
US60/093,523 1998-07-21

Publications (1)

Publication Number Publication Date
WO2001037218A1 true WO2001037218A1 (en) 2001-05-25

Family

ID=22239407

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1999/016553 WO2001037218A1 (en) 1998-07-21 1999-07-21 Method and apparatus to control responsive action by a computer-generated character

Country Status (2)

Country Link
EP (1) EP1177528A1 (en)
WO (1) WO2001037218A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114091639A (en) * 2021-11-26 2022-02-25 北京奇艺世纪科技有限公司 Interactive expression generation method and device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5611037A (en) * 1994-03-22 1997-03-11 Casio Computer Co., Ltd. Method and apparatus for generating image
US5659625A (en) * 1992-06-04 1997-08-19 Marquardt; Stephen R. Method and apparatus for analyzing facial configurations and components

Also Published As

Publication number Publication date
EP1177528A1 (en) 2002-02-06

Similar Documents

Publication Publication Date Title
US6147692A (en) Method and apparatus for controlling transformation of two and three-dimensional images
Noh et al. A survey of facial modeling and animation techniques
Deng et al. Computer facial animation: A survey
US6061072A (en) Method and apparatus for creating lifelike digital representations of computer animated objects
US6559849B1 (en) Animation of linear items
US6097396A (en) Method and apparatus for creating lifelike digital representation of hair and other fine-grained images
EP2043049B1 (en) Facial animation using motion capture data
Kouadio et al. Real-time facial animation based upon a bank of 3D facial expressions
WO2004084144A1 (en) System and method for animating a digital facial model
WO2001099048A2 (en) Non-linear morphing of faces and their dynamics
US20070247465A1 (en) Goal-directed cloth simulation
JP2000502823A (en) Computer-based animation production system and method and user interface
Breton et al. FaceEngine a 3D facial animation engine for real time applications
JP2011159329A (en) Automatic 3d modeling system and method
US7477253B2 (en) Storage medium storing animation image generating program
WO2001037218A1 (en) Method and apparatus to control responsive action by a computer-generated character
Neumann et al. NPR Lenses: Interactive tools for non-photorealistic line drawings
Kalberer et al. Lip animation based on observed 3D speech dynamics
Park et al. A feature‐based approach to facial expression cloning
US8077183B1 (en) Stepmode animation visualization
KR20070061252A (en) A method of retargeting a facial animation based on wire curves and example expression models
US6094202A (en) Method and apparatus for creating lifelike digital representations of computer animated objects
Bibliowicz An automated rigging system for facial animation
Cowe Example-based computer-generated facial mimicry
Nunes et al. Talking avatar for web-based interfaces

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 1999937371

Country of ref document: EP

AK Designated states

Kind code of ref document: A1

Designated state(s): JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWP Wipo information: published in national office

Ref document number: 1999937371

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1999937371

Country of ref document: EP