US20040227761A1 - Statistical dynamic modeling method and apparatus - Google Patents

Statistical dynamic modeling method and apparatus

Info

Publication number
US20040227761A1
Authority
US
United States
Prior art keywords
skin
character
pose
response
character model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/438,748
Inventor
John Anderson
Adam Woodbury
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pixar
Original Assignee
Pixar
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pixar filed Critical Pixar
Priority to US10/438,748 priority Critical patent/US20040227761A1/en
Priority to AU2003269986A priority patent/AU2003269986A1/en
Priority to CN03826436.6A priority patent/CN1788282B/en
Priority to PCT/US2003/026546 priority patent/WO2004104935A1/en
Priority to JP2004572200A priority patent/JP4358752B2/en
Priority to EP03751882A priority patent/EP1636759A4/en
Priority to AU2003260051A priority patent/AU2003260051A1/en
Priority to EP03817037A priority patent/EP1639552A4/en
Priority to JP2004572201A priority patent/JP4361878B2/en
Priority to PCT/US2003/026371 priority patent/WO2004104934A1/en
Assigned to PIXAR. Assignment of assignors interest (see document for details). Assignors: ANDERSON, JOHN; WOODBURY, ADAM
Publication of US20040227761A1 publication Critical patent/US20040227761A1/en
Priority to US11/582,704 priority patent/US7515155B2/en
Abandoned legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 - Animation
    • G06T 13/20 - 3D [Three Dimensional] animation
    • G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2210/00 - Indexing scheme for image generation or computer graphics
    • G06T 2210/21 - Collision detection, intersection

Definitions

  • the present invention relates to the field of computer graphics, and in particular to methods and apparatus for animating computer generated characters.
  • the present invention relates to the field of computer graphics.
  • Many computer graphic images are created by mathematically modeling the interaction of light with a three dimensional scene from a given viewpoint. This process, called rendering, generates a two-dimensional image of the scene from the given viewpoint, and is analogous to taking a photograph of a real-world scene.
  • Animated sequences can be created by rendering a sequence of images of a scene as the scene is gradually changed over time. A great deal of effort has been devoted to making realistic looking rendered images and animations.
  • Animation, whether hand-drawn or computer generated, is as much an art as it is a science. Animators must not only make a scene look realistic, but must also convey the appropriate dramatic progression and emotional impact required by the story. This is especially true when animating characters. Characters drive the dramatic progression of the story and establish an emotional connection with the audience.
  • To create artistically effective character animation, an animator often creates a rough version of a scene and then fine-tunes the character animation to create the desired drama and expression of the final scene. This is analogous to a movie director rehearsing a scene with actors to capture the perfect mood for a scene. Because the animator is responsible for the expressiveness of the character animation, it is important that animation tools allow the animator to efficiently fine-tune a character animation and to accurately preview the final form of the animation.
  • a character's appearance is defined by a three-dimensional computer model.
  • the computer model of a character is often extremely complex, having millions of surfaces and hundreds or thousands of attributes.
  • animation tools often rely on armatures and animation variables to define character animation.
  • An armature is a “stick figure” representing the character's pose, or bodily attitude. By moving the armature segments, which are the “sticks” of the “stick figure,” the armature can be manipulated into a desired pose.
  • the animation tools modify the character model so that the bodily attitude of the character roughly mirrors that of the armature.
  • Animation variables are another way of defining the character animation of a complex character model.
  • An animation variable is a parameter used by a function to modify the character models.
  • Animation variables and their associated functions are used to abstract complicated modifications to a character model to a relatively simple control.
  • an animation variable can define the degree of opening of a character's mouth.
  • the value of the animation variable is used to determine the position of the many different parts of the character's armature needed to open the character's mouth to the desired degree.
  • the animation tools then modify the character model according to the final posed armature to create a character model with an open mouth.
  • soft body characters can be animated using a physical simulation approach.
  • the character model is processed by a material physics simulation to create a physically realistic looking soft body object.
  • This approach is extremely time consuming to set up, often requiring modelers to define not only the exterior of a character, such as the skin, but also the underlying muscles and skeleton.
  • processing the character model for each pose created by the animator is extremely computationally expensive, often requiring hours or even days to compute the character model's deformation for a short animated sequence.
  • Animated characters also often collide or interact with other objects or characters in a scene.
  • an animated character will need to be deformed around the colliding object.
  • Realistic character deformation in response to collisions is essential in animating collisions, especially when the character is a soft body object.
  • Prior character posing techniques such as kinematic transforms cannot realistically deform character models in response to collisions. Instead, animators must manually deform the shape of the character model.
  • Physical simulation techniques can be used to deform character models in response to collisions; however, as discussed above, physical simulation techniques are very time-consuming to set up and compute. Because the time requirements of physical simulation techniques are so high, it is difficult for animators to fine-tune collision animations to convey the appropriate dramatic impact.
  • a method for animating soft body characters has a first character preparation phase followed by a second character animation phase.
  • the skin deformation of the character model is determined for each of a set of basis poses.
  • the character deformation phase also determines the skin deformation of a character model at a number of skin contact points in response to impulse collisions.
  • the skin deformation from posing is referred to as the skin mode response
  • the skin deformation from impulse collisions is referred to as the skin impulse response
  • the set of basis poses, the skin mode response, and the skin impulse response are used to create a final posed character. Regardless of the desired character pose, the character animation phase uses the same set of basis poses, skin mode response, and the skin impulse response. Therefore, the set of basis poses, the skin mode response, and the skin impulse response only need to be determined once for a character model.
  • a method for animating a character model includes determining a basis set from a set of character poses and determining a set of skin responses for the character model corresponding to the basis set.
  • a desired character pose is projected onto the basis set to determine a set of basis weights.
  • the basis weights are applied to the set of skin responses to create a skin pose response, and the skin pose response is projected onto the basis set to create the posed character model.
  • the steps of projecting a character pose, applying the set of basis weights, and projecting the set of skin responses are repeated for a second desired character pose to create a second posed character model.
  • the set of character poses includes poses from a training set.
  • the set of character poses includes randomly created poses.
  • an armature is used to define the set of character poses as well as the desired character pose.
  • an animation variable defines at least part of a desired pose.
  • determining the skin response includes applying a set of displacements from a pose in the basis set to a portion of the character model and minimizing a function of the displacement over the entire character model.
  • the function is an elastic energy function.
  • the function is minimized over a set of sample points associated with the character model.
  • An embodiment of the method transforms a character pose into a set of reference frames associated with a character model. For each reference frame, a skin pose response of the character model is created in response to the character pose. The embodiment constructs a composite skin response of the character model from the skin pose responses of each reference frame.
  • a further embodiment constructs a composite skin response by combining a portion of the skin response of a first reference frame with a portion of the skin response of a second reference frame.
  • the portion of the skin response of the first reference frame and the portion of the skin response of the second reference frame can correspond to two, at least partially overlapping regions of the character model.
  • the portion of the skin response of the first reference frame and the portion of the skin response of the second reference frame correspond to two different regions of the character model.
  • Another embodiment combines the portion of the skin response of a first reference frame and the portion of the skin response of the second reference frame according to a set of frame weights defining the influence of the skin responses of the first and second reference frames on the composite skin response. Yet another embodiment determines a set of frame weights by diffusing an initial set of frame weight values through the character model.
  • FIG. 1 illustrates an example computer system capable of implementing an embodiment of the invention.
  • FIGS. 2A and 2B illustrate an example character and an example armature used for posing the example character.
  • FIG. 3 is a block diagram illustrating two phases of a method of animating a character according to an embodiment of the invention.
  • FIG. 4 is a block diagram of a character preparation phase for animating a character according to an embodiment of the invention.
  • FIG. 5 illustrates a block diagram of a method for determining the skin mode response of a character according to an embodiment of the invention.
  • FIGS. 6A, 6B, 6C, 6D, and 6E illustrate the determination of a skin mode response of an example character in an example pose according to an embodiment of the invention.
  • FIG. 7 illustrates a block diagram of a method for weighting a character model with respect to a set of coordinate reference frames according to an embodiment of the invention.
  • FIGS. 8A, 8B, and 8C illustrate the determination of a set of coordinate reference frame weights of an example character model according to an embodiment of the invention.
  • FIG. 9 illustrates a block diagram of a character animation phase for constructing a posed character model according to an embodiment of the invention.
  • FIGS. 10A, 10B, 10C, and 10D illustrate the construction of a posed character model from an example armature and an example character model according to an embodiment of the invention.
  • FIG. 11 illustrates a block diagram of a method for determining the skin impulse response of a character model according to an embodiment of the invention.
  • FIGS. 12A, 12B, and 12C illustrate the determination of a skin impulse response of a portion of an example character model according to an embodiment of the invention.
  • FIG. 13 illustrates a block diagram of a method for determining the collision response of a character model according to an embodiment of the invention.
  • FIGS. 14A, 14B, 14C, 14D, 14E, and 14F illustrate the determination of the skin collision response of a portion of a character model according to an embodiment of the invention.
  • FIG. 15 illustrates a block diagram of a character animation phase for constructing a posed character model according to a further embodiment of the invention.
  • FIG. 1 illustrates an example computer system 100 capable of implementing an embodiment of the invention.
  • Computer system 100 typically includes a monitor 110 , computer 120 , a keyboard 130 , a user input device 140 , and a network interface 150 .
  • User input device 140 includes a computer mouse, a trackball, a track pad, graphics tablet, touch screen, and/or other wired or wireless input devices that allow a user to create or select graphics, objects, icons, and/or text appearing on the monitor 110 .
  • Embodiments of network interface 150 typically provide wired or wireless communication with an electronic communications network, such as a local area network, a wide area network, for example the Internet, and/or virtual networks, for example a virtual private network (VPN).
  • Computer 120 typically includes components such as one or more general purpose processors 160 , and memory storage devices, such as a random access memory (RAM) 170 , disk drives 180 , and system bus 190 interconnecting the above components.
  • RAM 170 and disk drive 180 are examples of tangible media for storage of data, audio/video files, computer programs, applet interpreters or compilers, virtual machines, embodiments of the herein described invention including geometric scene data, object data files, shader descriptors, a rendering engine, output image files, texture maps, and displacement maps.
  • Further embodiments of computer 120 can include specialized audio and video subsystems for processing and outputting audio and graphics data.
  • tangible media include floppy disks; removable hard disks; optical storage media such as DVD-ROM, CD-ROM, and bar codes; non-volatile memory devices such as flash memories; read-only memories (ROMs); battery-backed volatile memories; and networked storage devices.
  • FIGS. 2A and 2B illustrate an example character and an example armature used for posing the example character.
  • Character 205 is a three-dimensional computer model of a soft-bodied object, shown in two dimensions for clarity. Although character 205 is shown to be humanoid in shape, character 205 may take the form of any sort of object, including plants, animals, and inanimate objects with realistic and/or anthropomorphic attributes. Character 205 can be created in any manner used to create three-dimensional computer models, including manual construction within three-dimensional modeling software, procedural object creation, and three-dimensional scanning of physical objects.
  • Character 205 can be comprised of a set of polygons; voxels; higher-order curved surfaces, such as Bezier surfaces or non-uniform rational B-splines (NURBS); constructive solid geometry; and/or any other technique for representing three-dimensional objects. Additionally, character 205 can include attributes defining the outward appearance of the object, including color, textures, material properties, transparency, reflectivity, illumination and shading attributes, displacement maps, and bump maps.
  • Character 205 is animated through armature 210 .
  • Armature 210 includes one or more armature segments.
  • the armature segments can be connected or separate, as shown in FIG. 2A.
  • Animators manipulate the position and orientation of the segments of armature 210 to define a pose for the character.
  • a pose is a set of armature positions and orientations defining the bodily attitude of character 205 .
  • Armature segments can be constrained in size, position, or orientation, or can be freely manipulated by the animator.
  • the number of armature segments can vary according to the complexity of the character, and a typical character can have an armature with hundreds or thousands of segments.
  • the number and position of armature segments is similar to that of a “skeleton” for a character; however, armature segments can also define subtle facial expressions and other character details not necessarily associated with bones or other anatomical features.
  • Although the armature segments in the armature 210 of FIG. 2A are comprised of a set of points, in alternate embodiments of the invention the armature segments can be comprised of a set of surfaces and/or a set of volumes. As the armature 210 is posed by the animator, the bodily attitude of character 205 roughly mirrors that of the armature 210 .
  • Character 205 is animated by creating a sequence of frames, or still images, in which the character 205 is progressively moved from one pose to another. Character 205 can also be translated, rotated, scaled, or otherwise manipulated as a whole between frames. Animators can manually create the poses of a character for each frame in the sequence, or create poses for two or more key frames, which are then interpolated by animation software to create the poses for each frame. Poses can also be created automatically using functions, procedures, or algorithms. Animation variables can be used as parameters for one or more functions defining a pose. Character 205 and its associated armature 210 are shown in the rest pose, or the default bodily attitude of the character. In an embodiment, the rest pose of a character is determined by the initial configuration of the character model and the armature.
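
By way of illustration, the following sketch (in Python with NumPy; the function name and array layout are assumptions for illustration, not part of the patent disclosure) shows the kind of linear key-frame interpolation described above. Production animation software typically uses spline interpolation instead, but the principle is the same: a pose vector for an in-between frame is blended from the nearest authored key poses.

    import numpy as np

    def interpolate_pose(key_frames, key_poses, frame):
        """Linearly interpolate an armature pose between authored key frames.

        key_frames: sorted list of frame numbers that have authored poses.
        key_poses:  list of pose vectors (armature segment positions) at those frames.
        frame:      the frame for which an in-between pose is needed.
        """
        key_poses = [np.asarray(p, dtype=float) for p in key_poses]
        i = np.searchsorted(key_frames, frame)
        if i == 0:
            return key_poses[0]                 # before the first key frame
        if i >= len(key_frames):
            return key_poses[-1]                # after the last key frame
        t = (frame - key_frames[i - 1]) / (key_frames[i] - key_frames[i - 1])
        return (1 - t) * key_poses[i - 1] + t * key_poses[i]
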
  • FIG. 2B illustrates a character 220 after being manipulated into a pose by the animator.
  • the animator has moved the arm segments of armature 225 .
  • the character 220 assumes a pose with its arms raised. More complicated poses can be created by manipulating additional armature segments.
  • the character is processed to mirror the bodily attitude of the armature.
  • the present invention allows for interactive frame rates and realistic posing of soft body characters by dividing the animation process into two phases.
  • FIG. 3 is a block diagram 300 illustrating two phases of a method of animating a character according to an embodiment of the invention.
  • the first phase 305 is a character preparation phase.
  • the character preparation phase is relatively computationally expensive and is performed in advance of any animation.
  • the character preparation phase 305 creates a set of mode data for the character defining the deformation of the character to numerous poses.
  • In character animation phase 310 , animators create animated sequences for characters by defining the armature pose of a character in a frame.
  • a final posed character is created from the armature pose defined by the animator and the set of mode data previously created in character preparation phase 305 .
  • An embodiment of the invention creates a final posed character from an armature pose and the set of mode data in real-time, allowing the animator to preview the result.
  • Regardless of the desired pose, the character animation phase uses the same set of mode data to create the final posed character. Therefore, the character preparation phase 305 only needs to be performed once for a character.
  • the character animation phase 310 is repeated to create a final posed character for each armature pose in an animated sequence.
  • FIG. 4 is a block diagram 400 of the character preparation phase for animating a character according to an embodiment of the invention.
  • Step 405 creates a basis from a set of sample armature positions.
  • a set of sample armature positions is created for an armature associated with a character.
  • the set of sample armature positions includes poses from a training set defining typical actions of a character.
  • the set of sample armature positions might include armature poses associated with actions such as walking, running, grasping, jumping, and climbing.
  • the set of sample armature positions are programmatically created. Sample armature positions can be created procedurally by selecting one or more armature segments and manipulating these segments to new positions or orientations.
  • each armature segment is selected in turn and moved one unit in a given dimension to create a sample armature position.
  • the total number of sample armature positions in the set will be three times the number of armature segments.
  • armature segments adjacent to the selected armature segment are also repositioned according to an elastic model as each sample armature position is created.
  • a sample armature position takes into consideration constraints on the armature segments. For example, an armature segment may have a limited range of motion.
  • Each sample armature position is described by a vector defining the position of the armature segments.
  • the vector defines the position of armature segments relative to their position in a rest or initial position.
  • the vectors of each sample armature position are combined to form a matrix containing the set of sample armature positions for the armature.
  • a singular value decomposition of this matrix is calculated to find a set of basis functions (or modes) for the armature.
  • other methods of calculating a set of basis functions, such as a canonical correlation, can also be used.
  • the set of basis functions compactly defines a “pose space,” in which any pose can be approximately represented as the weighted sum of one or more of the basis functions.
  • If the resulting set of basis functions is not an orthonormal basis, the set of basis functions is ortho-normalized so that each basis function has a magnitude of 1 and is perpendicular to every other basis function.
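
A minimal sketch of this basis construction follows (Python with NumPy; the matrix layout, function names, and the use of numpy.linalg are illustrative assumptions rather than the patent's implementation). Each row of the input matrix is one sample armature position expressed as displacements from the rest pose; the singular value decomposition yields the basis functions, which are then re-orthonormalized.

    import numpy as np

    def build_pose_basis(sample_positions, num_modes=None):
        """Build an orthonormal pose basis from sample armature positions.

        sample_positions: (num_samples, 3 * num_segments) array; each row holds
        armature-segment displacements relative to the rest pose.
        Returns a (num_modes, 3 * num_segments) array whose rows are orthonormal
        basis functions (modes) spanning the pose space.
        """
        A = np.asarray(sample_positions, dtype=float)
        # Right singular vectors of the sample-pose matrix span the pose space.
        _, _, vt = np.linalg.svd(A, full_matrices=False)
        if num_modes is not None:
            vt = vt[:num_modes]                 # keep the most significant modes
        # SVD rows are already orthonormal; re-orthonormalize defensively
        # (e.g. after truncation or manual edits) with a QR factorization.
        q, _ = np.linalg.qr(vt.T)
        return q.T

    def project_pose(pose_vector, basis):
        """Express an armature pose as weights over the basis functions."""
        return basis @ pose_vector

Any pose can then be approximated as basis.T @ project_pose(pose, basis), i.e. a weighted sum of the basis functions, which is the "pose space" property relied on in the character animation phase.
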
  • a skin mode response is determined for each of the sample armature position basis functions in step 410 .
  • a skin mode response is the deformation of the surface of the character in response to the movement of the armature to a sample armature position from its rest pose.
  • FIG. 5 illustrates a block diagram 500 of a method for determining the skin mode response of a character as called for by step 410 according to an embodiment of the invention.
  • the character model and its armature are discretized to create a set of sample points.
  • the character model is discretized into a three-dimensional grid.
  • the grid points within the character model or adjacent to the armature are the set of sample points.
  • the character model is discretized into a set of tetrahedral cells.
  • a set of tetrahedrons are fitted within the character model and around the armature. The vertices of the tetrahedrons are the set of sample points.
  • FIG. 6A shows a character model 603 and its associated armature 605 discretized with a three-dimensional grid 607 .
  • the character model 603 and its armature 605 are in the rest position.
  • grid 607 is a three-dimensional grid.
  • The density of grid 607 , i.e. the number of grid cubes per unit of volume, shown in FIG. 6A is for illustration purposes only.
  • a typical grid forms a bounding box around the character approximately 120 cubes high, 50 cubes wide, and 70 cubes deep. These dimensions will vary according to the height, width, and depth of the character model 603 .
  • the density of the grid may vary over different portions of the character 603 to ensure accuracy in more complicated portions of the character, for example, the character's face and hands. It should be noted that the grid 607 not only surrounds the character model 603 , but also fills the interior of character model 603 as well. In a further embodiment, grid elements completely outside the character model 603 are discarded, while grid elements either partially or completely inside the character model are retained for determining the skin mode response. This reduces the processing and memory requirements for determining the skin mode response.
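
As a rough illustration of this discretization step, the sketch below (Python with NumPy; the inside test, function names, and parameters are assumptions, not the patent's implementation) builds a regular grid over the character's bounding box and keeps only the grid points inside the model, discarding the rest.

    import numpy as np

    def discretize_model(inside_fn, bounds_min, bounds_max, resolution):
        """Create grid sample points for a character model.

        inside_fn(point) -> bool is assumed to report whether a point lies inside
        (or on) the character model; bounds_min/bounds_max are opposite corners of
        the model's bounding box; resolution is the number of grid points along
        each axis, e.g. (50, 120, 70).
        Returns the retained sample points as an (n, 3) array.
        """
        axes = [np.linspace(bounds_min[d], bounds_max[d], resolution[d]) for d in range(3)]
        points = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
        # Grid points completely outside the character model are discarded to cut
        # memory use and solver cost; the retained points become the sample points.
        keep = np.array([inside_fn(p) for p in points])
        return points[keep]
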
  • FIG. 6B illustrates a sample armature position associated with a basis function for an example armature 625 and its associated character model 623 .
  • the armature segments 627 and 629 are positioned into a new position.
  • Displacement vectors 631 and 633 define the displacement of the armature segments 627 and 629 , respectively, from the rest pose.
  • Outline 635 illustrates the portion of the character model 623 affected by the armature displacement from the rest position into the sample armature position.
  • the displacement vectors from the sample armature positions are assigned to sample points adjacent to armature segments.
  • FIG. 6C illustrates the assignment of displacement vectors to sample points adjacent to armature segments.
  • FIG. 6C illustrates a portion 640 of a character model, its associated armature 642 , and the surrounding grid 641 . Armature segments 643 and 645 are shown in their rest pose. The armature displacement vectors, 647 and 649 , are associated with armature segments 643 and 645 respectively.
  • the sample points adjacent to armature displacement vector 647 are each assigned a displacement vector, illustrated by the set of displacement vectors 651 .
  • the values of the displacement vectors are computed so that the weighted sum of the set of grid displacement vectors 651 is equal to the armature displacement vector 647 .
  • a set of displacement vectors 653 are assigned to the sample points adjacent to armature displacement vector 649 .
  • Displacement vectors are computed for all sample points adjacent to any portion of any armature segments. If armature displacement vectors are only defined for the endpoints of an armature segment, the armature displacement vectors are interpolated along the length of the armature segment. The interpolated armature displacement vector is then used to create a set of displacement values for the sample points adjacent to each portion of the armature.
  • each armature displacement vector has eight adjacent displacement vectors.
  • each armature displacement vector has four adjacent displacement vectors.
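
The sketch below illustrates one plausible way to spread an armature displacement over the eight surrounding grid sample points using trilinear weights, so that the distributed grid displacement vectors sum back to the armature displacement (Python with NumPy; the weighting scheme and names are assumptions for illustration, as the patent does not specify this particular interpolation).

    import numpy as np

    def distribute_displacement(point, cell_origin, cell_size, displacement):
        """Spread an armature displacement over the eight corners of its grid cell.

        point:        armature point location (inside the cell).
        cell_origin:  coordinates of the cell's minimum corner.
        cell_size:    edge length of the cubic grid cell.
        displacement: armature displacement vector at the point.
        Returns a list of ((dx, dy, dz), vector) pairs, one per cell corner; the
        trilinear weights sum to 1, so the vectors sum to the input displacement.
        """
        t = (np.asarray(point, float) - np.asarray(cell_origin, float)) / cell_size
        contributions = []
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    w = ((t[0] if dx else 1 - t[0]) *
                         (t[1] if dy else 1 - t[1]) *
                         (t[2] if dz else 1 - t[2]))
                    contributions.append(((dx, dy, dz), w * np.asarray(displacement, float)))
        return contributions
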
  • the skin mode response which is the deformation of the skin of the character model, is computed using the displacement vectors assigned in step 510 as initial input values.
  • the skin mode response is computed by determining the value of an elastic energy function over every sample point inside the character body.
  • $E = V \left| \frac{\partial q_x}{\partial x} + \frac{\partial q_y}{\partial y} + \frac{\partial q_z}{\partial z} \right|^2 + S \left[ \left( \frac{\partial q_x}{\partial y} + \frac{\partial q_y}{\partial x} \right)^2 + \left( \frac{\partial q_x}{\partial z} + \frac{\partial q_z}{\partial x} \right)^2 + \left( \frac{\partial q_y}{\partial z} + \frac{\partial q_z}{\partial y} \right)^2 \right]$
  • $q_{xyz}(x,y,z)$ is the displacement of a sample point from its rest position of $(x,y,z)$.
  • V is a parameter denoting resistance of the model to volume change and S is a parameter denoting resistance to internal shear.
  • the values of V and S can be varied to change the deformation characteristics for the character. Characters that are very soft or “squishy” have low values of V and S, while characters that are relatively more rigid will have larger values of V and S.
  • Material behavior can be represented in general by a Hamiltonian dynamics system, and any type of Hamiltonian function can be used as an energy function in step 515 .
  • the energy function includes local terms, which change the energy of the system in response to local deformations, such as shown in the example above, and additional global terms, which change the energy of the system in response to changes to the character model as a whole, such as global volume preservation terms.
  • Evaluating the energy function at each sample point yields a system of equations representing the elastic energy of the entire character model. This system of equations is minimized over the set of sample points.
  • the minimization of the elastic energy function can be performed using a numerical solver to find the value of q xyz , the position offset, for each sample point.
  • an elliptic numerical solver is used to minimize the energy function.
  • Alternate embodiments can use conjugate gradient, multigrid, or Jacobi solvers.
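
The following sketch shows the shape of this minimization step (Python with NumPy/SciPy; the simplified spring-like quadratic energy, the sparse-matrix assembly, and all names are assumptions standing in for the patent's volume and shear functional, but the solver structure, fixing the armature-adjacent sample points and solving for the rest with a conjugate gradient solver, is the same idea).

    import numpy as np
    from scipy.sparse import lil_matrix, csr_matrix
    from scipy.sparse.linalg import cg

    def solve_skin_mode(points, neighbors, fixed, stiffness=1.0):
        """Solve for sample-point offsets minimizing a simplified elastic energy.

        points:    (n, 3) rest positions of the sample points (only n is used here).
        neighbors: iterable of (i, j) index pairs of adjacent sample points.
        fixed:     dict {sample_point_index: displacement_vector} for the points
                   adjacent to the armature (the initial input displacements).
        Returns an (n, 3) array of position offsets q for every sample point.
        """
        n = len(points)
        K = lil_matrix((n, n))
        for i, j in neighbors:                  # assemble a Laplacian-like stiffness
            K[i, i] += stiffness
            K[j, j] += stiffness
            K[i, j] -= stiffness
            K[j, i] -= stiffness
        K = csr_matrix(K)

        fixed_idx = np.array(sorted(fixed))
        free_idx = np.array([i for i in range(n) if i not in fixed])
        q = np.zeros((n, 3))
        for i, d in fixed.items():
            q[i] = d                            # boundary conditions from the armature

        # Minimizing 0.5 * q^T K q with some entries fixed reduces to solving
        # K_ff q_free = -K_fc q_fixed, one spatial axis at a time.
        K_ff = K[free_idx][:, free_idx]
        K_fc = K[free_idx][:, fixed_idx]
        for axis in range(3):
            rhs = -K_fc @ q[fixed_idx, axis]
            q_free, _ = cg(K_ff, rhs)
            q[free_idx, axis] = q_free
        return q

The skin mode response is then read off as the offsets q at the sample points adjacent to the model skin.
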
  • the skin mode response is the set of position offsets at sample points adjacent to the skin of the model.
  • FIG. 6D illustrates an example of a portion of a skin mode response of a character model for a basis function.
  • a portion 660 of a character model is shown in detail.
  • the set of displacement vectors 663 describe the deformation of a portion of the character's skin. In this example, the skin bulges outward around the “kneecap” of the character.
  • the set of displacement vectors 665 describe the deformation of a portion of the character's skin behind the “knee.” In this example, the skin creases inward just behind the knee, but bulges outwards just above and below the knee.
  • displacement vectors are computed for all sample points adjacent to the skin of the character model.
  • grid displacement vectors that are zero or very small in value, indicating very little deformation at a skin point from a given basis function, are truncated.
  • FIG. 6E illustrates the skin mode response of FIG. 6D constructed onto the model skin. This figure is presented to clarify the effects of the skin mode response of FIG. 6D on the appearance of the character model. As discussed below, an embodiment of the invention projects the skin mode response on to the set of basis functions to create a more compact representation. In the example of FIG. 6E, a portion 680 of the character model is shown in detail. The skin 685 bulges outward and inward as a result of the displacement created by the basis function. In this example, the skin mode response presents a realistic looking representation of the deformation of a character's leg as its knee is bent.
  • the present invention determines a realistic skin mode response directly from displacement introduced by the armature basis function. And unlike kinematic transformation techniques, there is no need to explicitly associate skin points with one or more armature segments. Instead, realistic skin deformation automatically results from the displacement of the underlying armature. This decreases the time and effort needed to create character models compared with prior techniques.
  • an outline 690 of a character surface kinematically transformed by an armature pose is also shown.
  • the kinematically transformed model skin appears rigid and mechanical when compared with the skin mode response of the present example.
  • In step 520 , the process of determining a skin mode response is repeated for each basis function to create a corresponding set of skin mode responses for the set of basis functions.
  • In step 525 , the skin mode responses are projected onto the set of basis functions to create a compact representation of the set of skin modes.
  • In step 525 , the positional offsets for sample points adjacent to the armature segments are compared with their original values following the computation of each skin mode. This is done due to the effects of the deformed character model “pushing back” on the armature segments and the effects of separate armature segments pushing into each other. If the sample points adjacent to the armature segments have changed, a new basis function is computed from the set of modified sample points. The new basis function replaces its corresponding original basis function in the set of basis functions. The modified set of basis functions is ortho-normalized, and then the skin modes are projected onto the modified, ortho-normalized set of basis functions for storage. Following the determination of the skin mode responses for the set of basis functions, the unused positional offsets, which are the positional offsets not adjacent to the skin of the character model, are discarded.
  • step 415 determines a set of frame weights for the character model skin.
  • the set of frame weights are used in the character animation phase to correct for undesired shearing effects introduced by large rotations of portions of the character model.
  • FIG. 7 illustrates a block diagram of a method for weighting a character model with respect to a set of coordinate reference frames according to an embodiment of the invention.
  • a set of coordinate reference frames are attached to the segments of the armature.
  • a coordinate reference frame defines a local coordinate system for an armature segment and the adjacent portions of the character model.
  • a coordinate reference frame is attached to each armature segment.
  • several armature segments may share the same coordinate reference frame.
  • a coordinate reference frame is composed of four vectors: a first vector defining the origin or location of the coordinate reference frame and three vectors defining the coordinate axes of the coordinate reference frame.
  • FIG. 8A illustrates the attachment of coordinate reference frames to an example armature as called for by step 705 .
  • armature 802 is associated with a number of coordinate reference frames, including coordinate reference frames 804 , 806 , and 808 .
  • Each reference frame is attached, or positioned, near an armature segment.
  • a coordinate reference frame is positioned at the end, or joint, of an armature segment.
  • in other embodiments, coordinate reference frames can be positioned anywhere along an armature segment.
  • coordinate reference frame 804 is positioned near the center of the head of the character armature.
  • Coordinate reference frame 806 is positioned at the shoulder joint of the armature 802 .
  • Coordinate reference frame 808 is positioned at the knee joint of the armature 802 .
  • the armature and the character model are discretized to create a set of sample points. Similar to the discretization discussed above for determining skin mode responses, one embodiment creates a set of sample points from a three-dimensional grid. An alternate embodiment discretizes the character model and armature using a set of tetrahedral cells.
  • a set of initial frame weights are assigned to the sample points adjacent to each coordinate reference frame.
  • a frame weight defines the influence of a coordinate reference frame on a sample point. As discussed below, each sample point can be influenced by more than one coordinate reference frame, and therefore can have more than one frame weight.
  • the sample points adjacent to a reference frame are initialized with a frame weight of 1. The frame weights of the other sample points that are not adjacent to any of the reference frames are undefined at this stage.
  • FIG. 8B illustrates an example of the assignment of initial frame weights in a portion of an example character model.
  • FIG. 8B shows a portion of a character model 810 and a portion of a set of sample points 812 created by a three-dimension grid.
  • coordinate reference frames 814 and 816 are positioned on armature segments.
  • the adjacent sample points are assigned a frame weight of 1.
  • sample points 818 are assigned a frame weight of 1 with respect to coordinate reference frame 814
  • sample points 820 are assigned a frame weight of 1 with respect to coordinate reference frame 816 .
  • the frame weights of the surrounding sample points are determined from the initial frame weights.
  • a spatial diffusion function is used to calculate the frame weights for sample points.
  • the initial frame weights values are diffused outward from their initial sample points to the surrounding sample points. As frame weights spread to distant sample points, the frame weight values gradually decrease in value.
  • The function $w_{xyz}(x,y,z)$ is the frame weight associated with a sample point with respect to a given coordinate reference frame.
  • D is a diffusion coefficient defining the rate of diffusion. In an embodiment, D is uniform in every direction. In an alternate embodiment, the value of D varies according to direction of diffusion. In this alternate embodiment, a variable diffusion coefficient is useful in defining sharp transitions between reference frames at armature joints. In a further embodiment, the diffusion coefficient is selected to be consistent with the diffusion of shear stress in the character model.
  • Applying the spatial diffusion function at each sample point yields a system of equations representing the diffusion of frame weights through the entire character model. This system of equations is solved over the set of sample points to find the value of one or more frame weights, $w_{xyz}$, for each sample point. If a sample point is influenced by multiple coordinate reference frames, the sample point will have a corresponding set of frame weights defining the degree of influence from each associated coordinate reference frame. The set of frame weights for each sample point are normalized so that the sum of the frame weights for a sample point is 1.
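
A rough sketch of this diffusion-and-normalization step follows (Python with NumPy; the explicit Jacobi-style relaxation, the rate parameter standing in for the diffusion coefficient D, and the names are assumptions for illustration, not the patent's solver).

    import numpy as np

    def diffuse_frame_weights(num_points, neighbors, seeds, num_frames,
                              iterations=200, rate=0.5):
        """Diffuse initial frame weights through the sample points and normalize.

        neighbors:  adjacency list; neighbors[i] is the list of sample points
                    adjacent to sample point i.
        seeds:      dict {sample_point_index: reference_frame_index} for points
                    adjacent to a coordinate reference frame (initial weight 1).
        num_frames: number of coordinate reference frames.
        Returns a (num_points, num_frames) array of normalized frame weights.
        """
        w = np.zeros((num_points, num_frames))
        for i, f in seeds.items():
            w[i, f] = 1.0
        for _ in range(iterations):             # simple Jacobi-style diffusion
            new_w = w.copy()
            for i in range(num_points):
                if i in seeds or not neighbors[i]:
                    continue                     # keep the seeded weights pinned
                avg = np.mean([w[j] for j in neighbors[i]], axis=0)
                new_w[i] = (1 - rate) * w[i] + rate * avg
            w = new_w
        # Normalize so the frame weights at each sample point sum to 1.
        totals = w.sum(axis=1, keepdims=True)
        totals[totals == 0.0] = 1.0              # points the diffusion never reached
        return w / totals
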
  • an optimal set of frame weights are determined using a full non-linear model accounting for rotational effects.
  • a non-linear solution of the character model's skin mode response is determined for each of the set of armature basis functions. Unlike the linear solutions computed in step 415 , the non-linear solution does not assume the displacement vectors are infinitesimally small.
  • the skin mode of each non-linear solution is compared with the corresponding linear skin mode response for an armature basis function as determined in step 415 . From these comparisons of the non-linear skin mode responses with their corresponding linear skin mode responses, an optimal set of frame weights are determined for each sample point.
  • the frame weights determined in steps 715 and 720 are manually adjusted for optimal aesthetic results.
  • the frame weights for sample points located near joints on character models can be fine-tuned so that the deformation of skin points is aesthetically pleasing.
  • FIG. 8C illustrates an example of the set of frame weights determined for a portion of the sample points adjacent to the model skin.
  • FIG. 8C shows a portion of a character model 822 .
  • Coordinate reference frames 824 and 826 are shown as well.
  • a portion of the set of frame weights determined for the sample points adjacent to the model skin is highlighted with open circles in FIG. 8C.
  • Each sample point has one or more frame weights.
  • sample point 828 may have a frame weight of 0.9 with respect to coordinate reference frame 826 , and a frame weight of 0.1 with respect to coordinate reference frame 824 .
  • Sample point 830 may have a frame weight of 0.1 with respect to coordinate reference frame 826 , and a frame weight of 0.9 with respect to coordinate reference frame 824 .
  • Sample point 832 may have a frame weight of 0.999 with respect to coordinate reference frame 826 and a frame weight of 0.001 with respect to coordinate reference frame 824 .
  • In step 725 , the set of reference frames and their associated frame weights are stored for use during the character animation phase. This completes step 415 and the character preparation phase. Following the completion of the character preparation phase, the character is ready to be used by an animator in the character animation phase.
  • the character animation phase uses the set of basis functions, the associated set of skin modes, and the set of frame weights determined from method 400 to create a final posed character.
  • FIG. 9 illustrates a block diagram of a method 900 for animating a character in the character animation phase according to an embodiment of the invention.
  • a posed armature defines the bodily attitude of the desired final posed character.
  • the posed armature can be created manually by an animator, by interpolating between key frames, or procedurally using one or more animation variables, functions, procedures, or algorithms.
  • the posed armature is compared with the armature in the rest position to determine a pose vector defining the differences in positions and orientations between the armature segments in the posed and rest positions.
  • the set of coordinate reference frames attached to the armature follow their associated armature segments from the rest position to the posed position.
  • a set of vectors defining the position and orientation of the set of coordinate reference frames are also determined in step 905 .
  • the pose vector is transformed into each of the coordinate spaces defined by the set of coordinate reference frames in their posed positions.
  • the transformed pose vector is projected on to the armature basis functions.
  • the pose vector is transformed into a set of basis function weights.
  • the basis function weights redefine the pose vector as a weighted sum of the set of basis functions.
  • a set of basis function weights is created for each coordinate reference frame from the transformed pose vector.
  • the set of basis function weights determined in step 910 are applied to the skin modes in each coordinate reference frame.
  • a skin mode was previously created during the character preparation phase for each of the basis functions.
  • the basis function weights associated with each coordinate reference frame are applied to the skin mode of the basis function.
  • the resulting set of skin modes, each weighted by its associated basis function weight, are summed to create a skin pose response.
  • the skin pose response is the deformation of the character model in response to the posed armature.
  • a skin pose response can be determined for any possible character pose, regardless of whether the desired pose was explicitly part of the original pose set.
  • In step 915 , a separate skin pose response is created for each of the coordinate reference frames.
  • Step 920 skips the determination of a skin pose response in a reference frame for portions of the character skin where the frame weight is zero or negligible.
  • the skin response is represented in the form of modes that take the form of spatial shape offsets.
  • portions of the model can be rotated away from their initial orientations. If the rotation is relatively large with respect to an adjacent portion of the character model, an undesirable shearing effect can be introduced. To correct for this shearing effect, separate skin pose responses are determined for each coordinate reference frame.
  • the skin pose responses determined in each reference frame are combined to create a single composite skin pose response that does not include any shearing effects.
  • the set of skin pose responses are transformed from their associated coordinate reference frames to the global reference frame. Once all of the skin pose responses are in the same coordinate system, the skin poses are summed according to the set of frame weights previously determined in the character preparation phase. Each skin point is the weighted sum of the set of skin pose responses and the corresponding frame weights associated with the skin point. In an embodiment, these skin responses are summed in their basis-projected form. The result is a composite skin response.
  • the composite skin response is constructed from the basis-projected form back to physical form.
  • the weighted sum of the composite skin response and the set of basis functions creates the final posed character.
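
Putting steps 905 through 925 together, the following condensed sketch shows how a composite skin pose response could be assembled (Python with NumPy; the to_local/to_global frame-transform interface, the array shapes, and the function name are assumptions for illustration and simplify the patent's basis-projected bookkeeping).

    import numpy as np

    def pose_character(pose_vector, frames, armature_basis, skin_modes, frame_weights):
        """Construct a composite skin pose response for a posed armature.

        pose_vector:    armature displacement from the rest pose, global coordinates.
        frames:         list of reference-frame objects with .to_local(vector) and
                        .to_global(offsets) methods (assumed interface).
        armature_basis: (num_modes, pose_dim) orthonormal pose basis functions.
        skin_modes:     (num_modes, num_skin_points, 3) precomputed skin mode responses.
        frame_weights:  (num_frames, num_skin_points) normalized frame weights.
        """
        num_skin_points = skin_modes.shape[1]
        composite = np.zeros((num_skin_points, 3))
        for k, frame in enumerate(frames):
            local_pose = frame.to_local(pose_vector)            # step 910: per-frame pose
            weights = armature_basis @ local_pose               # basis function weights
            # Step 915: weighted sum of skin modes gives this frame's skin pose response.
            local_response = np.tensordot(weights, skin_modes, axes=1)
            global_response = frame.to_global(local_response)   # back to global coordinates
            composite += frame_weights[k][:, None] * global_response   # step 920 blend
        return composite                                         # input to step 925

Because the basis functions, skin modes, and frame weights are fixed after the character preparation phase, only this lightweight combination has to run per frame, which is what makes interactive feedback possible.
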
  • the steps of method 900 are repeated for each pose of a character to produce an animated sequence. Because the skin mode responses and the skin impulse responses are precomputed in the character preparation phase, the character animation phase can be performed in real-time or near real-time. This permits the animator to efficiently fine-tune the animation. Additionally, because the combined skin response of a character model realistically deforms in response to armature poses, the animator sees the final appearance of the character model during the animation process, rather than having to wait to see the final appearance of the animation.
  • FIGS. 10A, 10B, 10 C, and 10 D illustrate the construction of a posed character model from an example armature and an example character model according to an embodiment of the method described in FIG. 9.
  • FIG. 10A illustrates an example posed armature 1005 .
  • posed armature 1005 defines the bodily attitude of a character in a running position.
  • posed armature 1005 may be created manually by an animator, by interpolating between key frames, or procedurally using one or more animation variables, functions, procedures, or algorithms.
  • FIG. 10B illustrates the example posed armature 1010 and its associated set of coordinate reference frames.
  • each coordinate reference frame is represented by a shaded rectangle.
  • the position and orientation of each rectangle illustrates the position and orientation of the associated coordinate reference frame in the posed position.
  • the size of each rectangle illustrates the approximate portion of the character model influenced by the associated coordinate reference frame.
  • coordinate reference frame 1015 is associated with the upper leg armature segment of the posed armature 1010 .
  • Coordinate reference frame 1015 influences the portion of the character model surrounding the upper leg armature segment.
  • coordinate reference frame 1020 influences the portion of the character model surrounding the upper arm armature segment of posed armature 1010 .
  • two or more reference frames can influence the same portion of the character model.
  • FIG. 10C illustrates examples of two of the skin pose responses determined for the coordinate reference frames.
  • Skin pose response 1025 is associated with the coordinate reference frame 1035 .
  • Skin pose response 1030 is associated with coordinate reference frame 1040 .
  • a skin pose response is determined for each of the coordinate reference frames associated with the posed armature.
  • Skin pose response 1025 shows the deformation of the character model in response to the posed armature from the view of coordinate reference frame 1035 .
  • the portion of the skin pose response 1025 within coordinate reference frame 1035 is correctly deformed in response to the posed armature.
  • other portions of the skin pose response 1025 outside of coordinate reference frame 1035 are highly distorted due to shearing effects.
  • the upper leg portion of the character model within coordinate reference frame 1035 is correctly deformed from the posed armature, while the arms 1042 and 1044 of the character model are distorted due to shearing effects.
  • the skin pose response 1030 shows the deformation of the character model in response to the posed armature from the view of coordinate reference frame 1040 .
  • the arm portion of the skin pose response 1030 within coordinate reference frame 1040 is correctly deformed in response to the posed armature.
  • other portions of the skin pose response 1030 outside of coordinate reference frame 1040 such as the legs of the character model, are highly distorted due to shearing effects.
  • the separate skin pose responses determined from each reference frame are combined using the set of frame weights into a composite skin pose response without shearing effects.
  • FIG. 10D illustrates the composite skin pose response 1050 created from a set of separate skin pose responses associated with different reference frames.
  • the leg portion 1060 of the composite skin pose response 1050 is created primarily from the skin pose response 1025 shown in FIG. 10C.
  • the arm portion 1065 of the composite skin pose response 1050 is created primarily from the skin pose response 1030 .
  • the set of frame weights determines the contribution of each skin pose response to a given portion of the composite skin pose response.
  • the composite skin pose response can include contributions from several skin pose responses.
  • animators can create posed character models with a realistic bulging and bending in real-time.
  • a character model or any other soft object is realistically deformed in response to collisions with other objects in real time.
  • a character's skin can deform due to a collision with an external object, such as another character or a rigid object.
  • a character's skin can also deform due to self-collision, which is the collision of one part of the character model with another part of the character.
  • self-collision can occur when a character's arm is bent at the elbow so that the upper and lower arm contact each other.
  • the collision preparation phase is relatively computationally expensive and is performed in advance of any animation.
  • the collision preparation phase creates a set of skin impulse responses defining the deformation of the character to a set of test collisions.
  • Each character skin impulse response is the deformation of the surface of a character in response to a single collision at a single point.
  • the skin impulse response defines the displacement of points surrounding the collision point in response to a collision.
  • In the collision animation phase, animators create collisions by placing objects in contact with the character.
  • an animator defines the locations of the character model and the colliding object, referred to as a collider, in each frame of an animation sequence. Any portion of the character model overlapping or contacting the collider is considered to be part of the collision.
  • the collision animation phase determines the skin collision response, which is the deformation of the character model in response to the collision of the collider with the character model, using the set of skin impulse responses.
  • the collision animation phase uses the same set of skin impulse responses to determine the collision skin response.
  • the collision preparation phase only needs to be performed once for a character model, and the collision animation phase is repeated to create a skin collision response for each frame in an animated sequence.
  • FIG. 11 illustrates a block diagram 1100 of a method for determining the skin impulse response of a character according to an embodiment of the invention.
  • the character model is discretized to create a set of sample points.
  • the character model is discretized into a three-dimensional grid.
  • the character model is discretized into a set of tetrahedral cells.
  • a collision point can be any point on the surface of the character model, or in a further embodiment, within the interior of a character model.
  • internal collision points which are collision points within a character model, can be used to deform the skin of a character model in response to collisions with internal “muscle” objects.
  • skin and muscle are in reality often separated by a thin layer of fat.
  • a collision shield can be created by selecting interior points of the character model as collision points.
  • Step 1110 applies a set of displacements to the collision point.
  • Each displacement represents a collision of the character model at the collision point in a different direction.
  • a displacement is applied to the collision point in each of the three Cartesian directions.
  • each displacement is a unit displacement in the appropriate direction.
  • the sample points adjacent to the collision point are assigned displacement vectors based on the displacement at the collision point.
  • a skin impulse response is computed for each displacement using the displacement values assigned in step 1110 as initial input values.
  • the skin impulse response is computed by determining the value of an elastic energy function over every sample point inside the character body, in a manner similar to that used to find the skin mode response. By minimizing the value of the elastic energy function over the entire discretized space, the value of $q_{xyz}$, the position offset, is calculated for each sample point.
  • the skin impulse response for a given skin displacement is the set of position offsets at sample points adjacent to the skin of the model.
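
To make the step concrete, the sketch below reuses the same elastic-energy minimizer as the skin mode computation (for example, the solve_skin_mode sketch earlier) to produce one skin impulse response per Cartesian direction at a collision point (Python with NumPy; the function names, the solver interface, and the simple assignment of the unit displacement to the adjacent sample points are illustrative assumptions).

    import numpy as np

    def skin_impulse_responses(points, neighbors, adjacent_indices, skin_indices, solve_fn):
        """Compute the skin impulse responses for a single collision point.

        adjacent_indices: sample points adjacent to the collision point.
        skin_indices:     sample points adjacent to the model skin.
        solve_fn(points, neighbors, fixed) is assumed to be the elastic-energy
        minimizer used for skin mode responses; fixed maps sample-point indices
        to imposed displacement vectors.
        Returns a (3, num_skin_points, 3) array: one skin response per unit
        displacement along each Cartesian axis.
        """
        responses = []
        for axis in range(3):
            unit = np.zeros(3)
            unit[axis] = 1.0                     # unit collision displacement
            # Assign the displacement to sample points adjacent to the collision
            # point; a fuller implementation would spread it with interpolation weights.
            fixed = {int(i): unit for i in adjacent_indices}
            q = solve_fn(points, neighbors, fixed)
            responses.append(q[skin_indices])    # keep offsets near the model skin
        return np.stack(responses)
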
  • steps 1110 and 1115 are repeated for displacements applied to a number of collision points to create a set of skin impulse responses.
  • each control point is selected as a collision point and a set of skin impulse responses is then created.
  • control points for relatively rigid portions of the character model are excluded from the set of collision points.
  • a set of basis functions is determined from the set of skin impulse responses.
  • a singular value decomposition is used to calculate the set of basis functions from the skin impulse responses.
  • other methods of calculating a set of basis functions, such as a canonical correlation, can also be used.
  • the set of basis functions are ortho-normalized so that each basis function has a magnitude of 1 and is perpendicular to every other basis function.
  • Step 1125 projects the set of impulse responses onto the set of basis functions to create a compact representation of the set of skin impulse responses.
  • less significant terms of the singular value decomposition are truncated to decrease the number of basis functions. This results in a smoothing effect as the set of impulse responses are projected onto the truncated basis set.
  • the set of skin responses are stored as a sparse set of vectors defining the displacement of surrounding points in response to a displacement of a collision point. This alternate embodiment represents skin impulse responses affecting only a small number of points more efficiently than the basis function representation.
  • FIG. 12A illustrates a displacement applied to a character model 1205 .
  • the character model 1205 has been discretized with a three-dimensional grid 1215 .
  • a displacement 1220 is applied to a collision point on the model skin 1210 .
  • a set of displacement values 1225 are assigned to the sample points adjacent to the collision point.
  • FIG. 12B illustrates a set of displacement vectors 1230 included as part of the skin impulse response resulting from the skin displacement 1235 .
  • the set of displacement vectors 1230 is provided for the purpose of explanation, and the skin impulse response may include any number of displacement vectors dispersed over all or a portion of the model skin.
  • the model skin bulges inward near the collision point and bulges outward in the region surrounding the collision point.
  • FIG. 12C illustrates the example skin impulse response of FIG. 12B projected onto the model skin. This figure is presented to clarify the effects of the skin impulse response of FIG. 12B on the appearance of the character model.
  • an embodiment of the invention projects the skin impulse response on to a set of basis functions to create a more compact representation.
  • the model skin 1260 bulges outward and inward as a result of the displacement created by the skin impulse response.
  • the skin impulse response presents a realistic looking representation of the deformation of a character due to the collision with an object.
  • the model skin 1250 in its rest state is shown for comparison.
  • FIG. 13 illustrates a block diagram of a method 1300 for determining the collision response of a character model according to an embodiment of the invention. Method 1300 will be discussed below with reference to FIGS. 14A-14F, which illustrate the determination of a skin collision response from an example collision according to an embodiment of the invention.
  • a set of collision points are skin points in contact with or inside of a collider.
  • FIG. 14A illustrates a portion of a model skin 1404 in collision with a collider 1402 .
  • the model skin 1404 includes a number of skin points. A portion of these skin points are inside of collider 1402 .
  • These skin points, 1406 , 1408 , 1410 , and 1412 are the set of collision points in this example collision.
  • skin points that are not inside or in contact with a collider are selected as additional collision points if they are near the surface of the collider or near a collision point inside the collider. This provides a margin of safety when the deformation of the character skin from a collision causes additional skin points to contact the collider.
  • a first collision point is selected and displaced to a potential rest position, which is a first approximation of the final rest position of a collision point.
  • the first collision point is selected randomly.
  • the potential rest position is the position on the surface of the collider nearest to the first collision point.
  • the potential rest position is a position between the first collision point and the nearest collider surface point.
  • a scaling factor is used to determine the distance between the nearest collider surface point and the potential rest position.
  • FIG. 14B illustrates an example first collision point 1414 displaced from its initial position 1416 to a potential rest position.
  • the potential rest position is 80% of the distance between its initial position 1416 and the nearest surface point of the collider 1418 .
  • the scaling factor is selected to optimize the performance of method 1300 .
  • although first collision point 1414 is shown being displaced in a strictly horizontal direction for clarity, it should be noted that a collision point can be moved in any direction to a potential rest position.
  • an initial collision response is applied to the other, non-displaced collision points.
  • the initial collision response is determined by projecting the displacement of the first collision point from its initial position to the potential rest position on to the set of basis functions previously created from the set of impulse responses.
  • the projection of the displacement on the set of basis functions creates a set of weights defining the displacement in basis space.
  • the set of weights are then applied to the impulse responses associated with the first collision point. This results in an initial collision response for the first collision point.
  • the initial collision response defines the displacement of skin points surrounding the first collision point in response to the displacement of the first collision point from its initial position to the potential rest position.
  • the initial collision response is applied to the surrounding collision points to displace these collision points from their initial positions. It should be noted that the initial collision response is applied only to the collision points, i.e. only points in contact with or within the collider, even though the skin impulse responses associated with a first collision point can define displacements for additional points.
  • FIG. 14C illustrates the application of an example initial collision response to a set of surrounding collision points.
  • the first collision point 1420 outlined for emphasis, has been displaced from its initial position 1421 to a potential rest position, resulting in the application of an initial collision response to the set of surrounding collision points.
  • the initial collision response is the displacement applied to the surrounding collision points 1422 , 1424 , and 1426 resulting from the displacement of first collision point 1420 .
  • the initial collision response displaces collision points 1422 , 1424 , and 1426 from their respective initial positions to new potential rest positions as shown.
  • the skin points outside of the collider, i.e. the non-collision skin points, are not displaced at this point of the collision animation phase.
  • the set of surrounding collision points are further displaced to respective potential rest positions. Similar to step 1310 , each of the surrounding collision points is moved from the position set at step 1315 to a potential rest position. In one embodiment, the positions on the surface of the collider nearest to each of the surrounding collision points are the respective potential rest positions. In an alternate embodiment, the potential rest position is a position between a collision point and the nearest collider surface point. In a further embodiment, a scaling factor is used to determine the distance between the nearest collider surface point and the potential rest position. In one example, the potential rest position of a surrounding collision point is 80% of the distance between the surrounding collision point's new position, determined in step 1315 , and the nearest surface point of the collider.
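  • As a small illustration of the potential-rest-position rule above (and the 80% example), the following hedged sketch moves a collision point a fixed fraction of the way toward the nearest collider surface point; the brute-force nearest-point search and the function names are assumptions, not the patent's implementation.

```python
import numpy as np

def potential_rest_position(point, collider_surface_points, scale=0.8):
    """Move a collision point a fraction of the way toward the nearest point on
    the collider surface, as in the 80% example above."""
    surface = np.asarray(collider_surface_points)
    # Brute-force nearest-point search; a real system would use a spatial index.
    nearest = surface[np.argmin(np.linalg.norm(surface - point, axis=1))]
    return point + scale * (nearest - point)
```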
  • a set of collision responses is determined for the set of surrounding collision points. Similar to step 1315 , the displacement of each of the surrounding collision points from its initial position to its respective potential rest position is projected on to the set of basis functions previously created from the set of impulse responses. Each displacement projection creates a set of basis weights associated with one of the surrounding collision points.
  • the basis weights of a surrounding collision point are applied to the corresponding impulse responses associated with the surrounding collision point to create a secondary collision response. This is repeated for each of the surrounding collision points to create a set of secondary collision responses.
  • Each secondary collision response defines the displacement of skin points near the surrounding collision point in response to the displacement of the surrounding collision point from its initial position to its respective potential rest position.
  • the set of secondary collision responses are applied to all of the collision points, including the first collision point selected in step 1310 , to displace these collision points from their potential rest positions. Once again, the secondary collision responses are only applied to collision points and non-collision points are not moved during this step of the collision animation phase.
  • each of the set of collision points will have new positions.
  • Step 1325 determines a displacement for each of the collision points from their initial positions, and creates a new set of collision responses in a similar manner to that discussed above.
  • the new set of collision responses is applied to further displace the set of collision points. This process of creating a set of collision responses and applying the set of collision responses to the set of collision points is repeated until the set of collision points converge on a final set of displacements.
  • FIG. 14D illustrates the application of a set of secondary collision responses to the set of collision points.
  • a set of collision points, 1428 , 1430 , 1432 , and 1434 have vectors indicating a displacement from their potential rest positions as a result of the set of secondary collision responses.
  • Each vector represents the sum of the displacements resulting from the secondary collision responses of the other collision points.
  • the displacement of collision point 1428 is the sum of the secondary collision responses from collision points 1430 , 1432 , and 1434 .
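  • The iterative structure of steps 1310 through 1325 can be sketched as follows. This is an illustrative condensation, not the patent's code: the projection onto the impulse basis followed by reconstruction stands in for projecting each displacement onto the basis and applying the weighted skin impulse responses, and all names and shapes are assumptions.

```python
import numpy as np

def relax_collision_points(rest_positions, collision_ids, impulse_basis,
                           nearest_surface, scale=0.8, iterations=30, tol=1e-5):
    """Iteratively displace the collision points until their offsets converge.

    rest_positions:  (n, 3) skin point positions before the collision response.
    collision_ids:   indices of skin points in contact with or inside the collider.
    impulse_basis:   (k, 3n) orthonormal basis built from the skin impulse responses.
    nearest_surface: callable returning the collider surface point nearest a position.
    """
    pos = rest_positions.copy()
    n = rest_positions.shape[0]
    for _ in range(iterations):
        prev = pos.copy()
        for i in collision_ids:
            # Move collision point i toward its potential rest position
            # (a fraction `scale` of the way to the nearest collider surface point).
            target = pos[i] + scale * (nearest_surface(pos[i]) - pos[i])
            delta = np.zeros(3 * n)
            delta[3 * i:3 * i + 3] = target - pos[i]
            # Project the displacement into basis space and reconstruct the induced
            # response; apply it only to the collision points. Non-collision points
            # are handled once at the end, from the converged displacements.
            response = (impulse_basis @ delta) @ impulse_basis
            pos[collision_ids] += response.reshape(n, 3)[collision_ids]
        if np.max(np.linalg.norm(pos - prev, axis=1)) < tol:
            break
    # Final displacements of the collision points from their initial positions.
    return pos - rest_positions
```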
  • the final collision response is applied to the non-collision points.
  • the final set of displacements determined in step 1325 is the displacement of each collision point from its initial position to its final position.
  • Step 1330 projects each displacement from the final set of displacements on the set of basis functions to create a basis weight associated with each collision point.
  • Each collision point's basis weight is applied to its associated impulse responses to determine a set of displacements for the non-collision points.
  • the displacements resulting from each collision point are added together to create a final collision response defining the displacement of the non-collision points in response to the collision.
  • FIG. 14E illustrates the determination of the final collision response for the non-collision points.
  • Collision points 1436 , 1438 , 1440 , and 1442 outlined for emphasis, have been displaced from their initial positions, shown in dotted outline, to their final positions.
  • the non-collision points 1444 , 1446 , 1448 , and 1450 are displaced from their initial positions as indicated by their respective vectors.
  • Each vector represents the sum of the displacements contributed from the set of collision points 1436 , 1438 , 1440 , and 1442 .
  • all of the collision responses are determined and summed together in basis space. This improves the efficiency of the method 1300 .
  • the final collision response is constructed from the basis-projected form back to physical form.
  • the skin deformation from the collision is determined from the weighted sum of the final collision response and the set of basis functions.
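  • A hedged sketch of this basis-space accumulation (steps 1330 and 1335 as described above): each collision point's final displacement is projected onto the impulse basis, the weights are summed in basis space, and the sum is reconstructed once to obtain the displacement of the non-collision points. Shapes and names are illustrative assumptions.

```python
import numpy as np

def final_collision_response(final_displacements, impulse_basis, num_skin_points):
    """Sum the per-collision-point responses in basis space, then reconstruct once.

    final_displacements: dict mapping a collision point index to its (3,) final
                         displacement from its initial position.
    impulse_basis:       (k, 3 * num_skin_points) orthonormal impulse basis functions.
    """
    total_weights = np.zeros(impulse_basis.shape[0])
    for i, disp in final_displacements.items():
        # Embed this collision point's displacement in the full skin vector ...
        full = np.zeros(3 * num_skin_points)
        full[3 * i:3 * i + 3] = disp
        # ... and accumulate its basis weights; the summation happens in basis space.
        total_weights += impulse_basis @ full
    # Reconstruct the physical displacement field for every skin point, then zero
    # the collision points, which already sit at their final positions.
    field = (total_weights @ impulse_basis).reshape(num_skin_points, 3)
    for i in final_displacements:
        field[i] = 0.0
    return field
```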
  • sparse vectors representing the impulse response of collision points are used to determine the final collision response.
  • the association between a collider and collision points can be determined from their positions in the rest pose. This embodiment is useful in cases where the skin response is not expected to move much in response to a collision, for example, the collision between skin (or an internal collision shield) and muscles.
  • FIG. 14F illustrates the final collision response constructed onto the model skin.
  • the skin 1452 bulges inward and around the collider 1454 as a result of the final collision response, presenting a realistic looking deformation in response to a collision.
  • an outline 1456 of the initial undeformed character surface is also shown.
  • the methods of deforming a character model in response to a posed armature and in response to a collision can be combined.
  • character models deform realistically in response to a posed armature and to collisions.
  • the animation process is divided into two phases: a combined preparation phase and a combined animation phase. Similar to the other embodiments, in this embodiment, the combined preparation phase is performed once for a character model.
  • an armature basis set, a corresponding set of skin modes, and a set of frame weights are determined as described in method 400 .
  • the combined preparation phase determines a set of skin impulse responses and an associated impulse basis set as described in method 1100 .
  • FIG. 15 illustrates a block diagram of a method 1500 for animating a character in the combined animation phase according to an embodiment of the invention.
  • a posed armature defines the bodily attitude of the desired final posed character.
  • the posed armature can be created manually by an animator, by interpolating between key frames, or procedurally using one or more animation variables, functions, procedures, or algorithms.
  • a set of vectors defining the position and orientation of the set of coordinate reference frames are also determined in step 1505 .
  • a set of skin pose responses is determined for the set of coordinate reference frames.
  • Each skin pose response is determined in a similar manner to that described in method 900 .
  • the pose vector and the set of basis functions are transformed into the coordinate space defined by each coordinate reference frame in its posed position.
  • the transformed pose vector is projected on to the transformed set of armature basis functions to create a set of basis function weights.
  • the set of basis function weights are applied to the skin modes in each coordinate reference frame to determine a skin pose response for the coordinate reference frame. This is repeated for each coordinate reference frame to create a set of skin pose responses.
  • a composite skin pose response is determined from the set of skin pose responses. Similar to the method 900 discussed above, the skin pose responses from each coordinate reference frame are combined according to the associated frame weights to correct for undesirable shearing effects. Generally, the set of skin pose responses are transformed from their associated coordinate reference frames to the global reference frame and summed according to the set of frame weights. The result of this step is a composite skin pose response.
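  • One possible reading of this blending step, sketched in NumPy: each frame's skin pose response is transformed to the global reference frame and accumulated with per-point frame weights. The rigid-transform representation and the argument names are assumptions for illustration only.

```python
import numpy as np

def composite_skin_response(frame_responses, frame_transforms, frame_weights):
    """Blend per-reference-frame skin pose responses into one composite response.

    frame_responses:  list of (n, 3) skin pose responses, one per coordinate
                      reference frame, expressed in that frame's local coordinates.
    frame_transforms: list of (rotation, translation) pairs taking local
                      coordinates to the global reference frame.
    frame_weights:    (num_frames, n) weights; for each skin point the weights
                      over all frames sum to 1.
    """
    num_points = frame_responses[0].shape[0]
    composite = np.zeros((num_points, 3))
    for f, (response, (rotation, translation)) in enumerate(
            zip(frame_responses, frame_transforms)):
        # Transform this frame's response into the global reference frame ...
        global_response = response @ rotation.T + translation
        # ... and blend it in, weighted per skin point by the frame weights.
        composite += frame_weights[f][:, None] * global_response
    return composite
```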
  • Point constraints are the points displaced from collisions of the character model with itself or external objects.
  • Animators can create collisions by positioning objects in contact with the character model in each frame, either manually or as the result of motion defined by a set of key frames or one or more animation variables.
  • Point constraints can also result from the animator attaching a point of the character model to another object, or by manually forcing a skin point into a new position.
  • step 1520 identifies potential collision points by defining a radius around each point on the skin of the character model.
  • a bounding box is used to identify potential collision points.
  • Step 1520 identifies the set of collision points to be used to determine the deformation of the character model from a collision.
  • in step 1525, the set of collision points is evaluated to determine the skin collision response.
  • An embodiment of step 1525 evaluates the set of collision points according to the method 1300 discussed above.
  • a first displacement is determined for a first collision point.
  • the first displacement is projected on to the set of impulse basis functions to determine an initial collision response from skin impulse responses.
  • the initial collision response displaces the surrounding collision points.
  • the displacements of the surrounding collision points are applied to their respective skin impulse responses to further displace the set of collision points.
  • the further displacement of the set of collision points creates subsequent collision responses, which are iteratively processed until the collision points converge to their final positions.
  • the final positions of the set of collision points define a skin collision response, which is then applied to the set of non-collision points.
  • the character model is constructed from the composite skin pose response and the skin collision response.
  • both the composite skin pose response and the skin collision response are stored and processed in their basis projected forms.
  • the weighted sum of the composite skin pose response and the set of armature basis functions is added to the weighted sum of the skin collision response and the set of impulse basis functions. The result is a character model deformed in response to the armature pose and collisions.
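  • A minimal sketch of this reconstruction step, assuming both responses are kept in their basis-projected forms; the basis shapes and names are illustrative, not the patent's data layout.

```python
import numpy as np

def deform_character(rest_positions, pose_weights, armature_basis,
                     collision_weights, impulse_basis):
    """Construct the final deformed skin from two basis-projected responses.

    rest_positions:    (n, 3) skin points in the rest pose.
    pose_weights:      (k1,) composite skin pose response in armature-basis space.
    armature_basis:    (k1, 3n) armature/skin-mode basis functions.
    collision_weights: (k2,) skin collision response in impulse-basis space.
    impulse_basis:     (k2, 3n) impulse basis functions.
    """
    n = rest_positions.shape[0]
    # Weighted sum of the composite skin pose response and the armature basis ...
    pose_offsets = (pose_weights @ armature_basis).reshape(n, 3)
    # ... added to the weighted sum of the collision response and the impulse basis.
    collision_offsets = (collision_weights @ impulse_basis).reshape(n, 3)
    return rest_positions + pose_offsets + collision_offsets
```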
  • the steps of method 1500 are repeated for each frame to produce an animated sequence. Because the skin mode responses and the skin impulse responses are precomputed in the combined preparation phase, the combined animation phase can be performed in real-time or near real-time. This permits the animator to efficiently fine-tune the animation and maximize the dramatic impact of the animation. Additionally, because the combined skin response of a character model realistically deforms in response to armature poses and collisions, the animator sees the final appearance of the character model during the animation process, rather than having to wait to see the final appearance of the animation.
  • the present invention determines a realistic character deformation directly from the posed armature without the need to create underlying bone and muscle structures required by physical simulation techniques or complicated armature weightings used by kinematic transform techniques. This decreases the time and effort needed to create character models compared with prior animation techniques.
  • any rendering technique, for example ray-tracing or scanline rendering, can create a final image or frame from the model in combination with lighting, shading, texture mapping, and any other image processing information.

Abstract

A method for animating soft body characters has a preparation phase followed by an animation phase. In the preparation phase, the skin deformation of the character model is determined for a set of basis poses. The skin deformation from posing is compactly represented in terms of the set of basis poses. In the animation phase, the set of basis poses and the skin mode response are used to create a final posed character. A desired character pose is projected onto the basis set to determine a set of basis weights. The basis weights are applied to the set of skin responses to create a skin pose response, and the skin pose response is projected onto the basis set to create the posed character model.

Description

    BACKGROUND OF THE INVENTION
  • The present invention relates to the field of computer graphics, and in particular to methods and apparatus for animating computer generated characters. The present invention relates to the field of computer graphics. Many computer graphic images are created by mathematically modeling the interaction of light with a three dimensional scene from a given viewpoint. This process, called rendering, generates a two-dimensional image of the scene from the given viewpoint, and is analogous to taking a photograph of a real-world scene. Animated sequences can be created by rendering a sequence of images of a scene as the scene is gradually changed over time. A great deal of effort has been devoted to making realistic looking rendered images and animations. [0001]
  • Animation, whether hand-drawn or computer generated, is as much an art as it is a science. Animators must not only make a scene look realistic, but must also convey the appropriate dramatic progression and emotional impact required by the story. This is especially true when animating characters. Characters drive the dramatic progression of the story and establish an emotional connection with the audience. [0002]
  • To create artistically effective character animation, an animator often creates a rough version of a scene and then fine-tunes the character animation to create desired drama and expression of the final scene. This is analogous to a movie director rehearsing a scene with actors to capture the perfect mood for a scene. Because the animator is responsible for the expressiveness of the character animation, it is important that animation tools allow the animator to efficiently fine-tune a character animation and to accurately preview the final form of the animation. [0003]
  • In computer-generated animation, a character's appearance is defined by a three-dimensional computer model. To appear realistic, the computer model of a character is often extremely complex, having millions of surfaces and hundreds or thousands of attributes. Due to the complexity involved with animating such complex models, animation tools often rely on armatures and animation variables to define character animation. An armature is a “stick figure” representing the character's pose, or bodily attitude. By moving the armature segments, which are the “sticks” of the “stick figure,” the armature can be manipulated into a desired pose. As the armature is posed by the animator, the animation tools modify the character model so that the bodily attitude of the character roughly mirrors that of the armature. [0004]
  • Animation variables are another way of defining the character animation of a complex character model. An animation variable is a parameter used by a function to modify the character model. Animation variables and their associated functions are used to abstract complicated modifications to a character model to a relatively simple control. For example, an animation variable can define the degree of opening of a character's mouth. In this example, the value of the animation variable is used to determine the position of the many different parts of the character's armature needed to open the character's mouth to the desired degree. The animation tools then modify the character model according to the final posed armature to create a character model with an open mouth. [0005]
  • There are many different approaches for creating a final posed character model from an armature. One prior approach is to associate points on the character model to one or more armature segments. As the armature is moved into a pose, the points associated with each armature segment are kinematically transformed to new positions based on the positions of their associated posed armature segments. Because this kinematic transformation can be performed rapidly, animators can preview and fine-tune their animations interactively in real-time or near real-time. However, the animation resulting from kinematic transformations often appears stiff and “puppet-like.”[0006]
  • Further, many characters, such as humans and animals, are deformable soft objects. Kinematic transformations perform particularly poorly with “soft body” objects because they are unable to accurately simulate the deformation of characters. This makes it difficult for characters to bend and bulge realistically as they are posed. Additionally, when kinematic transforms are applied to soft body objects, cracks and seams often develop on the model surface at the character joints. Additional armature segments can be added to simulate bending and bulging surfaces and to smooth out the model surfaces at the joints; however, it is time consuming to create these additional armature segments and the final posed character will often require extensive manual fine-tuning to make the bending and bulging look realistic. [0007]
  • As an alternative to animating soft body characters using kinematic transformations, soft body characters can be animated using a physical simulation approach. In the physical simulation approach, the character model is processed by a material physics simulation to create a physically realistic looking soft body object. This approach is extremely time consuming to set up, often requiring modelers to define not only the exterior of a character, such as the skin, but also the underlying muscles and skeleton. Additionally, processing the character model for each pose created by the animator is extremely computationally expensive, often requiring hours or even days to compute the character model's deformation for a short animated sequence. [0008]
  • Because of the time consuming nature of the animation process, animators often have to create scenes using simplified “stand-in” models and then wait to see the resulting animation with the final character model. Because the animators cannot immediately see the final results of their animation, it is very difficult and inefficient to fine-tune the expressiveness of the character. With this technique, the animator is essentially working blind and can only guess at the final result. [0009]
  • Animated characters also often collide or interact with other objects or characters in a scene. In order to make a collision look realistic, an animated character will need to be deformed around the colliding object. Realistic character deformation in response to collisions is essential in animating collisions, especially when the character is a soft body object. Prior character posing techniques such as kinematic transforms cannot realistically deform character models in response to collisions. Instead, animators must manually deform the shape of the character model. Physical simulation techniques can be used to deform character models in response to collisions; however, as discussed above, physical simulation techniques are very time-consuming to set up and compute. Because the time requirements of physical simulation techniques are so high, it is difficult for animators to fine tune collision animations to convey the appropriate dramatic impact. [0010]
  • It is desirable to have a method and system for animating soft body characters that 1) realistically deforms soft body characters in response to armature poses; 2) is easy for animators to operate; 3) can be quickly evaluated so that animators can efficiently fine-tune the animation; and 4) allows the animator to preview the final appearance of the character model. It is further desirable for the soft body character to deform realistically from collisions with itself or external objects. [0011]
  • BRIEF SUMMARY OF THE INVENTION
  • A method for animating soft body characters has a first character preparation phase followed by a second character animation phase. In the character preparation phase, the skin deformation of the character model is determined for each of a set of basis poses. The character preparation phase also determines the skin deformation of a character model at a number of skin contact points in response to impulse collisions. In an embodiment of the invention, the skin deformation from posing, referred to as the skin mode response, and the skin deformation from impulse collisions, referred to as the skin impulse response, are compactly represented in terms of the set of basis poses. [0012]
  • In the character animation phase, the set of basis poses, the skin mode response, and the skin impulse response are used to create a final posed character. Regardless of the desired character pose, the character animation phase uses the same set of basis poses, skin mode response, and the skin impulse response. Therefore, the set of basis poses, the skin mode response, and the skin impulse response only need to be determined once for a character model. [0013]
  • In an embodiment, a method for animating a character model includes determining a basis set from a set of character poses and determining a set of skin responses for the character model corresponding to the basis set. A desired character pose is projected onto the basis set to determine a set of basis weights. The basis weights are applied to the set of skin responses to create a skin pose response, and the skin pose response is projected onto the basis set to create the posed character model. In an additional embodiment, the steps of projecting a character pose, applying the set of basis weights, and projecting the set of skin responses are repeated for a second desired character pose to create a second posed character model. [0014]
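  • The animation-phase pipeline described in this embodiment can be sketched compactly as follows; the array shapes and names are assumptions made for illustration.

```python
import numpy as np

def pose_character(pose_vector, pose_basis, skin_modes, rest_skin):
    """Pose a character from the data precomputed in the preparation phase.

    pose_vector: (d,) armature displacements defining the desired pose.
    pose_basis:  (k, d) orthonormal basis determined from the set of character poses.
    skin_modes:  (k, n, 3) skin response corresponding to each basis pose.
    rest_skin:   (n, 3) skin points in the rest pose.
    """
    # Project the desired pose onto the basis set to obtain basis weights.
    weights = pose_basis @ pose_vector
    # Apply the weights to the per-basis skin responses to form the skin pose response.
    skin_offsets = np.tensordot(weights, skin_modes, axes=1)
    return rest_skin + skin_offsets
```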
  • In an embodiment, the set of character poses includes poses from a training set. In another embodiment, the set of character poses includes randomly created poses. In yet another embodiment, an armature is used to define the set of character poses as well as the desired character pose. In a further embodiment, an animation variable defines at least part of a desired pose. [0015]
  • In an embodiment, determining the skin response includes applying a set of displacements from a pose in the basis set to a portion of the character model and minimizing a function of the displacement over the entire character model. In an embodiment, the function is an elastic energy function. In a further embodiment, the function is minimized over a set of sample points associated with the character model. [0016]
  • An embodiment of the method transforms a character pose into a set of reference frames associated with a character model. For each reference frame, a skin pose response of the character model is created in response to the character pose. The embodiment constructs a composite skin response of the character model from the skin pose responses of each reference frame. [0017]
  • A further embodiment constructs a composite skin response by combining a portion of the skin response of a first reference frame with a portion of the skin response of a second reference frame. The portion of the skin response of the first reference frame and the portion of the skin response of the second reference frame can correspond to two, at least partially overlapping regions of the character model. Alternatively, the portion of the skin response of the first reference frame and the portion of the skin response of the second reference frame correspond to two different regions of the character model. [0018]
  • Another embodiment combines the portion of the skin response of a first reference frame and the portion of the skin response of the second reference frame according to a set of frame weights defining the influence of the skin responses of the first and second reference frames on the composite skin response. Yet another embodiment determines a set of frame weights by diffusing an initial set of frame weight values through the character model.[0019]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The invention will be described with reference to the drawings, in which: [0020]
  • FIG. 1 illustrates an example computer system capable of implementing an embodiment of the invention; [0021]
  • FIGS. 2A and 2B illustrate an example character and an example armature used for posing the example character; [0022]
  • FIG. 3 is a block diagram illustrating two phases of a method of animating a character according to an embodiment of the invention; [0023]
  • FIG. 4 is a block diagram of a character preparation phase for animating a character according to an embodiment of the invention; [0024]
  • FIG. 5 illustrates a block diagram of a method for determining the skin mode response of a character according to an embodiment of the invention. [0025]
  • FIGS. 6A, 6B, 6C, 6D, and 6E illustrate the determination of a skin mode response of an example character in an example pose according to an embodiment of the invention; [0026]
  • FIG. 7 illustrates a block diagram of a method for weighting a character model with respect to a set of coordinate reference frames according to an embodiment of the invention; [0027]
  • FIGS. 8A, 8B, and 8C illustrate the determination of a set of coordinate reference frame weights of an example character model according to an embodiment of the invention; [0028]
  • FIG. 9 illustrates a block diagram of a character animation phase for constructing a posed character model according to an embodiment of the invention; [0029]
  • FIGS. 10A, 10B, 10C, and 10D illustrate the construction of a posed character model from an example armature and an example character model according to an embodiment of the invention; [0030]
  • FIG. 11 illustrates a block diagram of a method for determining the skin impulse response of a character model according to an embodiment of the invention; [0031]
  • FIGS. 12A, 12B, and 12C illustrate the determination of a skin impulse response of a portion of an example character model according to an embodiment of the invention; [0032]
  • FIG. 13 illustrates a block diagram of a method for determining the collision response of a character model according to an embodiment of the invention; [0033]
  • FIGS. 14A, 14B, 14C, 14D, 14E, and 14F illustrate the determination of the skin collision response of a portion of a character model according to an embodiment of the invention; and [0034]
  • FIG. 15 illustrates a block diagram of a character animation phase for constructing a posed character model according to a further embodiment of the invention.[0035]
  • It should be noted that although the figures illustrate the invention in two dimensions for the sake of clarity, the invention is generally applicable to the manipulation of three-dimensional computer models. [0036]
  • DETAILED DESCRIPTION OF THE INVENTION
  • FIG. 1 illustrates an example computer system 100 capable of implementing an embodiment of the invention. Computer system 100 typically includes a monitor 110, computer 120, a keyboard 130, a user input device 140, and a network interface 150. User input device 140 includes a computer mouse, a trackball, a track pad, graphics tablet, touch screen, and/or other wired or wireless input devices that allow a user to create or select graphics, objects, icons, and/or text appearing on the monitor 110. Embodiments of network interface 150 typically provide wired or wireless communication with an electronic communications network, such as a local area network, a wide area network, for example the Internet, and/or virtual networks, for example a virtual private network (VPN). [0037]
  • [0038] Computer 120 typically includes components such as one or more general purpose processors 160, and memory storage devices, such as a random access memory (RAM) 170, disk drives 180, and system bus 190 interconnecting the above components. RAM 170 and disk drive 180 are examples of tangible media for storage of data, audio/video files, computer programs, applet interpreters or compilers, virtual machines, embodiments of the herein described invention including geometric scene data, object data files, shader descriptors, a rendering engine, output image files, texture maps, and displacement maps. Further embodiments of computer 120 can include specialized audio and video subsystems for processing and outputting audio and graphics data. Other types of tangible media include floppy disks; removable hard disks; optical storage media such as DVD-ROM, CD-ROM, and bar codes; non-volatile memory devices such as flash memories; read-only-memories (ROMS); battery-backed volatile memories; and networked storage devices.
  • FIGS. 2A and 2B illustrate an example character and an example armature used for posing the example character. [0039] Character 205 is a three-dimensional computer model of a soft-bodied object, shown in two dimensions for clarity. Although character 205 is shown to be humanoid in shape, character 205 may take the form of any sort of object, including plants, animals, and inanimate objects with realistic and/or anthropomorphic attributes. Character 205 can be created in any manner used to create three-dimensional computer models, including manual construction within three-dimensional modeling software, procedural object creation, and three-dimensional scanning of physical objects. Character 205 can be comprised of a set of polygons; voxels; higher-order curved surfaces, such as Bezier surfaces or non-uniform rational B-splines (NURBS); constructive solid geometry; and/or any other technique for representing three-dimensional objects. Additionally, character 205 can include attributes defining the outward appearance of the object, including color, textures, material properties, transparency, reflectivity, illumination and shading attributes, displacement maps, and bump maps.
  • [0040] Character 205 is animated through armature 210. Armature 210 includes one or more armature segments. The armature segments can be connected or separate, as shown in FIG. 2A. Animators manipulate the position and orientation of the segments of armature 210 to define a pose for the character. A pose is a set of armature positions and orientations defining the bodily attitude of character 205. Armature segments can be constrained in size, position, or orientation, or can be freely manipulated by the animator. The number of armature segments can vary according to the complexity of the character, and a typical character can have an armature with hundreds or thousands of segments. In some cases, the number and position of armature segments is similar to that of a “skeleton” for a character; however, armature segments can also define subtle facial expressions and other character details not necessarily associated with bones or other anatomical features. Additionally, although the armature segments in the armature 210 of FIG. 2A are comprised of a set of points, in alternate embodiments of the invention the armature segments can be comprised of a set of surfaces and/or a set of volumes. As the armature 210 is posed by the animator, the bodily attitude of character 205 roughly mirrors that of the armature 210.
  • [0041] Character 205 is animated by creating a sequence of frames, or still images, in which the character 205 is progressively moved from one pose to another. Character 205 can also be translated, rotated, scaled, or otherwise manipulated as a whole between frames. Animators can manually create the poses of a character for each frame in the sequence, or create poses for two or more key frames, which are then interpolated by animation software to create the poses for each frame. Poses can also be created automatically using functions, procedures, or algorithms. Animation variables can be used as parameters for one or more functions defining a pose. Character 205 and its associated armature 210 are shown in the rest pose, or the default bodily attitude of the character. In an embodiment, the rest pose of a character is determined by the initial configuration of the character model and the armature.
  • FIG. 2B illustrates a character 220 after being manipulated into a pose by the animator. In this example, the animator has moved the arm segments of armature 225. In response, the character 220 assumes a pose with its arms raised. More complicated poses can be created by manipulating additional armature segments. [0042]
  • Following the creation of an armature pose, the character is processed to mirror the bodily attitude of the armature. The present invention allows for interactive frame rates and realistic posing of soft body characters by dividing the animation process into two phases. [0043]
  • FIG. 3 is a block diagram 300 illustrating two phases of a method of animating a character according to an embodiment of the invention. The first phase 305 is a character preparation phase. The character preparation phase is relatively computationally expensive and is performed in advance of any animation. The character preparation phase 305 creates a set of mode data for the character defining the deformation of the character to numerous poses. [0044]
  • Following the completion of the character preparation phase 305, animators animate the characters in character animation phase 310. In the character animation phase 310, animators create animated sequences for characters by defining the armature pose of a character in a frame. A final posed character is created from the armature pose defined by the animator and the set of mode data previously created in character preparation phase 305. An embodiment of the invention creates a final posed character from an armature pose and the set of mode data in real-time, allowing the animator to preview the result. Regardless of the desired character pose, the character animation phase uses the same set of mode data to create the final posed character. Therefore, the character preparation phase 305 only needs to be performed one time for a character. The character animation phase 310 is repeated to create a final posed character for each armature pose in an animated sequence. [0045]
  • FIG. 4 is a block diagram 400 of the character preparation phase for animating a character according to an embodiment of the invention. Step 405 creates a basis from a set of sample armature positions. In this step, a set of sample armature positions is created for an armature associated with a character. In an embodiment, the set of sample armature positions includes poses from a training set defining typical actions of a character. For example, the set of sample armature positions might include armature poses associated with actions such as walking, running, grasping, jumping, and climbing. In an alternate embodiment, the set of sample armature positions are programmatically created. Sample armature positions can be created procedurally by selecting one or more armature segments and manipulating these segments to new positions or orientations. For example, each armature segment is selected in turn and moved one unit in a given dimension to create a sample armature position. In this example, the total number of sample armature positions in the set will be three times the number of armature segments. In a further embodiment, armature segments adjacent to the selected armature segment are also repositioned according to an elastic model as each sample armature position is created. In yet a further embodiment, a sample armature position takes into consideration constraints on the armature segments. For example, an armature segment may have a limited range of motion. [0046]
  • Each sample armature position is described by a vector defining the position of the armature segments. In an embodiment, the vector defines the position of armature segments relative to their position in a rest or initial position. The vectors of each sample armature position are combined to form a matrix containing the set of sample armature positions for the armature. A singular value decomposition of this matrix is calculated to find a set of basis functions (or modes) for the armature. In alternate embodiments, other methods of calculating a set of basis functions, such as a canonical correlation, can also be used. The set of basis functions compactly defines a “pose space,” in which any pose can be approximately represented as the weighted sum of one or more of the basis functions. In a further embodiment, if the resulting set of basis functions is not an orthonormal basis, the set of basis functions are ortho-normalized so that each basis function has a magnitude of 1 and is perpendicular to every other basis function. [0047]
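  • A brief sketch of this basis construction, assuming the sample armature positions are stacked as rows of a matrix; NumPy's SVD already returns orthonormal rows, so no separate ortho-normalization step is shown. Names and thresholds are illustrative.

```python
import numpy as np

def armature_pose_basis(sample_poses, num_modes=None):
    """Build an orthonormal basis (set of modes) from sample armature positions.

    sample_poses: (num_samples, d) matrix; each row is the displacement of all
    armature segments from the rest pose for one sample armature position.
    """
    _, singular_values, vt = np.linalg.svd(sample_poses, full_matrices=False)
    if num_modes is None:
        # Keep only modes with non-negligible singular values.
        num_modes = int(np.count_nonzero(singular_values > 1e-10))
    # The rows of vt are already orthonormal, so they can serve directly as the
    # basis functions spanning the "pose space".
    return vt[:num_modes]
```

Any pose vector p can then be approximated in this pose space as basis.T @ (basis @ p).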
  • Following the creation of a set of basis functions in step 405, a skin mode response is determined for each of the sample armature position basis functions in step 410. A skin mode response is the deformation of the surface of the character in response to the movement of the armature to a sample armature position from its rest pose. [0048]
  • FIG. 5 illustrates a block diagram 500 of a method for determining the skin mode response of a character as called for by step 410 according to an embodiment of the invention. At step 505, the character model and its armature are discretized to create a set of sample points. In an embodiment, the character model is discretized into a three-dimensional grid. In this embodiment, the grid points within the character model or adjacent to the armature are the set of sample points. In an alternate embodiment, the character model is discretized into a set of tetrahedral cells. In this embodiment, a set of tetrahedrons are fitted within the character model and around the armature. The vertices of the tetrahedrons are the set of sample points. These embodiments are intended as examples and any type of discretization can be used by step 505, including finite-element, finite volume, and sum-of-spheres discretizations. [0049]
  • In an example application of [0050] step 505, FIG. 6A shows a character model 603 and its associated armature 605 discretized with a three-dimensional grid 607. The character model 603 and its armature 605 are in the rest position. Although shown in two-dimensions, grid 607 is a three-dimensional grid. Additionally, the density of the grid 607, i.e. the number of grid cubes per unit of volume, is shown in FIG. 6A for illustration purposes only. Depending on the size and the relative proportions of the character model 603, a typical grid forms a bounding box around the character approximately 120 cubes high, 50 cubes wide, and 70 deep. These dimensions will vary according to the height, width and depth of the character model 603.
  • In a further embodiment, the density of the grid may vary over different portions of the character 603 to ensure accuracy in more complicated portions of the character, for example, the character's face and hands. It should be noted that the grid 607 not only surrounds the character model 603, but also fills the interior of character model 603 as well. In a further embodiment, grid elements completely outside the character model 603 are discarded, while grid elements either partially or completely inside the character model are retained for determining the skin mode response. This reduces the processing and memory requirements for determining the skin mode response. [0051]
  • FIG. 6B illustrates a sample armature position associated with a basis function for an example armature 625 and its associated character model 623. In this example, the armature segments 627 and 629 are positioned into a new position. Displacement vectors 631 and 633 define the displacement of the armature segments 627 and 629, respectively, from the rest pose. Outline 635 illustrates the portion of the character model 623 affected by the armature displacement from the rest position into the sample armature position. [0052]
  • At step 510, the displacement vectors from the sample armature positions are assigned to sample points adjacent to armature segments. [0053]
  • In an example application of step 510, FIG. 6C illustrates the assignment of displacement vectors to sample points adjacent to armature segments. FIG. 6C illustrates a portion 640 of a character model, its associated armature 642, and the surrounding grid 641. Armature segments 643 and 645 are shown in their rest pose. The armature displacement vectors, 647 and 649, are associated with armature segments 643 and 645 respectively. [0054]
  • The sample points adjacent to [0055] armature displacement vector 647 are each assigned a displacement vector, illustrated by the set of displacement vectors 651. The values of the displacement vector are computed so that the weighted sum of the set of grid displacement vectors 651 is equal to the armature displacement vector 647. Similarly, a set of displacement vectors 653 are assigned to the sample points adjacent to armature displacement vector 649. Displacement vectors are computed for all sample points adjacent to any portion of any armature segments. If armature displacement vectors are only defined for the endpoints of an armature segment, the armature displacement vectors are interpolated along the length of the armature segment. The interpolated armature displacement vector is then used to create a set of displacement values for the sample points adjacent to each portion of the armature.
  • In an embodiment where grid 641 is a three-dimensional Cartesian grid, each armature displacement vector has eight adjacent displacement vectors. In an alternate embodiment using a tetrahedral discretization, each armature displacement vector has four adjacent displacement vectors. [0056]
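  • One plausible way to implement the assignment of displacement values to the eight adjacent sample points is trilinear splatting, sketched below. The patent text only states the constraint that the assigned vectors reproduce the armature displacement; the specific weighting convention, the grid layout, and the names here are assumptions.

```python
import numpy as np

def distribute_to_grid(armature_point, displacement, grid_origin, cell_size, grid_disp):
    """Splat an armature displacement onto the eight surrounding grid sample points.

    grid_disp: (nx, ny, nz, 3) array of per-sample-point displacement vectors,
    accumulated in place. The trilinear weights sum to 1, so the eight assigned
    vectors sum back to the original armature displacement.
    """
    local = (armature_point - grid_origin) / cell_size
    i, j, k = np.floor(local).astype(int)
    fx, fy, fz = local - np.array([i, j, k])
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                weight = ((fx if di else 1.0 - fx) *
                          (fy if dj else 1.0 - fy) *
                          (fz if dk else 1.0 - fz))
                grid_disp[i + di, j + dj, k + dk] += weight * displacement
```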
  • It should be noted that the armature displacement vectors and the displacement vectors assigned to sample points in FIG. 6C are not shown to scale. Furthermore, the magnitudes of the displacement vectors are assumed to be infinitesimally small. Consequently, the sample points are not actually moved from their initial positions by the displacement vectors. Instead, the assigned displacement vectors represent a “virtual displacement” of their associated sample points. [0057]
  • At step 515, the skin mode response, which is the deformation of the skin of the character model, is computed using the displacement vectors assigned in step 510 as initial input values. In an embodiment, the skin mode response is computed by determining the value of an elastic energy function over every sample point inside the character body. One example elastic energy function is: [0058]

$$E^2 = V\left(\frac{\partial q_x}{\partial x} + \frac{\partial q_y}{\partial y} + \frac{\partial q_z}{\partial z}\right)^2 + S\left[\left(\frac{\partial q_x}{\partial y} + \frac{\partial q_y}{\partial x}\right)^2 + \left(\frac{\partial q_x}{\partial z} + \frac{\partial q_z}{\partial x}\right)^2 + \left(\frac{\partial q_y}{\partial z} + \frac{\partial q_z}{\partial y}\right)^2\right]$$
  • In this example elastic energy function, $q_{xyz}(x,y,z)$ is the displacement of a sample point from its rest position of (x,y,z). V is a parameter denoting resistance of the model to volume change and S is a parameter denoting resistance to internal shear. The values of V and S can be varied to change the deformation characteristics for the character. Characters that are very soft or “squishy” have low values of V and S, while characters that are relatively more rigid will have larger values of V and S. [0059]
  • Material behavior can be represented in general by a Hamiltonian dynamics system, and any type of Hamiltonian function can be used as an energy function in step 515. In another embodiment, the energy function includes local terms, which change the energy of the system in response to local deformations, such as shown in the example above, and additional global terms, which change the energy of the system in response to changes to the character model as a whole, such as global volume preservation terms. [0060]
  • Using the displacement values assigned to sample points adjacent to armature segments as “seed” values for the set of sample points, a system of equations is created representing the elastic energy of the entire character model. This system of equations is minimized over the set of sample points. The minimization of the elastic energy function can be performed using a numerical solver to find the value of $q_{xyz}$, the position offset, for each sample point. In an embodiment, an elliptic numerical solver is used to minimize the energy function. Alternate embodiments can use conjugate gradient, multigrid, or Jacobi solvers. The skin mode is the set of position offsets adjacent to the skin of the model. [0061]
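  • As an illustration only, the discretized elastic energy above can be minimized with a generic quasi-Newton optimizer on a very small grid, holding the seeded sample points fixed. A production system would use the elliptic, multigrid, or conjugate gradient solvers mentioned in the text; the finite-difference discretization, unit grid spacing, and names below are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def solve_skin_mode(seed_mask, seed_disp, volume_stiffness=1.0, shear_stiffness=1.0):
    """Minimize a discretized elastic energy over a small 3-D grid of sample points.

    seed_mask: (nx, ny, nz) boolean array marking sample points adjacent to the
               armature; their displacements are held fixed at the seed values.
    seed_disp: (nx, ny, nz, 3) seed displacement vectors (zero where unmasked).
    Returns the (nx, ny, nz, 3) displacement field q that minimizes the energy.
    """
    free = ~seed_mask

    def energy(free_values):
        q = seed_disp.copy()
        q[free] = free_values.reshape(-1, 3)
        # Finite-difference partial derivatives of each displacement component;
        # dq[c][axis] approximates the partial derivative of q_c along that axis.
        dq = [np.gradient(q[..., c]) for c in range(3)]
        divergence = dq[0][0] + dq[1][1] + dq[2][2]
        shear = ((dq[0][1] + dq[1][0]) ** 2 +
                 (dq[0][2] + dq[2][0]) ** 2 +
                 (dq[1][2] + dq[2][1]) ** 2)
        # Summing over the grid approximates the integral of the energy density.
        return np.sum(volume_stiffness * divergence ** 2 + shear_stiffness * shear)

    x0 = np.zeros(int(free.sum()) * 3)
    result = minimize(energy, x0, method="L-BFGS-B")
    q = seed_disp.copy()
    q[free] = result.x.reshape(-1, 3)
    return q
```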
  • In an example application of [0062] step 515, FIG. 6D illustrates an example of a portion of a skin mode response of a character model for a basis function. A portion 660 of a character model is shown in detail. The set of displacement vectors 663 describe the deformation of a portion of the character's skin. In this example, the skin bulges outward around the “kneecap” of the character. Similarly, the set of displacement vectors 665 describe the deformation of a portion of the character's skin behind the “knee.” In this example, the skin creases inward just behind the knee, but bulges outwards just above and below the knee. Although omitted for clarity, displacement vectors are computed for all sample points adjacent to the skin of the character model. In a further embodiment, grid displacement vectors that are zero or very small in value, indicating very little deformation at a skin point from a given basis function, are truncated.
  • FIG. 6E illustrates the skin mode response of FIG. 6D constructed onto the model skin. This figure is presented to clarify the effects of the skin mode response of FIG. 6D on the appearance of the character model. As discussed below, an embodiment of the invention projects the skin mode response on to the set of basis functions to create a more compact representation. In the example of FIG. 6E, a [0063] portion 680 of the character model is shown in detail. The skin 685 bulges outward and inward as a result of the displacement created by the basis function. In this example, the skin mode response presents a realistic looking representation of the deformation of a character's leg as its knee is bent.
  • Unlike prior physical simulation techniques that require the construction of complex bone and muscle structures underneath the skin of a character to create realistic “bulging” appearance, the present invention determines a realistic skin mode response directly from displacement introduced by the armature basis function. And unlike kinematic transformation techniques, there is no need to explicitly associate skin points with one or more armature segments. Instead, realistic skin deformation automatically results from the displacement of the underlying armature. This decreases the time and effort needed to create character models compared with prior techniques. [0064]
  • For comparison with prior art techniques, an outline 690 of a character surface kinematically transformed by an armature pose is also shown. The kinematically transformed model skin appears rigid and mechanical when compared with the skin mode response of the present example. [0065]
  • At step 520, the process of determining a skin mode response is repeated for each basis function to create a corresponding set of skin mode responses for the set of basis functions. In step 525, the skin mode responses are projected onto the set of basis functions to create a compact representation of the set of skin modes. [0066]
  • In an embodiment of step 525, the positional offsets for sample points adjacent to the armature segments are compared with their original values following the computation of each skin mode. This is done due to the effects of the deformed character model “pushing back” on the armature segments and the effects of separate armature segments pushing into each other. If the sample points adjacent to the armature segments have changed, a new basis function is computed from the set of modified sample points. The new basis function replaces its corresponding original basis function in the set of basis functions. The modified set of basis functions is ortho-normalized, and then the skin modes are projected onto the modified, ortho-normalized set of basis functions for storage. Following the determination of the skin mode responses for the set of basis functions, the unused positional offsets, which are the positional offsets not adjacent to the skin of the character model, are discarded. [0067]
  • Following the completion of step 410, which results in the determination of a set of skin modes for the set of armature basis functions, step 415 determines a set of frame weights for the character model skin. As discussed in detail below, the set of frame weights are used in the character animation phase to correct for undesired shearing effects introduced by large rotations of portions of the character model. [0068]
  • FIG. 7 illustrates a block diagram of a method for weighting a character model with respect to a set of coordinate reference frames according to an embodiment of the invention. At [0069] step 705, a set of coordinate reference frames are attached to the segments of the armature. A coordinate reference frame defines a local coordinate system for an armature segment and the adjacent portions of the character model. In an embodiment, a coordinate reference frame is attached to each armature segment. In an alternate embodiment, several armature segments may share the same coordinate reference frame. In an embodiment, a coordinate reference frame is composed of four vectors: a first vector defining the origin or location of the coordinate reference frame and three vectors defining the coordinate axes of the coordinate reference frame.
  • FIG. 8A illustrates the attachment of coordinate reference frames to an example armature as called for by step 705. In FIG. 8A, armature 802 is associated with a number of coordinate reference frames, including coordinate reference frames 804, 806, and 808. Each reference frame is attached, or positioned, near an armature segment. In an embodiment, a coordinate reference frame is positioned at the end, or joint, of an armature segment. In an alternate embodiment, a coordinate reference frame is positioned anywhere along an armature segment. [0070]
  • In the example of FIG. 8A, coordinate reference frame 804 is positioned near the center of the head of the character armature. Coordinate reference frame 806 is positioned at the shoulder joint of the armature 802. Coordinate reference frame 808 is positioned at the knee joint of the armature 802. [0071]
  • At step 710, the armature and the character model are discretized to create a set of sample points. Similar to the discretization discussed above for determining skin mode responses, one embodiment creates a set of sample points from a three-dimensional grid. An alternate embodiment discretizes the character model and armature using a set of tetrahedral cells. [0072]
  • At step 715, a set of initial frame weights are assigned to the sample points adjacent to each coordinate reference frame. A frame weight defines the influence of a coordinate reference frame on a sample point. As discussed below, each sample point can be influenced by more than one coordinate reference frame, and therefore can have more than one frame weight. In step 715, the sample points adjacent to a reference frame are initialized with a frame weight of 1. The frame weights of the other sample points that are not adjacent to any of the reference frames are undefined at this stage. [0073]
  • FIG. 8B illustrates an example of the assignment of initial frame weights in a portion of an example character model. FIG. 8B shows a portion of a [0074] character model 810 and a portion of a set of sample points 812 created by a three-dimension grid. In this example, coordinate reference frames 814 and 816 are positioned on armature segments. For each coordinate reference frame, the adjacent sample points are assigned a frame weight of 1. For example, sample points 818 are assigned a frame weight of 1 with respect to coordinate reference frame 814, and sample points 820 are assigned a frame weight of 1 with respect to coordinate reference frame 816.
  • At [0075] step 720, the frame weights of the surrounding sample points are determined from the initial frame weights. In an embodiment, a spatial diffusion function is used to calculate the frame weights for sample points. In this embodiment, the initial frame weight values are diffused outward from their initial sample points to the surrounding sample points. As frame weights spread to distant sample points, the frame weight values gradually decrease in value. An example spatial diffusion equation is:
    ∂w/∂t = D ∇²w
  • In this example, the [0076] function w(x, y, z) is the frame weight associated with a sample point with respect to a given coordinate reference frame. D is a diffusion coefficient defining the rate of diffusion. In an embodiment, D is uniform in every direction. In an alternate embodiment, the value of D varies according to the direction of diffusion. In this alternate embodiment, a variable diffusion coefficient is useful in defining sharp transitions between reference frames at armature joints. In a further embodiment, the diffusion coefficient is selected to be consistent with the diffusion of shear stress in the character model.
  • Using the initial frame weights as seed values at the set of sample points, a system of equations is created representing the diffusion of frame weights through the entire character model. This system of equations is solved over the set of sample points to find the value [0077] of one or more frame weights, w, for each sample point. If a sample point is influenced by multiple coordinate reference frames, the sample point will have a corresponding set of frame weights defining the degree of influence from each associated coordinate reference frame. The set of frame weights for each sample point is normalized so that the sum of the frame weights for a sample point is 1.
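The diffusion solve described above can be sketched as follows. This is a minimal illustration rather than the patent's solver: it assumes a regular grid with at least one layer of empty cells padding the model, a uniform diffusion coefficient, and simple Jacobi relaxation to reach the steady state of the diffusion with the seeded sample points held at their initial frame weights. The array layouts and the `seeds` argument are assumptions made for the sketch.

```python
import numpy as np

def diffuse_frame_weights(inside, seeds, iterations=500):
    """Diffuse per-frame weights over a 3D occupancy grid.

    inside : boolean array (nx, ny, nz), True where a sample point lies
             inside the character model (assumed padded by empty cells).
    seeds  : dict mapping a frame id to a boolean mask of sample points
             seeded with weight 1 (points adjacent to that frame).
    Returns an array (n_frames, nx, ny, nz) of normalized frame weights.
    Jacobi relaxation with the seeds held fixed is used as a simple
    stand-in for solving the steady-state diffusion problem.
    """
    masks = list(seeds.values())
    weights = np.zeros((len(masks),) + inside.shape)
    for k, mask in enumerate(masks):
        weights[k][mask] = 1.0
    for _ in range(iterations):
        for k, mask in enumerate(masks):
            w = weights[k]
            avg = np.zeros_like(w)
            count = np.zeros_like(w)
            for axis in range(3):
                for shift in (1, -1):
                    rolled = np.roll(w, shift, axis=axis)
                    valid = np.roll(inside, shift, axis=axis) & inside
                    avg[valid] += rolled[valid]
                    count[valid] += 1
            w_new = np.where(count > 0, avg / np.maximum(count, 1), w)
            w_new[mask] = 1.0          # seeded points keep their initial weight
            w_new[~inside] = 0.0       # points outside the model are ignored
            weights[k] = w_new
    total = weights.sum(axis=0)
    total[total == 0] = 1.0
    return weights / total             # normalize so the weights sum to 1
```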
  • In an alternate embodiment of [0078] steps 715 and 720, an optimal set of frame weights is determined using a full non-linear model accounting for rotational effects. In this embodiment, a non-linear solution of the character model's skin mode response is determined for each of the set of armature basis functions. Unlike the linear solutions computed in step 410, the non-linear solution does not assume the displacement vectors are infinitesimally small. The skin mode of each non-linear solution is compared with the corresponding linear skin mode response for an armature basis function as determined in step 410. From these comparisons of the non-linear skin mode responses with their corresponding linear skin mode responses, an optimal set of frame weights is determined for each sample point.
  • In a further embodiment, the frame weights determined in [0079] steps 715 and 720, for example from a spatial diffusion process or from a non-linear model, are manually adjusted for optimal aesthetic results. For example, the frame weights for sample points located near joints on character models can be fine-tuned so that the deformation of skin points is aesthetically pleasing.
  • FIG. 8C illustrates an example of the set of frame weights determined for a portion of the sample points adjacent to the model skin. FIG. 8C shows a portion of a [0080] character model 822. Coordinate reference frames 824 and 826 are shown as well. A portion of the set of frame weights determined for the sample points adjacent to the model skin is highlighted with open circles in FIG. 8C. Each sample point has one or more frame weights. For example, sample point 828 may have a frame weight of 0.9 with respect to coordinate reference frame 826, and a frame weight of 0.1 with respect to coordinate reference frame 824. Sample point 830 may have a frame weight of 0.1 with respect to coordinate reference frame 826, and a frame weight of 0.9 with respect to coordinate reference frame 824. Sample point 832 may have a frame weight of 0.999 with respect to coordinate reference frame 826 and a frame weight of 0.001 with respect to coordinate reference frame 824.
  • At [0081] step 725, the set of reference frames and their associated frame weights are stored for use during the character animation phase. This completes step 415 and the character preparation phase. Following the completion of the character preparation phase, the character is ready to be used by an animator in the character animation phase. The character animation phase uses the set of basis functions, the associated set of skin modes, and the set of frame weights determined from method 400 to create a final posed character.
  • FIG. 9 illustrates a block diagram of a [0082] method 900 for animating a character in the character animation phase according to an embodiment of the invention. In step 905, a posed armature defines the bodily attitude of the desired final posed character. As discussed above, the posed armature can be created manually by an animator, by interpolating between key frames, or procedurally using one or more animation variables, functions, procedures, or algorithms. The posed armature is compared with the armature in the rest position to determine a pose vector defining the differences in positions and orientations between the armature segments in the posed and rest positions. Additionally, the set of coordinate reference frames attached to the armature follow their associated armature segments from the rest position to the posed position. A set of vectors defining the position and orientation of the set of coordinate reference frames is also determined in step 905.
  • At [0083] step 910, the pose vector is transformed into each of the coordinate spaces defined by the set of coordinate reference frames in their posed positions. For each of the coordinate reference frames, the transformed pose vector is projected on to the armature basis functions. By projecting the pose vector on to the set of basis functions, the pose vector is transformed into a set of basis function weights. The basis function weights redefine the pose vector as a weighted sum of the set of basis functions. A set of basis function weights is created for each coordinate reference frame from the transformed pose vector.
  • At [0084] step 915, the set of basis function weights determined in step 910 are applied to the skin modes in each coordinate reference frame. As discussed above, a skin mode was previously created during the character preparation phase for each of the basis functions. In step 915, the basis function weights associated with each coordinate reference frame are applied to the skin mode of the basis function. The resulting set of skin modes, each weighted by its associated basis function weight, are summed to create a skin pose response. The skin pose response is the deformation of the character model in response to the posed armature. Provided the set of basis functions forms a complete basis of the pose space, a skin pose response can be determined for any possible character pose, regardless of whether the desired pose was explicitly part of the original pose set.
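When the armature basis functions are stored as the columns of a matrix, the projection of step 910 and the weighted-mode summation of step 915 amount to two matrix products. The sketch below is illustrative only; it assumes an orthonormal basis (so projection reduces to a dot product) and flattened arrays of the assumed shapes.

```python
import numpy as np

def skin_pose_response(pose_vector, basis, skin_modes):
    """Project a pose vector onto the armature basis and sum the
    corresponding skin modes.

    basis      : (pose_dim, n_basis) matrix whose columns are the armature
                 basis functions (assumed orthonormal for this sketch).
    skin_modes : (n_basis, n_skin_dofs) array; row k is the skin mode
                 precomputed for basis function k.
    Returns the skin pose response as a flat array of skin offsets.
    """
    weights = basis.T @ pose_vector   # basis function weights
    return weights @ skin_modes       # weighted sum of the skin modes
```

In the method above, this computation is repeated once per coordinate reference frame, with the pose vector first transformed into that frame's coordinate space.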
  • In [0085] step 915, a separate skin pose response is created for each of the coordinate reference frames. In an additional embodiment, step 920 skips the determination of a skin pose response in a reference frame for portions of the character skin where the frame weight is zero or negligible.
  • As discussed above, the skin response is represented in the form of modes that take the form of spatial shape offsets. During the character animation phase, portions of the model can be rotated away from their initial orientations. If the rotation is relatively large with respect to an adjacent portion of the character model, an undesirable shearing effect can be introduced. To correct for this shearing effect, separate skin pose responses are determined for each coordinate reference frame. [0086]
  • At [0087] step 920, the skin pose responses determined in each reference frame are combined to create a single composite skin pose response that does not include any shearing effects. In step 920, the set of skin pose responses are transformed from their associated coordinate reference frames to the global reference frame. Once all of the skin pose responses are in the same coordinate system, the skin pose responses are summed according to the set of frame weights previously determined in the character preparation phase. The deformation of each skin point is the sum of the set of skin pose responses at that point, each weighted by the corresponding frame weight associated with the skin point. In an embodiment, these skin responses are summed in their basis-projected form. The result is a composite skin response.
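Once the per-frame skin pose responses have been transformed into the global reference frame, the frame-weighted blend of step 920 can be sketched as a per-point weighted sum. The array shapes are assumptions made for the illustration.

```python
import numpy as np

def composite_skin_response(responses_global, frame_weights):
    """Blend per-reference-frame skin pose responses into one composite.

    responses_global : (n_frames, n_points, 3) skin pose responses,
                       already transformed into the global frame.
    frame_weights    : (n_frames, n_points) per-point weights that sum
                       to 1 across frames for every skin point.
    """
    # Weight each frame's response at every point, then sum over frames.
    return np.einsum("fp,fpc->pc", frame_weights, responses_global)
```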
  • Following [0088] step 920, the composite skin response is constructed from the basis-projected form back to physical form. In step 925, the weighted sum of the composite skin response and the set of basis functions creates the final posed character.
  • The steps of [0089] method 900 are repeated for each pose of a character to produce an animated sequence. Because the skin mode responses are precomputed in the character preparation phase, the character animation phase can be performed in real-time or near real-time. This permits the animator to efficiently fine-tune the animation. Additionally, because the composite skin response of a character model realistically deforms in response to armature poses, the animator sees the final appearance of the character model during the animation process, rather than having to wait to see the final appearance of the animation.
  • FIGS. 10A, 10B, [0090] 10C, and 10D illustrate the construction of a posed character model from an example armature and an example character model according to an embodiment of the method described in FIG. 9. FIG. 10A illustrates an example posed armature 1005. In this example, posed armature 1005 defines the bodily attitude of a character in a running position. As discussed above, posed armature 1005 may be created manually by an animator, by interpolating between key frames, or procedurally using one or more animation variables, functions, procedures, or algorithms.
  • FIG. 10B illustrates the example posed [0091] armature 1010 and its associated set of coordinate reference frames. In FIG. 10B, each coordinate reference frame is represented by a shaded rectangle. The position and orientation of each rectangle illustrates the position and orientation of the associated coordinate reference frame in the posed position. The size of each rectangle illustrates the approximate portion of the character model influenced by the associated coordinate reference frame.
  • For example, coordinate [0092] reference frame 1015 is associated with the upper leg armature segment of the posed armature 1010. Coordinate reference frame 1015 influences the portion of the character model surrounding the upper leg armature segment. Similarly, coordinate reference frame 1020 influences the portion of the character model surrounding the upper arm armature segment of posed armature 1010. Although not shown in FIG. 10B, two or more reference frames can influence the same portion of the character model.
  • FIG. 10C illustrates examples of two of the skin pose responses determined for the coordinate reference frames. Skin pose [0093] response 1025 is associated with the coordinate reference frame 1035. Skin pose response 1030 is associated with coordinate reference frame 1040. As discussed above, but not shown in FIG. 10C, a skin pose response is determined for each of the coordinate reference frames associated with the posed armature.
  • Skin pose [0094] response 1025 shows the deformation of the character model in response to the posed armature from the view of coordinate reference frame 1035. The portion of the skin pose response 1025 within coordinate reference frame 1035 is correctly deformed in response to the posed armature. However, other portions of the skin pose response 1025 outside of coordinate reference frame 1035 are highly distorted due to shearing effects. For example, in skin pose response 1025, the upper leg portion of the character model within coordinate reference frame 1035 is correctly deformed from the posed armature, while the arms 1042 and 1044 of the character model are distorted due to shearing effects.
  • Similarly, the skin pose [0095] response 1030 shows the deformation of the character model in response to the posed armature from the view of coordinate reference frame 1040. The arm portion of the skin pose response 1030 within coordinate reference frame 1040 is correctly deformed in response to the posed armature. However, other portions of the skin pose response 1030 outside of coordinate reference frame 1040, such as the legs of the character model, are highly distorted due to shearing effects.
  • As discussed above with respect to the method of FIG. 9, the separate skin pose responses determined from each reference frame are combined using the set of frame weights into a composite skin pose response without shearing effects. [0096]
  • FIG. 10D illustrates the composite skin pose [0097] response 1050 created from a set of separate skin pose responses associated with different reference frames. For example, the leg portion 1060 of the composite skin pose response 1050 is created primarily from the skin pose response 1025 shown in FIG. 10C. Similarly, the arm portion 1065 of the composite skin pose response 1050 is created primarily from the skin pose response 1030. The set of frame weights determines the contribution of each skin pose response to a given portion of the composite skin pose response. As discussed above, because a skin point can be associated with several coordinate reference frames through a corresponding number of frame weight values, the composite skin pose response can include contributions from several skin pose responses.
  • Using the above-described embodiments, animators can create posed character models with realistic bulging and bending in real-time. In a further embodiment of the invention, a character model or any other soft object is realistically deformed in response to collisions with other objects in real time. A character's skin can deform due to a collision with an external object, such as another character or a rigid object. A character's skin can also deform due to self-collision, which is the collision of one part of the character model with another part of the character. An example of self-collision can occur when a character's arm is bent at the elbow so that the upper and lower arm contact each other. [0098]
  • Creating realistic character model deformation in response to collisions is a two phase process, similar to that discussed in FIG. 3. The first phase is a collision preparation phase. The collision preparation phase is relatively computationally expensive and is performed in advance of any animation. The collision preparation phase creates a set of skin impulse responses defining the deformation of the character to a set of test collisions. Each character skin impulse response is the deformation of the surface of a character in response to a single collision at a single point. In an embodiment, the skin impulse response defines the displacement of points surrounding the collision point in response to a collision. [0099]
  • Following the completion of the collision preparation phase, in the collision animation phase animators create collisions by placing objects in contact with the character. In an embodiment of the invention, an animator defines the locations of the character model and the colliding object, referred to as a collider, in each frame of an animation sequence. Any portion of the character model overlapping or contacting the collider is considered to be part of the collision. For each frame, the collision animation phase determines the skin collision response, which is the deformation of the character model in response to the collision of the collider with the character model, using the set of skin impulse responses. [0100]
  • Regardless of the shape of the collider or the amount of collision between the character model and the collider, the collision animation phase uses the same set of skin impulse responses to determine the collision skin response. Thus, the collision preparation phase only needs to be performed once for a character model, and the collision animation phase is repeated to create a skin collision response for each frame in an animated sequence. [0101]
  • In the collision preparation phase, an embodiment of the invention determines a set of skin impulse responses for a character model. FIG. 11 illustrates a block diagram of a [0102] method 1100 for determining the skin impulse response of a character according to an embodiment of the invention. At step 1105, the character model is discretized to create a set of sample points. In an embodiment, the character model is discretized into a three-dimensional grid. In an alternate embodiment, the character model is discretized into a set of tetrahedral cells.
  • At [0103] step 1110, a collision point is selected. A collision point can be any point on the surface of the character model, or in a further embodiment, within the interior of a character model. As an example, internal collision points, which are collision points within a character model, can be used to deform the skin of a character model in response to collisions with internal “muscle” objects. However, skin and muscle are in reality often separated by a thin layer of fat. To approximate this anatomical feature, a “collision shield” can be created by selecting interior points of the character model as collision points.
  • [0104] Step 1110 applies a set of displacements to the collision point. Each displacement represents a collision of the character model at the collision point in a different direction. In an embodiment, a displacement is applied to the collision point in each of the three Cartesian directions. In a further embodiment, each displacement is a unit displacement in the appropriate direction. In a manner similar to that discussed in step 410, the sample points adjacent to the collision point are assigned displacement vectors based on the displacement at the collision point.
  • At [0105] step 1115, a skin impulse response is computed for each displacement using the displacement values assigned in step 1110 as initial input values. In an embodiment, the skin impulse response is computed by determining the value of an elastic energy function over every sample point inside the character body, in a manner similar to that used to find the skin mode response. By minimizing the value of the elastic energy function over the entire discretized space, the value of q(x, y, z), the position offset, is calculated for each sample point. The skin impulse response for a given skin displacement is the set of position offsets at sample points adjacent to the skin of the model.
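The energy-minimization solve of step 1115 can be sketched as a sparse linear system over the sample points. The sketch below substitutes a simple screened-Laplacian quadratic energy for the patent's elastic energy function, so it is a structural illustration only: the displacement is held fixed at the sample points seeded in step 1110, and the offsets q at all other sample points are obtained by minimizing the energy. The `neighbors` adjacency, the `alpha` fall-off constant, and the array layouts are assumptions.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def skin_impulse_response(points, neighbors, seed_idx, seed_disp, alpha=0.1):
    """Solve for position offsets q at every sample point given a
    displacement applied at one collision point.

    points    : (n, 3) sample point positions (used only for sizing here).
    neighbors : list of index lists; neighbors[i] are the grid neighbors of i.
    seed_idx  : indices of sample points adjacent to the collision point.
    seed_disp : (len(seed_idx), 3) displacements assigned to those points.

    A screened-Laplacian energy stands in for the elastic energy of the
    patent; alpha controls how quickly the response falls off with distance.
    """
    n = len(points)
    seed_set = set(int(i) for i in seed_idx)
    A = lil_matrix((n, n))
    b = np.zeros((n, 3))
    for i in range(n):
        if i in seed_set:
            A[i, i] = 1.0                        # Dirichlet row: offset is prescribed
        else:
            A[i, i] = len(neighbors[i]) + alpha  # screened graph Laplacian row
            for j in neighbors[i]:
                A[i, j] = -1.0
    for row, i in enumerate(seed_idx):
        b[i] = seed_disp[row]
    A = A.tocsc()
    # Solve one linear system per Cartesian component of the offsets.
    return np.column_stack([spsolve(A, b[:, c]) for c in range(3)])
```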
  • At [0106] step 1120, steps 1110 and 1115 are repeated for displacements applied to a number of collision points to create a set of skin impulse responses. In an embodiment where the skin includes one or more surfaces defined by control points, each control point is selected as a collision point and a set of skin impulse responses is then created. In a further embodiment, control points for relatively rigid portions of the character model are excluded from the set of collision points.
  • At [0107] step 1125, a set of basis functions is determined from the set of skin impulse responses. In an embodiment, a singular value decomposition is used to calculate the set of basis functions from the skin impulse responses. In alternate embodiments, other methods of calculating a set of basis functions, such as a canonical correlation, can also be used. In a further embodiment, if the resulting set of basis functions is not an orthonormal basis, the set of basis functions is ortho-normalized so that each basis function has a magnitude of 1 and is perpendicular to every other basis function.
  • [0108] Step 1125 then projects the set of impulse responses onto the set of basis functions to create a compact representation of the set of skin impulse responses. In a further embodiment, less significant terms of the singular value decomposition are truncated to decrease the number of basis functions. This results in a smoothing effect as the set of impulse responses is projected onto the truncated basis set. In an alternate embodiment, the set of skin responses is stored as a sparse set of vectors defining the displacement of surrounding points in response to a displacement of a collision point. This alternate embodiment represents skin impulse responses affecting only a small number of points more efficiently than the basis function representation.
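A minimal sketch of the basis construction and projection described in step 1125: a singular value decomposition of the stacked impulse responses yields an orthonormal basis, truncating the less significant terms keeps the representation compact, and the responses are then stored as coefficients in that basis. The array shapes and the `n_keep` parameter are illustrative assumptions.

```python
import numpy as np

def impulse_basis(impulse_responses, n_keep):
    """Build a truncated orthonormal basis from a set of skin impulse
    responses via singular value decomposition and project the responses
    onto that basis for compact storage.

    impulse_responses : (n_responses, n_skin_dofs) array, one flattened
                        impulse response per row.
    n_keep            : number of basis functions to retain; dropping the
                        less significant terms smooths the responses.
    """
    # Rows of vt are orthonormal; keep the n_keep most significant ones.
    _, _, vt = np.linalg.svd(impulse_responses, full_matrices=False)
    basis = vt[:n_keep]                    # (n_keep, n_skin_dofs)
    coeffs = impulse_responses @ basis.T   # compact, basis-projected form
    return basis, coeffs
```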
  • FIGS. 12A, 12B and [0109] 12C illustrate the determination of a skin impulse response of an example character according to an embodiment of the invention. FIG. 12A illustrates a displacement applied to a character model 1205. In this example, the character model 1205 has been discretized with a three-dimensional grid 1215. A displacement 1220 is applied to a collision point on the model skin 1210. A set of displacement values 1225 are assigned to the sample points adjacent to the collision point.
  • FIG. 12B illustrates a set of [0110] displacement vectors 1230 included as part of the skin impulse response resulting from the skin displacement 1235. The set of displacement vectors 1230 is provided for the purpose of explanation, and the skin impulse response may include any number of displacement vectors dispersed over all or a portion of the model skin. As can be seen from the magnitude and orientation of the set of grid displacement vectors 1230, the model skin bulges inward near the collision point and bulges outward in the region surrounding the collision point.
  • FIG. 12C illustrates the example skin impulse response of FIG. 12B projected onto the model skin. This figure is presented to clarify the effects of the skin impulse response of FIG. 12B on the appearance of the character model. As discussed above, an embodiment of the invention projects the skin impulse response on to a set of basis functions to create a more compact representation. In the example of FIG. 12C, the [0111] model skin 1260 bulges outward and inward as a result of the displacement created by the skin impulse response. In this example, the skin impulse response presents a realistic looking representation of the deformation of a character due to the collision with an object. The model skin 1250 in its rest state is shown for comparison.
  • Following the determination of the set of skin impulse responses in the collision preparation phase, the collision animation phase determines the deformation of the character skin in response to a collision defined by an animator. FIG. 13 illustrates a block diagram of a [0112] method 1300 for determining the collision response of a character model according to an embodiment of the invention. Method 1300 will be discussed below with reference to FIGS. 14A-14F, which illustrate the determination of a skin collision response from an example collision according to an embodiment of the invention.
  • At [0113] step 1305, a set of collision points are identified. Collision points are skin points in contact with or inside of a collider. FIG. 14A illustrates a portion of a model skin 1404 in collision with a collider 1402. The model skin 1404 includes a number of skin points. A portion of these skin points are inside of collider 1402. These skin points, 1406, 1408, 1410, and 1412, are the set of collision points in this example collision. In a further embodiment, skin points that are not inside or in contact with a collider are selected as additional collision points if they are near the surface of the collider or near a collision point inside the collider. This provides a margin of safety when the deformation of the character skin from a collision causes additional skin points to contact the collider.
  • At [0114] step 1310, a first collision point is selected and displaced to a potential rest position, which is a first approximation of the final rest position of a collision point. In an embodiment, the first collision point is selected randomly. In one embodiment, the potential rest position is the position on the surface of the collider nearest to the first collision point. In an alternate embodiment, the potential rest position is a position between the first collision point and the nearest collider surface point. In a further embodiment, a scaling factor is used to determine the distance between the nearest collider surface point and the potential rest position.
  • FIG. 14B illustrates an example [0115] first collision point 1414 displaced from its initial position 1416 to a potential rest position. In this example, the potential rest position lies 80% of the way from its initial position 1416 to the nearest surface point of the collider 1418. In this example, the scaling factor is selected to optimize the performance of method 1300. Although first collision point 1414 is shown being displaced in a strictly horizontal direction for clarity, it should be noted that a collision point can be moved in any direction to a potential rest position.
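The displacement to a potential rest position is a simple interpolation toward the nearest point on the collider surface. A minimal sketch, with the 80% scaling factor from the example treated as a tunable parameter; the function and argument names are illustrative.

```python
import numpy as np

def potential_rest_position(point, nearest_surface_point, scale=0.8):
    """Move a collision point toward the nearest point on the collider
    surface by a fraction of the separating distance.

    With scale=1.0 the point lands exactly on the collider surface; the
    default 0.8 mirrors the example in the text and is purely illustrative.
    """
    point = np.asarray(point, dtype=float)
    nearest_surface_point = np.asarray(nearest_surface_point, dtype=float)
    return point + scale * (nearest_surface_point - point)
```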
  • At [0116] step 1315, an initial collision response is applied to the other, non-displaced collision points. The initial collision response is determined by projecting the displacement of the first collision point from its initial position to the potential rest position on to the set of basis functions previously created from the set of impulse responses. The projection of the displacement on the set of basis functions creates a set of weights defining the displacement in basis space.
  • The set of weights are then applied to the impulse responses associated with the first collision point. This results in an initial collision response for the first collision point. The initial collision response defines the displacement of skin points surrounding the first collision point in response to the displacement of the first collision point from its initial position to the potential rest position. [0117]
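Because the skin impulse responses of step 1110 were computed for unit displacements along each Cartesian axis, a displacement of a collision point can equivalently be expanded as a linear combination of that point's three precomputed unit responses. The sketch below takes that simplified view rather than working explicitly in basis space; the array layout is an assumption.

```python
import numpy as np

def collision_response(displacement, unit_responses):
    """Combine the precomputed unit impulse responses of one collision point.

    displacement   : (3,) displacement of the collision point from its
                     initial position to its potential rest position.
    unit_responses : (3, n_points, 3) impulse responses for unit
                     displacements of this point along x, y, and z.
    Returns the induced displacement of every skin point; during the
    iteration it is applied only to the other collision points.
    """
    return np.einsum("d,dpc->pc", np.asarray(displacement), unit_responses)
```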
  • The initial collision response is applied to the surrounding collision points to displace these collision points from their initial positions. It should be noted that the initial collision response is applied only to the collision points, i.e., only points in contact with or within the collider, even though the skin impulse responses associated with a first collision point can define displacements for additional points. [0118]
  • FIG. 14C illustrates the application of an example initial collision response to a set of surrounding collision points. In this example, the [0119] first collision point 1420, outlined for emphasis, has been displaced from its initial position 1421 to a potential rest position, resulting in the application of an initial collision response to the set of surrounding collision points. The initial collision response is the displacement applied to the surrounding collision points 1422, 1424, and 1426 resulting from the displacement of first collision point 1420. The initial collision response displaces collision points 1422, 1424, and 1426 from their respective initial positions to new potential rest positions as shown. As discussed above, the skin points outside of the collider, i.e. the non-collision skin points, are not displaced at this point of the collision animation phase.
  • At [0120] step 1320, the set of surrounding collision points are further displaced to respective potential rest positions. Similar to step 1310, each of the surrounding collision points is moved from the position set at step 1315 to a potential rest position. In one embodiment, the positions on the surface of the collider nearest to each of the surrounding collision points are the respective potential rest positions. In an alternate embodiment, the potential rest position is a position between a collision point and the nearest collider surface point. In a further embodiment, a scaling factor is used to determine the distance between the nearest collider surface point and the potential rest position. In one example, the potential rest position of a surrounding collision point lies 80% of the way from the surrounding collision point's new position, determined in step 1315, to the nearest surface point of the collider.
  • Following the displacement of the surrounding collision points to their respective potential rest positions, at step [0121] 1325 a set of collision responses is determined for the set of surrounding collision points. Similar to step 1315, the displacement of each of the surrounding collision points from its initial position to its respective potential rest position is projected on to the set of basis functions previously created from the set of impulse responses. Each displacement projection creates a set of basis weights associated with one of the surrounding collision points.
  • The basis weights of a surrounding collision point are applied to the corresponding impulse responses associated with the surrounding collision point to create a secondary collision response. This is repeated for each of the surrounding collision points to create a set of secondary collision responses. Each secondary collision response defines the displacement of skin points near the surrounding collision point in response to the displacement of the surrounding collision point from its initial position to its respective potential rest position. [0122]
  • The set of secondary collision responses are applied to all of the collision points, including the first collision point selected in [0123] step 1310, to displace these collision points from their potential rest positions. Once again, the secondary collision responses are only applied to collision points and non-collision points are not moved during this step of the collision animation phase.
  • Following the application of the secondary collision responses to the set of collision points, each of the set of collision points will have new positions. [0124] Step 1325 determines a displacement for each of the collision points from their initial positions, and creates a new set of collision responses in a similar manner to that discussed above. The new set of collision responses is applied to further displace the set of collision points. This process of creating a set of collision responses and applying the set of collision responses to the set of collision points is repeated until the set of collision points converge on a final set of displacements.
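The iteration described above can be sketched as the following relaxation loop, which repeatedly pushes the collision points toward the collider and re-applies the induced collision responses until the collision-point positions stop changing. This is a simplified Jacobi-style restatement under assumed data layouts, not the patent's exact sequence; `rest_target` is a hypothetical helper returning a point's potential rest position (for example the 80% projection sketched earlier).

```python
import numpy as np

def relax_collision_points(initial_pos, collision_idx, rest_target,
                           unit_responses, tol=1e-4, max_iters=50):
    """Iteratively displace the collision points until they converge.

    initial_pos    : (n, 3) initial positions of all skin points.
    collision_idx  : indices of the skin points in contact with the collider.
    rest_target    : hypothetical helper mapping a position to its potential
                     rest position on or near the collider surface.
    unit_responses : (n, 3, n, 3) array; unit_responses[i][d] is the skin
                     displacement caused by a unit move of point i along axis d.
    """
    pos = initial_pos.copy()
    for _ in range(max_iters):
        prev = pos[collision_idx].copy()
        # Displacement of each collision point implied by pushing its current
        # position toward its potential rest position on the collider.
        disp = {i: rest_target(pos[i]) - initial_pos[i] for i in collision_idx}
        for i in collision_idx:
            new_pos = initial_pos[i] + disp[i]
            for j in collision_idx:
                if j != i:
                    # Response of point j's displacement, evaluated at point i.
                    new_pos += np.einsum("d,dc->c", disp[j],
                                         unit_responses[j][:, i, :])
            pos[i] = new_pos
        if np.max(np.abs(pos[collision_idx] - prev)) < tol:
            break   # collision points have converged to final displacements
    return pos
```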
  • FIG. 14D illustrates the application of a set of secondary collision responses to the set of collision points. A set of collision points, [0125] 1428, 1430, 1432, and 1434, have vectors indicating a displacement from their potential rest positions as a result of the set of secondary collision responses. Each vector represents the sum of the displacements resulting from the secondary collision responses of the other collision points. For example, the displacement of collision point 1428 is the sum of the secondary collision responses from collision points 1430, 1432, and 1434.
  • At [0126] step 1330, the final collision response is applied to the non-collision points. The final set of displacements determined in step 1325 is the displacement of each collision point from its initial position to its final position. Step 1330 projects each displacement from the final set of displacements on the set of basis functions to create a basis weight associated with each collision point. Each collision point's basis weight is applied to its associated impulse responses to determine a set of displacements for the non-collision points. The displacements resulting from each collision point are added together to create a final collision response defining the displacement of the non-collision points in response to the collision.
  • FIG. 14E illustrates the determination of the final collision response for the non-collision points. Collision points [0127] 1436, 1438, 1440, and 1442, outlined for emphasis, have been displaced from their initial positions, shown in dotted outline, to their final positions. Based on the displacements of the set of collision points, the non-collision points 1444, 1446, 1448, and 1450 are displaced from their initial positions as indicated by their respective vectors. Each vector represents the sum of the displacements contributed from the set of collision points 1436, 1438, 1440, and 1442.
  • In an embodiment of the invention, all of the collision responses are determined and summed together in basis space. This improves the efficiency of the [0128] method 1300. In this embodiment, at step 1335, the final collision response is constructed from the basis-projected form back to physical form. The skin deformation from the collision is determined from the weighted sum of the final collision response and the set of basis functions.
  • In an alternate embodiment, sparse vectors representing the impulse responses of collision points are used to determine the final collision response. In another embodiment, the association between a collider and collision points can be determined from their positions in the rest pose. This embodiment is useful in cases where the skin response is not expected to move much in response to a collision, for example, the collision between skin (or an internal collision shield) and muscles. [0129]
  • FIG. 14F illustrates the final collision response constructed onto the model skin. In the example of FIG. 14F, the [0130] skin 1452 bulges inward and around the collider 1454 as a result of the final collision response, presenting a realistic looking deformation in response to a collision. For comparison, an outline 1456 of the initial undeformed character surface is also shown.
  • As discussed above, during the collision animation phase, only the collision points are displaced until the final collision response is determined. This greatly limits the number of points that need to be calculated in determining the collision response and allows the collision animation phase to be performed in real-time. This gives the animator the ability to fine-tune the interaction of characters with their surroundings. [0131]
  • In a further embodiment of the invention, the methods of deforming a character model in response to a posed armature and in response to a collision can be combined. In this combined embodiment, character models deform realistically in response to a posed armature and to collisions. In this embodiment, the animation process is divided into two phases: a combined preparation phase and a combined animation phase. Similar to the other embodiments, in this embodiment, the combined preparation phase is performed once for a character model. In the combined preparation phase, an armature basis set, a corresponding set of skin modes, and a set of frame weights are determined as described in [0132] method 400. Additionally, the combined preparation phase determines a set of skin impulse responses and an associated impulse basis set as described in method 1100.
  • The combined animation phase uses the armature basis set, the set of skin modes, the set of frame weights, the set of skin impulse responses, and the impulse basis set to create a character model posed and deformed according to the specifications from an animator. FIG. 15 illustrates a block diagram of a [0133] method 1500 for animating a character in the combined animation phase according to an embodiment of the invention. In step 1505, a posed armature defines the bodily attitude of the desired final posed character. As discussed above, the posed armature can be created manually by an animator, by interpolating between key frames, or procedurally using one or more animation variables, functions, procedures, or algorithms. Additionally, as the set of coordinate reference frames attached to the armature follow their associated armature segments from the rest position to the posed position, a set of vectors defining the position and orientation of the set of coordinate reference frames is also determined in step 1505.
  • At [0134] step 1510, a set of skin pose responses is determined for the set of coordinate reference frames. Each skin pose response is determined in a similar manner to that described in method 900. Generally, the pose vector and the set of basis functions are transformed into the coordinate space defined by a coordinate reference frame in its posed position. The transformed pose vector is projected on to the transformed set of armature basis functions to create a set of basis function weights. The set of basis function weights is applied to the skin modes in each coordinate reference frame to determine a skin pose response for the coordinate reference frame. This is repeated for each coordinate reference frame to create a set of skin pose responses.
  • At step [0135] 1515, a composite skin pose response is determined from the set of skin pose responses. Similar to the method 900 discussed above, the skin pose responses from each coordinate reference frame are combined according to the associated frame weights to correct for undesirable shearing effects. Generally, the set of skin pose responses are transformed from their associated coordinate reference frames to the global reference frame and summed according to the set of frame weights. The result of this step is a composite skin response.
  • At [0136] step 1520, point constraints are identified. Point constraints are the points displaced from collisions of the character model with itself or external objects. Animators can create collisions by positioning objects in contact with the character model in each frame, either manually or as the result of motion defined by a set of key frames or one or more animation variables. Point constraints can also result from the animator attaching a point of the character model to another object, or by manually forcing a skin point into a new position. In an embodiment, step 1520 identifies potential collision points by defining a radius around each point on the skin of the character model. In an alternate embodiment, a bounding box is used to identify potential collision points. Step 1520 identifies the set of collision points to be used to determine the deformation of the character model from a collision.
  • At [0137] step 1525, the set of collision points are evaluated to determine the skin collision response. An embodiment of step 1525 evaluates the set of collision points according to the method 1300 discussed above. Generally, a first displacement is determined for a first collision point. The first displacement is projected on to the set of impulse basis functions to determine an initial collision response from skin impulse responses. The initial collision response displaces the surrounding collision points. The displacements of the surrounding collision points are applied to their respective skin impulse responses to further displace the set of collision points. The further displacement of the set of collision points creates subsequent collision responses, which are iteratively processed until the collision points converge to their final positions. The final positions of the set of collision points define a skin collision response, which is then applied to the set of non-collision points.
  • At [0138] step 1530, the character model is constructed from the composite skin pose response and the skin collision response. In an embodiment, both the composite skin pose response and the skin collision response are stored and processed in their basis-projected forms. In this embodiment, the weighted sum of the composite skin response and the set of armature basis functions is added to the weighted sum of the skin collision response and the set of impulse basis functions. The result is a character model deformed in response to the armature pose and collisions.
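When both responses are kept in basis-projected form, the final construction in step 1530 is the sum of two basis expansions added to the rest positions of the skin. A minimal sketch under assumed array layouts:

```python
import numpy as np

def construct_posed_model(rest_points, pose_coeffs, armature_basis,
                          collision_coeffs, impulse_basis):
    """Construct the final character model from the basis-projected
    composite skin pose response and skin collision response.

    rest_points      : (n_points * 3,) flattened rest positions of the skin.
    pose_coeffs      : coefficients of the composite skin pose response
                       in the armature basis.
    armature_basis   : (n_pose_basis, n_points * 3) armature basis functions.
    collision_coeffs : coefficients of the skin collision response in the
                       impulse basis.
    impulse_basis    : (n_impulse_basis, n_points * 3) impulse basis functions.
    """
    pose_offsets = pose_coeffs @ armature_basis            # pose deformation
    collision_offsets = collision_coeffs @ impulse_basis   # collision deformation
    return (rest_points + pose_offsets + collision_offsets).reshape(-1, 3)
```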
  • The steps of [0139] method 1500 are repeated for each frame to produce an animated sequence. Because the skin mode responses and the skin impulse responses are precomputed in the combined preparation phase, the combined animation phase can be performed in real-time or near real-time. This permits the animator to efficiently fine-tune the animation and maximize the dramatic impact of the animation. Additionally, because the combined skin response of a character model realistically deforms in response to armature poses and collisions, the animator sees the final appearance of the character model during the animation process, rather than having to wait to see the final appearance of the animation.
  • Furthermore, the present invention determines a realistic character deformation directly from the posed armature without the need to create underlying bone and muscle structures required by physical simulation techniques or complicated armature weightings used by kinematic transform techniques. This decreases the time and effort needed to create character models compared with prior animation techniques. [0140]
  • It should be noted that once the posed or deformed model has been created using one or more of the above discussed embodiments, any rendering technique, for example ray-tracing or scanline rendering, can create a final image or frame from the model in combination with lighting, shading, texture mapping, and any other image processing information. [0141]
  • Further embodiments can be envisioned to one of ordinary skill in the art after reading the attached documents. In other embodiments, combinations or sub-combinations of the above disclosed invention can be advantageously made. The block diagrams of the architecture and flow charts are grouped for ease of understanding. However it should be understood that combinations of blocks, additions of new blocks, re-arrangement of blocks, and the like are contemplated in alternative embodiments of the present invention. [0142]
  • The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the invention as set forth in the claims. [0143]

Claims (38)

What is claimed is:
1. A method for creating a posed character model, the method comprising:
determining a basis set from a set of sample character positions;
determining a set of skin responses for the character model, wherein each skin response corresponds with one of the basis set;
projecting a first character pose onto the basis set to determine a set of basis weights in a first reference frame;
applying the set of basis weights to the set of skin responses to create a first skin pose response; and
constructing a posed character model from the first skin pose response and the basis set.
2. The method of claim 1, further comprising:
repeating for a second reference frame the steps of projecting a character pose and applying the set of basis weights to create a second skin pose response; and
constructing the posed character model from the first skin pose response, the second skin pose response, and the basis set.
3. The method of claim 2, wherein constructing includes combining the first and second skin pose responses according to a set of frame weights associated with the first and second reference frames.
4. The method of claim 3, further comprising determining a set of frame weights by diffusing an initial set of frame weight values through the character model.
5. The method of claim 3, further comprising determining a set of frame weights by comparing the set of skin responses with a corresponding set of non-linear solutions.
6. The method of claim 3, further comprising setting at least a portion of the set of frame weights to values received from a user.
7. The method of claim 1, further comprising:
repeating for a second character pose the steps of projecting a character pose, applying the set of basis weights, and projecting the set of skin responses to create a second posed character model.
8. The method of claim 1, wherein the set of sample character positions comprises a training set of character poses.
9. The method of claim 1, wherein the set of sample character positions comprises at least one dynamically created pose.
10. The method of claim 1, wherein an armature is used to define the set of sample character positions and the first character pose.
11. The method of claim 1, wherein the first character pose is at least partially defined according to an animation variable.
12. The method of claim 1, wherein determining a set of skin responses comprises, for each basis of the basis set, applying a set of displacements from one of the basis set to a portion of the character model and minimizing a function of the displacement over the entire character model.
13. The method of claim 12, wherein the function is a material energy function.
14. The method of claim 12, further comprising:
discretizing the character model into a three-dimensional field of sample points, applying the set of displacements to a portion of the sample points, and minimizing the function at every sample point associated with the character model.
15. The method of claim 14, wherein the field is a Cartesian grid.
16. The method of claim 14, wherein the field is a set of vertices of tetrahedral volume elements.
17. The method of claim 1, wherein each of the set of skin responses is defined as the linear combination of the elements of the basis set.
18. A method for creating a posed character model, comprising:
transforming a character pose into a set of reference frames associated with a character model;
for each reference frame, creating a skin pose response of the character model in response to the character pose; and
constructing a composite skin response of the character model from the skin pose responses of each reference frame.
19. The method of claim 18, wherein constructing a composite skin response comprises combining a portion of the skin response of a first reference frame with a portion of the skin response of a second reference frame.
20. The method of claim 19, wherein the portion of the skin response of the first reference frame and the portion of the skin response of the second reference frame correspond to two, at least partially overlapping regions of the character model.
21. The method of claim 20, wherein the portion of the skin response of the first reference frame and the portion of the skin response of the second reference frame correspond to two identical regions of the character model.
22. The method of claim 19, wherein the portion of the skin response of the first reference frame and the portion of the skin response of the second reference frame correspond to two different regions of the character model.
23. The method of claim 19, wherein the portion of the skin response of a first reference frame and the portion of the skin response of the second reference frame are combined according to a set of frame weights defining the influence of the skin responses of the first and second reference frames on the composite skin response.
24. The method of claim 18, further comprising:
determining a set of frame weights associated with the set of reference frames, the set of frame weights defining the influence of each reference frame on the composite skin response.
25. The method of claim 24, wherein determining a set of frame weights comprises diffusing an initial set of frame weight values through the character model.
26. The method of claim 25, wherein diffusing comprises:
discretizing a character model into a field of three dimensional sample points;
assigning initial frame weight values to sample points adjacent to the origin of each reference frame; and
determining a set of frame weight values for a plurality of sample points from a diffusion of the initial frame weight values.
27. The method of claim 24, further comprising determining a set of frame weights by comparing the set of skin responses with a corresponding set of non-linear solutions.
28. The method of claim 24, further comprising setting at least a portion of the set of frame weights to values received from a user.
29. An information storage medium having a plurality of instructions adapted to direct an information processing device to perform an operation comprising the steps of:
determining a basis set from a set of sample character positions;
determining a set of skin responses for the character model, wherein each skin response corresponds with one of the basis set;
projecting a first character pose onto the basis set to determine a set of basis weights in a first reference frame;
applying the set of basis weights to the set of skin responses to create a first skin pose response; and
constructing a posed character model from the first skin pose response and the basis set.
30. The information storage medium of claim 29, further comprising:
repeating for a second reference frame the steps of projecting a character pose and applying the set of basis weights to create a second skin pose response; and
constructing the posed character model from the first skin pose response, the second skin pose response, and the basis set.
31. The information storage medium of claim 29, wherein the first character pose is at least partially defined according to an animation variable.
32. The information storage medium of claim 29, wherein determining a set of skin responses comprises, for each basis of the basis set, applying a set of displacements from one of the basis set to a portion of the character model and minimizing a function of the displacement over the entire character model.
33. An information storage medium having a plurality of instructions adapted to direct an information processing device to perform an operation comprising the steps of:
transforming a character pose into a set of reference frames associated with a character model;
for each reference frame, creating a skin pose response of the character model in response to the character pose; and
constructing a composite skin response from the skin pose responses of each reference frame.
34. The information storage medium of claim 33, wherein constructing a composite skin response comprises combining a portion of the skin response of a first reference frame with a portion of the skin response of a second reference frame.
35. The information storage medium of claim 34, wherein the portion of the skin response of a first reference frame and the portion of the skin response of the second reference frame are combined according to a set of frame weights defining the influence of the skin responses of the first and second reference frames on the composite skin response.
36. The information storage medium of claim 35, wherein the operation further comprises the step of determining a set of frame weights associated with the set of reference frames, the set of frame weights defining the influence of each reference frame on the composite skin response.
37. A tangible media including a first image having a character model in a first pose and a consecutive image having a character model in a second pose, wherein the appearance of the character model in the second pose is independent of the appearance of the character model in the first pose and wherein the character in the first and second poses are created according to the method of claim 1.
38. A tangible media including a first image having a character model in a first pose and a consecutive image having a character model in a second pose, wherein the appearance of the character model in the second pose is independent of the appearance of the character model in the first pose and wherein the character in the first and second poses are created according to the method of claim 18.
Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060277454A1 (en) * 2003-12-09 2006-12-07 Yi-Chih Chen Multimedia presentation system
US20070097125A1 (en) * 2005-10-28 2007-05-03 Dreamworks Animation Llc Artist directed volume preserving deformation and collision resolution for animation
US20070268293A1 (en) * 2006-05-19 2007-11-22 Erick Miller Musculo-skeletal shape skinning
US20080043021A1 (en) * 2006-08-15 2008-02-21 Microsoft Corporation Three Dimensional Polygon Mesh Deformation Using Subspace Energy Projection
US20080043042A1 (en) * 2006-08-15 2008-02-21 Scott Bassett Locality Based Morphing Between Less and More Deformed Models In A Computer Graphics System
US20090219287A1 (en) * 2008-02-29 2009-09-03 Microsoft Corporation Modeling and rendering of heterogeneous translucent materials using the diffusion equation
US20090319049A1 (en) * 2008-02-18 2009-12-24 Maxx Orthopedics, Inc. Total Knee Replacement Prosthesis With High Order NURBS Surfaces
US20100172557A1 (en) * 2002-01-16 2010-07-08 Alain Richard Method and apparatus for reconstructing bone surfaces during surgery
US20140285513A1 (en) * 2013-03-25 2014-09-25 Naturalmotion Limited Animation of a virtual object
US8847963B1 (en) * 2011-01-31 2014-09-30 Pixar Systems and methods for generating skin and volume details for animated characters
US20140306962A1 (en) * 2013-04-16 2014-10-16 Autodesk, Inc. Mesh skinning technique
CN104156995A (en) * 2014-07-16 2014-11-19 浙江大学 Production method for ribbon animation aiming at Dunhuang flying image
WO2015081978A1 (en) * 2013-12-02 2015-06-11 Brainlab Ag Method and device for determining a transformation between two images of an anatomical structure
US20160019708A1 (en) * 2014-07-17 2016-01-21 Crayola, Llc Armature and Character Template for Motion Animation Sequence Generation
EP2950880A4 (en) * 2013-01-29 2017-06-14 National ICT Australia Limited Neuroprosthetic stimulation
US9734616B1 (en) * 2013-10-11 2017-08-15 Pixar Tetrahedral volumes from segmented bounding boxes of a subdivision
US10022628B1 (en) * 2015-03-31 2018-07-17 Electronic Arts Inc. System for feature-based motion adaptation
US10096133B1 (en) 2017-03-31 2018-10-09 Electronic Arts Inc. Blendshape compression system
US10388053B1 (en) 2015-03-27 2019-08-20 Electronic Arts Inc. System for seamless animation transition
US10403018B1 (en) 2016-07-12 2019-09-03 Electronic Arts Inc. Swarm crowd rendering system
US10535174B1 (en) 2017-09-14 2020-01-14 Electronic Arts Inc. Particle-based inverse kinematic rendering system
US10606966B2 (en) * 2014-08-26 2020-03-31 Samsung Electronics Co., Ltd. Method and apparatus for modeling deformable body including particles
US10726611B1 (en) 2016-08-24 2020-07-28 Electronic Arts Inc. Dynamic texture mapping using megatextures
US10792566B1 (en) 2015-09-30 2020-10-06 Electronic Arts Inc. System for streaming content within a game application environment
US10860838B1 (en) 2018-01-16 2020-12-08 Electronic Arts Inc. Universal facial expression translation and character rendering system
US10878540B1 (en) 2017-08-15 2020-12-29 Electronic Arts Inc. Contrast ratio detection and rendering system
US10902618B2 (en) 2019-06-14 2021-01-26 Electronic Arts Inc. Universal body movement translation and character rendering system
US11210833B1 (en) * 2020-06-26 2021-12-28 Weta Digital Limited Method for computing frictional contacts of topologically different bodies in a graphical system
US11217003B2 (en) 2020-04-06 2022-01-04 Electronic Arts Inc. Enhanced pose generation based on conditional modeling of inverse kinematics
US11335027B2 (en) * 2018-09-28 2022-05-17 Hewlett-Packard Development Company, L.P. Generating spatial gradient maps for a person in an image
US11504625B2 (en) 2020-02-14 2022-11-22 Electronic Arts Inc. Color blindness diagnostic system
US11562523B1 (en) 2021-08-02 2023-01-24 Electronic Arts Inc. Enhanced animation generation based on motion matching using local bone phases
US11648480B2 (en) 2020-04-06 2023-05-16 Electronic Arts Inc. Enhanced pose generation based on generative modeling
US11670030B2 (en) 2021-07-01 2023-06-06 Electronic Arts Inc. Enhanced animation generation based on video with local phase
US11830121B1 (en) 2021-01-26 2023-11-28 Electronic Arts Inc. Neural animation layering for synthesizing martial arts movements
US11887232B2 (en) 2021-06-10 2024-01-30 Electronic Arts Inc. Enhanced system for generation of facial models and animation

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060262119A1 (en) * 2005-05-20 2006-11-23 Michael Isner Transfer of motion between animated characters
US7859538B2 (en) * 2006-07-31 2010-12-28 Autodesk, Inc. Converting deformation data for a mesh to animation data for a skeleton, skinning and shading in a runtime computer graphics animation engine
US8094156B2 (en) * 2006-07-31 2012-01-10 Autodesk, Inc. Rigless retargeting for character animation
JP4842242B2 (en) * 2006-12-02 2011-12-21 Electronics and Telecommunications Research Institute Method and apparatus for real-time expression of skin wrinkles during character animation
US8203560B2 (en) * 2007-04-27 2012-06-19 Sony Corporation Method for predictively splitting procedurally generated particle data into screen-space boxes
US8223155B2 (en) * 2007-04-27 2012-07-17 Sony Corporation Method for simulating large numbers of spherical bodies interacting
US8279227B2 (en) * 2008-04-04 2012-10-02 Sony Corporation Method for detecting collisions among large numbers of particles
US8860731B1 (en) * 2009-12-21 2014-10-14 Lucasfilm Entertainment Company Ltd. Refining animation
US20110304629A1 (en) * 2010-06-09 2011-12-15 Microsoft Corporation Real-time animation of facial expressions
US9082222B2 (en) * 2011-01-18 2015-07-14 Disney Enterprises, Inc. Physical face cloning
US8913065B2 (en) * 2011-08-05 2014-12-16 Jeffrey McCartney Computer system for animating 3D models using offset transforms
US9786083B2 (en) 2011-10-07 2017-10-10 Dreamworks Animation L.L.C. Multipoint offset sampling deformation
US9208600B2 (en) 2012-03-05 2015-12-08 Trigger Happy, Ltd Custom animation application tools and techniques
US9418465B2 (en) * 2013-12-31 2016-08-16 Dreamworks Animation Llc Multipoint offset sampling deformation techniques
US9928663B2 (en) 2015-07-27 2018-03-27 Technische Universiteit Delft Skeletal joint optimization for linear blend skinning deformations utilizing skeletal pose sampling
US10061871B2 (en) 2015-07-27 2018-08-28 Technische Universiteit Delft Linear blend skinning weight optimization utilizing skeletal pose sampling
US10169903B2 (en) * 2016-06-12 2019-01-01 Apple Inc. Animation techniques for mobile devices
US11087514B2 (en) * 2019-06-11 2021-08-10 Adobe Inc. Image object pose synchronization

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030193503A1 (en) * 2002-04-10 2003-10-16 Mark Seminatore Computer animation system and method
US20040001064A1 (en) * 2002-06-28 2004-01-01 Microsoft Corporation Methods and system for general skinning via hardware accelerators
US20040046760A1 (en) * 2002-08-30 2004-03-11 Roberts Brian Curtis System and method for interacting with three-dimensional data
US6798415B2 (en) * 2001-06-21 2004-09-28 Intel Corporation Rendering collisions of three-dimensional models
US7091975B1 (en) * 2000-07-21 2006-08-15 Microsoft Corporation Shape and animation methods and systems using examples

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6822553B1 (en) * 1985-10-16 2004-11-23 Ge Interlogix, Inc. Secure entry system with radio reprogramming
US6011562A (en) * 1997-08-01 2000-01-04 Avid Technology Inc. Method and system employing an NLE to create and modify 3D animations by mixing and compositing animation data
JP4384813B2 (en) * 1998-06-08 2009-12-16 Microsoft Corporation Time-dependent geometry compression
US20040048780A1 (en) * 2000-05-10 2004-03-11 The Trustees Of Columbia University In The City Of New York Method for treating and preventing cardiac arrhythmia
US6796415B2 (en) * 2000-10-20 2004-09-28 At Systems, Inc. Loose coin and rolled coin dispenser
JP3935367B2 (en) * 2002-02-06 2007-06-20 Canon Inc. Power supply circuit for display element driving circuit, display device, and camera

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7091975B1 (en) * 2000-07-21 2006-08-15 Microsoft Corporation Shape and animation methods and systems using examples
US6798415B2 (en) * 2001-06-21 2004-09-28 Intel Corporation Rendering collisions of three-dimensional models
US20030193503A1 (en) * 2002-04-10 2003-10-16 Mark Seminatore Computer animation system and method
US20040001064A1 (en) * 2002-06-28 2004-01-01 Microsoft Corporation Methods and system for general skinning via hardware accelerators
US7088367B2 (en) * 2002-06-28 2006-08-08 Microsoft Corporation Methods and system for general skinning via hardware accelerators
US20040046760A1 (en) * 2002-08-30 2004-03-11 Roberts Brian Curtis System and method for interacting with three-dimensional data

Cited By (56)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100172557A1 (en) * 2002-01-16 2010-07-08 Alain Richard Method and apparatus for reconstructing bone surfaces during surgery
US20060277454A1 (en) * 2003-12-09 2006-12-07 Yi-Chih Chen Multimedia presentation system
US7818658B2 (en) * 2003-12-09 2010-10-19 Yi-Chih Chen Multimedia presentation system
US7545379B2 (en) * 2005-10-28 2009-06-09 Dreamworks Animation Llc Artist directed volume preserving deformation and collision resolution for animation
US20070097125A1 (en) * 2005-10-28 2007-05-03 Dreamworks Animation Llc Artist directed volume preserving deformation and collision resolution for animation
WO2007137195A3 (en) * 2006-05-19 2008-04-24 Sony Corp Musculo-skeletal shape skinning
US20070268293A1 (en) * 2006-05-19 2007-11-22 Erick Miller Musculo-skeletal shape skinning
US8358310B2 (en) 2006-05-19 2013-01-22 Sony Corporation Musculo-skeletal shape skinning
US20080043042A1 (en) * 2006-08-15 2008-02-21 Scott Bassett Locality Based Morphing Between Less and More Deformed Models In A Computer Graphics System
US8749543B2 (en) * 2006-08-15 2014-06-10 Microsoft Corporation Three dimensional polygon mesh deformation using subspace energy projection
US20080043021A1 (en) * 2006-08-15 2008-02-21 Microsoft Corporation Three Dimensional Polygon Mesh Deformation Using Subspace Energy Projection
US7999812B2 (en) * 2006-08-15 2011-08-16 Nintendo Co, Ltd. Locality based morphing between less and more deformed models in a computer graphics system
US9788955B2 (en) * 2008-02-18 2017-10-17 Maxx Orthopedics, Inc. Total knee replacement prosthesis with high order NURBS surfaces
US20090319049A1 (en) * 2008-02-18 2009-12-24 Maxx Orthopedics, Inc. Total Knee Replacement Prosthesis With High Order NURBS Surfaces
US8243071B2 (en) * 2008-02-29 2012-08-14 Microsoft Corporation Modeling and rendering of heterogeneous translucent materials using the diffusion equation
US8730240B2 (en) 2008-02-29 2014-05-20 Microsoft Corporation Modeling and rendering of heterogeneous translucent materials using the diffusion equation
US20090219287A1 (en) * 2008-02-29 2009-09-03 Microsoft Corporation Modeling and rendering of heterogeneous translucent materials using the diffusion equation
US8847963B1 (en) * 2011-01-31 2014-09-30 Pixar Systems and methods for generating skin and volume details for animated characters
EP2950880A4 (en) * 2013-01-29 2017-06-14 National ICT Australia Limited Neuroprosthetic stimulation
US20140285513A1 (en) * 2013-03-25 2014-09-25 Naturalmotion Limited Animation of a virtual object
US9652879B2 (en) * 2013-03-25 2017-05-16 Naturalmotion Ltd. Animation of a virtual object
US20140306962A1 (en) * 2013-04-16 2014-10-16 Autodesk, Inc. Mesh skinning technique
US9836879B2 (en) * 2013-04-16 2017-12-05 Autodesk, Inc. Mesh skinning technique
US9734616B1 (en) * 2013-10-11 2017-08-15 Pixar Tetrahedral volumes from segmented bounding boxes of a subdivision
US9836837B2 (en) 2013-12-02 2017-12-05 Brainlab Ag Method and device for determining a transformation between two images of an anatomical structure
WO2015081978A1 (en) * 2013-12-02 2015-06-11 Brainlab Ag Method and device for determining a transformation between two images of an anatomical structure
CN104156995A (en) * 2014-07-16 2014-11-19 浙江大学 Production method for ribbon animation aiming at Dunhuang flying image
US9754399B2 (en) 2014-07-17 2017-09-05 Crayola, Llc Customized augmented reality animation generator
US20160019708A1 (en) * 2014-07-17 2016-01-21 Crayola, Llc Armature and Character Template for Motion Animation Sequence Generation
US10606966B2 (en) * 2014-08-26 2020-03-31 Samsung Electronics Co., Ltd. Method and apparatus for modeling deformable body including particles
US10388053B1 (en) 2015-03-27 2019-08-20 Electronic Arts Inc. System for seamless animation transition
US10022628B1 (en) * 2015-03-31 2018-07-17 Electronic Arts Inc. System for feature-based motion adaptation
US10792566B1 (en) 2015-09-30 2020-10-06 Electronic Arts Inc. System for streaming content within a game application environment
US10403018B1 (en) 2016-07-12 2019-09-03 Electronic Arts Inc. Swarm crowd rendering system
US10726611B1 (en) 2016-08-24 2020-07-28 Electronic Arts Inc. Dynamic texture mapping using megatextures
US10096133B1 (en) 2017-03-31 2018-10-09 Electronic Arts Inc. Blendshape compression system
US11295479B2 (en) 2017-03-31 2022-04-05 Electronic Arts Inc. Blendshape compression system
US10733765B2 (en) 2017-03-31 2020-08-04 Electronic Arts Inc. Blendshape compression system
US10878540B1 (en) 2017-08-15 2020-12-29 Electronic Arts Inc. Contrast ratio detection and rendering system
US10535174B1 (en) 2017-09-14 2020-01-14 Electronic Arts Inc. Particle-based inverse kinematic rendering system
US11113860B2 (en) 2017-09-14 2021-09-07 Electronic Arts Inc. Particle-based inverse kinematic rendering system
US10860838B1 (en) 2018-01-16 2020-12-08 Electronic Arts Inc. Universal facial expression translation and character rendering system
US11335027B2 (en) * 2018-09-28 2022-05-17 Hewlett-Packard Development Company, L.P. Generating spatial gradient maps for a person in an image
US10902618B2 (en) 2019-06-14 2021-01-26 Electronic Arts Inc. Universal body movement translation and character rendering system
US11798176B2 (en) 2019-06-14 2023-10-24 Electronic Arts Inc. Universal body movement translation and character rendering system
US11504625B2 (en) 2020-02-14 2022-11-22 Electronic Arts Inc. Color blindness diagnostic system
US11872492B2 (en) 2020-02-14 2024-01-16 Electronic Arts Inc. Color blindness diagnostic system
US11232621B2 (en) 2020-04-06 2022-01-25 Electronic Arts Inc. Enhanced animation generation based on conditional modeling
US11217003B2 (en) 2020-04-06 2022-01-04 Electronic Arts Inc. Enhanced pose generation based on conditional modeling of inverse kinematics
US11648480B2 (en) 2020-04-06 2023-05-16 Electronic Arts Inc. Enhanced pose generation based on generative modeling
US11836843B2 (en) 2020-04-06 2023-12-05 Electronic Arts Inc. Enhanced pose generation based on conditional modeling of inverse kinematics
US11210833B1 (en) * 2020-06-26 2021-12-28 Weta Digital Limited Method for computing frictional contacts of topologically different bodies in a graphical system
US11830121B1 (en) 2021-01-26 2023-11-28 Electronic Arts Inc. Neural animation layering for synthesizing martial arts movements
US11887232B2 (en) 2021-06-10 2024-01-30 Electronic Arts Inc. Enhanced system for generation of facial models and animation
US11670030B2 (en) 2021-07-01 2023-06-06 Electronic Arts Inc. Enhanced animation generation based on video with local phase
US11562523B1 (en) 2021-08-02 2023-01-24 Electronic Arts Inc. Enhanced animation generation based on motion matching using local bone phases

Also Published As

Publication number Publication date
US20070035547A1 (en) 2007-02-15
US7515155B2 (en) 2009-04-07

Similar Documents

Publication Publication Date Title
US7515155B2 (en) Statistical dynamic modeling method and apparatus
US7307633B2 (en) Statistical dynamic collisions method and apparatus utilizing skin collision points to create a skin collision response
US7570264B2 (en) Rig baking
US6967658B2 (en) Non-linear morphing of faces and their dynamics
Sloan et al. Shape by example
Wilhelms et al. Anatomically based modeling
US7944449B2 (en) Methods and apparatus for export of animation data to non-native articulation schemes
US9892485B2 (en) System and method for mesh distance based geometry deformation
US7259764B2 (en) Defrobulated angles for character joint representation
Orvalho et al. Transferring the rig and animations from a character to different face models
Chen et al. A displacement driven real-time deformable model for haptic surgery simulation
JP4358752B2 (en) Statistical mechanical collision methods and equipment
US7477253B2 (en) Storage medium storing animation image generating program
Angelidis et al. Sweepers: Swept deformation defined by gesture
Decaudin et al. Fusion of 3D shapes
Adzhiev et al. Heterogeneous Objects Modelling and Applications
Magnenat Thalmann et al. Computer animation: a key issue for time visualization
Hopkins Example-Based Parameterization of Linear Blend Skinning for Skinning Decomposition (EP-LBS)
Magnenat-Thalmann et al. Innovations in Virtual Humans
Wu et al. Generic-model based human-body modeling
Smith Animation of captured surface data
Badler et al. Computer Graphics Research Laboratory Quarterly Progress Report Number 49, July-September 1993
Badler AD-A277 999

Legal Events

Date Code Title Description
AS Assignment

Owner name: PIXAR, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ANDERSON, JOHN;WOODBURY, ADAM;REEL/FRAME:013992/0570

Effective date: 20030818

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION