US20100045680A1 - Performance driven facial animation - Google Patents

Performance driven facial animation

Info

Publication number
US20100045680A1
US20100045680A1 (application US12/424,481)
Authority
US
United States
Prior art keywords
facial
data
action
pose data
facs
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/424,481
Inventor
Parag Havaldar
Josh Ochoa
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Sony Pictures Entertainment Inc
Original Assignee
Sony Corp
Sony Pictures Entertainment Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp and Sony Pictures Entertainment Inc
Priority to US12/424,481
Publication of US20100045680A1
Priority to US12/986,095 (US8279228B2)
Current legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

Definitions

  • the present invention relates generally to motion capture, and more particularly to methods and systems for generating facial animation using performance data, such as motion capture data obtained from motion capture systems and video images obtained from video data.
  • Modeling a face, its motion, and rendering it in a manner that appears realistic is a difficult problem, though progress to achieve realistic looking faces has been made from a modeling perspective as well as a rendering perspective.
  • a greater problem is animating a digital face in a realistic and believable manner that will bear close scrutiny, where slight flaws in the animated performance are often unacceptable. While adequate facial animation (stylized and realistic) can be attempted via traditional keyframe techniques by skilled animators, it is a complicated task that becomes especially time-consuming as the desired results approach realistic imagery.
  • the present invention provides methods and systems for generating facial animation using performance data, such as motion capture data obtained from motion capture systems and video images obtained from video data.
  • a method of animating a digital facial model includes: defining a plurality of action units; calibrating each action unit of the plurality of action units via an actor's performance; capturing first facial pose data; determining a plurality of weights, each weight of the plurality of weights uniquely corresponding to the each action unit, the plurality of weights characterizing a weighted combination of the plurality of action units, the weighted combination approximating the first facial pose data; generating a weighted activation by combining the results of applying the each weight to the each action unit; applying the weighted activation to the digital facial model; and recalibrating at least one action unit of the plurality of action units using input user adjustments to the weighted activation.
  • a method of animating a digital facial model includes: defining a plurality of action units, each action unit including first facial pose data and an activation; calibrating the first facial pose data using calibration pose data derived from a plurality of captured calibration performances, each calibration performance of the plurality of captured calibration performances corresponding with the each action unit; deriving second facial pose data from another calibration performance of the plurality of captured calibration performances; determining a plurality of weights, each weight of the plurality of weights uniquely corresponding to the each action unit, the plurality of weights characterizing a weighted combination of the facial pose data, the weighted combination approximating the second facial pose data; generating a weighted activation by combining the results of applying the each weight to the activation; applying the weighted activation to the digital facial model; and recalibrating the first facial pose data and the activation using input user adjustments to the weighted activation.
  • a system for retargeting facial motion capture data to a digital facial model includes: a FACS module to manage a plurality of action units; a retargeting module to generate at least one weighted activation for the digital facial model using the facial motion capture data and the plurality of action units; an animation module to generate a facial animation frame by applying the at least one weighted activation to the digital facial model; and a tuning interface module to generate recalibrated action units for the FACS module in accordance with input user adjustments to the facial animation.
  • a method of digital facial animation includes: capturing facial motion data; labeling the facial motion data; stabilizing the facial motion data; cleaning the facial motion data using a FACS matrix; normalizing the facial motion data; retargeting the facial motion data onto a digital facial model using the FACS matrix; and performing multidimensional tuning of the FACS matrix.
  • FIG. 1 is a flowchart illustrating a method of animating a digital facial model
  • FIG. 2 is a flowchart illustrating a method of recalibrating action units of a FACS matrix
  • FIG. 3 is a functional block diagram of a system for animating a digital facial model
  • FIG. 4 is a flowchart illustrating a method of performance driven facial animation
  • FIG. 5 is an image of actors on a motion capture set
  • FIG. 6A is a three-part image depicting a neutral facial pose
  • FIG. 6B is a three-part image depicting a brow lowering facial pose
  • FIG. 6C is a three-part image depicting a lip corner pull facial pose
  • FIG. 7A is a three-part image depicting a mouth stretch facial pose
  • FIG. 7B is a three-part image depicting a lip stretch facial pose
  • FIG. 8 is a three-part image depicting variability in facial motion capture data quality
  • FIG. 9 illustrates an example computation of weights for a weighted combination of FACS poses
  • FIG. 10A is an image depicting an example lip articulation for the partially-opened mouth of an animated character
  • FIG. 10B is an image depicting an example lip articulation for the fully-opened mouth of an animated character
  • FIG. 10C is an image depicting an example lip articulation for the closed mouth of an animated character
  • FIG. 11 depicts an example FACS pose before and after a tuning phase
  • FIG. 12 depicts an example of solved animation frames before and after a tuning operation
  • FIG. 13 depicts another example of solved animation frames before and after a tuning operation
  • FIG. 14A illustrates a representation of a computer system and a user
  • FIG. 14B is a functional block diagram illustrating the computer system hosting a facial animation system.
  • Certain implementations as disclosed herein provide for systems and methods to implement a technique for capturing motion of one or more actors or objects.
  • one method as disclosed herein utilizes a motion capture (“MOCAP”) system to capture the body and facial motion and surfaces of multiple actors using cameras and optical markers attached to the actors.
  • the MOCAP system builds data from the captured images to use in animation in a film.
  • FACS: Facial Action Coding System
  • CG: computer graphics
  • a performance will be understood to be a visual capture of an actor's face. In most instances, the actor is talking and emoting either individually or in a group with other actors. This is often done by capturing a video performance of the actor.
  • the video frames can be used either purely for reference by an animator, for further processing to extract point samples, or for deforming 3-D surfaces which are then retargeted onto a digital facial model.
  • Various technological hurdles must be overcome before the 2-D or 3-D reconstructed data can be used, including calibrating cameras, tracking points, and reconstructing 3-D information.
  • facial puppeteering has been used to drive a digital facial model.
  • a control device such as a cyber glove is used to input control commands, and finger movements are retargeted onto the digital facial model.
  • the MOCAP system captures data of the body and face together.
  • the facial data are targeted onto an animated character whose face is stylized and does not conform to the face of the actual actor.
  • the images are directed toward producing a realistic animation on a character that is intended to look real, and its face to perform realistically.
  • the facial MOCAP data are acquired separately in a sitting position and the facial animation generated is blended in keyframed body shots.
  • the MOCAP system can support multiple approaches and so can be adapted to these, and other, various production requirements.
  • face and body motion are captured simultaneously with multiple cameras (e.g., 200 cameras) positioned about a “capture volume.”
  • An example capture volume is about 20 feet × 20 feet × 16 feet in length, width, and height, respectively.
  • Multiple infrared markers (e.g., 80 markers) are coupled to an actor's face and used to capture the actor's performance.
  • the captured data are reconstructed in 3-D using the positions of the multiple cameras during post-processing.
  • a tool such as IMAGEWORKS™ proprietary IMAGEMOTION™ technology, adapted to capturing and processing MOCAP data, can be used.
  • the number of actors performing in the motion capture volume can vary from few to many depending on the size of the volume, camera resolution, the strength of optical lights and signals, and other related parameters.
  • each MOCAP “take” ends in all the actors returning to the standard T-pose in the capture volume with the face back to the relaxed neutral position.
  • the T-pose is used by the facial pipeline in a normalization process to ensure that marker placements on a second day of MOCAP performances, for example, correspond to those on the day of the calibration (also referred to as the “master T-pose”).
  • FIG. 5 depicts actors each performing a T-pose in a capture volume.
  • In another motion capture adaptation, known as an ADR session, only one actor performs, in a seated position, with sensors aimed at the actor's face.
  • In such cases, the T-pose corresponds to a neutral pose of the face only, with no body position.
  • According to a Facial Action Coding System (FACS), the human face has muscles that work together in groups called "action units."
  • a FACS provides a framework for determining when certain action units are triggered and how to assign to each action unit a relative influence in a facial pose.
  • the FACS was initially designed for psychologists and behavioral scientists to understand facial expressiveness and behavior, though it has also been adapted to other areas.
  • Facial expressions have been categorized into 72 distinct action units. Each action unit defines a muscular activity (“activation”) that produces a momentary change in facial appearance. These changes in facial appearance vary from person to person depending on facial anatomy, e.g., bone structure, fatty deposits, wrinkles, the shapes of various facial features, and other related facial appearances. However, certain commonalities are seen between people as these action units are triggered.
  • An action unit used in a FACS is based on the location on the face of the facial action, and the type of facial action involved. For example, the upper face has muscles that affect the eyebrows, forehead, and eyelids; the lower muscles around the mouth and lips form another group.
  • Each of these muscles works in groups to form action units; and these action units can be broken down further into left and right areas of the face, which can be triggered asymmetrically and independently of each other.
  • all the action units suggested by a FACS provide a broad basis for dynamic facial expressions that can be used in CG animation.
  • a motion capture system may use a FACS as a foundation for capturing and retargeting facial MOCAP data on an animated character's face.
  • Prior to a MOCAP performance, each actor performs a series of calibration poses that include extreme versions of all the action units.
  • the reconstructed 3-D facial pose data corresponding to an action unit capture the extreme facial expression used by the actor to perform that action unit.
  • the FACS includes 64 poses, some of which are split into left and right positions.
  • 18 phoneme poses corresponding to articulated phonemes are also included.
  • FIGS. 6A-6C and 7A-7B illustrate a few of the action units used in the MOCAP system based on FACS.
  • FACS proposes upwards of 72 action units that include expressions involving facial muscles and head motion.
  • FIG. 6A is a three-part image depicting a neutral facial pose
  • FIG. 6B is a three-part image depicting a brow lowering facial pose
  • FIG. 6C is a three-part image depicting a lip corner pull facial pose
  • FIG. 7A is a three-part image depicting a mouth stretch facial pose
  • FIG. 7B is a three-part image depicting a lip stretch facial pose.
  • the actual FACS reference, the actor's performance, and the retargeted expression on the character are shown from left to right.
  • data capture is performed using an optical system capturing both body and face motion of one or more actors performing in a capture space.
  • This implementation uses passive optical components including infrared cameras to capture infrared light reflected by the markers.
  • An image so captured is a low entropy image comprising mostly black areas where no infrared light is sensed, and white dots representing the reflective markers.
  • the size of a white dot in the image varies depending on whether the dot is a body marker (large) or face marker (small), on the distance of the actor (and hence the marker) from the camera, and on whether any occlusions have occurred, where the occlusions are usually caused by the actors.
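  • For illustration only, the image-processing step implied here can be sketched in Python as thresholding the low entropy image, finding connected white blobs, and splitting them into body and face markers by dot size; the patent does not specify the detection method, and the threshold, size cutoff, and use of scipy are assumptions.
```python
import numpy as np
from scipy import ndimage

def detect_markers(ir_frame, threshold=200, size_cutoff=30):
    """Reduce a low entropy IR image to 2-D marker centroids.

    ir_frame:    2-D uint8 array (mostly black, white dots at markers).
    threshold:   intensity above which a pixel counts as marker light.
    size_cutoff: blob area (pixels) separating large body markers from
                 small facial markers; an illustrative value only.
    Returns two lists of (row, col) centroids: (body, face).
    """
    mask = ir_frame > threshold
    labels, n = ndimage.label(mask)               # connected white blobs
    body, face = [], []
    for i in range(1, n + 1):
        blob = labels == i
        centroid = ndimage.center_of_mass(blob)
        # Larger dots are body markers, smaller dots are facial markers.
        (body if blob.sum() >= size_cutoff else face).append(centroid)
    return body, face
```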
  • the low entropy images provide at least two advantages: (1) the cameras can capture and record images at higher definitions and at higher frame rates, typically at 60 Hz; and (2) 3-D reconstruction of the captured marker data triangulates each marker across multiple images with different viewpoints to locate the marker in space.
  • the ability to associate corresponding points automatically is greatly improved by using only white dots on a black background.
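  • The patent does not detail the reconstruction math; as a generic sketch only, a standard linear (DLT) triangulation of one marker from several calibrated views could look like the following, where the 3×4 projection matrices are assumed to come from camera calibration and the 2-D observations are assumed already matched across views.
```python
import numpy as np

def triangulate_marker(proj_mats, pixel_coords):
    """Linear (DLT) triangulation of one marker from multiple views.

    proj_mats:    list of 3x4 camera projection matrices (from calibration).
    pixel_coords: list of (u, v) observations of the same marker, one per
                  camera, already associated across viewpoints.
    Returns the marker position as a 3-vector in world space.
    """
    rows = []
    for P, (u, v) in zip(proj_mats, pixel_coords):
        # Each view contributes two linear constraints on the homogeneous point X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]   # de-homogenize
```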
  • FIG. 8 is a three-part image depicting variability in facial motion capture data quality. Shown in the left-most part of FIG. 8 is an example of good quality data. Shown in the middle part of FIG. 8 is an example of lower quality data. And, shown in the right-most part of FIG. 8 is an example of poor quality data.
  • the markers reconstructed for each data frame can have both body markers and facial markers. Both the body markers and facial markers require labeling prior to facial data processing. That is, each marker is assigned a unique identification that persists across the data frames. Labeling all body and facial markers according to their trajectories is a cumbersome and error prone process, especially when a large number of markers is visible in the volume.
  • a two-step process based on the size disparity between body markers (larger) and facial markers (smaller) is used.
  • in the first step, 3-D reconstruction is performed in which facial markers are ignored and only body markers are reconstructed and labeled, usually according to velocity-based constraints.
  • in the second step, the 3-D reconstruction is performed to acquire facial markers, but this will usually also capture body markers.
  • the body markers are then removed by eliminating all markers labeled in the first step, leaving only facial data remaining (a sketch of this two-step idea follows below).
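  • A minimal sketch of the two-step idea, assuming a simple nearest-neighbor, velocity-limited tracker (the patent's actual velocity-based constraints are not specified); thresholds, names, and data layouts are illustrative.
```python
import numpy as np

def label_by_velocity(prev_positions, curr_points, max_step=5.0):
    """Step one: carry body-marker labels from the previous frame to the
    current one by assuming each marker moves less than `max_step` between
    frames (a simple velocity/continuity constraint; illustrative value)."""
    labels, used = {}, set()
    for name, prev in prev_positions.items():
        dists = np.linalg.norm(curr_points - prev, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_step and j not in used:
            labels[name] = j
            used.add(j)
    return labels   # marker name -> index into curr_points

def isolate_facial_markers(all_points, body_labels):
    """Step two: drop every point claimed by a body label, leaving only
    facial markers for FACS-based labeling."""
    body_idx = set(body_labels.values())
    return np.array([p for i, p in enumerate(all_points) if i not in body_idx])
```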
  • labeling the facial markers is automated based on a library of action units (a “FACS matrix”) specifically tailored to the corresponding actor's face.
  • the actor is typically moving around in the capture volume.
  • the movements result in a translation of the face markers in accompaniment with the body while the actor is speaking and emoting.
  • it is beneficial to stabilize the facial data by nullifying the translational and rotational effects of body and head movements.
  • Particular difficulties arise with respect to stabilization because facial markers do not necessarily undergo a rigid transform to a standard position as the actor performs. Rigid movements are caused by head rotations and the actor's motion, but when the actor emotes and speaks, many of the facial markers change positions away from their rigid predictions. A few stable point correspondences are typically sufficient to solve for an inverse transformation.
  • a hierarchical solution is invoked by first performing a global (or gross) stabilization using markers that generally do not move due to facial expressions, such as markers coupled to the head, ears and the nose bone. The solution is then refined with a local (or fine) stabilization by determining marker movements relative to a facial surface model.
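  • For illustration, the global (gross) stabilization can be sketched as a rigid (Kabsch) alignment of the frame's stable markers (head, ears, nose bone) to their reference positions, with the recovered transform then applied to all facial markers; the local refinement against a facial surface model is omitted from this sketch.
```python
import numpy as np

def rigid_align(stable_ref, stable_cur):
    """Kabsch solve for the rotation R and translation t that best map the
    current frame's stable markers onto their reference positions."""
    ref_c, cur_c = stable_ref.mean(0), stable_cur.mean(0)
    H = (stable_cur - cur_c).T @ (stable_ref - ref_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = ref_c - R @ cur_c
    return R, t

def stabilize_frame(facial_cur, stable_ref, stable_cur):
    """Nullify head/body motion by moving every facial marker with the
    rigid transform recovered from the stable markers only."""
    R, t = rigid_align(stable_ref, stable_cur)
    return facial_cur @ R.T + t
```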
  • in one implementation, a cleaning and filtering tool is used which includes a learning system based on good facial model data.
  • the cleaning and filtering tool generates estimates of the positions of missing markers, removes noise, and in general ensures the viability of all the markers.
  • the system is scalable to handle data generated by wide ranges of facial expression, and can be tuned to modify the dynamics of the facial data.
  • the cleaning tool utilizes the underlying FACS theory to organize markers into groups of muscles. Muscle movements can be used to probabilistically estimate the likely positions of missing markers. A missing marker location is estimated spatially from a neighborhood of points, and estimated temporally by analyzing ranges of motion of the markers. In one implementation, a probabilistic model and a corresponding marker muscle grouping are tuned to each actor.
  • standard frequency transforms are used to remove noise in the data. It will be appreciated that high frequency content, which is normally categorized as noise, may also represent quick, valid movements of the actor's muscles and changes in the actor's facial expression.
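  • Under simplifying assumptions, the two ideas above can be sketched as follows: a missing marker is estimated from the motion of its muscle-group neighbors, and high-frequency jitter is attenuated with a moving-average filter; the patent's probabilistic per-actor model and its choice of frequency transform are not reproduced here.
```python
import numpy as np

def fill_missing_marker(group_prev, group_cur, missing_prev):
    """Estimate a dropped marker from its muscle-group neighbors by applying
    the group's average frame-to-frame displacement to the marker's last
    known position (a neighborhood estimate, not the full probabilistic model)."""
    group_motion = (group_cur - group_prev).mean(axis=0)
    return missing_prev + group_motion

def smooth_trajectory(traj, window=5):
    """Attenuate high-frequency noise in one marker's (T, 3) trajectory with
    a moving average; note that some high-frequency content is real, fast
    facial motion and may need to be preserved."""
    kernel = np.ones(window) / window
    return np.stack([np.convolve(traj[:, k], kernel, mode="same")
                     for k in range(traj.shape[1])], axis=1)
```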
  • normalization is accomplished in two steps.
  • Each MOCAP take starts and ends with the actors performing a T-pose, as discussed in relation to FIG. 5 .
  • the T-pose of each actor in a subsequent MOCAP take is aligned with the master T-pose of the actor determined during calibration. Aligning a T-pose to the master T-pose relies on the use of various relaxed landmark markers. For example, the corners of the eyes and mouth are used because they are expected to change very little from day to day. Offset vectors for each marker are computed according to discrepancies in the alignment of the T-pose and master T-pose.
  • the offset vectors are applied to the T-pose of the corresponding MOCAP take so that each marker in the T-pose is identically aligned to the markers of the master T-pose.
  • the offsets are propagated through the actor's performance during that day, thus normalizing the data in all the frames.
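  • A minimal sketch of the offset computation and propagation, assuming the day's T-pose and the master T-pose are already labeled and stabilized into a common coordinate frame; the array layouts are illustrative.
```python
import numpy as np

def compute_offsets(master_tpose, day_tpose):
    """Per-marker offset vectors between the day's T-pose and the master
    T-pose recorded at calibration (both (n_markers, 3), same labeling)."""
    return master_tpose - day_tpose

def normalize_performance(frames, offsets):
    """Propagate the T-pose offsets through every frame of that day's
    performance so marker-placement discrepancies are removed."""
    return [frame + offsets for frame in frames]
```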
  • a FACS provides a set of action units or poses deemed representative of most facial expressions.
  • MOCAP frames of calibration poses performed by an actor, relating to facial expressions corresponding to FACS poses (i.e., action units), are captured and maintained in a FACS matrix tailored to that actor.
  • Some of the calibration poses are broken into left and right sides to capture an asymmetry that the actor's face may exhibit.
  • incoming frames of the actor's performance are analyzed in the space of all the FACS poses (i.e., action units) of the FACS matrix.
  • the action units may thus be viewed as facial basis vectors, and a weight for each is computed for an incoming data frame.
  • a weighted combination of action units (i.e., facial basis vectors, or FACS poses) is thus formed to approximate the facial pose in the incoming data frame.
  • FIG. 9 illustrates an example computation of weights w1, w2, . . . , wn for a weighted combination of FACS poses.
  • Computing the weights w1, w2, . . . , wn determines the influence associated with each of the n FACS action units.
  • computing the weights includes a linear optimization.
  • computing the weights includes a non-linear optimization.
  • the weights are applied to the associated n FACS action units to generate a weighted activation.
  • the weighted activation is transferred onto a digital facial model rigged with a facial muscle system.
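  • As a sketch only, the per-frame solve can be posed as a least-squares fit of weights over the calibrated action-unit pose data, with the same weights then blending the action units' activations; the patent allows either linear or non-linear optimization, and the non-negativity constraint and use of scipy here are assumptions.
```python
import numpy as np
from scipy.optimize import nnls

def solve_facs_weights(au_poses, frame_pose):
    """Fit weights w >= 0 so that a weighted combination of action-unit pose
    vectors approximates the incoming frame.

    au_poses:   (n_action_units, 3 * n_markers) calibrated pose deltas.
    frame_pose: (3 * n_markers,) incoming pose delta from the neutral pose.
    """
    weights, _residual = nnls(au_poses.T, frame_pose)
    return weights

def weighted_activation(weights, au_activations):
    """Blend the action units' muscle activations with the solved weights to
    produce the weighted activation applied to the facial rig."""
    return weights @ au_activations   # (n_rig_controls,)
```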
  • the facial poses of an animated character are generated by an artist using a facial rig.
  • a digital facial model setup is based on IMAGEWORKS'™ proprietary character facial system. The character facial system helps pull and nudge vertices of the digital facial model so that resulting deformations are consistent with the aspects of a human face.
  • the digital facial model includes different fascia layers blended to create a final facial deformation on the digital facial model.
  • the fascia layers in one implementation include a muscle layer that allows facial muscle deformations, a jaw layer that allows jaw movement, a volume layer that controls skin bulges in different facial areas, and an articulation layer for pronounced lip movement.
  • the muscle layer includes skull patches with muscle controls that deform the face. The muscle controls are activated by weighted activations generated from MOCAP data.
  • the jaw layer helps to control movements of the jaw of the digital facial model.
  • the volume layer adds volume to the deformations occurring on the digital facial model. It aids in modeling wrinkles and other facial deformations, which can be triggered by weighted activations generated from MOCAP data.
  • the articulation layer relates to the shape of the lips as they deform. In particular, it aids in controlling the roll and volume of lips, essential when the lips thin out or pucker in facial expressions.
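  • The character facial system itself is proprietary; purely to illustrate the layered idea, the sketch below treats each fascia layer as a linear map from its controls to per-vertex offsets and sums the layers into a final deformation. The layer interface and blending rule are assumptions.
```python
import numpy as np

class FasciaLayer:
    """One deformation layer of the digital facial model (muscle, jaw,
    volume, or articulation); `basis` maps its controls to vertex offsets."""
    def __init__(self, name, basis):
        self.name = name
        self.basis = basis                    # (n_controls, n_verts * 3)

    def deform(self, controls):
        return controls @ self.basis          # (n_verts * 3,) offset

def apply_weighted_activation(neutral_verts, layers, layer_controls):
    """Blend the fascia layers driven by a weighted activation into a final
    facial deformation on top of the neutral vertices."""
    deformation = sum(layer.deform(layer_controls[layer.name])
                      for layer in layers)
    return neutral_verts + deformation.reshape(-1, 3)
```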
  • FIG. 10A is an image depicting an example lip articulation for the partially-opened mouth of an animated character.
  • FIG. 10B is an image depicting an example lip articulation for the fully-opened mouth of an animated character.
  • FIG. 10C is an image depicting an example lip articulation for the closed mouth of an animated character.
  • the fascia layers can be constructed onto the digital facial model.
  • Incoming MOCAP data are mapped, or retargeted, onto the digital facial model as weighted activations that trigger the fascia layers.
  • an incoming frame of MOCAP data is analyzed in the space of all of the action units (i.e., facial basis vectors) of the FACS matrix.
  • the resulting weights quantify the proportional influence that each of the action units of the FACS matrix exerts in triggering the fascia layers.
  • the weights are obtained using mathematical methods (e.g., linear and non-linear optimization)
  • the resulting expression created on the digital facial model sometimes fails to replicate facial deformations naturally recognized as articulating a desired expression. That is, although the facial retargeting achieved using the various mapping solutions may be optimally correct in a mathematical sense, the resulting facial expressions may not conform to the desired look or requirements of a finalized animation shot.
  • this can occur for several reasons: the actor may not perform according to the calibration poses provided initially for the FACS matrix, thus causing the action units to be non-representative of the actor's performance; retargeting inconsistencies sometimes arise when mapping mathematically correct marker data to an aesthetically designed face; the digital facial model may conform poorly to the actor's face; marker placements on the actor's face may differ adversely from day to day; and/or the desired animation may be inconsistent with the actions performed by the actor, such as when a desired expression is not present in the MOCAP data, or an exaggeration of the captured expression is attempted.
  • a multidimensional tuning system can use tuning feedback provided by an animator to reduce the effects of incorrect mathematical solutions. This is mathematically achievable since the facial basis vectors of the FACS matrix mimic real human expressions and can therefore be easily edited by the animator.
  • the animator can adjust one or more selected frames (e.g., five to ten frames having unacceptable results) to achieve a “correct look” in the animator's artistic judgment.
  • the adjustment is performed by modifying the weights resulting from the FACS solves associated with the poses in the selected frames.
  • the modified poses are then used to update and optimize the FACS matrix.
  • the updated FACS matrix thus includes action units based on actual marker ranges of motion as well as the modified weights.
  • non-linear mathematical optimization tools are used to optimize the action unit pose data and activation levels.
  • artistic input is taken from the artist or user by modifying weights on a few frames so that the overall expression suite closely matches the user's intent.
  • the tuning process then learns from all the changed weights, resulting in a new or modified FACS matrix.
  • the modified FACS matrix is used in subsequent solves on the MOCAP data in order to apply the adjusted weighting provided by the animator on the poses in the selected frames.
  • the modifications in the FACS library are also incorporated in the other frames, generating improved results over the entire animation. Further, should the modified FACS library generate results that are still not satisfactory, the animator can perform further adjustments to build updated FACS libraries.
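  • One simplified way to realize this learning step is to re-fit the action-unit pose data so that the animator's corrected weights reproduce the captured poses on the tuned frames while staying close to the calibrated matrix; the damped linear update below is a sketch only, whereas the patent describes non-linear optimization of the pose data and activation levels.
```python
import numpy as np

def refit_facs_matrix(calibrated_au_poses, corrected_weights,
                      tuned_frame_poses, damping=1.0):
    """Damped least-squares update of the FACS matrix from tuned frames.

    calibrated_au_poses: (n_AU, 3 * n_markers) original FACS matrix.
    corrected_weights:   (n_frames, n_AU) weights after the animator's
                         adjustments on the selected frames.
    tuned_frame_poses:   (n_frames, 3 * n_markers) captured pose data for
                         those same frames.
    Minimizes |W A - M|^2 + damping * |A - A0|^2 so the corrected weights
    best reproduce the captured poses without straying far from the
    calibrated action units.
    """
    W, M, A0 = corrected_weights, tuned_frame_poses, calibrated_au_poses
    lhs = W.T @ W + damping * np.eye(A0.shape[0])
    rhs = W.T @ M + damping * A0
    return np.linalg.solve(lhs, rhs)          # new (n_AU, 3 * n_markers)
```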
  • FIG. 11 depicts an example of FACS poses before and after a tuning operation.
  • the left image of FIG. 11 shows a lip shut phoneme position overlaid before and after tuning.
  • the right image of FIG. 11 shows a lip tightener pose before and after tuning.
  • the new marker positions (in black) have been adjusted to an optimized location based on the animator's corrected weighting values over a few tuned frames. This change is shown on the two poses depicted, but often occurs on more poses depending on the nature of the animator's input adjustments.
  • FIG. 12 and FIG. 13 depict examples of solved animation frames before and after a tuning operation.
  • the left image depicts a frame solved using the initial, calibrated FACS matrix
  • the right image depicts the same frame solved using the modified (tuned) FACS matrix.
  • the resulting effect is concentrated on the right lip tightener of the pose.
  • the left image depicts a frame solved using the initial, calibrated FACS matrix
  • the right image depicts the same frame solved using the modified (tuned) FACS matrix.
  • the actor is uttering the beginning of the word “please.”
  • the solve using the initial, calibrated FACS matrix does not show the lips closed to say the first syllable whereas the solve using the modified FACS matrix does show the lips closed.
  • FIG. 1 is a flowchart illustrating a method 100 of animating a digital facial model.
  • action units are defined for a FACS matrix.
  • the FACS matrix includes 64 action units, each action unit defining a group of facial muscles working together to generate a particular facial expression. Action units can further be broken down to represent the left and right sides of the face, and thus compose asymmetrical facial poses.
  • the action units of the FACS matrix are calibrated, at 120 .
  • each actor has a unique, individualized FACS matrix.
  • each action unit is calibrated by motion capturing the actor's performance of the pose corresponding to the action unit. Facial marker data are captured as described above, FACS cleaned and stabilized, and assigned to the FACS matrix in correspondence with the particular action unit.
  • the actor performs the pose in an extreme manner to establish expected bounds for marker excursions when the pose is executed during a performance.
  • MOCAP data are acquired during a performance.
  • New facial pose data are received one frame at a time, at 130 , as the MOCAP data are generated during performance and acquisition.
  • the frame of MOCAP data comprises volumetric (3-D) data representing the facial marker positions in the capture space.
  • the volumetric data are FACS cleaned and stabilized, as described above, before being received (at 130 ).
  • Weights are determined, at 140 , which characterize a weighted combination of action units approximating the new facial pose data.
  • Action units represent activations of certain facial muscle groups, and can be regarded as facial basis vectors, as discussed above.
  • a linear optimization, such as a least squares fit, is used to compute the optimal combination of weights.
  • a non-linear optimization is used to perform the fit.
  • a weighted activation is generated, at 150 .
  • the weights are applied to muscle group activations associated with each action unit and the resulting activations are combined to generate a weighted activation.
  • the weighted activation is then applied to the digital facial model, at 160 .
  • if more MOCAP data frames are available for processing (determined at 170), a new frame of MOCAP data is received, at 130, and the process continues as described above. If no more MOCAP data frames are available, then the process continues by recalibrating the FACS matrix, at 180. In one implementation, recalibrating the FACS matrix (at 180) is undertaken while more MOCAP data frames are available, on command by the user.
  • Recalibrating the FACS matrix can include receiving adjustments to the weighted activation from the user. For example, if the user desires a modification to a pose in a particular frame, the user may select the frame and adjust the weights used to generate the weighted activation. Since the weights correspond to predefined action units, and the action units correspond to distinct facial movements (i.e., activations of certain facial muscle groups), the pose can be adjusted by manipulating the weights corresponding to facial muscle groups controlling the particular aspect of the pose intended to be changed. For example, where movement of the left corner of the mouth is defined in an action unit, the left corner of the mouth of the digital model is moved to a more extreme position, or less extreme position, by manipulating the weight associated with that action unit. Thus, an animator or artist, for example, is able to control various aspects of a facial expression by manipulating natural components of the face (i.e., action units).
  • FIG. 2 is a flowchart illustrating the recalibration of action units of a FACS matrix (at 180 ).
  • frames containing poses on the digital facial model which the user wishes to modify are selected. For example, out of thousands of data frames, five to ten frames might be selected for modification of the facial data.
  • the weights are modified to generate the desired facial pose, at 210 .
  • the corresponding action units are modified accordingly to include the adjusted weights, and are exported to the FACS matrix.
  • the FACS matrix is updated with new versions of those particular action units, modified to accommodate the user's expectations for the particular facial poses associated with them.
  • FIG. 3 is a functional block diagram of a system 300 for animating a digital facial model, including a retargeting module 310 , a FACS module 320 , an animation module 330 , and a tuning interface module 340 .
  • the retargeting module 310 receives cleaned, stabilized facial MOCAP data, and action units from the FACS module 320 .
  • the FACS module 320 receives cleaned, stabilized calibration data, and maintains a plurality of action units in a FACS matrix, the functionality of which is described above.
  • the cleaned, stabilized calibration data are used to calibrate the action units of the FACS matrix maintained by the FACS module 320 .
  • the retargeting module 310 generates a weighted activation, according to weights determined therein characterizing a weighted combination of action units which approximates the facial pose data represented by the received facial MOCAP data.
  • the animation module 330 receives a weighted activation and generates animation data.
  • the animation data include the results of activating a digital facial model according to the weighted activation.
  • the animation module 330 maintains a digital facial model, and includes a rigging unit 332 , which is used to generate fascia layers on the digital facial model.
  • the fascia layers are components of the digital facial model to which the weighted activation is applied to generate the animation data.
  • the animation module 330 includes a transfer unit 334 which applies the weighted activation to the fascia layers of the digital facial model.
  • a tuning interface module 340 is configured to receive input user adjustments, and is used by a user to generate recalibrated action units for the FACS matrix maintained by the FACS module 320 .
  • the tuning interface module 340 includes a frame selection unit 342 used by a user to select animation data frames in which the resulting pose of the digital facial model is deemed unsatisfactory. The frame selection unit 342 can be used to select any number of frames from the frames of animation data.
  • the tuning interface module 340 includes a weight modification unit 344 , which is used by the user to modify the weights corresponding to appropriate action units for the purpose of adjusting a pose of the digital facial model to achieve a desired result. Once the weights have been adjusted to the user's satisfaction, the tuning interface module 340 conveys information regarding the adjusted action unit to the FACS module 320 , where the information is received and used to update the FACS matrix.
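  • Purely as an illustration of the data flow in FIG. 3 (not the actual implementation), the four modules might be wired as in the sketch below; the class and method names, the linear rig, and the solver choice are assumptions.
```python
import numpy as np
from scipy.optimize import nnls

class FACSModule:
    """Maintains the FACS matrix: per-action-unit pose data and activations."""
    def __init__(self, au_poses, au_activations):
        self.au_poses = au_poses              # (n_AU, 3 * n_markers)
        self.au_activations = au_activations  # (n_AU, n_rig_controls)

    def recalibrate(self, new_au_poses):
        self.au_poses = new_au_poses

class RetargetingModule:
    """Solves per-frame weights against the FACS matrix and emits a
    weighted activation for the animation module."""
    def __init__(self, facs):
        self.facs = facs

    def retarget(self, frame_pose):
        weights, _ = nnls(self.facs.au_poses.T, frame_pose)
        return weights, weights @ self.facs.au_activations

class AnimationModule:
    """Applies a weighted activation to the digital facial model (reduced
    here to a linear rig purely for illustration)."""
    def __init__(self, neutral_verts, rig_basis):
        self.neutral_verts = neutral_verts    # (n_verts, 3)
        self.rig_basis = rig_basis            # (n_rig_controls, n_verts * 3)

    def animate(self, weighted_activation):
        offsets = weighted_activation @ self.rig_basis
        return self.neutral_verts + offsets.reshape(-1, 3)

class TuningInterfaceModule:
    """Collects animator-selected frames and adjusted weights, and pushes a
    recalibrated FACS matrix back to the FACS module."""
    def __init__(self, facs):
        self.facs = facs
        self.selected = []                    # (frame_pose, adjusted_weights)

    def select_frame(self, frame_pose, adjusted_weights):
        self.selected.append((frame_pose, adjusted_weights))

    def push_recalibration(self, new_au_poses):
        self.facs.recalibrate(new_au_poses)
```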
  • FIG. 4 is a flowchart illustrating a method 400 of performance driven facial animation.
  • facial motion data are captured.
  • MOCAP cameras disposed about a capture space are used to capture infra-red light reflected by reflective markers coupled to an actor's body and face. The reflected light appears as white dots on a black background, where the white dots represent the markers in the images.
  • the images from the MOCAP cameras are used to reconstruct sequential frames of volumetric data in which the marker positions are located.
  • the facial data are segmented from the volumetric data (essentially by filtering out the body data) and are labeled, at 420 .
  • the facial data are stabilized, as discussed above, at 430 .
  • the facial data are then cleaned using a FACS matrix, at 440 .
  • the facial data are then normalized, at 450 , to remove positional offset discrepancies due to day-to-day variations in marker placement, for example.
  • the facial data are retargeted frame-by-frame to a digital facial model using weighted combinations of action units of the FACS matrix.
  • a multidimensional tuning is then performed by a user, at 470 , where action units comprising a pose on the digital facial model are modified by the user to achieve a more desirable result.
  • the modified action units are incorporated into the FACS matrix as updates.
  • the updated FACS matrix is then used to generate a higher quality of animation output.
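  • The ordering of these stages can be summarized in a small orchestration sketch; the stage implementations are supplied by the caller (for example, built from the sketches earlier in this document), and the interactive tuning stage feeds back into the FACS matrix rather than running inside the per-frame loop.
```python
def facial_animation_pipeline(frames, steps):
    """Run the per-frame stages of the pipeline in order: label, stabilize,
    clean, normalize, retarget. `steps` maps each stage name to a callable."""
    animated_frames = []
    for frame in frames:
        data = steps["label"](frame)
        data = steps["stabilize"](data)
        data = steps["clean"](data)
        data = steps["normalize"](data)
        animated_frames.append(steps["retarget"](data))
    return animated_frames
```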
  • FIG. 14A illustrates a representation of a computer system 1400 and a user 1402 .
  • the user 1402 can use the computer system 1400 to process and manage performance driven facial animation.
  • the computer system 1400 stores and executes a facial animation system 1416 , which processes facial MOCAP data.
  • FIG. 14B is a functional block diagram illustrating the computer system 1400 hosting the facial animation system 1416 .
  • the controller 1410 is a programmable processor which controls the operation of the computer system 1400 and its components.
  • the controller 1410 loads instructions from the memory 1420 or an embedded controller memory (not shown) and executes these instructions to control the system. In its execution, the controller 1410 provides the facial animation system 1416 as a software system. Alternatively, this service can be implemented as separate components in the controller 1410 or the computer system 1400 .
  • Memory 1420 stores data temporarily for use by the other components of the computer system 1400 .
  • memory 1420 is implemented as RAM.
  • memory 1420 also includes long-term or permanent memory, such as flash memory and/or ROM.
  • Storage 1430 stores data temporarily or long term for use by other components of the computer system 1400 , such as for storing data used by the facial animation system 1416 .
  • storage 1430 is a hard disk drive.
  • the media device 1440 receives removable media and reads and/or writes data to the inserted media.
  • the media device 1440 is an optical disc drive.
  • the user interface 1450 includes components for accepting user input from the user of the computer system 1400 and presenting information to the user.
  • the user interface 1450 includes a keyboard, a mouse, audio speakers, and a display.
  • the controller 1410 uses input from the user to adjust the operation of the computer system 1400 .
  • the I/O interface 1460 includes one or more I/O ports to connect to corresponding I/O devices, such as external storage or supplemental devices (e.g., a printer or a PDA).
  • the ports of the I/O interface 1460 include ports such as: USB ports, PCMCIA ports, serial ports, and/or parallel ports.
  • the I/O interface 1460 includes a wireless interface for communication with external devices wirelessly.
  • the network interface 1470 includes a wired and/or wireless network connection, such as an RJ-45 or “Wi-Fi” interface (including, but not limited to 802.11) supporting an Ethernet connection.
  • the computer system 1400 includes additional hardware and software typical of computer systems (e.g., power, cooling, operating system), though these components are not specifically shown in FIG. 14B for simplicity. In other implementations, different configurations of the computer system can be used (e.g., different bus or storage configurations or a multi-processor configuration).
  • One implementation includes one or more programmable processors and corresponding computer system components to store and execute computer instructions, such as to provide the various subsystems of a motion capture system (e.g., calibration, matrix building, cleanup, stabilization, normalization, retargeting, and tuning using FACS techniques).
  • the animation supported by the motion capture system could be used for film, television, advertising, online or offline computer content (e.g., web advertising or computer help systems), video games, computer games, or any other animated computer graphics video application.
  • different types of motion capture techniques and markers can be used, such as optical markers other than infrared, active optical (e.g., LED), radio (e.g., RFID), paint, accelerometers, deformation measurement, etc.
  • a combination of artistic input and mathematical processes is used to model a face which is activated using retargeting solutions.
  • mathematical, heuristic, and aesthetically based rules are developed to enhance the fidelity of muscle and skin movements on the digital facial model when the animated character talks.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

A method of animating a digital facial model, the method including: defining a plurality of action units; calibrating each action unit of the plurality of action units via an actor's performance; capturing first facial pose data; determining a plurality of weights, each weight of the plurality of weights uniquely corresponding to the each action unit, the plurality of weights characterizing a weighted combination of the plurality of action units, the weighted combination approximating the first facial pose data; generating a weighted activation by combining the results of applying the each weight to the each action unit; applying the weighted activation to the digital facial model; and recalibrating at least one action unit of the plurality of action units using input user adjustments to the weighted activation.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation application of a co-pending U.S. patent application Ser. No. 12/198,762, filed Aug. 26, 2008 entitled “PERFORMANCE DRIVEN FACIAL ANIMATION”, which is a continuation application of a co-pending U.S. patent application Ser. No. 11/956,728, filed Dec. 14, 2007 entitled “PERFORMANCE DRIVEN FACIAL ANIMATION”, which is a continuation application of a co-pending U.S. patent application Ser. No. 11/739,448, filed Apr. 24, 2007, entitled “PERFORMANCE DRIVEN FACIAL ANIMATION”, which claims the benefit of priority of U.S. Provisional Patent Application No. 60/794,790, filed Apr. 24, 2006, entitled “PERFORMANCE DRIVEN FACIAL ANIMATION.” The disclosures of the above-referenced patent application are hereby incorporated by reference.
  • BACKGROUND
  • 1. Field of the Invention
  • The present invention relates generally to motion capture, and more particularly to methods and systems for generating facial animation using performance data, such as motion capture data obtained from motion capture systems and video images obtained from video data.
  • 2. Description of the Prior Art
  • Modeling a face, its motion, and rendering it in a manner that appears realistic is a difficult problem, though progress to achieve realistic looking faces has been made from a modeling perspective as well as a rendering perspective. A greater problem is animating a digital face in a realistic and believable manner that will bear close scrutiny, where slight flaws in the animated performance are often unacceptable. While adequate facial animation (stylized and realistic) can be attempted via traditional keyframe techniques by skilled animators, it is a complicated task that becomes especially time-consuming as the desired results approach realistic imagery.
  • Apart from keyframe techniques, other methods based on principal component analysis have also been implemented to develop animated facial models from performance data. These methods typically generate lowest-dimensional models from the data. Further, being mathematically-based solutions, the facial models so developed often look unnatural in one or more aspects. Moreover, the resulting low dimensionality makes post-development modification of the facial model difficult and non-intuitive to a user when the principal components do not correspond with natural, identifiable facial movements that can be adjusted to achieve a desired result. That is, the basis vectors (obtained using principal component analysis) do not correspond to any logical expression subset that an artist can edit afterwards. For example, a simultaneous lip corner rise with eyebrow rise might be solved from performance data as single component activation. However, the single component activation may not be decoupled into separate activations for the lip corner and eyebrow. Thus, an animator wishing to adjust only the lip corner rise may be unable to do so without also activating the eyebrow component.
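  • The coupling problem can be made concrete with a toy example: when lip-corner and eyebrow motion always co-occur in the capture data, the leading principal component mixes both, whereas FACS-style action units keep them as separately editable bases. The data below are synthetic and purely illustrative.
```python
import numpy as np

# Toy pose data: two coordinates, lip corner and eyebrow, that always move
# together in the captured performance.
rng = np.random.default_rng(0)
activation = rng.random(100)
poses = np.column_stack([activation, 0.8 * activation])      # (frames, 2)

# PCA: the leading component couples both movements into one basis vector,
# so scaling it moves the lip corner and the eyebrow together.
centered = poses - poses.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
print("Leading PCA component (lip corner, eyebrow):", vt[0])

# FACS-style action units keep the movements separable and editable.
au_lip_corner = np.array([1.0, 0.0])
au_eyebrow = np.array([0.0, 1.0])
edited_pose = 0.9 * au_lip_corner + 0.2 * au_eyebrow          # adjust lip only
print("Edited pose with decoupled action units:", edited_pose)
```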
  • Therefore, what is needed is a system and method that overcomes these significant problems found in the conventional systems as described above.
  • SUMMARY
  • The present invention provides methods and systems for generating facial animation using performance data, such as motion capture data obtained from motion capture systems and video images obtained from video data.
  • In one aspect, a method of animating a digital facial model is disclosed. The method includes: defining a plurality of action units; calibrating each action unit of the plurality of action units via an actor's performance; capturing first facial pose data; determining a plurality of weights, each weight of the plurality of weights uniquely corresponding to the each action unit, the plurality of weights characterizing a weighted combination of the plurality of action units, the weighted combination approximating the first facial pose data; generating a weighted activation by combining the results of applying the each weight to the each action unit; applying the weighted activation to the digital facial model; and recalibrating at least one action unit of the plurality of action units using input user adjustments to the weighted activation.
  • In another aspect, a method of animating a digital facial model includes: defining a plurality of action units, each action unit including first facial pose data and an activation; calibrating the first facial pose data using calibration pose data derived from a plurality of captured calibration performances, each calibration performance of the plurality of captured calibration performances corresponding with the each action unit; deriving second facial pose data from another calibration performance of the plurality of captured calibration performances; determining a plurality of weights, each weight of the plurality of weights uniquely corresponding to the each action unit, the plurality of weights characterizing a weighted combination of the facial pose data, the weighted combination approximating the second facial pose data; generating a weighted activation by combining the results of applying the each weight to the activation; applying the weighted activation to the digital facial model; and recalibrating the first facial pose data and the activation using input user adjustments to the weighted activation.
  • In yet another aspect, a system for retargeting facial motion capture data to a digital facial model is disclosed. The system includes: a FACS module to manage a plurality of action units; a retargeting module to generate at least one weighted activation for the digital facial model using the facial motion capture data and the plurality of action units; an animation module to generate a facial animation frame by applying the at least one weighted activation to the digital facial model; and a tuning interface module to generate recalibrated action units for the FACS module in accordance with input user adjustments to the facial animation.
  • In a further aspect, a method of digital facial animation includes: capturing facial motion data; labeling the facial motion data; stabilizing the facial motion data; cleaning the facial motion data using a FACS matrix; normalizing the facial motion data; retargeting the facial motion data onto a digital facial model using the FACS matrix; and performing multidimensional tuning of the FACS matrix.
  • Other features and advantages of the present invention will become more readily apparent to those of ordinary skill in the art after reviewing the following detailed description and accompanying drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The details of the present invention, both as to its structure and operation, may be gleaned in part by study of the accompanying drawings, in which:
  • FIG. 1 is a flowchart illustrating a method of animating a digital facial model;
  • FIG. 2 is a flowchart illustrating a method of recalibrating action units of a FACS matrix;
  • FIG. 3 is a functional block diagram of a system for animating a digital facial model;
  • FIG. 4 is a flowchart illustrating a method of performance driven facial animation;
  • FIG. 5 is an image of actors on a motion capture set;
  • FIG. 6A is a three-part image depicting a neutral facial pose;
  • FIG. 6B is a three-part image depicting a brow lowering facial pose;
  • FIG. 6C is a three-part image depicting a lip corner pull facial pose;
  • FIG. 7A is a three-part image depicting a mouth stretch facial pose;
  • FIG. 7B is a three-part image depicting a lip stretch facial pose;
  • FIG. 8 is a three-part image depicting variability in facial motion capture data quality;
  • FIG. 9 illustrates an example computation of weights for a weighted combination of FACS poses;
  • FIG. 10A is an image depicting an example lip articulation for the partially-opened mouth of an animated character;
  • FIG. 10B is an image depicting an example lip articulation for the fully-opened mouth of an animated character;
  • FIG. 10C is an image depicting an example lip articulation for the closed mouth of an animated character;
  • FIG. 11 depicts an example FACS pose before and after a tuning phase;
  • FIG. 12 depicts an example of solved animation frames before and after a tuning operation;
  • FIG. 13 depicts another example of solved animation frames before and after a tuning operation;
  • FIG. 14A illustrates a representation of a computer system and a user; and
  • FIG. 14B is a functional block diagram illustrating the computer system hosting a facial animation system.
  • DETAILED DESCRIPTION
  • Certain implementations as disclosed herein provide for systems and methods to implement a technique for capturing motion of one or more actors or objects. For example, one method as disclosed herein utilizes a motion capture (“MOCAP”) system to capture the body and facial motion and surfaces of multiple actors using cameras and optical markers attached to the actors. The MOCAP system builds data from the captured images to use in animation in a film.
  • Features provided in implementations include, but are not limited to, cleaning and stabilizing facial data using a Facial Action Coding System (FACS) regardless of the capturing medium, including normal low/high resolution video and MOCAP, for example; facial animation using FACS; and multidimensional tuning of FACS action units. The FACS, proposed by Paul Ekman and Wallace Friesen, and based on a well-studied library of facial expressions from psychology, has been the basis for driving computer graphics (CG) models.
  • After reading this description it will become apparent to one skilled in the art how to practice the invention in various alternative implementations and alternative applications. However, although various implementations of the present invention will be described herein, it is understood that these embodiments are presented by way of example only, and not limitation. As such, this detailed description of various alternative implementations should not be construed to limit the scope or breadth of the present invention as set forth in the appended claims.
  • When an exact replica of an actor's performance is desired, many processes work by tracking features on the actor's face and using information derived from these tracked features to directly drive the digital character. These features include, for example, use of a few marker samples, curves or contours on the face, and a deforming surface of the face. These processes are intended to programmatically translate data derived from the performance of an actor to animations on a digital computer graphics (“CG”) face. The success of these processes often depends on the quality of data, the exactness and realism required in the final animation, and facial calibration. The expertise of both artists (trackers, facial riggers, technical animators) and software technology experts is also often required to achieve a desired end product. Setting up a facial processing pipeline to ultimately produce hundreds of shots of many actors' performances, captured simultaneously, and requiring inputs and controls from artists and animators, presents significant further challenges.
  • A performance will be understood to be a visual capture of an actor's face. In most instances, the actor is talking and emoting either individually or in a group with other actors. This is often done by capturing a video performance of the actor. The video frames can be used either purely for reference by an animator, for further processing to extract point samples, or for deforming 3-D surfaces which are then retargeted onto a digital facial model. Various technological hurdles must be overcome before the 2-D or 3-D reconstructed data can be used, including calibrating cameras, tracking points, and reconstructing 3-D information.
  • Other media types such as audio have been used to capture a voice performance and drive digital facial models. Most of the work approximates the lip and mouth movement of lines of speech but does not have explicit information relating to other areas of the face such as brows, eyes, and the overall emotion of the character. These attributes have to be either implicitly derived or added during post-processing. In one implementation, facial puppeteering has been used to drive a digital facial model. In another implementation, a control device such as a cyber glove is used to input control commands, and finger movements are retargeted onto the digital facial model.
  • While these forms of capture for driving a digital facial model have yielded results, a common mode of data for driving a facial animation has been optical data, used to reconstruct certain facial feature points that are retargeted onto a digital facial model.
  • There are different ways in which the facial expressions can be captured. In one implementation, the MOCAP system captures data of the body and face together. The facial data are targeted onto an animated character whose face is stylized and does not conform to the face of the actual actor. In another implementation, the images are directed toward producing a realistic animation on a character that is intended to look real, and its face to perform realistically. In a further implementation, the facial MOCAP data are acquired separately in a sitting position and the facial animation generated is blended in keyframed body shots.
  • Making data-driven facial animation work well is a challenge because many factors produce varying levels of data quality, including the different types of systems used, the number of people captured simultaneously, and whether the capture is facial-only or combined face and body capture. The MOCAP system can support multiple approaches and so can be adapted to these, and other, various production requirements.
  • In one implementation, face and body motion are captured simultaneously with multiple cameras (e.g., 200 cameras) positioned about a “capture volume.” An example capture volume is about 20 feet×20 feet×16 feet in length, width, and height, respectively. Multiple infrared markers (e.g., 80 markers) are coupled to an actor's face and used to capture the actor's performance. It will be appreciated that other configurations of cameras, capture volumes, and markers can be used. The captured data are reconstructed in 3-D using the positions of the multiple cameras during post-processing. A tool such as IMAGEWORKS™ proprietary IMAGEMOTION™ technology, adapted to capturing and processing MOCAP data, can be used. The number of actors in the motion capture volume can vary from few to many depending on the size of the volume, camera resolution, the strength of the optical lights and signals, and other related parameters.
  • During a typical MOCAP session, all the actors are instructed to stand apart. Each actor then individually performs a standard T-pose position, where the legs are placed together, hands are stretched out, and the face is relaxed to a neutral position. The T-pose is useful for search and standardization purposes for both the body and face in the MOCAP data during post-processing. Also, each MOCAP “take” ends in all the actors returning to the standard T-pose in the capture volume with the face back to the relaxed neutral position. The T-pose is used by the facial pipeline in a normalization process to ensure that marker placements on a second day of MOCAP performances, for example, correspond to those on the day of the calibration (also referred to as the “master T-pose”). FIG. 5 depicts actors each performing a T-pose in a capture volume. In another instance of a motion capture adaptation (known as an ADR session), only one actor performs, in a seated position, with sensors aimed at the actor's face. In such cases a T-pose would correspond to a neutral pose of the face only, with no body position.
  • According to a Facial Action Coding System ("FACS"), the human face has muscles that work together in groups called "action units." A FACS provides a framework for determining when certain action units are triggered and how to assign each action unit a relative influence in a facial pose. The FACS was initially designed for psychologists and behavioral scientists to understand facial expressiveness and behavior, though it has since been adapted to other areas.
  • Facial expressions have been categorized into 72 distinct action units. Each action unit defines a muscular activity ("activation") that produces a momentary change in facial appearance. These changes in facial appearance vary from person to person depending on facial anatomy, e.g., bone structure, fatty deposits, wrinkles, the shapes of various facial features, and other related characteristics. However, certain commonalities are seen between people as these action units are triggered. An action unit used in a FACS is based on the location of the facial action on the face and the type of facial action involved. For example, the upper face has muscles that affect the eyebrows, forehead, and eyelids, while the lower muscles around the mouth and lips form another group. These muscles work in groups to form action units, and the action units can be broken down further into left and right areas of the face, which can be triggered asymmetrically and independently of each other. In general, the action units suggested by a FACS provide a broad basis for dynamic facial expressions that can be used in CG animation.
  • A motion capture system may use a FACS as a foundation for capturing and retargeting facial MOCAP data on an animated character's face. Prior to a MOCAP performance, each actor performs a series of calibration poses that include extreme versions of all the action units. The reconstructed 3-D facial pose data corresponding to an action unit capture the extreme facial expression used by the actor to perform that action unit. In one implementation, the FACS includes 64 poses, some of which are split into left and right positions. In another implementation, 18 phoneme poses corresponding to articulated phonemes are also included.
  • FIGS. 6A-6C and 7A-7B illustrate a few of the action units used in the MOCAP system based on FACS. As discussed above, FACS proposes upwards of 72 action units that include expressions involving facial muscles and head motion. FIG. 6A is a three-part image depicting a neutral facial pose; FIG. 6B is a three-part image depicting a brow lowering facial pose; FIG. 6C is a three-part image depicting a lip corner pull facial pose; FIG. 7A is a three-part image depicting a mouth stretch facial pose; and FIG. 7B is a three-part image depicting a lip stretch facial pose. In each of FIGS. 6A-6C and 7A-7B, the actual FACS reference, the actor's performance, and the retargeted expression on the character are shown from left to right.
  • As discussed above, in one implementation, data capture is performed using an optical system capturing both body and face motion of one or more actors performing in a capture space. This implementation uses passive optical components including infrared cameras to capture infrared light reflected by the markers. An image so captured is a low entropy image comprising mostly black areas where no infrared light is sensed, and white dots representing the reflective markers. The size of a white dot in the image varies depending on whether the dot is a body marker (large) or face marker (small), on the distance of the actor (and hence the marker) from the camera, and on whether any occlusions have occurred, where the occlusions are usually caused by the actors.
  • The low entropy images provide at least two advantages: (1) the cameras can capture and record images at higher definitions and at higher frame rates, typically at 60 Hz; and (2) 3-D reconstruction of the captured marker data triangulates each marker across multiple images with different viewpoints to locate the marker in space. The ability to associate corresponding points automatically is greatly improved by using only white dots on a black background.
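As a concrete illustration of the triangulation step, the sketch below shows a standard linear (DLT) reconstruction of one marker from several camera views. It is a minimal example rather than the production pipeline; the calibrated 3×4 projection matrices and the function name are assumptions.

```python
import numpy as np

def triangulate_marker(projections, image_points):
    """Linear (DLT) triangulation of one marker observed by several cameras.

    projections  : list of 3x4 camera projection matrices (assumed calibrated)
    image_points : list of (u, v) pixel coordinates of the marker's white dot
    Returns the 3-D marker position as a length-3 array.
    """
    rows = []
    for P, (u, v) in zip(projections, image_points):
        # Each view contributes two linear constraints on the homogeneous point X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```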
  • After 3-D reconstruction, the markers are represented by spatial positions (i.e., x, y, z) in a plurality of data frames. However, the data are often noisy, do not have temporal associativity (i.e., consistent labeling) across all of the data frames, and may have gaps. FIG. 8 is a three-part image depicting variability in facial motion capture data quality. Shown in the left-most part of FIG. 8 is an example of good quality data. Shown in the middle part of FIG. 8 is an example of lower quality data. And, shown in the right-most part of FIG. 8 is an example of poor quality data. These problems can be addressed in a learning-based approach taking information both from a facial data model and the temporal associativity of the data.
  • The markers reconstructed for each data frame can include both body markers and facial markers. Both require labeling prior to facial data processing; that is, each marker is assigned a unique identification that persists across the data frames. Labeling all body and facial markers according to their trajectories is a cumbersome and error-prone process, especially when a large number of markers is visible in the volume. In one implementation, a two-step process based on the size disparity between body markers (larger) and facial markers (smaller) is used. First, 3-D reconstruction is performed in which facial markers are ignored and only body markers are reconstructed and labeled, usually according to velocity-based constraints. Next, 3-D reconstruction is performed to acquire the facial markers, which usually also captures body markers. The body markers are then removed by eliminating all markers labeled in the first step, leaving only the facial data. In another implementation, labeling the facial markers is automated based on a library of action units (a "FACS matrix") specifically tailored to the corresponding actor's face.
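The two-step separation of body and facial markers can be sketched as follows. This is not the production labeler (which uses velocity-based constraints and a per-actor FACS matrix); the distance threshold, the nearest-neighbor labeling, and all names are illustrative assumptions.

```python
import numpy as np

def separate_face_markers(all_points, labeled_body_points, tol=5.0):
    """Second pass of the two-step process: remove every reconstructed point
    that coincides with an already-labeled body marker, leaving facial data.

    all_points          : (N, 3) array from the face-and-body reconstruction
    labeled_body_points : (M, 3) array of body markers labeled in the first pass
    tol                 : distance below which a point is treated as a duplicate
                          of a body marker (assumed value, capture-space units)
    """
    dists = np.linalg.norm(
        all_points[:, None, :] - labeled_body_points[None, :, :], axis=2)
    is_body = dists.min(axis=1) < tol
    return all_points[~is_body]

def label_face_markers(face_points, template):
    """Assign each facial point the label of its nearest neighbor in a
    per-actor template (e.g., the neutral pose from the FACS matrix)."""
    labels = {}
    for name, ref in template.items():
        idx = int(np.argmin(np.linalg.norm(face_points - ref, axis=1)))
        labels[name] = face_points[idx]
    return labels
```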
  • During a performance, the actor typically moves around in the capture volume. These movements translate the facial markers along with the body while the actor is speaking and emoting. To retarget the facial marker data onto a digital facial model, it is beneficial to stabilize the facial data by nullifying the translational and rotational effects of body and head movements. Stabilization is particularly difficult because the facial markers do not necessarily undergo a rigid transform to a standard position as the actor performs. Rigid movements are caused by head rotations and the actor's motion, but when the actor emotes and speaks, many of the facial markers move away from their rigid predictions. A few stable point correspondences are typically sufficient to solve for an inverse transformation. However, it is frequently difficult to determine, on a frame-by-frame basis, which markers are relatively stable, having undergone only a rigid transformation, and which have been subject to other movements related to emoting or speaking. Noise in the 3-D reconstructed positions of the markers can further impede the determination of a rigid transformation.
  • In one implementation, a hierarchical solution is invoked by first performing a global (or gross) stabilization using markers that generally do not move due to facial expressions, such as markers coupled to the head, ears and the nose bone. The solution is then refined with a local (or fine) stabilization by determining marker movements relative to a facial surface model.
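A minimal sketch of the global (gross) stabilization stage, assuming the markers on the head, ears, and nose bone have already been labeled: a rigid rotation and translation is solved from those nominally stable markers (the standard Kabsch solution, an assumed choice of method) and applied to the whole frame. The local (fine) refinement against a facial surface model is not shown.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch algorithm). src and dst are (K, 3) arrays of corresponding points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def gross_stabilize(frame, stable_ids, reference):
    """Map a frame of labeled markers into a head-fixed reference space using
    only markers that do not move with facial expressions (head, ears, nose bone)."""
    src = np.array([frame[i] for i in stable_ids])
    dst = np.array([reference[i] for i in stable_ids])
    R, t = rigid_transform(src, dst)
    return {i: R @ p + t for i, p in frame.items()}
```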
  • After the facial data have been stabilized, the facial data may be missing markers due to occlusions, lack of visibility in the cameras, noise caused by errors in 3-D reconstructions, and/or mislabeled markers. In one implementation, a cleaning and filtering tool is used which includes a learning system based on good facial model data. The cleaning and filtering tool generates estimates of the positions of missing markers, removes noise, and in general ensures the viability of all the markers. The system is scalable to handle data generated by wide ranges of facial expression, and can be tuned to modify the dynamics of the facial data.
  • The cleaning tool utilizes the underlying FACS theory to organize markers into groups of muscles. Muscle movements can be used to probabilistically estimate the likely positions of missing markers. A missing marker location is estimated spatially from a neighborhood of points, and estimated temporally by analyzing the ranges of motion of the markers. In one implementation, a probabilistic model and a corresponding marker muscle grouping are tuned to each actor.
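The sketch below illustrates the spatial part of this estimate in its simplest form: a missing marker is assumed to move with the mean displacement of the visible markers in its FACS muscle group. The tuned probabilistic model and the temporal analysis are not reproduced; the names and the averaging rule are assumptions.

```python
import numpy as np

def estimate_missing_marker(group_ref, group_cur, missing_ref):
    """Estimate a missing marker from the other markers in its muscle group.

    group_ref   : (K, 3) reference positions of the group's visible markers
                  (e.g., from the neutral/master T-pose)
    group_cur   : (K, 3) current-frame positions of the same markers
    missing_ref : (3,) reference position of the missing marker
    The missing marker is assumed to move with the average displacement of its
    neighbors -- a stand-in for the tuned, per-actor probabilistic model.
    """
    mean_offset = (group_cur - group_ref).mean(axis=0)
    return missing_ref + mean_offset
```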
  • Once all the marker positions are determined (or estimated), standard frequency transforms are used to remove noise in the data. It will be appreciated that high frequency content, which is normally categorized as noise, may also represent quick, valid movements of the actor's muscles and changes in the actor's facial expression.
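As one plausible form of the frequency-domain cleanup, the sketch below zeroes spectral content above a cutoff for a single marker coordinate. The 60 Hz sample rate follows the capture rate mentioned earlier; the cutoff value is an assumption, and, as noted above, setting it too low would also discard quick, valid facial movements.

```python
import numpy as np

def lowpass_trajectory(traj, fps=60.0, cutoff_hz=8.0):
    """Remove high-frequency jitter from one marker coordinate over time.

    traj      : 1-D array of a marker coordinate sampled at `fps`
    cutoff_hz : frequencies above this are zeroed (assumed value)
    """
    spectrum = np.fft.rfft(traj)
    freqs = np.fft.rfftfreq(len(traj), d=1.0 / fps)
    spectrum[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=len(traj))
```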
  • When capturing a long performance, such as a movie production spanning more than one day, actors typically remove and reattach their motion capture markers. Although steps are taken to ensure that the markers are placed at the same positions on the face each time, small day-to-day differences in marker placement are common. These differences can significantly affect the retargeting solutions described below. Normalization therefore adjusts the marker placements so that the daily differences do not compromise the extent of the facial expressions performed by the actor, and so that the facial expressions are accurately transferred onto the digital facial model.
  • In one implementation, normalization is accomplished in two steps. Each MOCAP take starts and ends with the actors performing a T-pose, as discussed in relation to FIG. 5. The T-pose of each actor in a subsequent MOCAP take is aligned with the master T-pose of the actor determined during calibration. Aligning a T-pose to the master T-pose relies on the use of various relaxed landmark markers. For example, the corners of the eyes and mouth are used because they are expected to change very little from day to day. Offset vectors for each marker are computed according to discrepancies in the alignment of the T-pose and master T-pose. The offset vectors are applied to the T-pose of the corresponding MOCAP take so that each marker in the T-pose is identically aligned to the markers of the master T-pose. The offsets are propagated through the actor's performance during that day, thus normalizing the data in all the frames.
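A minimal sketch of the offset-vector step, assuming the take's T-pose has already been rigidly aligned to the master T-pose using the relaxed landmark markers: per-marker offsets are computed once and then propagated through every frame of that day's performance. The data structures and names are illustrative.

```python
import numpy as np

def normalize_take(take_tpose, master_tpose, take_frames):
    """Per-marker offset normalization of one day's MOCAP take.

    take_tpose, master_tpose : dicts of {marker_id: (3,) position} for the
                               aligned T-pose of this take and the master T-pose
    take_frames              : list of {marker_id: (3,) position} dicts
    """
    # Offset vectors from the discrepancies between the two aligned T-poses.
    offsets = {m: master_tpose[m] - take_tpose[m] for m in master_tpose}
    normalized = []
    for frame in take_frames:
        # Propagate the offsets through every frame of the performance.
        normalized.append({m: p + offsets.get(m, 0.0) for m, p in frame.items()})
    return normalized
```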
  • As discussed above, a FACS provides a set of action units, or poses, deemed representative of most facial expressions. In one implementation, MOCAP frames are captured of an actor performing calibration poses corresponding to the FACS poses (i.e., action units). Some of the calibration poses are broken into left and right sides to capture any asymmetry the actor's face may exhibit. Subsequently, incoming frames of the actor's performance are analyzed in the space of all the FACS poses (i.e., action units) of the FACS matrix. The action units may thus be viewed as facial basis vectors, and a weight for each is computed for an incoming data frame. A weighted combination of action units (i.e., facial basis vectors, or FACS poses) is determined that approximates the new pose in the incoming data frame.
  • FIG. 9 illustrates an example computation of weights w1, w2 . . . wn for a weighted combination of FACS poses. Computing weights w1, w2 . . . wn determines an influence associated with each of n FACS action units. In one implementation, computing the weights includes a linear optimization. In another implementation, computing the weights includes a non-linear optimization.
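In the linear case, the weight computation reduces to a least-squares fit of the incoming frame against the FACS poses treated as basis vectors. The sketch below shows that formulation; whether the solver works on absolute positions or deltas from neutral, and whether it adds constraints, is not specified, so the layout of `facs_matrix` is an assumption.

```python
import numpy as np

def solve_facs_weights(facs_matrix, pose):
    """Compute weights w1..wn so that a weighted combination of FACS action-unit
    poses approximates the incoming frame (a linear least-squares fit).

    facs_matrix : (3m, n) array; column i holds the flattened marker positions
                  (or marker deltas from neutral) of action unit i
    pose        : (3m,) flattened marker data of the incoming frame
    """
    weights, *_ = np.linalg.lstsq(facs_matrix, pose, rcond=None)
    return weights
```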
  • The weights are applied to the associated n FACS action units to generate a weighted activation. The weighted activation is transferred onto a digital facial model rigged with a facial muscle system.
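Generating the weighted activation can be pictured as blending, control by control, the activation values stored with each action unit. The rig interface shown in the trailing comment is hypothetical; the actual character facial system is proprietary.

```python
def weighted_activation(weights, action_units):
    """Blend the per-control activations of each action unit by its solved weight.

    weights      : sequence of weights w1..wn from the FACS solve
    action_units : sequence of dicts mapping rig control names (muscle, jaw,
                   volume, articulation controls) to activation values for the
                   extreme pose of that action unit
    Returns a single {control_name: value} dict to drive the fascia layers.
    """
    blended = {}
    for w, unit in zip(weights, action_units):
        for control, value in unit.items():
            blended[control] = blended.get(control, 0.0) + w * value
    return blended

# The result would then be pushed onto the rig, for example:
#   for control, value in weighted_activation(w, units).items():
#       facial_rig.set_control(control, value)   # hypothetical rig API
```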
  • In one implementation, the facial poses of an animated character, corresponding to FACS poses, are generated by an artist using a facial rig. In another implementation, a digital facial model setup is based on IMAGEWORKS'™ proprietary character facial system. The character facial system helps pull and nudge vertices of the digital facial model so that resulting deformations are consistent with the aspects of a human face.
  • The digital facial model includes different fascia layers blended to create a final facial deformation on the digital facial model. In one implementation, the fascia layers include a muscle layer that allows facial muscle deformations, a jaw layer that allows jaw movement, a volume layer that controls skin bulges in different facial areas, and an articulation layer for pronounced lip movement. The muscle layer includes skull patches with muscle controls that deform the face; the muscle controls are activated by weighted activations generated from MOCAP data. The jaw layer helps to control movements of the jaw of the digital facial model. The volume layer adds volume to the deformations occurring on the digital facial model and aids in modeling wrinkles and other facial deformations, which can be triggered by weighted activations generated from MOCAP data. The articulation layer relates to the shape of the lips as they deform; in particular, it aids in controlling the roll and volume of the lips, which is essential when the lips thin out or pucker in facial expressions. FIG. 10A is an image depicting an example lip articulation for the partially-opened mouth of an animated character. FIG. 10B is an image depicting an example lip articulation for the fully-opened mouth of an animated character. FIG. 10C is an image depicting an example lip articulation for the closed mouth of an animated character.
  • The fascia layers can be constructed onto the digital facial model. Incoming MOCAP data are mapped, or retargeted, onto the digital facial model as weighted activations that trigger the fascia layers. As discussed above, an incoming frame of MOCAP data is analyzed in the space of all of the action units (i.e., facial basis vectors) of the FACS matrix. The resulting weights quantify the proportional influence that each of the action units of the FACS matrix exerts in triggering the fascia layers. However, because the weights are obtained using mathematical methods (e.g., linear and non-linear optimization), the resulting expression created on the digital facial model sometimes fails to replicate facial deformations naturally recognized as articulating a desired expression. That is, although the facial retargeting achieved using the various mapping solutions may be optimally correct in a mathematical sense, the resulting facial expressions may not conform to the desired look or requirements of a finalized animation shot.
  • There can be several reasons for these nonconforming results. The actor may not perform according to the calibration poses provided initially for the FACS matrix, thus causing the action units to be non-representative of the actor's performance; retargeting inconsistencies sometimes arise when mapping mathematically correct marker data to an aesthetically designed face; the digital facial model may conform poorly to the actor's face; marker placements on the actor's face may differ adversely from day to day; and/or the desired animation may be inconsistent with the actions performed by the actor, such as when a desired expression is not present in the MOCAP data, or an exaggeration of the captured expression is attempted.
  • A multidimensional tuning system can use tuning feedback provided by an animator to reduce the effects of incorrect mathematical solutions. This is achievable because the facial basis vectors of the FACS matrix mimic real human expressions and can therefore be edited intuitively by the animator. After a FACS solve and retargeting are performed, the animator can adjust one or more selected frames (e.g., five to ten frames with unacceptable results) to achieve a "correct look" in the animator's artistic judgment. The adjustment is performed by modifying the weights resulting from the FACS solves associated with the poses in the selected frames. The modified poses are then used to update and optimize the FACS matrix, so the updated FACS matrix includes action units based on actual marker ranges of motion as well as the modified weights. In one implementation, non-linear mathematical optimization tools are used to optimize the action unit pose data and activation levels. In the tuning process, artistic input is taken from the artist or user, who modifies weights on a few frames so that the overall expression suite closely matches the desired result; the tuning process then learns from all the changed weights, resulting in a new, modified FACS matrix. The modified FACS matrix is used in subsequent solves on the MOCAP data to apply the adjusted weighting provided by the animator to the poses in the selected frames. The modifications in the FACS library are also incorporated in the other frames, generating improved results over the entire animation. Should the modified FACS library still produce unsatisfactory results, the animator can perform further adjustments to build updated FACS libraries.
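One way the matrix update could be posed, shown purely as an illustration: treat the animator's adjusted weights and the captured marker data of the tuned frames as fixed, and re-solve for the action-unit pose columns with a regularizer that keeps untouched action units close to their calibrated values. The text only says non-linear optimization tools are used, so this regularized linear formulation and all names are assumptions.

```python
import numpy as np

def retune_facs_matrix(facs_matrix, tuned_weights, tuned_poses, reg=1.0):
    """Update the FACS matrix from animator-tuned frames.

    facs_matrix   : (3m, n) current matrix of action-unit pose columns
    tuned_weights : (F, n) animator-adjusted weights for F selected frames
    tuned_poses   : (F, 3m) captured marker data of those frames
    reg           : regularization pulling the solution toward the original
                    matrix so unaffected action units change little

    Solves, for each marker coordinate j:
        min_b || W b - p_j ||^2 + reg * || b - b0_j ||^2
    """
    W = tuned_weights
    n = facs_matrix.shape[1]
    A = W.T @ W + reg * np.eye(n)
    new_matrix = np.empty_like(facs_matrix)
    for j in range(facs_matrix.shape[0]):
        b0 = facs_matrix[j]
        rhs = W.T @ tuned_poses[:, j] + reg * b0
        new_matrix[j] = np.linalg.solve(A, rhs)
    return new_matrix
```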
  • FIG. 11 depicts an example of FACS poses before and after a tuning operation. The left image of FIG. 11 shows a lip shut phoneme position overlaid before and after tuning. The right image of FIG. 11 shows a lip tightener pose before and after tuning. The new marker positions (in black) have been adjusted to an optimized location based on the animator's corrected weighting values over a few tuned frames. This change is shown on the two poses depicted, but often occurs on more poses depending on the nature of the animator's input adjustments.
  • FIG. 12 and FIG. 13 depict examples of solved animation frames before and after a tuning operation. In FIG. 12, the left image depicts a frame solved using the initial, calibrated FACS matrix, and the right image depicts the same frame solved using the modified (tuned) FACS matrix. The resulting effect is concentrated on the right lip tightener of the pose. In FIG. 13, the left image depicts a frame solved using the initial, calibrated FACS matrix, and the right image depicts the same frame solved using the modified (tuned) FACS matrix. The actor is uttering the beginning of the word “please.” The solve using the initial, calibrated FACS matrix does not show the lips closed to say the first syllable whereas the solve using the modified FACS matrix does show the lips closed.
  • FIG. 1 is a flowchart illustrating a method 100 of animating a digital facial model. At 110, action units are defined for a FACS matrix. In one implementation, as discussed above, the FACS matrix includes 64 action units, each action unit defining a group of facial muscles working together to generate a particular facial expression. Action units can further be broken down to represent the left and right sides of the face, and thus can compose asymmetrical facial poses.
  • The action units of the FACS matrix are calibrated, at 120. Typically, each actor has a unique, individualized FACS matrix. In one implementation, each action unit is calibrated by motion capturing the actor's performance of the pose corresponding to the action unit. Facial marker data are captured as described above, FACS cleaned and stabilized, and assigned to the FACS matrix in correspondence with the particular action unit. In another implementation, the actor performs the pose in an extreme manner to establish expected bounds for marker excursions when the pose is executed during a performance.
  • After the calibration (at 120) is completed, MOCAP data are acquired during a performance. New facial pose data are received one frame at a time, at 130, as the MOCAP data are generated during performance and acquisition. The frame of MOCAP data comprises volumetric (3-D) data representing the facial marker positions in the capture space. In one implementation, the volumetric data are FACS cleaned and stabilized, as described above, before being received (at 130).
  • Weights are determined, at 140, which characterize a weighted combination of action units approximating the new facial pose data. Action units represent activations of certain facial muscle groups, and can be regarded as facial basis vectors, as discussed above. As such, one or more action units—including all of the action units in the FACS matrix—are used as components which, in a weighted combination, approximate the new facial pose data. That is, the new facial pose data are characterized as some combination of the predefined action units in the FACS matrix. Determining the weights involves optimally fitting a weighted combination of the facial pose data associated with each action unit to the new facial pose data. In one implementation, a linear optimization, such as a least squares fit, is used to compute the optimal combination of weights. In another implementation, a non-linear optimization is used to perform the fit.
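The text leaves the choice of optimizer open. As one example of a constrained alternative to the plain least-squares fit, the weights can be bounded, for instance kept non-negative and below a cap so that no action unit is pushed far beyond its calibrated extreme; the bounds and the use of scipy here are assumptions.

```python
from scipy.optimize import lsq_linear

def solve_facs_weights_bounded(facs_matrix, pose, upper=1.5):
    """Bounded least-squares variant of the weight solve: weights are kept
    non-negative and below an assumed upper bound."""
    result = lsq_linear(facs_matrix, pose, bounds=(0.0, upper))
    return result.x
```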
  • Once the weights are determined (at 140) a weighted activation is generated, at 150. In one implementation, the weights are applied to muscle group activations associated with each action unit and the resulting activations are combined to generate a weighted activation. The weighted activation is then applied to the digital facial model, at 160.
  • If more MOCAP data frames are available for processing (determined at 170), then a new frame of MOCAP data is received, at 130, and the process continues as described above. If no more MOCAP data frames are available, then the process continues by recalibrating the FACS matrix, at 180. In one implementation, recalibrating the FACS matrix (at 180) is undertaken on command by the user while more MOCAP data frames are still available.
  • Recalibrating the FACS matrix (at 180) can include receiving adjustments to the weighted activation from the user. For example, if the user desires a modification to a pose in a particular frame, the user may select the frame and adjust the weights used to generate the weighted activation. Since the weights correspond to predefined action units, and the action units correspond to distinct facial movements (i.e., activations of certain facial muscle groups), the pose can be adjusted by manipulating the weights corresponding to the facial muscle groups controlling the particular aspect of the pose to be changed. For example, where movement of the left corner of the mouth is defined in an action unit, the left corner of the mouth of the digital facial model is moved to a more extreme, or less extreme, position by manipulating the weight associated with that action unit. Thus, an animator or artist is able to control various aspects of a facial expression by manipulating natural components of the face (i.e., action units).
  • FIG. 2 is a flowchart illustrating the recalibration of action units of a FACS matrix (at 180). At 200, frames containing poses on the digital facial model that the user wishes to modify are selected. For example, out of thousands of data frames, five to ten frames might be selected for modification of the facial data. For each selected frame, the weights are modified to generate the desired facial pose, at 210. In one implementation, the corresponding action units are modified to include the adjusted weights and are exported to the FACS matrix. Thus, the FACS matrix is updated with new versions of those particular action units, modified to accommodate the user's expectations for the associated facial poses. In another implementation, the same data set originally processed according to the method illustrated in FIG. 1 is reprocessed using the updated FACS matrix. The data of the adjusted frames are then retargeted to the digital facial model in a more desirable manner, and other facial pose data in which the modified action units carry significant weight are also retargeted in a way that improves the overall quality of the animation.
  • FIG. 3 is a functional block diagram of a system 300 for animating a digital facial model, including a retargeting module 310, a FACS module 320, an animation module 330, and a tuning interface module 340.
  • The retargeting module 310 receives cleaned, stabilized facial MOCAP data, and action units from the FACS module 320. The FACS module 320 receives cleaned, stabilized calibration data, and maintains a plurality of action units in a FACS matrix, the functionality of which is described above. The cleaned, stabilized calibration data are used to calibrate the action units of the FACS matrix maintained by the FACS module 320. The retargeting module 310 generates a weighted activation, according to weights determined therein characterizing a weighted combination of action units which approximates the facial pose data represented by the received facial MOCAP data.
  • The animation module 330 receives a weighted activation and generates animation data. The animation data include the results of activating a digital facial model according to the weighted activation. In one implementation, the animation module 330 maintains a digital facial model, and includes a rigging unit 332, which is used to generate fascia layers on the digital facial model. In particular, the fascia layers are components of the digital facial model to which the weighted activation is applied to generate the animation data. In another implementation, the animation module 330 includes a transfer unit 334 which applies the weighted activation to the fascia layers of the digital facial model.
  • A tuning interface module 340 is configured to receive input user adjustments, and is used by a user to generate recalibrated action units for the FACS matrix maintained by the FACS module 320. In one implementation, the tuning interface module 340 includes a frame selection unit 342 used by a user to select animation data frames in which the resulting pose of the digital facial model is deemed unsatisfactory. The frame selection unit 342 can be used to select any number of frames from the frames of animation data. In another implementation, the tuning interface module 340 includes a weight modification unit 344, which is used by the user to modify the weights corresponding to appropriate action units for the purpose of adjusting a pose of the digital facial model to achieve a desired result. Once the weights have been adjusted to the user's satisfaction, the tuning interface module 340 conveys information regarding the adjusted action unit to the FACS module 320, where the information is received and used to update the FACS matrix.
  • FIG. 4 is a flowchart illustrating a method 400 of performance driven facial animation. At 410, facial motion data are captured. In one implementation, as discussed above, MOCAP cameras disposed about a capture space capture infrared light reflected by reflective markers coupled to an actor's body and face. The reflected light appears as white dots on a black background, where the white dots represent the markers in the images. The images from the MOCAP cameras are used to reconstruct sequential frames of volumetric data in which the marker positions are located. The facial data are segmented from the volumetric data (essentially by filtering out the body data) and are labeled, at 420. The facial data are stabilized, as discussed above, at 430. The facial data are then cleaned using a FACS matrix, at 440. The facial data are then normalized, at 450, to remove positional offset discrepancies due to, for example, day-to-day variations in marker placement.
  • At 460, the facial data are retargeted frame-by-frame to a digital facial model using weighted combinations of action units of the FACS matrix. A multidimensional tuning is then performed by a user, at 470, where action units comprising a pose on the digital facial model are modified by the user to achieve a more desirable result. The modified action units are incorporated into the FACS matrix as updates. The updated FACS matrix is then used to generate a higher quality of animation output.
  • FIG. 14A illustrates a representation of a computer system 1400 and a user 1402. The user 1402 can use the computer system 1400 to process and manage performance driven facial animation. The computer system 1400 stores and executes a facial animation system 1416, which processes facial MOCAP data.
  • FIG. 14B is a functional block diagram illustrating the computer system 1400 hosting the facial animation system 1416. The controller 1410 is a programmable processor which controls the operation of the computer system 1400 and its components. The controller 1410 loads instructions from the memory 1420 or an embedded controller memory (not shown) and executes these instructions to control the system. In its execution, the controller 1410 provides the facial animation system 1416 as a software system. Alternatively, this service can be implemented as separate components in the controller 1410 or the computer system 1400.
  • Memory 1420 stores data temporarily for use by the other components of the computer system 1400. In one implementation, memory 1420 is implemented as RAM. In another implementation, memory 1420 also includes long-term or permanent memory, such as flash memory and/or ROM.
  • Storage 1430 stores data temporarily or long term for use by other components of the computer system 1400, such as for storing data used by the facial animation system 1416. In one implementation, storage 1430 is a hard disk drive.
  • The media device 1440 receives removable media and reads and/or writes data to the inserted media. In one implementation, the media device 1440 is an optical disc drive.
  • The user interface 1450 includes components for accepting user input from the user of the computer system 1400 and presenting information to the user. In one implementation, the user interface 1450 includes a keyboard, a mouse, audio speakers, and a display. The controller 1410 uses input from the user to adjust the operation of the computer system 1400.
  • The I/O interface 1460 includes one or more I/O ports to connect to corresponding I/O devices, such as external storage or supplemental devices (e.g., a printer or a PDA). In one implementation, the ports of the I/O interface 1460 include ports such as: USB ports, PCMCIA ports, serial ports, and/or parallel ports. In another implementation, the I/O interface 1460 includes a wireless interface for communication with external devices wirelessly.
  • The network interface 1470 includes a wired and/or wireless network connection, such as an RJ-45 or “Wi-Fi” interface (including, but not limited to 802.11) supporting an Ethernet connection.
  • The computer system 1400 includes additional hardware and software typical of computer systems (e.g., power, cooling, operating system), though these components are not specifically shown in FIG. 14B for simplicity. In other implementations, different configurations of the computer system can be used (e.g., different bus or storage configurations or a multi-processor configuration).
  • It will be appreciated that the various illustrative logical blocks, modules, and methods described in connection with the above-described figures and the implementations disclosed herein have been presented generally in terms of their functionality. In addition, the grouping of functions within a module or subunit is for ease of description. Specific functions or steps can be moved from one module or subunit to another without departing from the invention.
  • One implementation includes one or more programmable processors and corresponding computer system components to store and execute computer instructions, such as to provide the various subsystems of a motion capture system (e.g., calibration, matrix building, cleanup, stabilization, normalization, retargeting, and tuning using FACS techniques).
  • Additional variations and implementations are also possible. For example, the animation supported by the motion capture system could be used for film, television, advertising, online or offline computer content (e.g., web advertising or computer help systems), video games, computer games, or any other animated computer graphics video application. In another example, different types of motion capture techniques and markers can be used, such as optical markers other than infrared, active optical (e.g., LED), radio (e.g., RFID), paint, accelerometers, deformation measurement, etc. In another example, a combination of artistic input and mathematical processes is used to model a face which is activated using retargeting solutions. In a further example, mathematical, heuristic, and aesthetically based rules are developed to enhance the fidelity of muscle and skin movements on the digital facial model when the animated character talks.
  • The above description of the disclosed implementations is provided to enable any person skilled in the art to make or use the invention. Various modifications to these implementations will be readily apparent to those skilled in the art, and the generic principles described herein can be applied to other implementations without departing from the spirit or scope of the invention. Thus, it will be understood that the description and drawings presented herein represent implementations of the invention and are therefore representative of the subject matter which is broadly contemplated by the present invention. It will be further understood that the scope of the present invention fully encompasses other implementations that may become obvious to those skilled in the art and that the scope of the present invention is accordingly limited by nothing other than the appended claims.

Claims (30)

1. A method of animating a digital facial model, the method comprising:
defining a plurality of action units;
calibrating each action unit of said plurality of action units via an actor's performance;
capturing first facial pose data;
determining a plurality of weights, each weight of said plurality of weights uniquely corresponding to said each action unit, said plurality of weights characterizing a weighted combination of said plurality of action units, said weighted combination approximating said first facial pose data; and
recalibrating at least one action unit of said plurality of action units using input user adjustments.
2. The method of claim 1, wherein said each action unit includes a second facial pose data and an activation.
3. The method of claim 2, wherein said calibrating each action unit includes
calibrating said second facial pose data of said each action unit using calibration pose data derived from a calibration performance corresponding with said each action unit.
4. The method of claim 3, further comprising
cleaning and stabilizing said calibration pose data.
5. The method of claim 2, wherein said weighted combination includes
a weighted combination of said second facial pose data of said each action unit.
6. The method of claim 5, wherein said determining a plurality of weights includes
an optimization of a correspondence between said first facial pose data and said weighted combination of said second facial pose data.
7. The method of claim 6, wherein said optimization includes a linear optimization.
8. The method of claim 7, wherein said linear optimization includes a least-squares method.
9. The method of claim 6, wherein said optimization includes a non-linear optimization.
10. (canceled)
11. The method of claim 2, wherein said recalibrating at least one action unit includes
recalibrating said second facial pose data.
12. The method of claim 2, wherein said recalibrating at least one action unit includes
recalibrating said activation.
13. The method of claim 2, wherein said activation of said each action unit is directed to a fascia layer.
14. The method of claim 13, wherein said fascia layer includes a muscle layer.
15. The method of claim 13, wherein said fascia layer includes a jaw layer.
16. The method of claim 13, wherein said fascia layer includes a volume layer.
17. The method of claim 13, wherein said fascia layer includes an articulation layer.
18. The method of claim 1, wherein said plurality of action units comprises a FACS matrix.
19. The method of claim 1, further comprising
cleaning and stabilizing said first facial pose data.
20. A method of animating a digital facial model, the method comprising:
defining a plurality of action units, each action unit including first facial pose data and an activation;
calibrating said first facial pose data using calibration pose data derived from a plurality of captured calibration performances, each calibration performance of said plurality of captured calibration performances corresponding with said each action unit;
deriving second facial pose data from another calibration performance of said plurality of captured calibration performances;
determining a plurality of weights, each weight of said plurality of weights uniquely corresponding to said each action unit, said plurality of weights characterizing a weighted combination of said first facial pose data, said weighted combination approximating said second facial pose data; and
recalibrating said first facial pose data and said activation using input user adjustments.
21. A system for retargeting facial motion capture data to a digital facial model, the system comprising:
a FACS module to manage a plurality of action units;
a calibration module to calibrate each action unit of the plurality of action units via an actor's performance; and
a tuning interface module to generate recalibrated action units for said FACS module in accordance with input user adjustments to a facial animation frame.
22. The system of claim 21, wherein said animation module includes
a rigging unit to generate said digital facial model.
23. The system of claim 22, wherein said rigging unit generates at least one fascia layer on said digital facial model.
24. The system of claim 23, wherein said animation module includes
a transfer module to apply said at least one weighted activation to said at least one fascia layer.
25. The system of claim 21, wherein said tuning interface module includes
a frame selection unit to select said facial animation frame for tuning.
26. (canceled)
27. A method of digital facial animation, the method comprising:
defining a plurality of action units in a FACS matrix;
calibrating each action unit of said plurality of action units via an actor's performance;
capturing facial motion data;
labeling said facial motion data;
stabilizing said facial motion data;
cleaning said facial motion data using said FACS matrix;
normalizing said facial motion data;
retargeting said facial motion data onto a digital facial model using said FACS matrix; and
recalibrating using multidimensional tuning of said FACS matrix.
28. The method of claim 27, wherein multidimensional tuning uses tuning feedback provided by an animator to reduce the effects of incorrect mathematical solutions in the FACS matrix associated with poses in selected frames.
29. The method of claim 28, wherein the tuning feedback is performed by modifying weights resulting from the solutions in the FACS matrix associated with the poses in the selected frames.
30. The method of claim 29, wherein the modified weights are used to update and optimize the FACS matrix, resulting in the FACS matrix including action units based on actual marker ranges of motion as well as the modified weights.
US12/424,481 2006-04-24 2009-04-15 Performance driven facial animation Abandoned US20100045680A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US12/424,481 US20100045680A1 (en) 2006-04-24 2009-04-15 Performance driven facial animation
US12/986,095 US8279228B2 (en) 2006-04-24 2011-01-06 Performance driven facial animation

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US79479006P 2006-04-24 2006-04-24
US73944807A 2007-04-24 2007-04-24
US95672807A 2007-12-14 2007-12-14
US19876208A 2008-08-26 2008-08-26
US12/424,481 US20100045680A1 (en) 2006-04-24 2009-04-15 Performance driven facial animation

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US19876208A Continuation 2006-04-24 2008-08-26

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/986,095 Continuation US8279228B2 (en) 2006-04-24 2011-01-06 Performance driven facial animation

Publications (1)

Publication Number Publication Date
US20100045680A1 true US20100045680A1 (en) 2010-02-25

Family

ID=38656332

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/424,481 Abandoned US20100045680A1 (en) 2006-04-24 2009-04-15 Performance driven facial animation
US12/986,095 Active US8279228B2 (en) 2006-04-24 2011-01-06 Performance driven facial animation

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12/986,095 Active US8279228B2 (en) 2006-04-24 2011-01-06 Performance driven facial animation

Country Status (8)

Country Link
US (2) US20100045680A1 (en)
EP (1) EP2011087A4 (en)
JP (2) JP2009534774A (en)
CN (1) CN101473352A (en)
AU (1) AU2007244921B2 (en)
CA (1) CA2650796C (en)
NZ (1) NZ572324A (en)
WO (1) WO2007127743A2 (en)


Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102007062843A1 (en) * 2007-12-21 2009-06-25 Amedo Smart Tracking Solutions Gmbh Method for detecting movement
US8902232B2 (en) 2008-01-31 2014-12-02 University Of Southern California Facial performance synthesis using deformation driven polynomial displacement maps
US8447065B2 (en) * 2008-09-16 2013-05-21 Cyberlink Corp. Method of facial image reproduction and related device
CN101436312B (en) 2008-12-03 2011-04-06 腾讯科技(深圳)有限公司 Method and apparatus for generating video cartoon
US8334872B2 (en) * 2009-05-29 2012-12-18 Two Pic Mc Llc Inverse kinematics for motion-capture characters
WO2011034963A2 (en) * 2009-09-15 2011-03-24 Sony Corporation Combining multi-sensory inputs for digital animation
US20110065076A1 (en) * 2009-09-16 2011-03-17 Duffy Charles J Method and system for quantitative assessment of social cues sensitivity
US8777630B2 (en) * 2009-09-16 2014-07-15 Cerebral Assessment Systems, Inc. Method and system for quantitative assessment of facial emotion sensitivity
US20110066003A1 (en) * 2009-09-16 2011-03-17 Duffy Charles J Method and system for quantitative assessment of facial emotion nulling
US20110304629A1 (en) * 2010-06-09 2011-12-15 Microsoft Corporation Real-time animation of facial expressions
US8655810B2 (en) 2010-10-22 2014-02-18 Samsung Electronics Co., Ltd. Data processing apparatus and method for motion synthesis
CN103052973B (en) * 2011-07-12 2015-12-02 华为技术有限公司 Generate method and the device of body animation
US20140168204A1 (en) * 2012-12-13 2014-06-19 Microsoft Corporation Model based video projection
CN104123735B (en) * 2014-07-24 2017-05-10 无锡梵天信息技术股份有限公司 Method for blending multiple actions
KR101788709B1 (en) * 2014-08-25 2017-10-20 주식회사 룩시드랩스 Method, system and non-transitory computer-readable recording medium for recognizing facial expression
CN107403170A (en) * 2017-08-08 2017-11-28 灵然创智(天津)动画科技发展有限公司 A kind of seizure point positioning system based on facial seizure system
US11508107B2 (en) 2018-02-26 2022-11-22 Didimo, Inc. Additional developments to the automatic rig creation process
US10796468B2 (en) 2018-02-26 2020-10-06 Didimo, Inc. Automatic rig creation process
US11062494B2 (en) 2018-03-06 2021-07-13 Didimo, Inc. Electronic messaging utilizing animatable 3D models
US11741650B2 (en) 2018-03-06 2023-08-29 Didimo, Inc. Advanced electronic messaging utilizing animatable 3D models
TWI694384B (en) * 2018-06-07 2020-05-21 鴻海精密工業股份有限公司 Storage device, electronic device and method for processing face image
US11645800B2 (en) 2019-08-29 2023-05-09 Didimo, Inc. Advanced systems and methods for automatically generating an animatable object from various types of user input
US11182945B2 (en) 2019-08-29 2021-11-23 Didimo, Inc. Automatically generating an animatable object from various types of user input
US11341702B2 (en) 2020-09-10 2022-05-24 Unity Technologies Sf Systems and methods for data bundles in computer animation
US11875504B2 (en) 2020-09-10 2024-01-16 Unity Technologies Sf Systems and methods for building a muscle-to-skin transformation in computer animation
WO2022055365A1 (en) * 2020-09-10 2022-03-17 Weta Digital Limited Systems and methods for building a muscle-to-skin transformation in computer animation
JPWO2022249539A1 (en) * 2021-05-26 2022-12-01


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6285794B1 (en) * 1998-04-17 2001-09-04 Adobe Systems Incorporated Compression and editing of movies by multi-image morphing
JP2003044873A (en) * 2001-08-01 2003-02-14 Univ Waseda Method for generating and deforming three-dimensional model of face
GB0128863D0 (en) 2001-12-01 2002-01-23 Univ London Performance-driven facial animation techniques and media products generated therefrom
US8013852B2 (en) * 2002-08-02 2011-09-06 Honda Giken Kogyo Kabushiki Kaisha Anthropometry-based skeleton fitting

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6232965B1 (en) * 1994-11-30 2001-05-15 California Institute Of Technology Method and apparatus for synthesizing realistic animations of a human speaking using a computer
US6163322A (en) * 1998-01-19 2000-12-19 Taarna Studios Inc. Method and apparatus for providing real-time animation utilizing a database of postures
US6735566B1 (en) * 1998-10-09 2004-05-11 Mitsubishi Electric Research Laboratories, Inc. Generating realistic facial animation from speech
US20040095352A1 (en) * 2000-11-27 2004-05-20 Ding Huang Modeling object interactions and facial expressions
US20040179013A1 (en) * 2003-03-13 2004-09-16 Sony Corporation System and method for animating a digital facial model
US7068277B2 (en) * 2003-03-13 2006-06-27 Sony Corporation System and method for animating a digital facial model
US20050248582A1 (en) * 2004-05-06 2005-11-10 Pixar Dynamic wrinkle mapping
US20060071934A1 (en) * 2004-10-01 2006-04-06 Sony Corporation System and method for tracking facial muscle and eye motion for computer graphics animation

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100259683A1 (en) * 2009-04-08 2010-10-14 Nokia Corporation Method, Apparatus, and Computer Program Product for Vector Video Retargeting
US20140085315A1 (en) * 2009-12-21 2014-03-27 Lucasfilm Entertainment Company, Ltd. Combining shapes for animation
US9183660B2 (en) * 2009-12-21 2015-11-10 Lucasfilm Entertainment Company Ltd. Combining shapes for animation
US20140035929A1 (en) * 2012-08-01 2014-02-06 Disney Enterprises, Inc. Content retargeting using facial layers
US9245176B2 (en) * 2012-08-01 2016-01-26 Disney Enterprises, Inc. Content retargeting using facial layers
WO2018031946A1 (en) * 2016-08-11 2018-02-15 MetaMason, Inc. Customized cpap masks and related modeling algorithms
CN106504308A (en) * 2016-10-27 2017-03-15 天津大学 Face three-dimensional animation generation method based on 4 standards of MPEG
KR20210082203A (en) * 2018-12-05 2021-07-02 소니그룹주식회사 Emulation of hand-drawn lines in CG animation
KR102561020B1 (en) * 2018-12-05 2023-07-31 소니그룹주식회사 Emulation of hand-drawn lines in CG animation
US11763507B2 (en) * 2018-12-05 2023-09-19 Sony Group Corporation Emulating hand-drawn lines in CG animation
US20220198959A1 (en) * 2019-03-22 2022-06-23 Essilor International Device for simulating a physiological behaviour of a mammal using a virtual mammal, process and computer program
US11393150B2 (en) * 2020-07-02 2022-07-19 Unity Technologies Sf Generating an animation rig for use in animating a computer-generated character based on facial scans of an actor and a muscle model
US11393149B2 (en) * 2020-07-02 2022-07-19 Unity Technologies Sf Generating an animation rig for use in animating a computer-generated character based on facial scans of an actor and a muscle model
US20230102682A1 (en) * 2021-09-30 2023-03-30 Intuit Inc. Large pose facial recognition based on 3d facial model
US12080028B2 (en) * 2021-09-30 2024-09-03 Intuit Inc. Large pose facial recognition based on 3D facial model

Also Published As

Publication number Publication date
EP2011087A2 (en) 2009-01-07
US20110175921A1 (en) 2011-07-21
WO2007127743A3 (en) 2008-06-26
WO2007127743A2 (en) 2007-11-08
NZ572324A (en) 2011-10-28
EP2011087A4 (en) 2010-02-24
JP2009534774A (en) 2009-09-24
US8279228B2 (en) 2012-10-02
CA2650796C (en) 2014-09-16
JP2013054761A (en) 2013-03-21
CN101473352A (en) 2009-07-01
JP5344358B2 (en) 2013-11-20
AU2007244921B2 (en) 2012-05-17
AU2007244921A1 (en) 2007-11-08
CA2650796A1 (en) 2007-11-08


Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION