EP0919031A4 - Method and system for scripting interactive animated actors - Google Patents

Method and system for scripting interactive animated actors

Info

Publication number
EP0919031A4
Authority
EP
European Patent Office
Prior art keywords
actor
actors
author
actions
behavior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP97935290A
Other languages
German (de)
English (en)
Other versions
EP0919031A1 (fr)
Inventor
Kenneth Perlin
Athomas Goldberg
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
New York University NYU
Original Assignee
New York University NYU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by New York University NYU filed Critical New York University NYU
Publication of EP0919031A1
Publication of EP0919031A4

Classifications

    • A63F13/10
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/12
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/45 Controlling the progress of the video game
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F2300/53 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of basic data processing
    • A63F2300/534 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of basic data processing for network load management, e.g. bandwidth optimization, latency reduction
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/6009 Methods for processing data by generating or executing the game program for importing or creating game content, e.g. authoring tools during game development, adapting content to different platforms, use of a scripting language to create content
    • A63F2300/6018 Methods for processing data by generating or executing the game program for importing or creating game content, e.g. authoring tools during game development, adapting content to different platforms, use of a scripting language to create content where the game content is authored by the player, e.g. level editor or by game device at runtime, e.g. level is created from music data on CD
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/65 Methods for processing data by generating or executing the game program for computing the condition of a game character
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6607 Methods for processing data by generating or executing the game program for rendering three dimensional images for animating game characters, e.g. skeleton kinematics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2213/00 Indexing scheme for animation
    • G06T2213/12 Rule based animation

Definitions

  • the present invention relates to a method and a system for creating real-time, behavior-based animated actors.
  • Cinema is a medium that can suspend disbelief.
  • the audience enjoys the psychological illusion that fictional characters have an internal life. When this is done properly, the characters can take the audience on a compelling emotional journey.
  • cinema is a linear medium; for any given film, the audience's journey is always the same.
  • the experience is inevitably a passive one as the audience's reactions can have no effect on the course of events.
  • the present invention takes these notions further, in that it supports autonomous figures that do not directly represent any participant.
  • the "Alive” system of P. Maes et al. (The Alive System: Full Body Interaction wi th Autonomous Agents in Computer Animation' 95 Conference, Switzerland, April 1995 IEEE Press, pages 11-18) focuses on self-organizing embodied agents which are capable of making inferences and of learning from their experiences. Instead of maximizing the author's ability to express personality, the "Alive” system uses ethological mechanisms to maximize the actor's ability to reorganize its own personality, based on its own perception and accumulated experience.
  • the present invention is directed to the problem of building believable animated characters that respond to users and to each other in real-time, with consistent personalities, properly changing moods and without mechanical repetition, while always maintaining the goals and intentions of the author.
  • An object of the method and system according to the present invention is to enable authors to construct various aspects of an interactive application.
  • the present invention provides tools which are intuitive to use, allow for the creation of rich, compelling content and produce behavior at run-time which is consistent with the author's vision and intentions.
  • the animated actors are able to respond to a variety of user- interactions in ways that are both appropriate and non-repetitive.
  • the present invention enables multiple actors to work together while faithfully carrying out the author's intentions, allowing the author to control the choices the actors make and how the actors move their bodies.
  • the system of the present invention provides an integrated set of tools for authoring the "minds" and "bodies" of interactive actors.
  • animated actors follow scripts, sets of author- defined rules governing their behavior, which are used to determine the appropriate animated actions to perform at any given time.
  • the system of the present invention also includes a behavioral architecture that supports author-directed, multi- actor coordination as well as run- time control of actor behavior for the creation of user-directed actors or avatars.
  • the system uses a plain-language, or "english-style" scripting language and a network distribution model to enable creative experts, who are not primarily programmers, to create powerful interactive applications.
  • the present invention provides a method and system for manipulating the geometry of one or more animated characters displayed in real-time in accordance with an actor behavior model.
  • the present invention employs an actor behavior model similar to that proposed by B. Blumberg et al., Multi-Level Direction of Autonomous Creatures for Real-Time Virtual Environments, Computer Graphics (SIGGRAPH '95 Proceedings), 30(3):47-54, 1995.
  • the system of the present invention comprises two subsystems.
  • the first subsystem is an Animation Engine that uses procedural techniques to enable authors to create layered, continuous, non-repetitive motions and smooth transitions between motions.
  • the Animation Engine utilizes descriptions of atomic animated actions (such as walk or wave) to manipulate the geometry of the animated actor.
  • the second subsystem is a Behavior Engine that enables authors to create sophisticated rules governing how actors communicate, change, and make decisions.
  • the Behavior Engine is responsible for both higher-level capabilities (such as going to the store or engaging another actor in a conversation) and determining which animations to trigger.
  • the Behavior Engine also maintains the internal model of the actor, representing various aspects of an actor's moods, goals and personality.
  • the Behavior Engine constitutes the "mind" of the actor.
  • an actor's movements and behavior are computed by iterating an "update cycle" that alternates between the Animation and Behavior Engines.
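  • Purely as an illustration (not the patent's implementation), the alternating update cycle described above can be sketched in Python; the class names and the one-call-per-engine structure are assumptions:

    import time

    class BehaviorEngine:
        """Stub "mind": decides which actions are currently active (illustrative only)."""
        def update(self, dt):
            return {"walk": 1.0}                     # hypothetical: walking at full weight

    class AnimationEngine:
        """Stub "body": turns active actions into DOF values (illustrative only)."""
        def update(self, active_actions, dt):
            return {"left_knee_bend": 0.3 * active_actions.get("walk", 0.0)}

    class Actor:
        def __init__(self):
            self.mind = BehaviorEngine()
            self.body = AnimationEngine()

        def update_cycle(self, dt):
            # Alternate between the two engines, as described above:
            # the mind chooses what to do, the body computes how to move.
            active = self.mind.update(dt)
            dofs = self.body.update(active, dt)
            return dofs                              # a renderer would pose the geometry here

    if __name__ == "__main__":
        actor = Actor()
        for frame in range(3):                       # three iterations of the update cycle
            print(actor.update_cycle(1.0 / 30.0))
            time.sleep(1.0 / 30.0)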
  • Fig. 1 shows a block diagram of the behavior model of an animated actor, in accordance with the present invention.
  • Fig. 2 illustrates the flexing of a deformable mesh.
  • Fig. 3 illustrates the use of a buffering action.
  • Fig. 4 shows a block diagram of the behavior model of an animated actor including a blackboard for communication with other actors.
  • Fig. 5 shows a block diagram of the behavior model of an animated actor including a user interface allowing users to interact with the actor at different semantic levels.
  • Fig. 6 shows a block diagram of a model for distributing components of the system of the present invention over a Wide Area Network.
  • Figs. 7a and 7b illustrate two renderings of the same animated actor performing the same action.
  • Fig. 1 is a block diagram of a behavior model describing the major functional components of an animated actor's behavior.
  • the behavior model comprises a geometry model 10 that is manipulated in real-time, an Animation Engine 20 which utilizes descriptions of atomic animated actions (such as "walk” or “wave") to manipulate the geometry, and a Behavior Engine 30 which is responsible for higher-level capabilities, such as "going to the store," or engaging another actor in a conversation, and decisions about which animations to trigger.
  • the Behavior Engine 30 maintains the internal model of the actor, representing various aspects of the actor's moods, goals and personality.
  • the Behavior Engine 30 constitutes the “mind” of the actor, whereas the Animation Engine constitutes the "body” of the actor.
  • an actor's movements and behavior are computed by iterating an update cycle that alternates between the Animation and Behavior Engines.
  • the Animation Engine 20 provides tools for manipulating the geometry 10 by generating and interactively blending realistic gestures and motions.
  • the Animation Engine controls the body of the actor. Actors are able to move from one animated motion to another in a smooth and natural fashion in real time. The motions can be layered and blended to convey different moods and personalities.
  • Such an Animation Engine is described in U.S. Patent Application Serial No. 08/234,799, filed August 2, 1994, entitled GESTURE SYNTHESIZER FOR IMAGE ANIMATION, and incorporated herein by reference in its entirety, and U.S. Patent Application Serial No. 08/511,737, filed August 7, 1995, entitled COMPUTER GENERATED INTERACTION OF CHARACTERS IN IMAGE ANIMATION, and incorporated herein by reference in its entirety.
  • an author is able to build any of a variety of articulated characters.
  • Actors can be given the form of humans, animals, animated objects or imaginary creatures.
  • the geometric model of an actor consists of parts that are connected by rotational joints.
  • the model can be deformable, which is useful for muscle flexing or facial expressions. Such deformation is illustrated in Fig. 2.
  • a method which can be used in conjunction with the present invention for generating such deformations in animated actors is described in J. Chadwick et al., Layered Construction for Deformable Animated Characters, Computer Graphics (SIGGRAPH '89 Proceedings), 23(3):243-252, 1989.
  • A degree of freedom (DOF) is a parameter of the geometric model that the author can control.
  • There are various types of DOFs that an author can control. The simplest are the three rotational axes between any two connected parts of the geometric model 10. Examples of actions involving such DOFs are head turning and knee bending. The author can also position a part, such as a hand or a foot. The system automatically does the necessary inverse kinematics to preserve the kinematic chain. From the author's point of view, the x, y, z coordinates of the part are each directly available as a DOF.
  • the author can also specify part mesh deformations as DOFs.
  • the author provides a "deformation target," a version of the model (or just some parts of the model) in which some vertices have been moved.
  • the system detects which vertices have been moved, and builds a data structure containing the x,y,z displacement for each such vertex.
  • For example, if the author provides a smiling face as a deformation target, he can then declare SMILE to be a DOF.
  • the author can then specify various values for SMILE between 0 (no smile) and 1 (full smile).
  • the system handles the necessary interpolation between mesh vertices.
  • the author can also specify negative values for SMILE, to make the face frown.
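  • As a hedged illustration of this deformation-target mechanism (the vertex data and function names below are invented for the example, not taken from the patent), a scalar DOF such as SMILE can simply scale the stored per-vertex displacements:

    neutral = {0: (0.0, 0.0, 0.0), 1: (1.0, 0.0, 0.0)}          # vertex id -> (x, y, z)
    smile_target = {0: (0.0, 0.2, 0.0), 1: (1.0, 0.3, 0.1)}     # authored deformation target

    # Precompute displacements only for the vertices that actually moved.
    displacements = {
        v: tuple(t - n for n, t in zip(neutral[v], smile_target[v]))
        for v in neutral
        if neutral[v] != smile_target[v]
    }

    def apply_smile(value):
        """Return the deformed mesh for SMILE = value; 1 is a full smile, 0 is
        neutral, and negative values push the vertices the other way (a frown)."""
        mesh = dict(neutral)
        for v, (dx, dy, dz) in displacements.items():
            x, y, z = mesh[v]
            mesh[v] = (x + value * dx, y + value * dy, z + value * dz)
        return mesh

    print(apply_smile(0.5))    # half smile
    print(apply_smile(-0.5))   # mild frown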
  • the author defines an action as a list of DOFs, together with a range and a time-varying expression for each DOF.
  • Most actions are constructed by varying a few DOFs over time via combinations of sine, cosine and coherent noise. For example, sine and cosine signals are used together within actions to impart elliptical rotations.
  • Coherent noise is used in the method and system of the present invention to enhance realism.
  • Using noise in limb movements allows authors to give the impression of naturalistic motions without the need to incorporate complex simulation models.
  • Coherent noise can be used to convey the small motions of a character trying to maintain balance, the controlled randomness of eye blinking, or the way a character's gaze wanders around a room.
  • viewers do not perceive the mechanism itself but rather perceive some statistics of the motion produced by the mechanism.
  • coherent noise is applied in a way that matches those statistics, the actor's movements are believable.
  • Use of noise to produce realistic animated motion is described in U.S. Patent Application Serial No. 08/234,799, filed August 2, 1994, entitled GESTURE SYNTHESIZER FOR IMAGE ANIMATION, and incorporated herein by reference in its entirety, and in U.S.
  • the author can also import keyframed animation from commercial modeling systems such as Alias or Softimage.
  • the system internally converts these into actions that specify time-varying values for various DOFs. To the rest of the system, these imported actions look identical to any other action.
  • an author uses DOFs to build actions.
  • An exemplary syntax for expressing actions will now be described.
  • the upper arm movement is controlled by N0
  • the lower arm movement is controlled by N1.
  • the upper arm will, on average, swing back and forth about the shoulder once per second
  • the lower arm will, on average, swing back and forth about the elbow twice per second.
  • the hand, which is controlled by N2, makes small, rapid rotations about the wrist.
  • the exemplary frequency combination discussed imparts motion that appears natural.
  • the 2:1 frequency ratio reflects the fact that the lower arm has about half the mass of the total arm and thus tends to swing back and forth about twice as frequently.
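  • The following Python sketch illustrates the idea of a noise-driven action; the noise function, amplitudes and DOF names are assumptions chosen only to mirror the frequencies described above (upper arm about once per second, lower arm about twice per second, small rapid wrist rotations):

    import math
    import random

    random.seed(0)
    _lattice = [random.uniform(-1.0, 1.0) for _ in range(256)]

    def coherent_noise(t):
        """Tiny 1-D value-noise stand-in for the coherent-noise signals the text
        calls N0/N1/N2; the real system's noise function may well differ."""
        i = int(math.floor(t))
        frac = t - i
        a, b = _lattice[i % 256], _lattice[(i + 1) % 256]
        s = (1.0 - math.cos(math.pi * frac)) / 2.0       # cosine ease between lattice points
        return a * (1.0 - s) + b * s

    def arm_swing_dofs(t):
        """Hypothetical action: DOF values (degrees) at time t (seconds). The
        amplitudes and DOF names are illustrative assumptions; the frequencies
        mirror the description above."""
        return {
            "shoulder_swing": 30.0 * coherent_noise(1.0 * t),         # N0: about once per second
            "elbow_swing":    25.0 * coherent_noise(2.0 * t + 100),   # N1: about twice per second
            "wrist_rotate":    8.0 * coherent_noise(6.0 * t + 200),   # N2: small, rapid rotations
        }

    for frame in range(3):
        print(arm_swing_dofs(frame / 30.0))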
  • Animated actors generated in accordance with the present invention can do many things at once and these simultaneous activities can interact in different ways. For example, an author may want an actor who is waving to momentarily scratch his head with the same hand. It would be incorrect for the waving movement to continue during the time the actor is scratching his head. The result could be strange. For example, the actor might try feebly to wave his arm while making vague scratching motions about his head. In this case, it is desirable to decrease the amount of waving activity as the amount of scratching activity increases. In other words, some sort of ease-in/out transition between motions is needed. However, if the author wants an actor to scratch his head for a moment while walking downstage, it would be incorrect if the system were to force the actor to stop walking every time he scratched his head. In this case, an ease-in/out transition would be inappropriate.
  • the difference between the aforementioned examples is that the former situation involves two actions which cannot coexist, whereas the latter situation involves two actions that can gracefully coexist.
  • the present invention provides a mechanism which allows an author, in an easy and unambiguous way, to make distinctions between actions which cannot coexist and actions that can gracefully coexist. To accomplish this, the system employs a set of rules.
  • Motion can be treated as being layered, analogously to composited images which can be layered back to front.
  • an image maps pixels to colors
  • an action maps DOFs to values.
  • the system of the present invention allows an author to place actions in different groups, which groups are organized in a "back-to-front” order. Also, the system allows the author to "select" any action.
  • Actions which are in the same group compete with each other. At any moment, every action possesses some weight, or opacity. When an action is selected, its weight transitions smoothly from zero to one.
  • actions which compete with each other should be placed by the author in the same group.
  • Some actions, such as walking, are fairly global in that they involve many DOFs throughout the body.
  • Others, such as head scratching, are fairly localized and involve relatively few DOFs.
  • the author should place more global actions in the rear-most groups. More localized actions should be placed in front of the global actions.
  • some actions are relatively persistent, while others are generally done fleetingly. Groups of very fleeting or temporary action (such as scratching or coughing) should be placed still further in front.
  • the present invention makes it easy to specify intuitively reasonable action relationships. For example, suppose the author specifies the following action grouping:
  • the grouping structure of the present invention allows the author to easily impart to the actor many behavioral rules. For example, given the above exemplary action groupings, the actor "knows” to wave with either one hand or the other but not both at once. The actor also "knows” he doesn't need to stop walking in order to wave or to scratch his head and “knows” that after he's done scratching he can resume whatever else he was doing with that arm.
  • the run-time system must assign a unique value to each DOF for the model, then move the model into place and render it.
  • the procedure for computing these DOFs will now be described.
  • a weighted sum is taken over the contribution of each action to each DOF.
  • the values for all DOFs in every group are then composited, proceeding from back to front. The result is a single value for each DOF, which is then used to move the model into place.
  • This algorithm should also correctly composite inverse kinematic DOFs over direct rotational DOFs. DOF compositing is described in U.S. Patent Application Serial No.
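  • A minimal sketch of this compositing step follows, assuming (as an illustration only, not the patent's algorithm) that each group contributes the weighted average of its actions' DOF values and that a group's total weight for a DOF acts like the opacity of an image layer:

    def composite_dofs(groups):
        """Composite action groups back to front (details are assumptions).

        `groups` is ordered back to front; each group is a list of
        (weight, dof_values) pairs, one pair per action, where dof_values
        maps a DOF name to that action's value for it."""
        result = {}
        for group in groups:                           # back to front
            totals, sums = {}, {}
            for weight, dofs in group:
                for dof, value in dofs.items():
                    totals[dof] = totals.get(dof, 0.0) + weight
                    sums[dof] = sums.get(dof, 0.0) + weight * value
            for dof, total in totals.items():
                if total <= 0.0:
                    continue
                group_value = sums[dof] / total        # weighted average within the group
                opacity = min(total, 1.0)              # group's "opacity" for this DOF
                base = result.get(dof, group_value)
                result[dof] = (1.0 - opacity) * base + opacity * group_value
        return result

    # Example: a rear "walk" action swings both arms; a front "wave" action,
    # fading in at weight 0.7, takes over most of the left arm's swing DOF.
    rear = [(1.0, {"l_arm_swing": 20.0, "r_arm_swing": 20.0})]
    front = [(0.7, {"l_arm_swing": 80.0})]
    print(composite_dofs([rear, front]))               # l_arm_swing -> 62.0, r_arm_swing -> 20.0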
  • the system of the present invention provides the author with tools to easily synchronize movements of the same DOF across actions.
  • Transitions between actions that must have different tempos are handled using a morphing approach. During the time of the transition, the speed of a master clock is continuously varied from the first tempo to the second tempo, so that the phases of the two actions are always aligned.
  • the system allows the author to insert a buffering action. For example, suppose an actor transitions from having his hands behind his back to crossing his arms over his chest. Because DOFs are combined linearly, the actor would pass his hands through his body.
  • the system of the present invention allows the author to avoid such situations by declaring that some action in a group can be a buffering action for another. This is implemented by building a finite state machine that forces the actor to pass through this buffering action when entering or leaving the troublesome action.
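  • The buffering mechanism can be sketched as a tiny state machine, purely as an illustration (the action names and single-step transition timing are assumptions, not the patent's implementation):

    class BufferedGroup:
        """Sketch of the buffering-action state machine described above."""

        def __init__(self, buffers):
            self.buffers = buffers        # troublesome action -> its buffering action
            self.current = None           # action currently selected in this group
            self.pending = None           # target action waiting behind a buffer

        def select(self, action):
            # Entering or leaving a buffered action must pass through its buffer.
            via = self.buffers.get(self.current) or self.buffers.get(action)
            if via and via not in (self.current, action):
                self.pending = action
                self.current = via
            else:
                self.current, self.pending = action, None

        def step(self):
            # Called once per update cycle: complete any buffered transition.
            if self.pending is not None:
                self.current, self.pending = self.pending, None
            return self.current

    group = BufferedGroup({"arms crossed": "hands at sides"})
    group.select("hands behind back")
    group.step()
    group.select("arms crossed")          # must pass through the buffering action
    print(group.current)                  # -> hands at sides
    print(group.step())                   # -> arms crossed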
  • a goal of the Behavior Engine is to help the author in the most expressive way possible.
  • the Behavior Engine provides several authoring tools for guiding an actor's behavioral choices.
  • the most basic tool is a simple parallel scripting system.
  • an actor will be executing a number of scripts in parallel.
  • the most common operation is to select one item from a list of items. These items are usually other scripts or actions for the actor (or for some other actor) to perform.
  • the Behavior Engine in accordance with the present invention provides the author with "probability shaping" tools for guiding an actor's choices.
  • The operation of the Behavior Engine will now be described, starting with a description of the basic parallel scripting structure followed by a description of the probability shaping tools.
  • actions are the mechanism for the continuous control of the movements made by an actor's body.
  • Scripts are provided as a mechanism for the discrete control of the decisions made by the actor's mind. It is to be assumed that the user will be making unexpected responses. For this reason, it is not sufficient to provide the author with a tool for scripting long linear sequences. Rather, the system of the present invention allows the author to create layers of choices, from more global and slowly changing plans, to more localized and rapidly changing activities, that take into account the continuously changing state of the actor's environment, and the unexpected behavior of the human participant.
  • the system of the present invention allows the author to organize scripts into groups. However, unlike actions, when a script within a group is selected, any other script that was running in the same group immediately stops. In any group at any given moment, exactly one script is running. Generally, the author should organize into the same group those scripts that represent alternative modes that an actor can be in at some level of abstraction. For example, the group of activities that an actor performs during his day might be:
  • the lowest-level scripts are generally those that are most physical. They tend to include actual body actions, in response to a user's actions and to the state of higher-level scripts.
  • the behavior model of an actor might contain the following groups of scripts, in order, within a larger set of scripts:
  • a script is organized as a sequence of clauses.
  • the system runs the clauses sequentially for the selected script in each group.
  • the system may run the same clause that it ran in the previous cycle, or it may move on to the next clause.
  • the author is provided with tools to "hold" clauses in response to events or timeouts.
  • the two primary functions of a script clause are: 1) to trigger other actions or scripts and 2) to check, create or modify the actor's properties.
  • phrases in quotes represent scripts or actions. Each of these scripts might, in turn, call other scripts and/or actions.
  • the other information (continue, etc.) is used to control the timing of the scene.
  • the "enter” script is activated first.
  • the "enter” script can for example, cause the actor to walk to center stage.
  • the "enter” script and “greeting” script are now running in parallel.
  • the “greeting” script waits four seconds before activating the "turn to camera” script. This tells the actor to turn to face the specified target, which in this case is the camera.
  • the “greeting” script then waits one second, before instructing the actor to begin the "wave” and "talk” actions.
  • the script waits another 3 seconds before activating the "sit” action, during which time the "wave” action has ended, returning to the default "No Hand Gesture” action in its group. Meanwhile, the "talk” action continues for another three seconds after the actor sits. Two seconds later, the actor bows to the camera, waits another two seconds and then leaves.
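  • The timeline just described can be modelled in Python as follows; this is an illustration only, not the patent's english-style scripting syntax, and the generator structure, scheduler and action names are assumptions:

    def enter(trigger):
        trigger("walk to center stage")
        yield 0                                    # enter's work is done

    def greeting(trigger):
        yield 4; trigger("turn to camera")
        yield 1; trigger("wave"); trigger("talk")
        yield 3; trigger("sit")                    # "wave" has already ended by now
        yield 3; trigger("stop talking")
        yield 2; trigger("bow to camera")
        yield 2; trigger("leave")

    def run_parallel(script_fns):
        """Run several clause-based scripts in parallel; each yield is a hold (seconds)."""
        clock = 0.0
        def trigger(what):
            print(f"t={clock:>4.1f}s  {what}")
        pending = [(0.0, fn(trigger)) for fn in script_fns]
        while pending:
            pending.sort(key=lambda p: p[0])
            clock, gen = pending.pop(0)
            try:
                wait = next(gen)                   # run the next clause
                pending.append((clock + wait, gen))
            except StopIteration:
                pass

    run_parallel([enter, greeting])                # prints the scene timeline described above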
  • the present invention provides a number of tools for generating the more non-deterministic behavior required for interactive non-linear applications.
  • An author may specify that an actor choose randomly from a set of actions or scripts, as in the following example:
  • weights associated with each item in the choice are used to affect the probability of each item being chosen, as in the following example:
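  • The scripting-language example itself is not reproduced here; purely as an illustration, a weighted choice of this kind can be sketched in Python (the option names and weights are invented):

    import random

    def weighted_choice(options):
        """Pick one (item, weight) entry with probability proportional to its weight."""
        total = sum(weight for _, weight in options)
        r = random.uniform(0.0, total)
        for item, weight in options:
            r -= weight
            if r <= 0.0:
                return item
        return options[-1][0]                      # guard against rounding at the very end

    # e.g. an actor who usually sits, sometimes paces and rarely stretches
    print(weighted_choice([("sit", 0.6), ("pace", 0.3), ("stretch", 0.1)]))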
  • the method and system of the present invention allows the author to have an actor's decisions reflect the actor's mental state and the state of the actor's environment.
  • An actor's decision about what to do may depend on any number of factors, including mood, time of day, which other actors are in proximity and what they're doing, what the user is doing, etc.
  • the present invention allows authors to create decision rules which take information about an actor and his environment and use this to determine the actor's tendencies toward certain choices over others.
  • the author can specify what information is relevant to the decision and how this information influences the weight associated with each choice. As this information changes, the actor's tendency to make certain choices over others will change as well.
  • the information about an actor and his relationship to his environment are stored in the system as an actor's properties. These properties may be used to describe aspects of an actor's personality such as assertiveness, temperament or dexterity, an actor's current mood such as happiness or alertness, or his relationship to other actors or objects such as his sympathy toward the user or his attitude about dealing with a particular object.
  • These properties can be specified by the author either when the actor is created, or within a clause of a script, to reflect a change in the actor due to some action or event. The latter case is illustrated in the following example:
  • the author specifies how the actor's behavior is reflected in his personality by reducing the actor's appetite after eating.
  • An author can also use properties to provide information about any aspect of an actor's environment, including inanimate props and scenery and even the scripts and actions an actor chooses from.
  • the author can assign properties to actions and scripts describing the various semantic information associated with them, such as aggressiveness, formality, etc.
  • the author can then use these values in the construction of decision rules. Decision rules allow actors to make decisions that reflect the state of the world the author has created.
  • When a decision rule is applied, a list of objects is passed to it.
  • the system uses the decision rule to generate a weight between zero and one for each object. This list can then be used to generate a weighted decision.
  • Each decision rule consists of a list of author-specified factors, i.e., pieces of information that will influence the actor's decision. Each of these factors is assigned a weight which the author uses to control how much influence that piece of information has upon the decision. This information can simply be the value of a property of an object as in the following example:
  • the decision rule will use the "Charisma” and "Intelligence” properties of the three actors to generate a weight for each actor that will be used in the decision.
  • the author has specified that the value of an actor's "Charisma” will have the greatest influence in determining that weight, whereas the value of an actor's "Intelligence” will have a lesser influence.
  • the influence is optional and defaults to 1.0 if unspecified.
  • the final weight is determined in accordance with the following equation: weight = (f1·i1 + f2·i2 + ... + fn·in) / (i1 + i2 + ... + in)
  • where f1, f2, ..., fn are factors 1, 2, ..., n, and i1, i2, ..., in are influences 1, 2, ..., n.
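  • A small Python sketch of this weighted-factor rule follows; the property values and the 1.0/0.5 influences are hypothetical, chosen only to mirror the Charisma/Intelligence example above:

    def decision_weight(factors):
        """factors: list of (factor_value, influence) pairs; an unspecified
        influence would default to 1.0, as noted above."""
        numerator = sum(f * i for f, i in factors)
        denominator = sum(i for _, i in factors)
        return numerator / denominator if denominator else 0.0

    # Hypothetical actors and property values: Charisma (influence 1.0)
    # matters more to this decision than Intelligence (influence 0.5).
    candidates = {
        "Otto":   {"Charisma": 0.9, "Intelligence": 0.4},
        "Gregor": {"Charisma": 0.3, "Intelligence": 0.8},
    }
    for name, props in candidates.items():
        w = decision_weight([(props["Charisma"], 1.0),
                             (props["Intelligence"], 0.5)])
        print(name, round(w, 3))                   # Otto 0.733, Gregor 0.467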
  • An author can also use the relationship between the actor and the various choices to influence a decision, by making "fuzzy" comparisons between their properties. For example:
  • the author is comparing the actor's "Courage” property with the "Courage Level” property associated with the scripts "Fight" and "Flee”. If the actor's "Courage” equals the script's "Courage Level,” the decision rule will assign a weight of 1 to that choice. If the values are not equal, a weight between 0 and 1 will be assigned based on the difference between them, dropping to 0 when the difference is greater than the "within” range, in this case, 0.5. As the actor's "Courage” increases or decreases, so will the actor's tendency toward one option or the other.
  • a fuzzy comparison, such as that described above, entails comparing how close an Input Value comes to a Target Value (or Target Range).
  • the result of the comparison is 1 if the Input Value is at the Target Value (or within the Target Range), and drops to 0 at a distance of Spread from the Target Value.
  • the fuzzy comparison is implemented as follows:
  • y is the Fuzzy Value
  • w is a bell curve weighting kernel
  • a raised cosine function can be used for the bell curve weighting kernel, w.
  • a high and low spread may be specified, in which case input values greater than the target value (or range) will use the high spread in the calculation, while input values lower than the target value (or range) will apply the low spread.
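  • A hedged Python sketch of such a fuzzy comparison follows; the raised-cosine kernel and separate high/low spreads follow the description above, but the exact fall-off shape and the handling of the target range are assumptions:

    import math

    def fuzzy_compare(value, target_low, target_high, spread_low, spread_high):
        """Return 1.0 inside the target range, falling to 0.0 at a distance of
        `spread` outside it, using a raised-cosine bell kernel; values below
        the range use spread_low, values above use spread_high."""
        if target_low <= value <= target_high:
            return 1.0
        if value < target_low:
            distance, spread = target_low - value, spread_low
        else:
            distance, spread = value - target_high, spread_high
        if distance >= spread:
            return 0.0
        return 0.5 * (1.0 + math.cos(math.pi * distance / spread))

    # e.g. an actor's Courage of 0.6 compared with a script's Courage Level of 0.9,
    # "within" a spread of 0.5 on either side:
    print(round(fuzzy_compare(0.6, 0.9, 0.9, 0.5, 0.5), 3))   # -> 0.345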
  • the returned value is then modified based on the type of fuzzy operation as follows:
  • An author may want an actor to choose from a set of options using different factors to judge different kinds of items.
  • a list of objects passed to the decision rule may be divided into subsets using author-defined criteria for inclusion.
  • the weights assigned to a given subset may be scaled, reflecting a preference for an entire group of choices over another. For example:
  • the preferred model is that the author is a director who can direct the drama via pre-written behavior rules.
  • all of the actors constitute a coordinated "cast", which in some sense is a single actor that happens to have multiple bodies.
  • the system of the present invention allows actors to modify each other's properties with the same freedom with which an actor can modify his own properties. From the author's point of view, this is part of a single larger problem of authoring dramatically responsive group behavior. For example, if one actor tells a joke, the author may want the other actors to respond, favorably or not, to the punchline.
  • the blackboard 40 allows the actors to be coordinated, whether running on a single processor, on multiple processors or even across a network.
  • the author can also include user-interface specifications in an actor's scripts.
  • the system can generate widgets at run-time in response to the actor's behavior or to serve the needs of the current scene or interaction.
  • the user can employ these widgets to trigger actions and scripts at any level of the actor's behavioral hierarchy.
  • Directing the actions of one or more animated actors enables users to enter the virtual environment.
  • By making this interface a scriptable element, the present invention enables authors to more easily choreograph the interaction between the virtual actors and human participants.
  • a feature of the present invention is the ability to provide user interaction with the system at different semantic levels. This ability is illustrated in Fig. 5 which shows the behavioral model of an animated actor including a user interface 50.
  • the user interface 50 allows a user to interact with both the Behavior Engine 30 and Animation Engine 20 of an animated actor.
  • the result of a user's actions can cause changes in the system anywhere from high level scripts to low level actions.
  • the system of the present invention allows the author to give the user the right kind of control for every situation. If the user requires a very fine control over the actors' motor skills, the system allows the author to provide the user with direct access to the action level.
  • the system allows the author to let the user specify a set of gestures for the actor to use, but have the actor decide on the specific gestures from moment to moment.
  • the author may want to have the user directing large groups of actors, such as an acting company or an army, in which case he might have the user give the entire group directions and leave it to the individual actors to carry out those instructions. Since any level of the actor's behavior can be made accessible to the user, the author is free to vary the level of control at any point in the application.
  • the present invention provides a number of "english-style" scripting language extensions that make it easier for authors and artists to begin scripting interactive scenarios.
  • the scripting language is written as an extension of the system language. Thus, as users become more experienced they can easily migrate from scripting entirely using the high-level english-style syntax to extending the system through low-level algorithmic control.
  • the system of the present invention can be distributed over a network.
  • An exemplary embodiment of a system in accordance with the present invention is implemented as a set of distributed programs in UNIX, connected by TCP/IP socket connections, multicast protocols and UNIX pipes.
  • the participating processes can be running on any UNIX machines. This transport layer is hidden from the author.
  • All communication between participant processes is done by continually sending and receiving programs around the network. These are immediately parsed into byte code and executed.
  • There must be at least one routing process on every participating Local Area Network (LAN).
  • the router relays information among actors and renderer processes. For Wide Area Network (WAN) communication, the router opens sockets to routers at other LANs.
  • each actor maintains a complete copy of the blackboard information for all actors. If an actor's behavior state changes between the beginning and end of a time step, the changes are routed to all other actors.
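  • As an illustration of this replication scheme (class and property names are assumptions, and real routers and sockets are replaced by direct method calls), only the properties that changed during a time step are broadcast:

    class DistributedBlackboard:
        """Sketch: every site keeps a full copy of every actor's blackboard and
        forwards only the per-step changes to its peers."""

        def __init__(self):
            self.state = {}               # actor name -> {property: value}
            self.peers = []               # stand-ins for router/socket connections
            self._snapshot = {}

        def begin_step(self, actor):
            self._snapshot = dict(self.state.get(actor, {}))

        def end_step(self, actor):
            after = self.state.get(actor, {})
            changes = {k: v for k, v in after.items() if self._snapshot.get(k) != v}
            if changes:                   # route only what changed during this step
                for peer in self.peers:
                    peer.receive(actor, changes)

        def receive(self, actor, changes):
            self.state.setdefault(actor, {}).update(changes)

    lan_a, lan_b = DistributedBlackboard(), DistributedBlackboard()
    lan_a.peers.append(lan_b)
    lan_a.state["Otto"] = {}
    lan_a.begin_step("Otto")
    lan_a.state["Otto"]["mood"] = "cheerful"       # behavior state change during the step
    lan_a.end_step("Otto")
    print(lan_b.state)                             # {'Otto': {'mood': 'cheerful'}}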
  • Typical WAN latencies can be several seconds. This poses a problem for two virtual actors interacting over a distributed system. From the viewpoint of believability, some latency is acceptable for high level decisions but not for low level physical actions. For example, when one character waves at another, the second character can get away with pausing for a moment before responding. But two characters who are shaking hands cannot allow their respective hands to move through space independently of each other. The hands must be synchronized to at least the animation frame rate.
  • the Behavior Engine and the Animation Engine for an actor can be split across a WAN.
  • the Behavior and Animation Engines can communicate with each other through the blackboard.
  • the low-level DOF portion of the blackboard is allowed to contain different values at each LAN, while in all other respects the actor maintains a single global blackboard.
  • the Behavior Engine for each actor runs at only a single LAN, whereas the Animation Engine runs at each LAN.
  • When two characters must physically coordinate with each other, they use the local versions of their DOFs. In this way, an actor is always in a single Behavioral State everywhere on the WAN, even though at each LAN he might appear to be in a slightly different position. In a sense, the actor has one mind, but multiple bodies.
  • Fig. 6 shows a block diagram of a Wide Area Network distribution model for an exemplary embodiment of the system of the present invention.
  • a WAN 100 links three LANs 101, 102 and 103, in a known manner.
  • the WAN 100 can be the world wide web, for example.
  • On each LAN, one "mind”, or Behavior Engine, is executed for one of the three animated characters, whereas separate "bodies”, or Animation Engines, are executed for each of the three characters on each of the three LANs.
  • the various body renderings of an actor inhabit a parallel universe. Although these bodies may differ slightly in their position within their own universe, they are all consistent with the actor's single mind.
  • a researcher can write a standalone C program that links with the support library.
  • the program can pass string arguments such as "Gregor Sit” or "Otto Walk-To- Door” to an output function.
  • the standalone program can modify actors' behavior states.
  • the system of the present invention can also include several audio subsystems. Such subsystems are used for generating speech and/or music, allowing actors to follow musical cues, and generating ambient background noise.
  • the system of the present invention allows actors and users to interact with each other.
  • An example of a scene involving multiple actors involved in a social interaction with a user will now be described.
  • the actor executing the script randomly chooses one of the actors not controlled by the user, and turns to the chosen actor.
  • the actor then cues the other non-user actors to execute the "Listen To Joke" script, in which the actor chooses the appropriate gestures and body language that will give the appearance of listening attentively.
  • the actor narrows the list down to those actions that are reactive and conversational, or generic actions that can be used in any context.
  • the rule compares the "confidence" and “self control” of the actor to those assigned to each action, creating a weighted list favoring actions that match the fuzzy criteria. After choosing from the list, the actor will wait from 3 to 12 seconds before repeating the script and choosing another gesture.
  • the actor telling the joke then executes the "No Soap, Radio” script which contains a command to an external speech system to generate the text of the joke.
  • the actor executes the "Joke Gestures” script which, like the "Listen To Joke” script chooses appropriate gestures based on the actor's personality.
  • the actor executes the "React To Player” script in which the actor chooses an appropriate reaction to the player, depending on whether or not the player tells his actor to laugh. If he does, the joke teller laughs, either maliciously, if her sympathy for the player is low, or playfully, if her sympathy for the player is high. If the player's actor doesn't laugh, the joke teller executes the "Get It?" script. This script taunts the player until he gets mad and/or leaves .
  • the system of the present invention can also operate in conjunction with voice recognition.
  • an animated interactive embodied actor can respond to spoken statements and requests.
  • a voice recognition subsystem which can be used in conjunction with the system of the present invention is available from DialecTech.
  • untrained participants can play a game, such as "Simon Says”, with the actor. The actor will follow requests only if they are preceded by the words "Simon Says”. To make it more interesting, the actor can be programmed so that sometimes he also follows requests not preceded by "Simon Says", but then acts embarrassed at having been fooled. Such interaction increases the sense of psychological involvement by the participants. Participants appear to completely "buy into” the animated actor's presence.
  • a user can be represented as an embodied avatar, further enhancing the user's sense of fun, play, and involvement.
  • the participant is presented with a large rear projection of a room full of embodied conversational agents.
  • the system includes an overhead video camera which tracks the user's position and arm gestures.
  • the user can be represented, for example, as a flying bat. As the participant walks around, the bat flies around accordingly. The nearest actor will, for instance, break out of conversing with other actors and begin to interact with the bat. When the participant flaps her arms, the bat flies higher in the scene and the camera follows. This gives the participant a sense of soaring high in the air.
  • the system of the present invention is a useful tool for the embodiment of intelligent actors, especially for the study of social interaction.
  • it is a good tool for building educational virtual reality environments, when used in conjunction with research software for virtual interactive theater.
  • the combination can be used to simulate behaviors that would be likely to engage children to respond to, identify with, and learn from knowledge agents.
  • a further embodiment of the present invention includes extensions so that animators can use commercial tools, such as Alias and Softimage, to create small atomic animation components. Trained animators can use these tools to build up content. Such content can include various walk cycles, sitting postures, head scratching, etc.
  • the procedural animation subsystem is designed in such a way that such action styles can be blended. For example, two or three different styles of walks can be separately designed from a commercial key frame animation package and then blended together. They can also be blended with various procedural walks, to create continuously variable walk styles that reflect the actor's current mood and attitude, as well as the animator's style.
  • the system of the present invention can be used to tie into commercial animation tools to build up a library of component motions, and to classify these motions in a way that makes them most useful as building blocks.
  • the system of the present invention can also be embedded into a client-based application for a Java compatible browser (such as Netscape Navigator version 2.0).
  • the system of the present invention can be implemented as a full 3D system or as a "nearly 3D" system for lower-end applications.
  • the nearly-3D version can be implemented with a low-end platform, such as a personal computer.
  • the user is still able to see a view into a three-dimensional world, but the visual representations of the actors are simpler and largely two-dimensional.
  • participants using systems with different capabilities (e.g., an SGI Onyx workstation and an Intel '486-based PC) can take part in the same virtual world. Both users would see the same actor, at the same location, performing the same action and having the same personality. The only difference is that the user with the higher-performance system will see a much more realistic quality of rendering.
  • the english-style behavioral sub-system can be integrated with a voice recognition subsystem. This allows a user to fully exploit the object substrate and gives access to the direction of goals, mood changes, attitudes and relationships between actors. Such direction can be provided via spoken sentences.
  • the method and system of the present invention is applicable to a wide variety of applications, including computer role-playing games, simulated conferences, "clip animation," graphical front ends for MUDs, synthetic performance, shared virtual worlds, interactive fiction, high-level direction for animation, digital puppetry, computer guides and companions, point-to-point communication interfaces, and true, non-linear narrative television.

Abstract

A system is described for creating real-time, behavior-based animated actors. The system provides tools for creating actors that respond to users and to each other in real time, with personalities and moods that follow the author's wishes and intentions. The system comprises two subsystems: the first is an Animation Engine (20) that uses procedural techniques to enable authors to create layered, continuous, non-repetitive motions and smooth transitions between them; the second is a Behavior Engine (30) that enables authors to create sophisticated rules governing how actors communicate, change and make decisions. The combined system constitutes an integrated set of tools for authoring the 'minds' and 'bodies' of interactive actors, and it uses an english-style scripting language so that creative experts, who are not primarily programmers, can create powerful interactive applications. The system enables authors of varied talents to create lifelike, responsive animated character interactions that can be run in real time over networks.
EP97935290A 1996-08-02 1997-08-01 Procede et systeme de scenarisation d'acteurs animes interactifs Withdrawn EP0919031A4 (fr)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US2305296P 1996-08-02 1996-08-02
US23052P 1996-08-02
PCT/US1997/013664 WO1998006043A1 (fr) 1996-08-02 1997-08-01 Procede et systeme de scenarisation d'acteurs animes interactifs

Publications (2)

Publication Number Publication Date
EP0919031A1 EP0919031A1 (fr) 1999-06-02
EP0919031A4 true EP0919031A4 (fr) 2006-05-24

Family

ID=21812855

Family Applications (1)

Application Number Title Priority Date Filing Date
EP97935290A Withdrawn EP0919031A4 (fr) 1996-08-02 1997-08-01 Procede et systeme de scenarisation d'acteurs animes interactifs

Country Status (2)

Country Link
EP (1) EP0919031A4 (fr)
WO (1) WO1998006043A1 (fr)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999016022A2 (fr) * 1997-09-22 1999-04-01 Lamb & Company, Inc. Procede et appareil pour traiter les donnees de mouvement
FR2781299B1 (fr) * 1998-07-15 2000-09-15 Eastman Kodak Co Procede et dispositif de transformation d'images numeriques
US6230111B1 (en) 1998-08-06 2001-05-08 Yamaha Hatsudoki Kabushiki Kaisha Control system for controlling object using pseudo-emotions and pseudo-personality generated in the object
US6249780B1 (en) * 1998-08-06 2001-06-19 Yamaha Hatsudoki Kabushiki Kaisha Control system for controlling object using pseudo-emotions and pseudo-personality generated in the object
WO2001093206A1 (fr) * 2000-05-31 2001-12-06 Sharp Kabushiki Kaisha Dispositif de montage d'animation, procede de montage d'animation, programme de montage d'animation, et support enregistre contenant un programme informatique de montage d'animation
WO2002029715A1 (fr) * 2000-10-03 2002-04-11 Kent Ridge Digital Labs Systeme et procede de programmation de comportements de creatures synthetiques
DE10195799D2 (de) 2001-04-10 2004-04-15 Alfred Schurmann Determination der Befriedigung und des Verlangens in virtuellen Wesen
GB2388235B (en) * 2002-05-04 2005-09-14 Ncr Int Inc Self-service terminal
WO2004056537A2 (fr) * 2002-12-19 2004-07-08 Koninklijke Philips Electronics N.V. Systeme et procede pour commander un robot
GB2404315A (en) * 2003-07-22 2005-01-26 Kelseus Ltd Controlling a virtual environment
JP3919801B1 (ja) * 2005-12-28 2007-05-30 株式会社コナミデジタルエンタテインメント ゲーム装置、ゲーム装置の制御方法及びプログラム
CN104945509A (zh) 2009-09-16 2015-09-30 弗·哈夫曼-拉罗切有限公司 包含卷曲螺旋和/或系链的蛋白质复合体及其用途
JP6144738B2 (ja) 2015-09-18 2017-06-07 株式会社スクウェア・エニックス ビデオゲーム処理プログラム、ビデオゲーム処理システム及びビデオゲーム処理方法

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5261041A (en) * 1990-12-28 1993-11-09 Apple Computer, Inc. Computer controlled animation system based on definitional animated objects and methods of manipulating same

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US594856A (en) * 1897-12-07 Seesaw

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5261041A (en) * 1990-12-28 1993-11-09 Apple Computer, Inc. Computer controlled animation system based on definitional animated objects and methods of manipulating same

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
BLUMBERG B M ET AL ASSOCIATION FOR COMPUTING MACHINERY: "MULTI-LEVEL DIRECTION OF AUTONOMOUS CREATURES FOR REAL-TIME VIRTUALENVIRONMENTS", COMPUTER GRAPHICS PROCEEDINGS. LOS ANGELES, AUG. 6 - 11, 1995, COMPUTER GRAPHICS PROCEEDINGS (SIGGRAPH), NEW YORK, IEEE, US, 6 August 1995 (1995-08-06), pages 47 - 54, XP000546215, ISBN: 0-89791-701-4 *
BRUDERLIN A ET AL ASSOCIATION FOR COMPUTING MACHINERY: "MOTION SIGNAL PROCESSING", 6 August 1995, COMPUTER GRAPHICS PROCEEDINGS. LOS ANGELES, AUG. 6 - 11, 1995, COMPUTER GRAPHICS PROCEEDINGS (SIGGRAPH), NEW YORK, IEEE, US, PAGE(S) 97-104, ISBN: 0-89791-701-4, XP000546220 *
CHADWICK, HAUMANN, PARENT: "Layered Construction for Deformable Animated Characters", COMPUTER GRAPHICS, vol. 23, no. 3, July 1989 (1989-07-01), XP002373713 *
See also references of WO9806043A1 *
WITKIN A ET AL ASSOCIATION FOR COMPUTING MACHINERY: "MOTION WARPING", COMPUTER GRAPHICS PROCEEDINGS. LOS ANGELES, AUG. 6 - 11, 1995, COMPUTER GRAPHICS PROCEEDINGS (SIGGRAPH), NEW YORK, IEEE, US, 6 August 1995 (1995-08-06), pages 105 - 108, XP000546221, ISBN: 0-89791-701-4 *

Also Published As

Publication number Publication date
WO1998006043A1 (fr) 1998-02-12
EP0919031A1 (fr) 1999-06-02

Similar Documents

Publication Publication Date Title
US6285380B1 (en) Method and system for scripting interactive animated actors
Perlin et al. Improv: A system for scripting interactive actors in virtual worlds
Maes Artificial life meets entertainment: lifelike autonomous agents
Mateas et al. Integrating plot, character and natural language processing in the interactive drama Façade
Elliott et al. Autonomous agents as synthetic characters
Breazeal et al. Interactive robot theatre
Gillies et al. Comparing and evaluating real time character engines for virtual environments
EP0856174A1 (fr) Animation d'un personnage et technique de simulation
WO1998006043A1 (fr) Procede et systeme de scenarisation d'acteurs animes interactifs
JPH11508491A (ja) 可動装置(apparatus)を制御する設備(installation)および方法
Grillon et al. Simulating gaze attention behaviors for crowds
Dai et al. Virtual spaces-VR projection system technologies and applications
Pina et al. Computer animation: from avatars to unrestricted autonomous actors (A survey on replication and modelling mechanisms)
Allbeck et al. Avatars à la Snow Crash
Thalmann et al. Crowd and group animation
Thalmann The virtual human as a multimodal interface
Perlin Building virtual actors who can really act
Corradini et al. Towards believable behavior generation for embodied conversational agents
Rich et al. An animated on-line community with artificial agents
Gillies et al. Piavca: a framework for heterogeneous interactions with virtual characters
Sparacino DirectIVE--choreographing media for interactive virtual environments
Fraser et al. Intelligent virtual worlds continue to develop
Sparacino et al. Media Actors: Characters in Search of an Author
Monzani An Architecture for the Behavioural Animation of Virtual Humans
Turner et al. SL-Bots: Automated and Autonomous Performance Art in Second Life

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 19990301

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

A4 Supplementary search report drawn up and despatched

Effective date: 20060411

17Q First examination report despatched

Effective date: 20060801

REG Reference to a national code

Ref country code: HK

Ref legal event code: WD

Ref document number: 1021418

Country of ref document: HK

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20081111