WO2000049478A2 - Authoring system for selective reusability of behaviors - Google Patents


Info

Publication number
WO2000049478A2
PCT/IB2000/000330
Authority
WO
WIPO (PCT)
Prior art keywords
state, trait, states, objects, traits
Application number
PCT/IB2000/000330
Other languages
French (fr)
Other versions
WO2000049478A3 (en)
Inventor
Eden Shochat
Sbav Erlichmen
Original Assignee
Shells Interactive, Ltd.
Application filed by Shells Interactive, Ltd.
Priority to AU33143/00A (AU3314300A)
Publication of WO2000049478A2
Publication of WO2000049478A3


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 Arrangements for software engineering
    • G06F8/30 Creation or generation of source code
    • G06F8/34 Graphical or visual programming
    • G06F8/36 Software reuse

Definitions

  • This invention relates to application development systems generally, and in particular to systems for authoring interactive applications.

Description of Related Art
  • a common example of such a modular component found in most computer programming languages is the subroutine or function call that is written once and can be invoked many times (i.e., reused) by one or more programs.
  • a more modern example found in object-oriented programming languages is the concept of an "object” that encapsulates or hides its internal behavior (i.e., the code and data that implements its functionality), and can be reused in other contexts, and with other objects.
  • these modular components or objects isolate their internal implementation from their external interface to other components/objects in the system.
  • a subroutine or function call has an external calling interface that includes its name and the names and types of data on which it operates (i.e., its parameters). If its internal implementation relies on external data (e.g., global variables) or other subroutines or functions, these dependencies may limit its reusability - e.g., in other environments in which such external data and code are not present.
  • Formal object-oriented systems attempt to eliminate these dependencies by requiring objects to be completely self-contained.
  • all of the data and code (methods) on which an object relies are encapsulated (i.e., hidden) within the object's internal implementation.
  • the object's external interface to other objects in the system defines the data structures that are passed to and from the object and the methods or functions which that object can perform (without revealing how such functions are performed).
  • an application developer can write a program that instantiates one or more members of that class (e.g., "car" objects with particular attributes and behaviors).
  • a more specialized child class of objects can be defined (e.g., "sports cars") that inherits all of the attributes/behaviors of its parent "car” class, and then adds/modifies (overrides) certain attributes/behaviors.
  • another programmer can leverage (reuse) many of the attributes/behaviors created by the author of the "car” class (e.g., the presence of a steering wheel, how the car starts, etc.) without “reinventing the wheel.”
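The "car"/"sports car" inheritance described above can be illustrated with a short sketch. Python is used here purely for illustration; the class names come from the example, and everything else (attribute and method names) is invented:

```python
# Illustrative sketch of class inheritance: SportsCar reuses the
# attributes/behaviors of Car and overrides one behavior.

class Car:
    wheels = 4                  # attribute shared by all cars

    def start(self):            # behavior written once by the class author
        return "engine on"

class SportsCar(Car):           # child class inherits from its parent "car" class
    def start(self):            # overrides the parent's behavior
        return "engine roars"

print(SportsCar().wheels)       # 4 - inherited without "reinventing the wheel"
print(SportsCar().start())      # engine roars - overridden
```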
  • Macromedia's "Director” is another popular program that enables authors to "loosely couple" objects (i.e., objects that already have been instantiated) with their component behaviors (also instantiated objects). These objects are linked together dynamically at runtime, such that an object exhibits the behavior with which it currently is associated.
  • any complex system requires fairly extensive inter-object communication, both among characters or elements of the application and among the various component behaviors exhibited by such characters or elements.
  • an application that enables the user to drive a car may require many objects, such as the car itself (which may consist of component objects such as a body, wheels, etc.), other cars, roads, obstacles, and so forth.
  • Each car object must exhibit various behaviors, such as steering, accelerating, braking, etc.
  • the author will have to create an application-specific communication mechanism that likely will create many dependencies among the various objects.
  • Director's "Lingo" scripting language requires authors to send messages from one script to another, or centralize the communication mechanism within a single large script.
  • objects cannot easily share information without prior knowledge of one another.
  • Objects have attributes or properties (e.g., an object's "location"), and objects need to monitor such properties and know when they change or when certain conditions are met. If one object must detect such a condition, and then send a message directly to another object (and hence "know" of the other object's existence at runtime), then dependencies will be created, and such objects will have limited reusability.
  • the present invention provides a solution to the above problems by addressing the need for a system-wide communication mechanism that enables objects to share "state change" information with other objects in the system, and among component behaviors associated with a particular object. Moreover, these objects and components need not be aware of one another's existence to share this information, thereby limiting the dependencies of these objects and components on one another, and enhancing their reusability in other contexts.
  • the system provides a mechanism for objects to contain a set of component behavioral capabilities (Traits) as well as various attributes or properties defined by those Traits (States). Moreover, these objects can be linked in a physical (parent-child) hierarchy to enable parent objects to provide a physical frame of reference (i.e., relative coordinate system) for their children objects, independent of the system- wide communication mechanism. Thus, children objects can "move with their parents", independent of their own ability to move.
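The physical (parent-child) hierarchy can be sketched as a chain of relative coordinate systems, so that children "move with their parents." This is a hypothetical illustration under invented names, not the patent's implementation:

```python
# Sketch of the parent-child physical hierarchy: a child's world position
# is its local (relative) position offset by its parent's world position.

class Node:
    def __init__(self, local=(0.0, 0.0, 0.0), parent=None):
        self.local = local          # position relative to the parent
        self.parent = parent

    def world_position(self):
        if self.parent is None:
            return self.local
        px, py, pz = self.parent.world_position()
        x, y, z = self.local
        return (px + x, py + y, pz + z)

world = Node()
car = Node(local=(10.0, 0.0, 0.0), parent=world)
wheel = Node(local=(1.0, -0.5, 0.0), parent=car)

car.local = (20.0, 0.0, 0.0)        # move only the parent
print(wheel.world_position())       # (21.0, -0.5, 0.0) - the wheel moved too
```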
  • As authors of an application add Traits to their objects, these Traits define or "expose" States that can be monitored by the system for changes in value. The system will monitor changes in an exposed State's value when any Trait defines a Listener associated with that State. These Listeners will be notified automatically when the system detects changes in these exposed States.
  • Other object components (Actions) modify exposed States so as to trigger the capabilities of Traits that are Listening for changes in those exposed States, and provide modular components which Traits can call - to modify other States in response to the triggering of their capabilities, and so on as this condition/response model ripples throughout the objects in an application.
  • This communications mechanism allows Traits/Listeners/Actions to communicate indirectly with one another (both within and across their objects), without knowledge of one another's existence.
  • the system will automatically notify other Traits (that defined Listeners for that State) of such "state change” information, thereby providing a form of "anonymous" communication that minimizes dependencies among objects, and thus enhances their reusability in other contexts.
  • Listeners also can impose conditions, such that they will be notified by the system when a designated State changes value only if the specified condition also is met.
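The exposed-State/Listener mechanism, including conditional Listeners, might be sketched roughly as follows. All class and method names (StateTable, expose, listen, set_state) are invented for illustration and do not come from the patent:

```python
# Hypothetical sketch: a table of exposed States that notifies Listeners
# when a value changes, optionally filtered by a Listener's condition.

class StateTable:
    def __init__(self):
        self._values = {}
        self._listeners = {}    # State name -> list of (callback, condition)

    def expose(self, name, initial=None):
        self._values[name] = initial

    def listen(self, name, callback, condition=None):
        # A Listener may impose a condition: it is notified of a change
        # only if the condition (a predicate on the new value) is also met.
        self._listeners.setdefault(name, []).append((callback, condition))

    def set_state(self, name, value):
        if self._values.get(name) == value:
            return              # no change in value -> no notification
        self._values[name] = value
        for callback, condition in self._listeners.get(name, []):
            if condition is None or condition(value):
                callback(name, value)

table = StateTable()
table.expose("location", (0, 0))
moves, far_moves = [], []
table.listen("location", lambda n, v: moves.append(v))
table.listen("location", lambda n, v: far_moves.append(v),
             condition=lambda v: v[0] > 10)     # conditional Listener
table.set_state("location", (3, 4))
table.set_state("location", (12, 0))
print(moves)        # [(3, 4), (12, 0)]
print(far_moves)    # [(12, 0)]
```

Note that the two Listeners never reference each other or whichever code modified the State; they receive callbacks anonymously from the table.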
  • hierarchies of States can be defined within an object, by defining parent "Tag" States associated with other States. States can have multiple parent Tag States, and each Tag State can be a parent to multiple other States.
  • When the system detects a change in a State, it will notify not only Listeners for that State, but also Listeners for any of that State's "ancestor" (parent, grandparent, etc.) Tag States.
  • Listeners can therefore be notified not only of the changes in a single State, but of changes in any of a group of States, as well as other specified combinations of States or conditions.
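The Tag State hierarchy can be sketched as follows; a change to a State also notifies Listeners on any ancestor Tag State. The names (TagStates, "controls", "inputs") are hypothetical, and value storage is omitted for brevity:

```python
# Sketch of Tag States: States may have multiple parent Tags, Tags may
# have parents of their own, and notifications walk the whole ancestry.

class TagStates:
    def __init__(self):
        self._parents = {}      # State name -> set of parent Tag States
        self._listeners = {}    # State or Tag name -> list of callbacks

    def add_parent(self, state, tag):
        self._parents.setdefault(state, set()).add(tag)

    def listen(self, name, callback):
        self._listeners.setdefault(name, []).append(callback)

    def _ancestors(self, state):
        seen, stack = set(), [state]
        while stack:
            for tag in self._parents.get(stack.pop(), ()):
                if tag not in seen:
                    seen.add(tag)
                    stack.append(tag)
        return seen

    def set_state(self, state, value):
        # Notify Listeners for the State itself and every ancestor Tag.
        for name in {state} | self._ancestors(state):
            for cb in self._listeners.get(name, []):
                cb(state, value)

ts = TagStates()
ts.add_parent("wheelAngle", "controls")     # "controls" is a Tag State
ts.add_parent("controls", "inputs")         # Tags can themselves have parents
hits = []
ts.listen("inputs", lambda s, v: hits.append((s, v)))
ts.set_state("wheelAngle", 15)
print(hits)     # [('wheelAngle', 15)] - notified via the grandparent Tag
```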
  • the system is integrated into Macromedia's Director application development environment, and adds the capability of creating/importing 3D objects with a "drag and drop" interface that enables authors to create complex interactive applications that manipulate 3D objects, while providing selective reusability of the objects and components created by multiple authors across multiple applications.
  • FIGs. 1(a)-(d) illustrate the user interface of Macromedia Director, including its Stage, Score, Cast, Library, Message and Control Panel windows, and the creation of sprite objects.
  • FIGs. 2(a)-(e) illustrate the integration of the 3D Dreams user interface of the present invention into Macromedia's Director authoring environment, including the creation of a Viewport and a hierarchy of Persona Objects having various Traits, States and Actions.
  • FIG. 3 illustrates the system architecture of the present invention, including the interaction of the system and Persona Objects, Traits, States, Listeners and Actions, as well as their data structures.
  • FIG. 4 illustrates the State Hierarchy including States and their parent Tag States.
  • FIG. 5 shows the hierarchy of Object Types in accordance with one embodiment of the present invention.
  • FIG. 6 shows a screenshot of a Viewport hierarchy window.
  • FIG. 7 shows a diagram of an object, world, and universe to generally illustrate the concepts of "listening” and “exposing” by Traits of internal States and external States, in accordance with one embodiment of the present invention.
  • FIG. 8 shows the layers of the software model in accordance with one embodiment of the present invention.
  • FIG. 9 shows the different layers and components of the software engine in accordance with one embodiment of the present invention.
  • FIG. 10 illustrates the overall operation of the system and the resulting selective reusability of component objects in the context of a "Bumper Car” example application.
  • the present invention can be embodied in virtually any new or existing application development environment, as well as in the wide variety of applications or components thereof generated by that environment.
  • Applications could include movies and animations, traditional productivity, educational and entertainment titles, and, in particular, highly interactive games, advertisements and product demonstrations, as well as the interactive characters and other reusable components of those applications.
  • the invention is integrated into Macromedia's Director 7 (“Director”), a well-known multimedia authoring system used to create animated, interactive CD-ROM and Internet-based applications, referred to as "Movies.”
  • Director enables authors to create Movies ranging from simple cel-based animations to more complex interactive titles in which characters exhibit behaviors and interact with users and with one another.
  • Director is extensible, via an API layer that interfaces with "Xtras,” described in greater detail below.
  • one embodiment of the present invention consists of multiple Xtras that interface with Director to provide additional functionality as well as embrace and extend certain aspects of Director's existing functionality, in particular much of its user interface.
  • FIG. 1(a) illustrates some of Director's major components which an author utilizes to create a Movie. These include the "Cast" window 10 (containing "Cast Members" that represent the appearance, behavior or other aspects of the characters or "sprites" that perform in the Movie),
  • the "Score” window 20 (containing "Channels” representing the existence over time of the sprites that perform in the Movie, as well as other special effects such as sound, transitions, etc.) and the "Stage” window 30 (that enables the author to preview the Movie as well as control where the sprites will appear).
  • the author of a Movie can control its presentation via the Control Panel 40, e.g., starting, stopping, pausing and rewinding the Movie, stepping it forward or backward one frame at a time, changing its speed (frames/second), turning its sound on or off, or enabling/disabling the loopback control which determines whether the Movie will run only once or indefinitely.
  • the "Message" window 60 enables the author to type commands written in Lingo (Director's scripting language) to control the sprites on the Stage.
  • a Library 70 of built-in Lingo scripts (or programs in other languages), known as "Behaviors," enables an author to add certain common functionality to a sprite (e.g., detecting collisions) merely by dragging the Behavior from the Library 70 onto the sprite (e.g., in the Score 20 or on the Stage 30).
  • Once a Behavior is added to a sprite (e.g., by associating a Lingo script with that sprite), that sprite will exhibit that Behavior during the Movie.
  • Authors also can create their own Behaviors (whether written in Lingo or in another programming language) and associate them with the sprites in their Movie.
  • When an author desires to utilize a media element in a Movie (such as a bitmap, vector shape or other more complex 2D image, or even a sound or movie), the author either can create that media element using tools within Director (not shown) or import a media element created in another program, such as Adobe Photoshop.
  • the media element will be added to the Cast window 10.
  • an author has created a vector shape (circle) which therefore was added as a Cast Member (named "Ball") 15 to Cast window 10.
  • By dragging the Ball (Cast Member) 15 onto either the Score 20 or the Stage 30, the author caused Director to create a sprite - i.e., a "ball" that can be animated and interactively controlled in a Movie - that is represented both in the Score 20 and on the Stage 30 as Ball 35. While on the Stage 30 or in the Score 20, sprites represent the animated characters or other elements that actually will perform in the Movie, whereas the Cast Members in the Cast window 10 can be thought of as templates from which one or more sprites can be created.
  • the author has added a Behavior to Ball 35 by dragging the "Avoid Mouse” Behavior template 17 from the built-in Library 70 to Ball 35 (either on the Stage 30 or in the Score 20).
  • This Avoid Mouse Behavior template 17 now appears in the Cast window 10, as well as in the Library 70.
  • the Ball 35 now has an associated Avoid Mouse Behavior 37, which can be distinguished from the template 17 of that Behavior.
  • the author now has two Cast Member templates (Ball 15 and Avoid Mouse 17) from which a single sprite has been created - i.e., a Ball 35 that exhibits the Avoid Mouse Behavior 37 during the Movie.
  • the author can then preview this interactive Movie (e.g., by pushing the "play" button in the button bar 50 or on Control Panel 40) and watch Ball 35 avoid the mouse (i.e., by moving slightly away whenever the author positions his mouse over Ball 35) while the Movie is running.
  • the author could write a more complex Lingo script controlling the timing of the sprite's visibility and behavior.
  • the author could create another "ball” sprite from the same Ball Cast Member 15 - e.g., by dragging Ball 15 onto the Stage 30 or into the Score 20. The result of such an action would be to create a second sprite (Ball 36), as illustrated in FIG. 1(d). Note that, unlike Ball 35 which has an associated Avoid Mouse Behavior 37, Ball 36 does not have any associated Behavior. Thus, unlike Ball 35, Ball 36 will not "avoid the mouse” while the Movie is running.
  • the author can cause Ball 35 to appear throughout the entire 28 frames of the Movie, while Ball 36 will appear only during frames 10-20 (in accordance with the frame timeline 22 in the Score 20).
  • the author also can determine the location where Ball 36 will appear when the Movie is running, either by dragging Ball 36 across the Stage 30 to the desired location, or (for more precise control) by setting the exact coordinates (in addition to size and other attributes) in the data fields 24 of the Score 20.
  • Director has a number of significant limitations that impair an author's ability to create reusable behaviors, characters and other components of a Movie. For example, when a Cast Member (such as Ball 15 in the above example) is used to create multiple sprites (such as Ball 35 and Ball 36), two separate objects with separate properties (e.g., location, size, etc.) and Behaviors are created, even though the author might desire to portray only a single object - while dictating when and where that object will be displayed and which Behaviors will be exhibited under particular conditions.
  • the present invention provides such a communication mechanism to enable objects to communicate with users and with one another, and to share common properties and other information, without inherent knowledge of one another's presence or existence. It should be emphasized that, although this mechanism is integrated into Director's application development environment in one embodiment described below, it could be integrated into virtually any new or existing application development environment, as well as in the wide variety of applications or components thereof generated by that environment.
  • The term "object" is used quite broadly herein to encompass not only formal objects implemented in accordance with object-oriented programming techniques, but also virtually any other component of an application or of the application development environment itself, including characters, sprites, and other elements that may or may not be displayed in a Director Movie, as well as their component properties, behaviors and other characteristics.
  • The term "object" is also used to describe one particular Object Type, to be discussed in greater detail further below.
  • The term also refers to a particular object, such as a sprite or other element of an application (e.g., a car), as distinct from its particular attributes (i.e., what the object "is" - e.g., its size, appearance, location, number of wheels, etc.) or behaviors (i.e., what the object "does" or "can do" - e.g., accelerate, steer, brake, etc.).
  • the system supports 3D objects, not merely the 2D objects supported by Director.
  • the system utilizes the concept of a "Viewport" through which an author may view rendered 3D objects on the Stage 30.
  • An author would first create a Viewport 5, whose representation (i.e., icon, symbol) is inserted by the system into the Cast window 10.
  • Persona Objects can now be viewed through the Viewport 5 (though none has yet been created).
  • The Viewport 5, although it is a Persona Cast Member (i.e., it appears in Cast window 10 and can contain "Traits" as discussed below), is a special type of Persona Object used for viewing other Persona Objects in "edit mode" - i.e., during authoring time, as opposed to runtime. It does not exist in the physical hierarchy of Persona Objects (also discussed below). Yet, the Viewport 5 appears in the Score 20 and on the Stage 30, and thus is a Director sprite, in this case a Persona Object, that can perform various functions in addition to its primary function of displaying rendered 3D objects during authoring time.
  • the author might next desire to create (or import) a 3D object having a data format that is recognized by the system for rendering 3D objects through the Viewport 5.
  • the author must explicitly elect to make the object "interactive” (e.g., capable of having "Traits,” as discussed below) before it is displayed in the Cast Window 10. In other embodiments, this occurs by default.
  • Persona Objects (which also may be represented as Persona Cast Members in the Cast window 10) are already "characters in the Movie" - i.e., they can appear in the Viewport 5 on the Stage 30 and can be visible at runtime (see, e.g., Ball 66). Yet, they do not exhibit any "behavior" until they are added to the Cast window 10 and given "Traits," for example, as discussed below.
  • Each of these Persona sprites (referred to hereafter as "Personas" - i.e., "personalities" or collections of "Traits" and "States" within a Persona Object) is not a separately controllable Persona Object with distinct properties and behaviors.
  • a single Persona Object (whether represented on the Stage 30 or as a Persona Cast Member in Cast window 10) contains all of the properties and behaviors (e.g., "States" and "Traits,” as discussed below) defined within each Persona.
  • the Persona Object may, however, exhibit only a subset of these "States” and "Traits” at any given point in time, e.g., if a single Persona containing that subset is active at that point in time (in accordance with the frame timeline 22 in the Score 20).
  • A Viewport Hierarchy window 90 displays the default physical hierarchy of Persona Objects, including 3D objects that an author can create, such as "Ball" 66.
  • Viewport Hierarchy window 90 includes other "special" Persona Objects (such as “cameras” and “lights,” discussed in greater detail below) which can be viewed/edited through the Viewport 5 or the Viewport Hierarchy window 90 during authoring time, but which perform in the Movie and are viewed through the "active camera” during runtime.
  • the author of a Director Movie can perform the same functions during authoring time through the Viewport 5 and Viewport Hierarchy window 90.
  • During runtime (as while filming a movie), only those objects within the field of view of a camera are displayed as part of the Movie (with the exception of other standard Director objects on the portion of the Stage 30 outside of the Viewport 5).
  • The Universe 91 can contain one or more child "Worlds" 92, which (in one embodiment) are independent and do not interact with one another. By default (i.e., upon creation of Viewport 5), each World 92 contains a "Camera" 93, a "Directional Light" 94 for illuminating Persona Objects directly in front of it within a specified range, and an "Ambient Light" 95 for illuminating all Persona Objects within the World 92.
  • the author may remove these Persona Objects from a World 92, but nothing will be visible during runtime without a Camera 93 and some type of lighting (e.g., Directional Light 94). Nevertheless, Persona Objects could exist in such a World 92 (and even collide with one another and generate sounds), though the viewer of the Movie might not see anything.
  • Only one Camera 93 can be "active" at any given time, and thus only one World 92 can be displayed at a time (though a Movie could switch among multiple Worlds 92 over time, even though they cannot interact with one another in this embodiment).
  • multiple Cameras 93 could be active simultaneously, revealing a "split screen" view of the Universe 91, and perhaps across multiple Worlds 92 that could interact with one another. The decision is merely an implementation tradeoff of flexibility versus performance.
  • the author can create Persona Objects within a World 92, or perhaps within "sub-world” Persona Objects that the author creates to extend the physical hierarchy to additional levels.
  • An author typically would create (or import) a 3D object which has a data format that the system can recognize and render through the Viewport 5 at authoring time, and through a Camera 93 at runtime.
  • the hierarchy of the different types of Persona Objects is discussed in greater detail below. As a consequence of this physical hierarchy of Persona Objects, "parent" objects become a physical frame of reference for their "child” objects.
  • Certain properties of the Persona Objects in the physical hierarchy are displayed in the right pane 96 of Viewport Hierarchy window 90, while the hierarchy of names of the Persona Objects is displayed in the left pane 97, as noted above.
  • an author can utilize certain primitive objects (e.g., spheres, cones, etc.) that are already built into the system, as well as import more complex 3D objects created with other programs, such as 3D Studio MAX from Kinetix, the multimedia business unit of AutoDesk.
  • the author may create a Persona Object, such as Ball 66, which appears in the Cast window 10 as a Persona Cast Member with which one or more Personas can be associated.
  • Ball 66 illustrated in the Cast window 10 of FIG. 2(c), is a Persona Cast Member.
  • The Persona Object on the Stage 30 is essentially the same object as the Persona Cast Member with which it is associated. They are, in essence, two different representations of the same object.
  • the author can create one or more Personas or "personalities,” each of which is associated with the same Ball 66, but possibly during different/overlapping periods of time in accordance with frame timeline 22.
  • The author can add a Behavior (in this case, an "Action" as described below) to a Persona Object by dragging it from the Library 70 into the Persona 66a on the Stage 30 - e.g., dragging the "Color Fade Action" 73 into the Persona 66a.
  • It is added to the Cast 10 as a Cast Member (and can later be associated with virtually any Persona Object the author creates) as well as to the Persona 66a, where it is identified as Color Fade Action 74 to distinguish it from Cast Member 73, which is not necessarily associated with Persona Object 66.
  • Ball 66 now has a Persona 66a that consists of a special type of behavior (referred to as an "Action” and discussed below), namely Color Fade Action 74.
  • If the author adds another Persona by dragging the Persona Cast Member (Ball 66) into the Score 20 or onto the Stage 30 (e.g., Persona 66b, which currently is empty), the single Persona Object (Ball 66) would then have two Personas 66a and 66b (i.e., two personalities).
  • Ball 66 would exhibit Persona 66a (i.e., the Color Fade Action 74).
  • the same Ball 66 also would exhibit the behavior of Persona 66b (which, in this example, is still empty, but could contain any number of "Traits" or "Actions,” as described below).
  • Using multiple Score 20 windows, an author could associate one or more Personas (e.g., Personas 66a and 66b) with particular Channels of a particular Score window, such as Score 20, and dynamically enable that Score 20 based upon some runtime condition.
  • the Score 20 windows might operate simultaneously and hierarchically to provide even greater flexibility.
  • An author could invoke a particular Score 20 window (and thus certain behaviors) from within the main Score 20 window. Yet, the duration of this "child" Score 20 window might be determined by runtime conditions (to enable and disable this "child" Score 20 window) rather than merely global time as reflected in the main Score 20 window.
  • States are attributes or properties that can be monitored by the system for changes in value. Listening for a State (i.e., getting a callback from the system when it detects a change in the value of that State - e.g., an object's "location") enables a "Trait" that defined that State or is Listening to that State to respond to such changes.
  • individual Traits can effectively communicate with one another and with the user of an application, and share various characteristics of their Persona Objects, even though these Traits are unaware of one another's existence.
  • a "CanBeep" Trait in a Persona Object could Listen for changes in the ball's "location” State and, in response, could generate a "beep” sound.
  • the ball would beep whenever it was moving, and would stop beeping whenever the ball stopped.
  • Other Traits may be responsible for moving the ball (e.g., UserDraggable, CollisionResponse, etc.); but the CanBeep Trait need not know about these other Traits in order to communicate with them and share this State change information.
  • the CanBeep Trait therefore, is reusable in other contexts in which various other Traits may alter the ball's location.
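The CanBeep example might be sketched as below. The object/Trait API here (Object3D, listen, set_state) is invented for illustration; the point is only that CanBeep reacts to location changes without knowing which Trait moved the ball:

```python
# Sketch: a CanBeep Trait Listens for the "location" State and beeps,
# regardless of which other Trait (UserDraggable, CollisionResponse, ...)
# actually moved the object.

class Object3D:
    def __init__(self):
        self.states = {}
        self.listeners = {}

    def listen(self, name, callback):
        self.listeners.setdefault(name, []).append(callback)

    def set_state(self, name, value):
        changed = self.states.get(name) != value
        self.states[name] = value
        if changed:
            for cb in self.listeners.get(name, []):
                cb(value)

class CanBeep:
    """Beeps whenever the object's location changes."""
    def __init__(self, obj):
        self.sounds = []
        obj.listen("location", self.on_moved)

    def on_moved(self, new_location):
        self.sounds.append("beep")

ball = Object3D()
beeper = CanBeep(ball)
ball.set_state("location", (1, 0, 0))   # some other Trait moves the ball
ball.set_state("location", (2, 0, 0))
print(beeper.sounds)    # ['beep', 'beep']
```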
  • The car, therefore, must have the capability of steering in response to the user's actions.
  • the author might create a "CanSteer" Trait which monitors the user's mouse/keyboard actions and causes the car to turn in response.
  • This Trait might define and expose certain States, such as the angle at which the user is turning the steering wheel (“wheelAngle”) and the car's current location ("location") and velocity ("velocity”).
  • the car's velocity would consist of two components - the car's current direction (“direction") and its rate of speed (“speed”). The speed, as noted above, will remain constant in this application.
  • the CanSteer Trait might modify the wheelAngle State.
  • By Listening for changes in the wheelAngle State (in case another object also modified that State), it could then calculate a new "orientation," which would in turn result in a new direction (either via this or some other Trait), thus modifying the car's velocity State.
  • By also Listening for changes in the velocity State (again because another object might modify that State), it could then calculate the car's ultimate location, which would depend upon its current location and its new direction (e.g., based on its orientation) and speed (even though constant).
  • The CanAccelerateandDecelerate Trait could define and modify a State reflecting the car's rate of acceleration or deceleration ("rateOfAcceleration") in response to the user's mouse/keyboard actions. By Listening for changes in this State, it could then modify the car's current velocity State (i.e., its speed) based upon this new rateOfAcceleration. It must then, however, update the car's location based upon its current location and new velocity.
  • the CanSteer Trait already implements the function of updating the car's location in response to a change in its velocity. It is Listening for changes in velocity because it is changing the car's direction as the user "turns the steering wheel.” Nevertheless, the CanSteer Trait will automatically be notified by the system when the CanAccelerateandDecelerate Trait modifies the car's velocity State; and it will update the car's location correctly (i.e., based on both its direction and speed), even though it was expecting only a change in direction, not a change in speed.
  • the CanSteer Trait, which assumed a constant speed for its initial simple application, can be reused in a more complex application in which the car can accelerate and decelerate. It is reusable because it can respond to changes in the velocity State due not only to its own changes in direction, but also due to changes in speed made by the subsequently developed CanAccelerateandDecelerate Trait.
  • the CanSteer and CanAccelerateandDecelerate Traits are thus able not only to share the car's State information, but also to communicate with each other indirectly in this new Movie, even though the author of the CanSteer Trait may never have contemplated the existence of a CanAccelerateandDecelerate Trait.
  • the system's Listening mechanism facilitates this communication by notifying a Listening Trait of changes in States (even across Persona Objects or a network), regardless of which Trait, for example, made such changes.
  • the reusability of Traits is enhanced significantly as authors of Traits "expose" their States to the system, which can automatically monitor changes in those States (whether made by the Trait that defined/exposed them, or by any other Trait or entity that is allowed to modify them) and notify other interested Traits (Listeners) of such changes.
  • this sharing of State changes can even occur across Persona Objects. For example, if one car is notified by the system that it has collided with another car (e.g., its “hasCollided” and “collidedWith” States have changed), it can then access known States of the other car (such as its current "velocity” and “mass”) to determine how to respond to the collision. Moreover, Traits could even Listen for "conditional" State-change information - e.g., Listen for changes in the "hasCollided” State if the Persona Object identified by the "collidedWith” State is a car (e.g., has an "IsACar” Trait). This could greatly enhance system performance by minimizing the number of system callbacks.
  • state-change information cannot easily be shared, even using global properties, because the Behaviors need to know when other Behaviors change these properties (e.g., the direction or speed of the car) so that they can respond appropriately. Mere global access to these properties is insufficient. Though Behaviors could send messages to one another to communicate this State-change information, this quickly becomes unwieldy as more and more Behaviors are added, and the implementation of many Behaviors must be modified to a greater extent each time a new Behavior is added.
  • the present invention enables Traits to communicate with one another indirectly by exposing (sharing) their States to the system, which can monitor and share changes in such States with any other interested Trait that desires to Listen for such State-change information.
  • the communications interface requires little more than knowledge of States that have been exposed to the system. For example, knowing only that some Trait within a car exposes the "velocity" State, another Trait (or even another authoring environment that could interact with the system) could implement sound with a tempo that matched the car's velocity (e.g., engine sound). It would merely Listen for changes in velocity, and make corresponding changes in tempo.
  • Having the system notify only interested Listeners of a change in the value of a State is a far more efficient method of communication than, for example, broadcasting a message to all objects; and it creates far fewer dependencies than explicitly sending a message to a particular object.
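The Listening mechanism described in the passages above can be sketched in a few lines. The embodiment is implemented atop Director/Lingo; the following is a minimal Python sketch (all class and method names are hypothetical) of a Persona Object that exposes States and issues callbacks only to registered Listeners:

```python
class PersonaObject:
    """Holds exposed States and notifies interested Listeners on change."""

    def __init__(self):
        self._states = {}       # State name -> current value
        self._listeners = {}    # State name -> list of callback handlers

    def define_state(self, name, value):
        # "Exposing" a State directs the system to monitor it for changes.
        self._states[name] = value

    def add_listener(self, name, handler):
        # A Trait registers interest; it is called back on any change,
        # regardless of which Trait made the change.
        self._listeners.setdefault(name, []).append(handler)

    def set_state(self, name, value):
        old = self._states.get(name)
        self._states[name] = value
        if old != value:
            for handler in self._listeners.get(name, []):
                handler(name, old, value)   # the system "callback"


# The CanSteer Trait Listens for velocity changes, however caused:
car = PersonaObject()
car.define_state("velocity", (0.0, 1.0))   # (direction, speed)
changes = []
car.add_listener("velocity", lambda name, old, new: changes.append(new))

# A later CanAccelerateandDecelerate Trait changes the speed component;
# CanSteer is notified even though it only ever changed direction itself.
car.set_state("velocity", (0.0, 2.0))
```

Because the callback fires on any change to the State, the steering logic keeps working when a later Trait begins modifying speed, which is the reusability property the passages above describe.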
  • Director objects typically have properties that define what the object "is” (e.g., size and location) and Behaviors that determine what the object "does" or “can do” (e.g., AvoidMouse or Draggable).
  • Persona Objects that an author creates for a Movie also have special properties, referred to as States.
  • States are special in that they can be "exposed” to the system, which can monitor their changes and notify interested parties (Listeners).
  • States are defined by Traits (not mere Behaviors) that define/expose one or more States. Traits also can define "private" properties that are never exposed to the system. As will be explained below, these Traits also can Listen for changes in these and other States, and respond accordingly - i.e., by performing functions that require the modification of other States, which other Traits may Listen and respond to, and so forth as this "condition/response" process continues to ripple across all of the Persona Objects in a Movie.
  • certain States are inherent in certain types of Persona Objects by default (e.g., every Persona Object has a "location" in 3D space). In other words, certain types of Persona Objects have default Traits that define/expose these default States.
  • the system provides additional "built- in" Traits and States (also described in greater detail below) that the author may add to the Persona Objects in a Movie.
  • an author utilizing one embodiment of the present invention creates a new Movie by first creating a Viewport 5 and dragging it onto the Stage 30.
  • the author then creates or imports one or more 3D objects into the Persona Object physical hierarchy (e.g., Ball 66 in Viewport Hierarchy window 90 of FIG. 2(b)), and then drags it onto the Stage 30 or Score 20 to create one or more Personas (e.g., Personas 66a and 66b in FIG. 2(e)) associated with that Persona Object (Ball 66).
  • the author can add Traits to each Persona Object (in addition to the default Traits automatically associated with that Persona Object, based on its Object Type, as discussed below).
  • the author may write a Trait from "scratch” (e.g., by creating a Lingo script and associating it with a Persona), or drag a previously written Trait from the Library window 70 onto one of the Persona sprites (e.g., 66a) associated with the Persona Object.
  • a Persona Object will exhibit only those Traits that are associated with a Persona that is "enabled” at that time (in accordance with the frame timeline 22 in the Score 20).
  • This enables an author to associate distinct collections of Traits (i.e., "personalities") with a Persona Object at different/overlapping times.
  • Traits are added to a Persona Object by associating them directly with one of the Personas associated with that Persona Object.
  • these Traits are treated by Director as Behaviors, while the States defined by these Traits are treated as Director properties.
  • the system of the present invention manages these Traits and States quite differently from the manner in which Director maintains Behaviors and properties.
  • FIG. 3 illustrates the manner in which the system 300 interacts with Persona Objects (e.g., Persona Object 330, as well as other Persona Objects 380 and 390), and with their associated Traits 350 and States 340, and in particular the mechanism 310 by which the system monitors States 340 for changes and notifies Traits (e.g., Trait T1 351) that have added Listeners 360 (e.g., Listener LS1 361) for such State changes.
  • System 300 includes many other core functions, including those relating to its engine for rendering 3D objects and its integration with Director (all described in greater detail below in connection with the core components of system 300).
  • System 300 maintains three major types of data structures 320 (discussed in greater detail below) for an author's Movie - one for the Persona Objects themselves, one for the Traits associated with the Persona Objects, and one for the hierarchy of States defined/exposed by the Traits.
  • the system 300 adds data identifying this object (e.g., an object ID) to data structure 321, which is subsequently modified as the author adds/removes features, such as new Traits (which may define new States as well as Listeners for changes in other States).
  • Persona Object 330 is shown with Traits T1 351, T2 352, T3 353 . . . TN 354. What distinguishes these Traits from standard Director Behaviors (or other Lingo scripts, C++ code, etc.) is that they can define/expose States and/or add Listeners, associated with particular States, that are notified (i.e., by a system "callback") when the system detects, via mechanism 310, that the value of any of those designated States has changed. This mechanism is illustrated conceptually within Persona Object 330.
  • States are defined from within Traits.
  • An author can define standard Director properties (maintained by Director) from within a Trait, e.g., for purely internal use by the Trait. If, however, the author desires to have the system 300 monitor changes in value, and notify Listeners 360 when such changes occur, the author will define a State.
  • Trait T1 351 defines two States (S4 and S5).
  • this embodiment extends Director's existing "addProp" Lingo command.
  • the command identifies various characteristics of the State, including the object ID of the Persona Object with which the State will be associated (i.e., the Persona Object associated with the defining Trait), the name of the State, a description of the State, its attributes (e.g., "read only” except for this Trait; "internal” to prevent other Listeners, and “reset” after callbacks have been made in response to a change in value), and a min/max range of values. States can also be defined implicitly (e.g., by setting the value of, or adding a Listener to, a previously undefined State), though (in one embodiment) no description or range of values may be specified for such an implicitly defined State.
  • States can be of virtually any data type (e.g., boolean, integer, floating point, list, etc.) and, as noted above, are treated by Director as mere properties. Yet, by defining a State, the author of a Trait effectively "exposes" that State - i.e., directs the system 300 to monitor the State, via mechanism 310, for changes in the value of that State (if any Trait has added a Listener associated with that State), and notify Listeners 360 when such changes occur. If the State is defined as "read only,” then only the Trait which defined the State can modify its value. Moreover, a State can also be defined as "internal” such that only the defining Trait can access, and Listen for changes in, that State. Otherwise, any other Trait can add a Listener for that State, whether or not associated with the same Persona Object.
  • the command identifies various characteristics of the Listener, including the object ID of the Persona Object with which the State (that this Trait wants to Listen to) is associated, the name of that State, the particular Persona (i.e., sprite) to which Director will refer the callback, and the name of the callback handler.
  • the "addListener” command could also include a "condition” (e.g., callback when State X changes only if the value of State X more than doubled).
  • the system 300 (via mechanism 310) would not only monitor States for changes in value, but would also evaluate the specified condition to determine whether to issue a callback to a particular "conditional Listener.”
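A "conditional Listener" of the kind just described (e.g., callback only when State X more than doubles) might be modeled as follows. This is an illustrative Python sketch, not the embodiment's Lingo "addListener" syntax; the condition is evaluated by the monitoring mechanism itself before any callback is issued:

```python
class ConditionalStateMonitor:
    """Issues callbacks only when a State changes AND a condition holds."""

    def __init__(self):
        self._states = {}
        self._listeners = []   # (state name, condition, handler) triples

    def add_listener(self, name, handler, condition=lambda old, new: True):
        self._listeners.append((name, condition, handler))

    def set_state(self, name, value):
        old = self._states.get(name, 0)
        self._states[name] = value
        for state_name, condition, handler in self._listeners:
            # The condition is checked by the monitor, not by the Listening
            # Trait after the fact, minimizing the number of callbacks.
            if state_name == name and value != old and condition(old, value):
                handler(name, value)


monitor = ConditionalStateMonitor()
hits = []
# Callback only when State X more than doubles in value:
monitor.add_listener("X", lambda name, value: hits.append(value),
                     condition=lambda old, new: old and new > 2 * old)

monitor.set_state("X", 10)   # first change: no prior value to double
monitor.set_state("X", 15)   # changed, but not more than doubled
monitor.set_state("X", 40)   # more than doubled -> callback issued
```

Only the third change reaches the handler, illustrating how conditional Listeners reduce system callbacks.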
  • Trait T1 351 also will include two handlers to handle the callbacks for Listeners LS1 361 and LS4 362.
  • this Trait T1 351 issues a Lingo command that changes the value of State S1 341 (which in turn will result in a callback to the handler for Listener LS1 361).
  • this Trait also calls an "Action" (discussed below), A1 371, that changes the values of two other States - S4 and S5.
  • Trait T2 352 also defines two States (S6 and S9), and changes the value of each of those States, in addition to defining three Listeners - LS4 363, LS 364 and LS 365.
  • the Traits associated with Persona Object 330 (including those not shown) define the various States S1 341, S2 342, S3 343 . . . SN 344.
  • These Traits also include various Listeners 360 (including Listeners LS1 361 and LS4 362 in Trait T1 351, and Listeners LS4 363, LS 364 and LS 365 in Trait T2 352).
  • Listeners can receive callbacks that identify other Persona Objects (e.g., "collidedWith”), and then access known States (e.g., "velocity") in those other Persona Objects. Even if a Trait does not know whether the Persona Object that it "collidedWith” has a particular State, the Trait could query that object for its list of Traits and States, and then (after examining this list) conditionally access the desired State information. In this manner, Persona Objects can communicate with one another without prior knowledge of one another's existence, making them far more reusable.
  • TRAITS AND ACTIONS - Persona Object 330 in FIG. 3 also includes various "Actions" 370 (including Actions A1 371 and A2 372).
  • Actions serve two primary purposes, both closely related to Traits. On the one hand, Actions can invoke Traits implicitly - i.e., by modifying States to which other Traits are Listening. On the other hand, Actions can be invoked by a Trait explicitly - i.e., as a consequence of the Trait Listening to a State, receiving a callback upon a change in the value of that State, and then responding by explicitly invoking one or more Actions.
  • a Trait can be thought of conceptually as a "capability" of doing something - e.g., CanAccelerateAndDecelerate - whereas an Action does something directly that often triggers a Trait's capability (e.g., modifying a State such as "moveAcceleratorPedal,” to which the CanAccelerateandDecelerate Trait may be Listening) and/or implements that capability (e.g., modifying a State such as "speed” upon being invoked explicitly by the CanAccelerateandDecelerate Trait).
  • Action A2 372 modifies the value of State S1 341 (as well as States S4 and S8).
  • This change in value is detected by the system 300 (via mechanism 310), which issues a callback to the handler in Trait T1 351 that is defined by Listener LS1 361.
  • Various Actions and Traits may modify States. By Listening for such changes, another Trait can perform its functionality (i.e., be reused) in a variety of different contexts, and communicate indirectly and share information with other Traits without even being aware of their existence (as noted in our "car” example above).
  • Actions also can serve to break up the response of a Trait to a callback from the system into modular reusable components.
  • Traits can be "parameterized" - i.e., their Listening functionality (waiting for a State-change "condition" to occur) can be separated from their "response" to a triggering of their capability (i.e., changing other States, which may trigger other Traits that are Listening to those States).
  • Traits can delegate such responses to separate Actions, making the Trait even more modular and reusable, while allowing Actions to be replaced (or selected as alternative parameters) to optimize existing functionality or even add new or alternative functionality.
  • a "Draggable" Trait for 3D objects might want to offer alternative functionality in response to the user dragging the object with a mouse - “move” or “rotate” the object.
  • the Trait could Listen for the user's mouse dragging events and then delegate the response to whichever Action parameter was selected.
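The delegation pattern just described for the "Draggable" Trait can be illustrated with a short sketch. The code below is a hypothetical Python rendering (the embodiment uses Lingo scripts): the Trait Listens for the drag event and hands the response to whichever Action was selected as its parameter:

```python
def move_action(obj, dx, dy):
    # One selectable response: translate the object.
    obj["x"] += dx
    obj["y"] += dy

def rotate_action(obj, dx, dy):
    # Alternative response: rotate instead (a crude drag-to-yaw mapping).
    obj["yaw"] += dx

class DraggableTrait:
    def __init__(self, action):
        self.action = action   # the Action selected as a Trait parameter

    def on_mouse_drag(self, obj, dx, dy):
        # The Trait only Listens for the drag; the response is delegated
        # to whichever Action parameter was selected.
        self.action(obj, dx, dy)

box = {"x": 0, "y": 0, "yaw": 0}
DraggableTrait(move_action).on_mouse_drag(box, 5, 3)     # moves the box
DraggableTrait(rotate_action).on_mouse_drag(box, 10, 0)  # rotates the box
```

Swapping the Action changes the Trait's behavior without touching the Trait itself, which is the substitution the passage describes.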
  • Trait T1 351 might want to change two States (S4 and S5) in response to a callback to its Listener LS1 361.
  • it could delegate that response to an Action (A1).
  • another author could reuse this Trait and substitute an alternative Action in its place to perform a slightly different function.
  • Actions too can be parameterized to provide even further reusability.
  • our example "canBeep" Trait might call an Action that enables the user to select the desired "beep" sound.
  • Actions may modify State values "over time” - e.g., changing color gradually over one minute from one color to another. As the value of this State (color) changes, the Action may control the gradual shift from one color to another, while another Trait may be Listening for all color changes, or only those meeting a certain condition.
  • the Listener's trigger can be made conditional - i.e., adding almost any other State or combination of States, or even Director properties, as a condition which must be met before the Listener will receive a callback.
  • States can be grouped together (via a concept referred to as "Tags") to enable a Listener to receive a callback if any of the "Tagged States” changes in value.
  • This concept could be completely genericized, enabling the Listener to specify any condition (combination of States, Director properties, etc.) under which a callback is to be received; though the Trait also could evaluate such condition itself, creating a separate Listener for each of the component States.
  • the concept of a Tag is employed to enable authors to group any set of States together (even across Persona Objects, if desired), and receive a callback if any of these States changes its value.
  • the Tag is itself a State.
  • an author of a Trait can define a new State, and then add it as a Tag on one or more other existing States; or simply add a State that already has been defined (e.g., by another Trait) as a Tag onto another State.
  • Tags can be added to Tag States, so as to extend the hierarchy to additional levels.
  • a Listener to the parent Tag State will receive a callback from the system when the value of the child (or any "descendant") State changes.
  • the parent Tag State may be a parent to multiple child States, and each child State may have multiple parent Tag States, each of which may itself have multiple parent Tag States at higher levels of this "State Hierarchy.”
  • Each Persona Object will therefore have its own State Hierarchy, starting with the "root" (non-Tag) States defined by Traits associated with that Persona Object.
  • When the system detects a change in any particular State, it will issue callbacks not only to Listeners to that State, but also to Listeners of any of its parent, grandparent and other "ancestor" Tag States. Moreover, the callback will include a list or "chain" of States from the initial State that changed up through the State Hierarchy until it reaches the Tag State specified by the Listener. For example, looking at FIG. 4, consider the following two "root" (i.e., non-Tag) States - "orientation" 410 and "location" 420. If the author of a Trait was interested in knowing whether its associated Persona Object moved (whether in location or merely in orientation), it could create a separate Listener for each of these States (orientation 410 and location 420) and, upon receiving a callback for either one, check the value of the other. Instead, by using Tags, it could simply define a State called "movement" 480 (which could itself contain data, though this is probably unnecessary in this example) and add it as a parent Tag to both the orientation 410 (illustrated by line 415) and location 420 (illustrated by line 425) States for the Persona Object.
  • the specified handler in the author's Trait would receive a callback when the system detected a change in the value of either the orientation State 410 (in which case the callback data would include the list of identifiers, "orientation, movement") or the location State 420 (in which case the callback data would include the list of identifiers, "location, movement”).
  • the Listener's callback handler would thus know which State changed, and could respond accordingly.
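The "chain" of State identifiers delivered with a Tag callback can be sketched as follows. This is a hedged Python model (names invented for illustration): a change propagates up the State Hierarchy, accumulating the list of State names from the changed State to the Listened-to Tag State:

```python
class StateHierarchy:
    """Propagates a State change up through parent Tag States."""

    def __init__(self):
        self._parents = {}     # State name -> list of parent Tag States
        self._listeners = {}   # State name -> list of callback handlers

    def add_tag(self, child, tag):
        self._parents.setdefault(child, []).append(tag)

    def add_listener(self, state, handler):
        self._listeners.setdefault(state, []).append(handler)

    def state_changed(self, state, chain=None):
        chain = (chain or []) + [state]
        for handler in self._listeners.get(state, []):
            handler(list(chain))             # e.g. ["orientation", "movement"]
        for tag in self._parents.get(state, []):
            self.state_changed(tag, chain)   # notify ancestor Tag Listeners


h = StateHierarchy()
h.add_tag("orientation", "movement")   # "movement" is a parent Tag of both
h.add_tag("location", "movement")
received = []
h.add_listener("movement", received.append)

h.state_changed("orientation")   # callback data: ["orientation", "movement"]
h.state_changed("location")      # callback data: ["location", "movement"]
```

The handler receives the chain and can thus tell which root State triggered the callback, as the passage above describes.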
  • the author of the NetworkSync Trait can simply add a Listener over the network to the published State 460 on each of the other players' copies of the application, and receive the appropriate callback whenever the system detects a change in value of any of these States on any player's machine (e.g., "color, published, Player 1" when the color State 430 changes on Player 1's machine).
  • the handler in each NetworkSync Trait could simply update the local copy of the State corresponding to the one that changed (after querying the corresponding game over the network for the changed State value, if it was not sent along with the callback).
  • the Listeners could operate locally, and then exchange packets over the network, requiring each copy of the game to unpack the network packets and understand the State changes from the other players' games.
  • the network overhead is substantially reduced, and the network synchronization task made significantly easier, by having each copy of the game made aware of any change in value of its many local States.
  • each new State merely need be given a "published" Tag 460 to maintain network synchronization.
  • the "published" Tag 460 itself could contain data that aids in the synchronization task. For example, it might contain data corresponding to instructions for handling certain "child” States differently. Or it might contain the changed State value information to simplify the task of providing this information over the network.
  • "audit" Tag State 470 might be used as a parent Tag State to published Tag State 460 (as illustrated by line 461). Assuming that the value of the published Tag 460 did not change, but that the value of one of its child States did change, then a Listener to the audit Tag 470 would receive virtually the same information (e.g., "color, published, audit") as would the Listener to the published Tag 460 (e.g., "color, published”). Yet, the Listeners to the audit Tag 470 might not need "real-time" synchronization information, but merely accurate data sent periodically, as opposed to Listeners of the published Tag 460, which must synchronize the actual games in real time. In such a case, the system might handle the audit Tag 470 differently (whether inherently, through a condition specified in the Tag itself or through a condition specified by the Listeners), and provide only periodic (though accurate) callbacks.
  • Because Tag States can be conceptual groups of other existing States, there is virtually no limit to their use. For example, one could define a "physicalProperties" Tag State and add it as a parent Tag to various other physical States, such as size, width, color, height, etc. As noted above, the Tag State data may or may not have independent meaning.
  • the system adds information relating to that Persona Object to a Persona Object data structure, as illustrated in Table I below (for one embodiment of the present invention).
  • This data structure enables the system to maintain various characteristics of the Persona Objects necessary for the performance of certain system functions.
  • the "Persona Object ID" field provides a unique identifier that enables the system, as well as Traits and other objects, to access the States of a Persona Object, among other characteristics (via the State Hierarchy data structures, discussed below).
  • In order to modify a State, one must know the associated Persona Object ID of the Persona Object containing that State.
  • the "Persona Object Description” field provides a textual description of the Persona Object that is accessible, e.g., by another Trait within another Persona Object, and could be used to infer information about this Persona Object's capabilities.
  • the "List of Active Trait Names” and "The State Hierarchy Storage” fields provide the system, as well as other interested Traits, with a list of the Traits (with parameters) and States (with hierarchy information and data) exhibited by a Persona Object at any given point in time during runtime. Traits can query a Persona Object for this information in order to determine whether particular Traits or States are supported and, if so, can then query the Persona Object for these values or make other decisions dependent upon this information.
  • the system utilizes this information, for example, to determine which Traits should be exhibited by a particular Persona Object at runtime, as well as to access (read and write, and monitor changes in) the values of the States of the Persona Object.
  • the code implementing the Trait is stored only once, accessible via the Trait data structure illustrated in Table II below.
  • the Persona Object data structure maintains the list of Traits and their parameters contained within (or associated with) each Persona Object, as well as complete State Hierarchy information, as discussed more fully below.
  • Each distinct Trait (regardless of how many Persona Objects contain that Trait) consists of a "Trait Name” field and a "Script ID” field that identifies the Trait (by name) and its associated script, which contains, for example, the Lingo code that implements the Trait (defines/exposes States, adds Listeners, modifies States, etc.).
  • the "Trait Description” field provides a textual description of the Trait that is accessible, e.g., by other Traits, and could be used to infer information about the Trait's capabilities.
  • Each Trait may also have a list of "Trait Dependencies” that identify the Traits which must also be present in the same Persona Object that contains this Trait in order for this Trait to function properly.
  • the "IsACamera” Trait (present by default only in Camera Persona Objects) depends upon the "IsAnObject” Trait, which is present in practically every Persona Object, including Cameras (e.g., because Cameras have a "location" in a World).
  • State Hierarchy data structure maintains all of the States for each Persona Object (including Tag States that enable the hierarchy).
  • If a State (e.g., "location") is present in multiple Persona Objects, the system will maintain multiple separate data structures for that State, one for each Persona Object containing that State.
  • Each State has a "State Name” field, which both the system and authors use to reference the State (e.g., from within a Trait's Lingo script).
  • each State has a "Persona Object ID” (as noted above) which uniquely identifies the Persona Object containing the particular "instance” of that State.
  • the "Trait Name” field identifies the particular Trait that defined/exposed the State - e.g., so that the system can limit changes to a "read-only” State to that Trait.
  • the "State Description” field provides a textual description of the State that is accessible, e.g., by other Traits, and could be used to infer information about the Trait's capabilities.
  • the "DATA" field contains the actual data values for this "instance" of the State in a particular Persona Object. As noted above, this data is “variant” and can thus be of virtually any data type.
  • the system monitors changes to this DATA, i.e., by controlling this data structure and making all changes on behalf of other Traits, etc. Thus, the system also can enforce (validity checking) the "Range of Values" specified by the author of the Trait that defined/exposed this State.
  • the next few fields relate to this State's list of Listeners (defined, for example, from within Traits that want to be informed of changes to this State). When a Trait defines such a Listener, the system updates this data structure to include that Trait.
  • the "addListener" command can include, in addition to the name of the desired State, a "Callback Name" (maintained in the "List of Callback Names of each Trait/Listener" field) which the system uses to issue a "callback" message (when the value of the State changes) that will invoke the Listening Trait's handler of the same name. If no callback handler is given, a default handler name (stateNameChanged) is assumed, and the author is expected to implement that handler. The system also needs to know the "Script ID" of the Trait that is to receive the callback, and the "List of Conditions" that must be satisfied for the Listener to receive a callback (in addition to the change in the State's value).
  • the final few fields support the State Hierarchy (i.e., the "parent" Tags). If the author of a Trait adds a Tag State to a particular State, the "Tag State Names" and "Persona Object IDs” fields are updated. In other words, the system adds the name of the desired Tag State to this list (because States can have multiple "parent” Tag States), as well as the Persona Object ID that is associated with that Tag State (which need not necessarily be the same Persona Object that contains the child State). For example, a Trait could add a Tag State (e.g., "published"), associated with Persona
  • this data structure maintains "child-parent links” only in one direction, but could be optimized to also include “parent-child links” (i.e., bidirectional links), e.g., for better performance.
  • one embodiment of the present invention broadly categorizes the various author-created entities in the authoring environment into Object Types.
  • Object Types are built into the authoring tool in one embodiment and used to categorize at a very high level all the potential entities or "things" that are created for the stage or Viewport of the authoring tool.
  • an object such as a dog is associated with the same Object Type (i.e., object) as another dog object.
  • this dog object is also categorized under the same Object Type as a cat object.
  • the following Object Types are supported in one embodiment of the present invention:
  • this categorization allows the authoring tool to facilitate the rapid development of the author's project - because certain features are automatically incorporated into his newly created entity by virtue of its categorization as an "object” or a "light” or a "primitive” or any of the aforementioned Object Types.
  • the author need not concern himself with basic inherent Traits associated with that object, since the system has included these basic features by default into the author's newly created entity.
  • the Object Type is hierarchical. In other words, one Object Type is not necessarily at the same level as another Object Type. Referring to FIG. 5, the "entity" Object Type 500 is at the top. Although the "entity” Object Type is an Object Type, it is not visible to the user and accordingly, none of the objects created by the user will be categorized as an "entity” Object Type. This "entity” Object Type provides for State management for all the Object Types. Immediately below the "entity” layer are the "world” 501, "universe” 502, “background” 503, “object” 504, and “sensor” 509 Object Types. Immediately below “object” 504 are the “light” 505, “camera” 506, other "objects” 507 (e.g., imported objects from another software package), and “primitive” 508 Object Types.
  • The hierarchical nature of these Object Types implies that an Object Type necessarily includes the Traits associated with that Object Type as well as those of the Object Type directly above it.
  • a light source is categorized as a "light" Object Type 505.
  • the system automatically associates the Trait IsALight with this cast member without any intervention by the user.
  • This IsALight Trait exposes States LightType, IsLightOn, and LightRange, among others, in one embodiment of the present invention.
  • the "light" Object Type is also directly below the Object Type of "object" 504.
  • the system also automatically associates the Trait IsAnObject with the light source and all the States (e.g., Visible, Color, IsSolid, Location, Orientation, and Dimensions) exposed by this Trait.
  • the light source is associated by default with the IsALight and IsAnObject Traits upon creation.
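The default Trait association described above can be sketched as follows. This is an illustrative model only, using hypothetical dictionaries for the Object Type hierarchy and default Trait tables; it is not the tool's actual implementation:

```python
# Hypothetical sketch of default Trait assignment following the Object
# Type hierarchy of FIG. 5; names are illustrative, not the tool's API.

OBJECT_TYPE_PARENT = {
    "entity": None,
    "world": "entity", "universe": "entity", "background": "entity",
    "object": "entity", "sensor": "entity",
    "light": "object", "camera": "object", "primitive": "object",
}

DEFAULT_TRAITS = {
    "object": ["IsAnObject"],
    "light": ["IsALight"],
    "camera": ["IsACamera"],
}

def traits_for(object_type):
    """Collect default Traits for an Object Type and all its ancestors."""
    traits = []
    while object_type is not None:
        traits = DEFAULT_TRAITS.get(object_type, []) + traits
        object_type = OBJECT_TYPE_PARENT[object_type]
    return traits

# A light source receives both IsAnObject and IsALight upon creation.
print(traits_for("light"))  # ['IsAnObject', 'IsALight']
```

Under this sketch, creating a "light" cast member automatically yields both the IsAnObject and IsALight Traits, matching the behavior described above.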
  • a Viewport hierarchy window (named "3D Dreams") of the authoring tool is shown in FIG. 6. Below the title bar are the set of drop-down menus. Below the drop-down menus is the edit-related toolbar. On the left side of the window, the primitives tool bar is shown.
  • a hierarchical directory structure of the objects in the Viewport is shown.
  • a world is created under one universe. Within this world, several objects are created - a box, a camera, and two different lights (directional light and ambient light). The box is categorized as an "object" Object Type.
  • the camera is a "camera” Object Type and the directional light and ambient light are "light” Object Types.
  • the directory-like structure of this left pane view should not be construed to indicate in any manner the Trait inheritances among the Object Types. Rather, the contents of this left pane merely show the various cast members within each Object Type.
  • the box has a particular location, as indicated by the location coordinates (x, y, z).
  • the box also has an orientation, as indicated by the yaw, roll, and pitch.
  • the box also has dimensions, as indicated by the width, height, and depth.
  • one embodiment of the present invention defines an object in the context of Traits and States; that is, an object is associated with one or more Traits that "listen" to changes in or "expose" one or more State values.
  • States can be either internal or external. Internal States are those States whose values (and changes therein) are only used and useful to a particular Trait which is associated with this internal State. This particular Trait can "listen” to this State but the Trait does not notify (via the system) other Traits of any changes in this State.
  • FIG. 7 shows a diagram generally illustrating this concept. Note that this diagram is an object-based diagram rather than an actually implemented data structure diagram.
  • the universe 510 contains one or more worlds, such as world 511.
  • World 511 contains one or more objects, such as object 512.
  • Object 512 contains or is associated with any number of Traits (i.e., behavior, such as "can have color”) and any number of internal and external States (i.e., parameters that give some bound and meaning to the behavior, such as "the color is red”).
  • object 512 contains Trait1 513 and TraitN 514.
  • Trait1 513 is associated with internal State1 515 and external State1 517.
  • TraitN 514 is associated with internal StateN 516 and external StateN 518.
  • Trait1 513 "listens" to State value changes in internal State1 515.
  • This internal State1 515 is unique to Trait1 513 and accordingly, the values or changes in value in internal State1 515 are not provided to TraitN 514. Only Trait1 513 can "listen" to internal State1 515, as indicated by line 519.
  • internal StateN 516 is unique to TraitN 514 and accordingly, the values or changes in value in internal StateN 516 are not provided to Trait1 513. Only TraitN 514 can "listen" to internal StateN 516, as indicated by line 520.
  • These internal States are for the exclusive use of their respective Traits. However, external States are available for anyone (i.e., Trait) who might care to listen.
  • external State1 517 can be "exposed" by Trait1 513 to TraitN 514, as indicated by line 522.
  • external StateN 518 can be "exposed" by TraitN 514 to Trait1 513, as indicated by line 523.
  • the exposure is not limited to other Traits within the same object. Traits in other objects may also "listen” to these exposed States, or conversely, the Traits in one object can expose their respective external States to other Traits in other objects.
  • Trait1 513 can expose external State1 517 to the other Traits via line 524, while TraitN 514 can expose external StateN 518 to the other Traits via line 525.
  • an object of a dog can have a Trait called HasPhysicalCharacteristics, which as the name implies, is associated with the physical characteristics of that dog.
  • This Trait called HasPhysicalCharacteristics exposes a State called Mass, to name but one of many possible States.
  • this State called Mass is associated with the mass of that dog and can vary from one dog to the next depending on how massive (i.e., in grams, for example) that dog is.
  • the world (or some object) is associated with a Trait called HasGravity, which as the name implies, provides some gravitational force on the objects in this world. Thus, when the dog jumps up, gravity pulls it back to the world.
  • the HasGravity Trait in the world must know the mass of the dog as provided in the dog's Mass State.
  • the Mass State is exposed by the HasPhysicalCharacteristics Trait in the dog object and the HasGravity Trait in the world (or some other object) is one of many possible Traits that listens to it.
  • the mechanism by which "listening" and “exposing” is accomplished is via Trait data structures, State data structures, and object data structures.
  • all listeners (i.e., Traits) to this State (as referenced in the State data structure) are alerted as the system delivers the chain of States information to the listeners. Thereafter, the listeners can elect to obtain the actual State values or not.
  • the Trait has "exposed” that State. Thereafter, this State's hierarchical structure lists all of its listeners (i.e., Traits).
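The listening and exposing mechanism described above resembles an observer pattern. A minimal sketch, assuming hypothetical State and Trait classes (these are illustrative stand-ins, not the patented Trait, State, and object data structures):

```python
# Illustrative sketch of the listen/expose mechanism: external States
# keep a list of listeners and alert them on change; internal States
# accept no outside listeners.

class State:
    def __init__(self, name, value, internal=False):
        self.name, self.value, self.internal = name, value, internal
        self.listeners = []   # Traits alerted when the value changes

    def subscribe(self, trait):
        if self.internal:
            raise ValueError("internal States accept no outside listeners")
        self.listeners.append(trait)

    def set(self, value):
        self.value = value
        for trait in self.listeners:
            trait.notify(self)  # deliver the change notification

class Trait:
    def __init__(self, name):
        self.name, self.heard = name, []

    def notify(self, state):
        # The listener may then elect to read state.value or ignore it.
        self.heard.append((state.name, state.value))

mass = State("Mass", 20.0)        # external: exposed to any listener
has_gravity = Trait("HasGravity")
mass.subscribe(has_gravity)
mass.set(25.0)
print(has_gravity.heard)  # [('Mass', 25.0)]
```

This mirrors the dog example above: the Mass State is exposed by one Trait, and the HasGravity Trait listens to it and retrieves the new value on change.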
  • Table IV lists the default Traits that are automatically implemented in an author's newly created entity having one of the high level Object Types - object, background, primitive, universe, world, sensor, camera, and light.
  • the table lists the default Trait names at the leftmost column.
  • Corresponding columns provide an exemplary list of States that each default Trait exposes, the Object Type that each default Trait is supported by, and any other Traits that each default Trait depends from. Although this table only lists some exemplary States that each Trait exposes, the States that each Trait listens to can vary from one implementation to the next.
  • the Trait IsALight is automatically associated with an entity when the user creates an object of Object Type "light." This Trait exposes such States as LightType, IsLightOn, and LightRange. As explained more fully below, LightType indicates the type of light including point light, spot light, directional light, and parallel point light. IsLightOn is a State that holds a Boolean value to indicate whether the light is on or off. LightRange indicates the maximum distance that the light will travel for this light source.
  • the IsALight Trait depends on the IsAnObject Trait from the Object Type of "object.”
  • the Trait IsACamera is automatically associated with an entity when the user creates an object of Object Type "camera.”
  • This Trait exposes such States as FieldOfView and FarClip.
  • FieldOfView indicates the field of view of the camera that ranges from 0 to π radians in one embodiment. For those skilled in the art, 0 radians represents 0 degrees and π radians represents 180 degrees.
  • FarClip indicates the maximum distance from the camera within which objects will be viewable (and have 3D calculations performed on them). Any objects beyond this distance from the camera will not be viewable until the camera's movements place these objects within its range.
  • the IsACamera Trait depends on the IsAnObject Trait from the Object Type of "object.”
  • the Trait IsAnObject is automatically associated with an entity when the user creates an object of Object Type "object." This Trait is also automatically associated with those entities whose Object Type is designated as "primitive," "camera," or "light."
  • the IsAnObject Trait is the most widely used Trait in the authoring tool because most cast members are, in one form or another, objects that have either been created internally with the authoring tool's edit/drawing tools or externally with some third party software package to be imported into the authoring tool. Accordingly, the States that it exposes far outnumber that of the other Traits.
  • the IsAnObject Trait exposes such representative States as Parent, Visible, Color, IsSolid, Location, Orientation, and Dimensions. These States are self-explanatory but will be discussed in greater detail below.
  • the Trait IsABackground is automatically associated with an entity when the user creates an object of Object Type "background" or "world.” It exposes the State Appearance.
  • Appearance is a string value that references another object's appearance.
  • only one background can be viewable for a given Viewport even though multiple backgrounds can be saved.
  • multiple backgrounds can be saved and viewable so that the background is a combination of different background files.
  • the Trait IsASensor is automatically associated with an entity when the user creates an object of Object Type "sensor.”
  • sensors are implemented as point sensors so that if any designated object impinges on its trigger range, it provides an indication.
  • One such indication is the State TriggerProximity which is exposed by the IsASensor Trait. Thus, if a designated object is within a certain distance from the sensor, the TriggerProximity indicates that distance.
  • sensors can be made directional so that trip wires can be implemented.
  • the sensors can be made into a variable-sized sphere.
  • sensors can be used as a reference point to indicate how far or close an object is to that sensor. In these applications, the sensor need not activate any other action; the sensor merely exists to alert other objects of their proximity.
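The TriggerProximity behavior of a point sensor can be sketched as a simple distance check. This is a hedged illustration under assumed names, not the tool's actual sensor implementation:

```python
# Illustrative sketch of a point sensor's TriggerProximity State: if a
# designated object falls within the trigger range, the sensor reports
# the distance; otherwise it provides no indication.
import math

def trigger_proximity(sensor_pos, object_pos, trigger_range):
    """Return the distance if the object is within range, else None."""
    distance = math.dist(sensor_pos, object_pos)
    return distance if distance <= trigger_range else None

print(trigger_proximity((0, 0, 0), (3, 4, 0), 10.0))    # 5.0
print(trigger_proximity((0, 0, 0), (30, 40, 0), 10.0))  # None
```

A variable-sized spherical sensor, as mentioned above, would correspond to varying the trigger range; a directional "trip wire" would add an angular test to the same check.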
  • the Trait IsAUniverse is automatically associated with an entity when the user creates an object of Object Type "universe."
  • the IsAUniverse Trait keeps track of the WorldTime State which is referenced by many different Traits. As described more fully below, the WorldTime State is an integer value that changes as time changes.
  • the IsAUniverse Trait also exposes children States because it could have one or more worlds that it supports.
  • the Trait IsAWorld is automatically associated with an entity when the user creates an object of Object Type "world.”
  • a Viewport can have multiple worlds but these worlds are mutually exclusive and cannot interact with each other. Thus, Traits in one world cannot listen to Traits in another world.
  • the world Object Type is primarily used to provide authors with some flexibility in their project design. For example, a Viewport could be designed to present a multiple story building. To prevent objects on one floor from hearing sounds emitted by objects in another floor, one implementation may involve creating a world for each floor. The world's parent is the universe and its children are other Object Types within that world.
  • there are also non-default Traits that the author may elect to associate directly with his objects. These non-default Traits will now be discussed.
  • the Traits listed in Table V are of the non-default variety. Although these Traits are built into the authoring tool, the fact that these Traits are not automatically associated with an author's newly created object makes them non-default. The author must purposely associate any of these Traits with his newly created object in order for the object to acquire the corresponding behaviors. This is in stark contrast to the default Traits (Table IV above) where the mere creation of an object into one of the eight Object Types automatically associates certain Traits with that object.
  • Table V lists the non-default Traits that are implemented in an author's newly created entity only when he affirmatively associates them with his entity, in contrast to default Traits which are automatically associated with his entity upon creation.
  • the table lists the non-default Trait names at the leftmost column.
  • Corresponding columns provide an exemplary list of States that each non- default Trait exposes, the Object Type that each non-default Trait is supported by, and any other Traits that each non-default Trait depends from. Although this table only lists some exemplary States that each Trait exposes, the States that each Trait listens to can vary from one implementation to the next.
  • the CanBeKeyboardControlled Trait provides the user with the ability to control certain objects with the keyboard instead of the mouse.
  • bandwidth limitations may be a concern especially for fast game environments that require rapid changes in States.
  • the author can program his score so that the CanBeKeyboardControlled Trait modifies the IsAnObject Trait.
  • the program may transfer or provide some gameplay control to the keyboard so that the user can continue to play his game at a fast and furious pace while waiting for the mouse-control related Traits to also be synchronized.
  • the CanBeKeyboardControlled Trait listens to these network synchronization Tags (e.g., Published, Audit, Licensed). In most cases, this CanBeKeyboardControlled Trait is provided for all object control functions as a default modifier of the IsAnObject Trait.
  • the CanBeNetworkSynchronized Trait provides the Trait that listens to Tags for the purpose of network synchronization.
  • Tags are essentially parents of States so that some sort of grouping of States can be employed. Whenever a child State changes, the system alerts all listeners of the parent State (the Tag) so that the listener (usually the CanBeNetworkSynchronized Trait) can obtain the new State information, if desired.
  • the Published State is used as a Tag in one embodiment to synchronize the various States across a network. Published is a parent or Tag to the location, orientation, and size States.
  • the system alerts the CanBeNetworkSynchronized Trait because this Trait is a listener of the Published State. Having received the chain of States information, the CanBeNetworkSynchronized Trait can retrieve the changed State information for synchronization across the network. For a more detailed treatment, refer below to the Published Tag description. This process can be repeated for every State that is monitored by the Published Tag for network synchronization.
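The Tag mechanism described above, in which a parent State groups child States and alerts its listeners whenever any child changes, can be sketched as follows. Names and structure are assumptions for illustration, not the patented data structures:

```python
# Illustrative sketch of a Tag as a parent State grouping child States:
# a change to any child alerts the Tag's listeners, which may then fetch
# only the changed value (e.g., for network synchronization).

class Tag:
    def __init__(self, name):
        self.name, self.children, self.listeners = name, {}, []

    def add_child(self, state_name, value=None):
        self.children[state_name] = value

    def set_child(self, state_name, value):
        self.children[state_name] = value
        for listener in self.listeners:
            listener(self.name, state_name)  # chain-of-States notification

published = Tag("Published")
for child in ("Location", "Orientation", "Size"):
    published.add_child(child)

synchronized = []
def can_be_network_synchronized(tag_name, changed_state):
    # The listener retrieves only the changed State for transmission.
    synchronized.append((changed_state, published.children[changed_state]))

published.listeners.append(can_be_network_synchronized)
published.set_child("Location", (1.0, 2.0, 3.0))
print(synchronized)  # [('Location', (1.0, 2.0, 3.0))]
```

Because only the changed child State is fetched, a listener such as the Audit Tag described below need not consume bandwidth on every State change.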
  • the CanEmitSounds Trait provides the object with the ability to play a predetermined sound.
  • the particular sound is found in the State PlayedSound, described below.
  • a lion object has a Trait called CanEmitSounds, which is invoked along with the animation that shows the lion's mouth move.
  • the particular lion's roar sound is found in its PlayedSound State. If the lion's CanEmitSounds Trait listens to its PlayedSound State, which happens to contain the sound file of a kitten, the lion will make a kitten's sound. Because this is a sound emitting Trait, no sensory prerequisites are required of the object, unlike the CanHear Trait.
  • the CanHear Trait provides the ability of an object to hear sounds. This Trait depends on two requirements - the world (or other objects in the world) should provide sounds for the object to hear and the world (or other objects in the world) should provide the particular sounds that the object is capable of hearing. First, in order for the object to hear any sounds, the world in which the object is in must provide those sounds for the object to hear. Second, the sensory capability must be consistent with the ability of the world to provide senses that are compatible with those senses. For example, if the object can only hear sounds in the frequency range of 0 Hz to 3,000 Hz, a world that only emits sounds in the 4,000 Hz to 5,000 Hz range is not providing any sounds that the object can hear. The world is providing sounds but these sounds are beyond the sensory capabilities of the object. This second requirement is described in greater detail in the HearingQuality State discussion below.
  • the CanReceiveTimedEvents Trait provides an object, primitive, world, or universe with the ability to receive timed events, such as WorldTime.
  • the CanReceiveTimedEvents Trait can invoke other Traits to perform some function.
  • the TimeOutPeriod could be set for some time period that is triggered at the last mouse click or mouse movement. If the time out period expires, the system interprets this as an idle State because the user had not used the mouse for some specified time period. Accordingly, some Trait such as RunAnimation that runs some animation for the object can be activated in response to the CanReceiveTimedEvents Trait that is triggered by the time out elapsing.
  • an animation of a clock ticking may appear, or some objects on the Viewport could be pacing back and forth while looking at their respective watches, or the screen may be animated in some other way to indicate the idleness of the program (much like a screen saver) in accordance with some pre-scripted animation.
  • the CanReceiveTimedEvents Trait provides the object with the ability to receive timed events, such as WorldTime, so that some other action may be invoked.
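The TimeOutPeriod example above can be sketched as a timeout check driven by WorldTime updates. This is an illustrative model with assumed names, not the tool's actual Trait implementation:

```python
# Hypothetical sketch of a timeout-driven idle behavior: if no mouse
# activity occurs within TimeOutPeriod, an idle-animation Trait (e.g.,
# RunAnimation) is invoked.

class CanReceiveTimedEvents:
    def __init__(self, timeout_period, on_timeout):
        self.timeout_period = timeout_period
        self.on_timeout = on_timeout
        self.last_activity = 0

    def mouse_activity(self, world_time):
        self.last_activity = world_time   # reset on each click or movement

    def tick(self, world_time):
        # On each WorldTime change, check whether the idle period elapsed.
        if world_time - self.last_activity >= self.timeout_period:
            self.on_timeout()

events = []
trait = CanReceiveTimedEvents(30, lambda: events.append("RunAnimation"))
trait.mouse_activity(world_time=100)
trait.tick(world_time=120)   # only 20 time units idle: nothing happens
trait.tick(world_time=130)   # 30 units idle: idle animation triggered
print(events)  # ['RunAnimation']
```

In the clock-ticking example above, the invoked action would be whatever pre-scripted idle animation the author has attached.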
  • the CanRemember Trait like many of the Traits and States described in this patent specification, provides an object with the ability to remember things.
  • the actual memory itself can be implemented via a simple database table or a sophisticated neural network in the MemoryCell State. Regardless of the implementation, the CanRemember Trait allows an object to learn and remember data that it may have collected some time ago to govern future behavior.
  • the CanSee Trait provides an object with the ability to see other objects in the world. Like the CanHear Trait, this Trait depends on two requirements - the world (or other objects in the world) should provide some light source for the object to see and the world (or other objects in the world) should provide the particular visual stimulation that the object is capable of seeing.
  • the lack of any light may prevent the object from seeing anything even though the object may have the ability to see.
  • the world or other objects in that world must be capable of providing the particular visual stimulation that the object is capable of seeing. For example, if an object cannot see the color red, a world filled with red-colored objects and backgrounds will be invisible to the object despite the fact that it is illuminated with light. This Trait listens to (and exposes) the SeenObjects and SeeingQuality States.
  • the CanSpeak Trait provides an object or primitive with the ability to speak. This Trait relies on the SpeakText, FinishedTalking, and TalkSentence States. Accordingly, an appropriate text-to-speech engine is necessary to fully enable this Trait to function.
  • the CanWalk Trait provides an object with the ability to walk. This Trait is augmented by the physical characteristics of the object. If the object has no legs, it cannot walk. Furthermore, the walking can be implemented in different ways.
  • the CanWalk Trait will invoke other functions that will provide the procedural steps necessary for the object to walk; that is, movements of the knee with respect to the torso and feet are activated to enable the object to progress forward and thus "walk." If an animation is involved, the mechanics of walking become less procedural and more a matter of pre-determined animation, without regard to individually controlling the knees, feet, torso, upper leg, and lower leg.
  • the HasGoals Trait provides a mechanism by which a series of individual prioritized goals can be set up for an object.
  • Each goal is associated with some priority and some behavior action.
  • the priority would be used by the system to resolve potential conflicts.
  • the behavior action is used to ensure that the object with that particular goal is motivated to take steps to achieve that goal. In order to accomplish a goal, these steps (or conditions) must be satisfied first.
  • the author has in effect built in some personality for that object that explains its behavior.
  • These goals are parameterized instances of an instruction set that is part of the HasGoals Trait.
  • These goals may also have one or more listeners which parameterize a CanBeAchieved Trait according to the States to which they listen.
  • Obviously, if the deer continues to eat to satisfy its "must eat periodically" goal, it will be devoured by the lion and it will not satisfy its "must survive" goal. If the deer has detected the presence of the lion, not to mention the fact that the lion is now running toward the deer, the deer has to resolve the potential conflict of the "must survive" and "must eat periodically" goals. The deer has not eaten yet even though it is now the period to eat. However, in light of the lion's chase and the higher priority set for "must survive," the deer will attempt to satisfy the "must survive" goal without now fulfilling the "must eat periodically" goal. The deer now runs for its life.
  • the HasGoals Trait that has this particular goal listens to various States (e.g., "IsAircraftPilot”, “FoundAirport”, “FoundAircraft”) to determine if this goal is achievable or not. If these various States indicate that the achievement thresholds for these States are satisfied, then this goal is deemed achievable. If this goal is deemed achievable, it attempts to accomplish this goal now (e.g., the human object will now fly the aircraft).
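The priority-based goal arbitration described for the deer and lion can be sketched as follows. Goal names, condition States, and structure are illustrative assumptions:

```python
# Hedged sketch of HasGoals-style arbitration: each goal carries a
# priority and a set of condition States; the highest-priority goal
# whose conditions are satisfied is pursued.

def choose_goal(goals):
    """Pick the highest-priority goal whose conditions are all satisfied."""
    achievable = [g for g in goals if all(g["conditions"].values())]
    if not achievable:
        return None
    return max(achievable, key=lambda g: g["priority"])["name"]

goals = [
    {"name": "must survive", "priority": 10,
     "conditions": {"ThreatDetected": True}},
    {"name": "must eat periodically", "priority": 5,
     "conditions": {"FoodNearby": True}},
]
# Lion detected and food nearby: survival wins by priority, so the deer
# runs rather than eats.
print(choose_goal(goals))  # 'must survive'
```

The aircraft example above fits the same shape: a "fly the aircraft" goal would list "IsAircraftPilot", "FoundAirport", and "FoundAircraft" among its conditions.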
  • the HasGravity Trait provides some simulated gravitational force in the world or in an object. If an object is selected as having gravity, the actual gravitational force depends on the mass of the object. The more massive the object, the greater the gravitational force, in accordance with the laws of physics. Thus, all other objects are pulled toward the object with the HasGravity Trait. When a Trait such as CanJump is invoked, an object would jump up but its return to the ground is not governed by the CanJump Trait; rather, its return to the ground after jumping up is governed by the HasGravity Trait.
  • the CanTranslate Trait allows the system to provide an author with a consistent or coherent definition of a State, given that different authors may initialize or define the same State in an inconsistent manner.
  • when a program is authored, many different Traits can define or initialize the same State. Normally, this is not a problem because the same author will take certain precautions to define the same State consistently. However, in cases of carelessness or the intervention of a different author working on the same program, the same State can be defined in an inconsistent manner.
  • the CanTranslate Trait provides an author with the ability to listen on a particular State (any State) and provide the author with a coherent value or range for future use.
  • the translator typically includes a transformation function that enables the integration or combination of different systems.
  • the original authors may have had a particular perspective about State definitions in that system which may not be appropriate when his system is integrated with or combined with another system.
  • the inconsistent States are put in another namespace.
  • the translator then performs the transformation of the relevant States for the first system that is incorporating or integrating another system into the first system.
  • the translator can even listen to a Tag that is attached to multiple States that have the same convention, decreasing the work needed to attach the translator to each State that is implemented in the systems that are using inconsistent State definitions.
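A CanTranslate-style transformation can be sketched as a listener that normalizes an inconsistent State convention into a coherent namespace. The unit example and names below are assumptions for illustration:

```python
# Illustrative sketch of a translator: a listener applies a
# transformation function to an inconsistently defined State and writes
# the coherent value into the integrating system's namespace.

def make_translator(transform, target_namespace):
    def on_state_change(state_name, value):
        # Transformed value is stored under a consistent namespace.
        target_namespace[state_name] = transform(value)
    return on_state_change

coherent = {}
# Assume the incorporated system defines Temperature in Fahrenheit while
# the first system expects Celsius.
f_to_c = make_translator(lambda f: (f - 32) * 5 / 9, coherent)

f_to_c("Temperature", 212.0)
print(coherent)  # {'Temperature': 100.0}
```

Attaching the same translator to a Tag over multiple same-convention States, as described above, would avoid wiring it to each State individually.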
  • HasPhysicalCharacteristics Trait provides an object with physical characteristics, such as mass, density, elasticity, and volume. This Trait depends on the IsAnObject Trait and accordingly, collision detection (if solid) and gravity will affect it.
  • the HasWind Trait provides wind to exist in and interact with the world. It exposes the State WindSpeed.
  • the IsADraggableObject Trait provides the user with the ability to drag a selected object. This Trait exposes the States ShouldDrag and MouseVector. Typically, this Trait depends on the IsAPickableObject Trait and listens for the MouseDown State being TRUE for the selected object. Thereafter, the dragging occurs by following the MouseVector.
  • States can be classified as user-changeable States or system-changeable States, regardless of whether they are internal or external. Thus, both user-changeable States and system-changeable States contain internal and external States.
  • User-changeable States are those States that can be changed by the user (via a Trait or some script) at the user's initiation.
  • System-changeable States are those States that can be changed by the system only. In both cases, various Traits can listen for State changes and receive their actual State values via the system.
  • Table VI lists the user-changeable States that are exposed by any Trait or action script implemented by the author in his Viewport and listened to by any Trait.
  • the table lists the State names at the leftmost column.
  • Corresponding columns provide a list of Object Types that each State is supported by, the reset information (i.e., reset to default values manually or automatically), the data type (e.g., string, boolean, integer, float), and the property list which includes the range of possible values. Note that this table lists some of the many States that could be implemented in the authoring tool in accordance with one embodiment of the present invention.
  • the ActiveBackground State indicates the particular background file (via a Persona
  • the Appearance State indicates the general appearance of an object, primitive, or background. This State is typically associated with the IsABackground Trait. Specifically, Appearance represents the geometry that describes the object. Its reset type is manual and the data type is a string value. Thus, when the author already has an object that looks a certain way, that look can be given a string name. Thus, this look can now be replicated in the background by referencing the string. When the appearance of an object is changed, it should look different. For example, the pseudo-code
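The pseudo-code referenced above is truncated in this text and cannot be recovered; as a purely hypothetical reconstruction, appearance sharing by string reference might look like the following (all names are assumptions):

```python
# Hypothetical sketch of the Appearance State: a string value that
# references another object's named look, so that look can be
# replicated elsewhere (e.g., in a background).

appearances = {"shiny_red_box": {"geometry": "box", "texture": "red_gloss"}}

background = {"Appearance": "shiny_red_box"}   # references the named look

# Resolving the string replicates the referenced object's geometry.
print(appearances[background["Appearance"]]["geometry"])  # box
```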
  • the Audit State is used as a Tag in one embodiment of the present invention. Accordingly, it possesses no internal value so to speak. This State is typically associated with the CanBeNetworkSynchronized Trait. As described more fully above, a Tag represents a parent State to one or more child States. The Tag could also be child State to another Tag parent State. As one of its children States change value, the Tag itself does not change value. Rather, the system detects the change in State value and sends the chain of States information (which is analogous to a pointer) to all of the listeners listed in that changed State's hierarchical data structure. The listeners (i.e., Traits) can then decide to retrieve the actual value of that changed State via the system or do nothing.
  • the Audit State itself can be used to monitor changes in the Published Tag.
  • the Audit Tag can be used in a network gaming environment along with the Published Tag. Player1 and Player2 are located across the network from each other and are playing the same game against each other.
  • while the Published Tag is used to keep track of all changes (e.g., location, orientation, size of the players) in the gaming environment to facilitate network synchronization, the Audit Tag can be used to receive State change notification only when any player moves from one level to the next.
  • the Audit Tag does not require high bandwidth resources to receive notifications of every single State change; rather, only the particular State change information is delivered to the Audit Tag.
  • the Children State lists the name or names of one or more children Persona Objects of a primitive, object, universe, or world.
  • a Persona Object is an object that is capable of having a Trait; that is, the object is Persona-ized.
  • a Persona Object of a dog in a particular world is considered a child of that world where the world is considered the parent.
  • a Persona Object of a bed is considered a parent of a Persona Object of a pillow that is located on top of that bed.
  • the Children State for the bed lists the pillow as one of its Children.
  • a Trait associated with the bed may listen to the Children State for information on its physical hierarchy.
  • the Color State indicates the color of an object, light, background, or primitive via the standard color components of red, green, blue, and alpha (i.e., translucency). This State is typically associated with the IsAnObject Trait. For each of these color components, the data type is float. For objects of Object Type "light,” the Color State indicates the color of the light.
  • the Density State indicates the density (e.g., mass per unit volume) of an object or primitive. This State is typically associated with the IsAnObject Trait. Its data type is float that has a minimum of 0.
  • the Dimensions State indicates the dimensions of an object or primitive. This State is typically associated with the IsAnObject Trait. Its data type is float. The property list for this State includes width, height, and depth.
  • the Elasticity State indicates the solidness or softness of an object or primitive. This State is typically associated with the IsAnObject Trait. Its data type is float. Thus, a solid rock may have a very hard surface that is represented by some elasticity index. A piece of rubber will have a different elasticity index because it can return to its original shape after being forcefully deformed (i.e., stretched, pressed).
  • the Elasticity State has a minimum value of 0.
  • the FarClip State indicates the maximum distance that is viewable by the user with the "camera" Object Type.
  • FarClip represents the maximum distance from the camera within which the system will render (i.e., calculate and image) objects.
  • the system will render only those objects that are within a range of 100 yards from the camera. All other objects beyond this range will not be rendered.
  • backgrounds will be shown but this is because the background is not considered an object which needs 3D resources. Backgrounds are persistently constant in appearance.
  • the system can reserve resources for calculating 3D data only for those objects within the viewing distance of the user.
  • This State is typically associated with the IsACamera Trait.
  • the data type is float and the range has a minimum of 0.
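The FarClip culling described above can be sketched as a distance filter over the scene's objects. This is an illustrative model with assumed names, not the tool's renderer:

```python
# Hedged sketch of FarClip culling: only objects within the FarClip
# distance of the camera are rendered and given 3D calculation
# resources; backgrounds are unaffected since they are not objects.
import math

def objects_to_render(camera_pos, objects, far_clip):
    """Return names of objects within the FarClip distance of the camera."""
    return [name for name, pos in objects.items()
            if math.dist(camera_pos, pos) <= far_clip]

scene = {"tree": (50, 0, 0), "mountain": (500, 0, 0)}
# With FarClip at 100, only the tree is rendered; the mountain becomes
# viewable once camera movement brings it within range.
print(objects_to_render((0, 0, 0), scene, 100.0))  # ['tree']
```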
  • the FieldOfView State indicates the field of view of the camera; that is, how narrow or wide is the angle of view for the user.
  • This State is typically associated with the IsACamera Trait.
  • the State value is float with a range of 0 radians to π radians. In other embodiments, more than π radians is provided to the user as an option.
  • the FinishedTalking State indicates whether the primitive or object is finished talking (or emitting some designated sound). This State is typically associated with the CanSpeak Trait. It is a Boolean value to represent finished talking or not finished talking. Thus, during the talking process, this State would indicate that the object or primitive has not finished talking.
  • the reset type is automatic since the system knows when an object has finished talking or not and changes the State value accordingly.
  • the Gravity State indicates the gravitational force of the object associated with the HasGravity Trait.
  • the magnitude of the gravitational force may depend on the mass of the object as well as this object's interaction with any other nearby objects that also possess gravity. Thus, if a large object with gravity is located near a smaller and less massive object with gravity, another object may be gravitationally "pulled" more toward the large object than the smaller object. Because of the interaction of these two gravity-laden objects, the gravitational force of the smaller and less massive object is less than it would otherwise be had the larger object not been located nearby.
  • This State is typically associated with the IsAnObject and HasGravity Traits.
  • the Gravity State has a data type of float and has a minimum value of 0.
  • the HearingQuality State indicates the quality of the hearing level of the object, primitive, world, or sensor.
  • This State is typically associated with the CanHear Trait. With this State, the author can give some objects better hearing quality than other objects.
  • This State uses float data type. Its property list includes frequency range (i.e., minimum Hertz to maximum Hertz) and hearing distance. Thus, those objects with better hearing capabilities can hear sounds at a wider frequency range than those objects with lesser hearing capabilities, all things being equal (i.e., same volume, same hearing distance). Analogously, those objects with better hearing capabilities would tend to hear sounds coming from a sound source located farther away than those objects with lesser hearing capabilities, all things being equal (i.e., same sound volume, same frequency range).
  • the author may program one object to have a wider hearing frequency range than another object, even though this other object can hear farther. Although one cannot absolutely state that the one object has better hearing quality than the other object, one can say that the hearing qualities of the two objects are different.
  • the author may create a dog object and a human object, where the dog object has a wider hearing frequency range and a greater hearing distance than those of the human object. Their respective hearing qualities will enable the dog object and the human object to respond differently to different sounds and accordingly, the author will be able to accurately simulate a real world environment on his Viewport.
  • the HearSound State indicates whether or not the sound that had been emitted was heard by a given object. This State is typically associated with the CanHear Trait. Its data type is Boolean and the reset type is automatic. If a given object has heard or is hearing the sound, the system will change the State value accordingly.
  • the HearSound State is dependent on the HearingQuality State because an object's hearing quality (frequency range, hearing distance) determine whether the object has heard a sound or not.
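The dependency just described can be sketched as follows. This is an illustrative model only; the class, function, and field names are assumptions, not the patent's actual API.

```python
from dataclasses import dataclass

@dataclass
class HearingQuality:
    min_hz: float            # lowest audible frequency
    max_hz: float            # highest audible frequency
    hearing_distance: float  # maximum distance at which sound can be heard

def hear_sound(quality: HearingQuality, sound_hz: float, distance: float) -> bool:
    """Boolean HearSound value: True only if the sound's frequency falls
    inside the object's audible range AND the source is close enough."""
    in_range = quality.min_hz <= sound_hz <= quality.max_hz
    close_enough = distance <= quality.hearing_distance
    return in_range and close_enough

# A dog object hears higher frequencies and farther than a human object.
dog = HearingQuality(min_hz=40.0, max_hz=60000.0, hearing_distance=200.0)
human = HearingQuality(min_hz=20.0, max_hz=20000.0, hearing_distance=100.0)
whistle = 25000.0  # ultrasonic dog whistle
print(hear_sound(dog, whistle, 150.0))    # True
print(hear_sound(human, whistle, 150.0))  # False
```

The two objects respond differently to the same sound, which is how the author can simulate a real-world environment on the Viewport.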
  • the HorizontalTile State indicates the general pattern-type appearance of an object or primitive.
  • An author may want to use a particular pre-defined tile to "cover" or "decorate" his object or primitive. By indicating the number of this tile to be used, the author will alter the appearance of the object or primitive.
  • the entire object is covered by one tile which is sized to fit the object. If two tiles are used, the same object is covered by two tiles where half of the object will be covered by one tile and the other half of the object will be covered by the other tile. Also, by using two tiles, the size of each tile is smaller (by one-half) than the former one-tile design. Stated generally, the number of tiles specified dictates the size of each tile, given that the object's size remains constant.
  • the more tiles that are specified, the smaller each tile becomes in order to cover the object with the specified number of tiles.
  • the tiling effect is horizontal; that is, if N number of tiles is specified, these N tiles will be laid out horizontally across the surface of the object. See the VerticalTile State for the vertical tile layout.
  • the State's data type is integer and its property list requires a minimum of 1 tile.
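The tile-sizing relationship above can be expressed in a short sketch (the function name and units are illustrative assumptions):

```python
def tile_size(object_width: float, num_tiles: int) -> float:
    # The more tiles specified, the smaller each tile becomes,
    # given that the object's size remains constant.
    if num_tiles < 1:
        raise ValueError("property list requires a minimum of 1 tile")
    return object_width / num_tiles

print(tile_size(10.0, 1))  # 10.0 (one tile sized to fit the object)
print(tile_size(10.0, 2))  # 5.0  (each tile half the one-tile size)
```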
  • the IsLightOn State indicates whether or not the light is on. This State is typically associated with the IsALight Trait. Its data type is Boolean. The reset type is manual, although in other embodiments, the reset type is automatic.
  • the IsPerspectiveCorrected State indicates whether or not perspective correction is enabled for the objects and primitives. Its data type is Boolean. Its primary use is as a quality flag. Because 3D graphics is supported in the authoring tool, perspective projections are also implemented. Although geometric shapes may be altered to fit the perspective projection model, textures on the geometric shapes are not so easily alterable. Thus, when this State indicates that perspective correction is enabled, it is indicating that the textures are not corrupted by the perspective projections and have been appropriately corrected.
  • the IsPickable State indicates whether or not an object or primitive is pickable with the mouse or keyboard. Its data type is Boolean with manual reset. As anyone familiar with computer systems knows, if an object is pickable and is picked (typically with a mouse), that object is normally highlighted for further action by the user (e.g., edit, drag). These other actions or behaviors are controlled by other Traits and action scripts that dictate how that object should behave once it is picked, given that the IsPickable State is TRUE (or some other equivalent binary logic value).
  • the IsSolid State indicates whether an object, primitive, or camera is solid or not for the purpose of the collision detection scheme implemented in the authoring tool. Its data type is Boolean. This State is typically associated with the IsAnObject Trait.
  • the collision detection scheme will detect whether this solid object has collided with another object (usually another solid object).
  • the collision detection scheme is C++ code that exposes itself using several States (e.g., HasCollided, CollidedWith).
  • the collision detection scheme will dictate whether a collision has occurred in light of the IsSolid State. If an object that is solid makes "contact" with an object that is not solid, the collision detection scheme will dictate whether a collision has occurred at all (requiring a response from the objects), or if only one of the objects collided while the other object did not (requiring a response from only one of the objects but not the other).
  • the camera also uses this State in some cases and is therefore subject to collision detection. For example, a first person shooter game places the game player in the gaming environment as the camera; that is, he sees what the camera sees. When he moves about this environment, the camera moves with him showing him what his eyes see.
  • the IsTransparent State indicates whether an object or primitive is transparent or not. This State is typically associated with the IsAnObject Trait. Its data type is Boolean. If the camera can "see through" an object, much like a transparent glass window can be seen through, then it is transparent. Similarly, an opaque glass may not be transparent because the camera cannot "see through" it. For a discussion on the distinctions among the IsSolid, IsTransparent, and IsVisible States, see below in the IsVisible State description.
  • the IsVisible State indicates whether or not an object or primitive is visible within the field of view of the camera. This State is typically associated with the IsAnObject Trait. Its data type is Boolean. If an object is in the field of view of the camera and not obstructed by any other object, it is considered to be visible. If an object is partially obstructed by another object and is in the field of view of the camera, it considered to be visible. If an object is completely obstructed by another object but is otherwise in the field of view of the camera, this obstructed object is not considered to be visible. If an object is not in the field of view of the camera, regardless of whether or not it is obstructed by another object, it is not considered to be visible.
  • the IsTransparent, IsVisible, and IsSolid States should be distinguished from each other. If the camera can "see through" an object, much like a transparent glass window can be seen through, then it is transparent. This object, whether transparent or not, can be a solid for the purposes of collision detection. Thus, a transparent glass window can be transparent and solid because any other solid object can collide with the glass window. An object of a ghost or apparition, however, can be transparent but not be solid because the author may want other objects to go through this ghost when they are co-located. Similarly, a ghost that shows its form may not be transparent because the camera cannot "see through" it.
  • the IsVisible State is only relevant to the field of view of the camera. An object may be on the Viewport but it may not be visible because of some obstruction that hides its presence. A ghost that is transparent can also be visible if it is on the Viewport, in the field of view of the camera, and not obstructed by any other object.
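The distinctions among the three States can be captured as three independent Boolean flags. The sketch below is illustrative; the class and function names are assumptions, not the patent's actual data structures.

```python
from dataclasses import dataclass

@dataclass
class RenderFlags:
    is_solid: bool        # participates in the collision detection scheme
    is_transparent: bool  # camera can "see through" it
    is_visible: bool      # in the camera's field of view and not fully obstructed

# A transparent glass window: solid (things collide with it) yet see-through.
window = RenderFlags(is_solid=True, is_transparent=True, is_visible=True)
# A ghost: transparent and non-solid, so other objects pass through it.
ghost = RenderFlags(is_solid=False, is_transparent=True, is_visible=True)

def collides(a: RenderFlags, b: RenderFlags) -> bool:
    # Only solid entities are subject to collision detection.
    return a.is_solid and b.is_solid

print(collides(window, ghost))  # False: the ghost passes through the window
```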
  • the Licensed State is used as a Tag in one embodiment of the present invention. Accordingly, it possesses no internal value so to speak. This State is typically associated with the CanBeNetworkSynchronized Trait. Refer above to the description on the Audit State as well as the general Tag discussion.
  • the Licensed State itself can be used to enable users to have access to certain behaviors or features of an object.
  • a race car driving simulation game is provided on an Internet gaming site.
  • This racing site has 10 different tracks and 10 different cars.
  • each track features different terrain, scenery, and driving difficulty levels.
  • the game player can access only the first 3 tracks and the first 3 cars.
  • the Internet game site unlocks some of the remaining tracks and cars.
  • These additional tracks are analogous to bonus tracks and cars.
  • the Licensed Tag is a Tag or parent State for any number of States, such as HasPaidForBonusTrack and FinalRaceResults.
  • the CanRaceBonusTrack Trait listens to changes in these two States via the Licensed Tag to keep track of certain performance criteria (e.g., the final race results) and whether some payment has been provided. If payment has been made already for the bonus cars and tracks (via a CanPay Trait which exposes a State called HasPaidForBonusTrack) and the performance criteria have been met (e.g., finishing in the top 3 in each track), the system notifies all listeners of the Licensed Tag of the achieved performance goals. A Trait such as CanRaceBonusTrack listens to the Licensed State to invoke some other action that allows the game player to race on that bonus track across the network, if desired.
  • the Licensed Tag can also be used to license reusable objects, such as movie or game characters, across the network. If a user pays for the right to use that object such that a CanPay Trait changes a State called HasPaid, the Licensed Tag can be used and monitored by a Trait called CanLicense across a network to deliver the licensed object to the user.
  • a basketball game site allows any game player to play NBA basketball on its site with any team or combination of known NBA basketball player objects or newly created basketball objects.
  • a particular game player wants to use Michael Jordan in his team to play a game against another team.
  • the CanPay Trait changes the State HasPaid to some value indicating that payment has been made for this object.
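The Licensed Tag pattern described above can be sketched as a parent Tag grouping child States, with a listening Trait that unlocks content only when both payment and performance criteria are satisfied. All class, State, and function names below are illustrative assumptions.

```python
class LicensedTag:
    """Parent Tag for child States; notifies listeners on any change."""
    def __init__(self):
        self.states = {"HasPaidForBonusTrack": False, "FinalRaceResults": []}
        self.listeners = []

    def set_state(self, name, value):
        self.states[name] = value
        for listener in self.listeners:  # notify all listeners of the Tag
            listener(self.states)

def can_race_bonus_track(states) -> bool:
    paid = states["HasPaidForBonusTrack"]
    # performance goal: finishing in the top 3 in each track raced
    goals_met = bool(states["FinalRaceResults"]) and all(
        place <= 3 for place in states["FinalRaceResults"])
    return paid and goals_met

tag = LicensedTag()
unlocked = []
tag.listeners.append(lambda s: unlocked.append(can_race_bonus_track(s)))
tag.set_state("FinalRaceResults", [1, 2, 3])  # goals met, not yet paid
tag.set_state("HasPaidForBonusTrack", True)   # payment made -> unlocked
print(unlocked[-1])  # True
```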
  • the LightInnerAngles State indicates the angle of the inner light for spot lights. This State is typically associated with the IsALight Trait. When a spot light is illuminated against a wall, a couple of concentric circles appear: an outer circle and an inner circle. Most of the light's intensity is centered in the inner circle. The area between the perimeter of the inner circle and the perimeter of the outer circle also has some light, but at a much lower intensity.
  • the LightInnerAngles State represents the inner angle of the light projected from the spot light source.
  • the data type is float with a range from 0 radian to 2π radians. Thus, if the angle is set at 0 radian, the light comes out of the spot light source relatively parallel and the size of the inner circle is relatively small.
  • the LightOuterAngles State indicates the angle of the outer light for spot lights. This State is typically associated with the IsALight Trait. As described above for the LightInnerAngles State, when a spot light is illuminated against a wall, a couple of concentric circles appear: an outer circle and an inner circle. This State is applicable to the outer circle, where the lower intensity light appears.
  • the LightOuterAngles State represents the outer angle of the light projected from the spot light source.
  • the data type is float with a range from 0 radian to 2π radians.
  • if the angle is set at 0 radian, the light comes out of the spot light source relatively parallel and the size of the outer circle is relatively small and no different from the inner circle. If the angle is set at π/4 radians, the outer circle is larger and the outer light comes out of the spot light source at an angle of 45 degrees, or π/4 radians.
  • the inner circle, as defined by the LightInnerAngles State, should be smaller than the outer circle.
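The inner/outer-angle relationship can be illustrated with a small intensity function. The linear falloff between the two perimeters is a common convention assumed here for illustration; the patent does not specify the falloff curve.

```python
import math

def spot_intensity(angle_from_axis: float, inner: float, outer: float) -> float:
    """Full intensity inside the inner cone, zero outside the outer cone,
    and a linearly decreasing intensity between the two perimeters."""
    assert inner <= outer, "the inner circle should be smaller than the outer"
    if angle_from_axis <= inner / 2:
        return 1.0   # inside the inner circle: most of the light's intensity
    if angle_from_axis >= outer / 2:
        return 0.0   # outside the outer circle: no light
    # between the inner and outer perimeters: much lower intensity
    return (outer / 2 - angle_from_axis) / (outer / 2 - inner / 2)

print(spot_intensity(0.0, math.pi / 8, math.pi / 4))          # 1.0
print(spot_intensity(math.pi / 2, math.pi / 8, math.pi / 4))  # 0.0
```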
  • the LightRange State indicates the maximum distance that the light travels; that is, how far out does this light shine. This State is typically associated with the IsALight Trait. The data type is float with a minimum value of 0. Thus, if this State indicates 100 yards, the light will only shine 100 yards from the light source. Any object beyond this distance is not illuminated at all by this light source.
  • the LightType State indicates the type of the light source. This State is typically associated with the IsALight Trait.
  • the data type is an integer enumerator where an integer value represents the particular light source type.
  • the various light types include unknown, point light, spot light, directional light, parallel point light, and ambient light.
  • the "unknown" type is merely a default setting until the proper light type is selected. Thus, for all implementations, a light type of "unknown" has no meaning. The author should eventually set it to one of the known light types. A discussion of these different types of light will now be provided.
  • a point light provides a light source that illuminates anything that is within a specified spherical radius from that point light.
  • an ambient light illuminates everything in the world without any range limitations.
  • the point light could be used to illuminate a room in a world by limiting its range to a sphere about the room, allowing for some light to spillover out of a door or window of that room.
  • a spot light is like a flashlight which emits a directional light of a specified diameter (cross-section of the light at the source) and a specified angle of emission.
  • the diameter allows the author to specify the magnitude of his spot light (e.g., pocket flashlight v. car headlights v. spotlight at a movie premier).
  • the angle of emission is measured from the center of the light's cross-section for all points around the perimeter of the light's cross-section. The angle of emission allows the author to specify how large the illuminated area on a surface will be given a constant distance from that surface.
  • the illuminated area from a spot light having an emission angle of 0 radian is smaller than the illuminated area from a spot light having an emission angle of π/4 radians, given that the distances to the surface are the same.
  • a directional light emits parallel light (angle of emission is 0 radian) having a specified diameter for the light's cross-section.
  • a parallel point light emits a directional parallel beam of light having minimal cross-sectional diameter.
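The light types above map naturally onto an integer enumeration. The particular integer values chosen below are assumptions for illustration; only the set of type names comes from the text.

```python
from enum import IntEnum

class LightType(IntEnum):
    UNKNOWN = 0          # default setting until the author selects a real type
    POINT = 1            # illuminates within a specified spherical radius
    SPOT = 2             # directional light with a diameter and emission angle
    DIRECTIONAL = 3      # parallel light (emission angle of 0 radian)
    PARALLEL_POINT = 4   # parallel beam with minimal cross-sectional diameter
    AMBIENT = 5          # illuminates everything, no range limitation

light = LightType.SPOT
print(int(light), light.name)  # 2 SPOT
```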
  • the Location State indicates the present location of the object, primitive, light, or camera. This State is typically associated with the IsAnObject Trait.
  • the data type is float for each of the x, y, and z coordinates.
  • Another property element of the property list beyond the float values for x, y, z position coordinates is the Persona Object reference.
  • to set the location of an object relative to another object, a reference to a Persona Object can be used.
  • the location of an object relative to a referenced Persona Object is determined by calculating x, y, and z units in 3D space from this Persona Object. Note that the reference to a Persona Object does not necessarily create a parent-child relationship automatically; the author would have to explicitly do this to form that relationship.
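Resolving a Location State that references a Persona Object amounts to adding the stored x, y, z units to the referenced object's position. The sketch below is illustrative; the function name and tuple representation are assumptions.

```python
def resolve_location(offset, persona_location=None):
    """Return absolute coordinates: either the stored x, y, z values, or
    those values applied as an offset from a referenced Persona Object."""
    if persona_location is None:
        return offset  # no Persona Object reference: absolute coordinates
    return tuple(o + p for o, p in zip(offset, persona_location))

table = (10.0, 0.0, 5.0)                         # absolute location
lamp = resolve_location((0.0, 1.0, 0.0), table)  # 1 unit above the table
print(lamp)  # (10.0, 1.0, 5.0)
```

Note that, as the text states, this reference does not by itself create a parent-child relationship.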
  • the Mass State indicates the mass (e.g., in grams) of the primitive or object. This State is typically associated with the IsAnObject and HasGravity Traits. Its data type is float with a minimum value of 0.
  • the MemoryCell State indicates the kind of memory and the memory itself of the primitive, object, world, universe, sensor, camera, or light. This State is typically associated with the CanRemember Trait. Thus, if an object has a CanRemember Trait, it keeps data in the MemoryCell State.
  • the data type and property list vary depending on the specific implementation.
  • the kind of memory can vary from a simple database to a sophisticated neural network.
  • the memory itself comprises the values in the table or, in the case of a neural network, the network itself of processors and local memories.
  • a neural network is a network of many processor units, each possibly having a small amount of local memory. The processor units are connected together via communication channels which usually carry encoded numeric data.
  • the processor units operate only on their local data and on the inputs they receive via the channels.
  • Most neural networks have some sort of "training” rule where the weights of the channels are adjusted on the basis of the input data; that is, neural networks “learn” from examples and exhibit some capability for generalization beyond the training data.
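The "training rule" just described, where channel weights are adjusted on the basis of input data, can be illustrated with a toy single-unit sketch. This is a generic delta-rule example, not the patent's memory implementation.

```python
def train_step(weights, inputs, target, learning_rate=0.1):
    """Adjust each channel weight on the basis of the input example."""
    output = sum(w * x for w, x in zip(weights, inputs))
    error = target - output
    return [w + learning_rate * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
for _ in range(50):  # "learn" by repeatedly seeing one example
    weights = train_step(weights, [1.0, 2.0], target=1.0)

# The unit's output converges toward the target value.
print(round(sum(w * x for w, x in zip(weights, [1.0, 2.0])), 2))  # 1.0
```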
  • the Orientation State indicates the orientation of the light, primitive, object, or camera. This State is typically associated with the IsAnObject Trait. Its data type is float for each of yaw, roll, and pitch. Another property element of the property list beyond the float values for the yaw, roll, and pitch is the Persona Object reference. To set the orientation of an object relative to another object, a reference to a Persona Object can be used. Thus, the orientation of an object relative to a referenced Persona Object is determined by calculating yaw, roll, and pitch units in 3D space from this Persona Object.
  • the Parent State indicates the Persona Object name of the parent of the object, primitive, camera, or light.
  • For example, if a pillow object (the child) rests on a bed object (the parent), the pillow object will list the bed object as a parent in its data structure.
  • a Trait associated with the pillow may listen to the Parent State for information on its physical hierarchy.
  • the Pivot State indicates the pivot point, or the point of rotation, of the object, primitive, light, or camera. This State is typically associated with the IsAnObject Trait. Its data type is float for each of yaw, roll, and pitch. Additionally, the Pivot State can also reference a Persona Object. Thus, the Pivot of an object relative to a referenced Persona Object is determined by calculating yaw, roll, and pitch units in 3D space from this Persona Object.
  • the PlayedSound State indicates the particular sound that is to be played upon invocation by the object or primitive (via a Trait). This State is typically associated with the CanEmitSounds Trait. Its property list includes the sound itself (whether mono or 3D) and the volume.
  • an object emits certain unique sounds when invoked.
  • an object of a car can emit one of several different sounds including the engine starting sound, the general whirl of the engine, a high pitched screech due to depression of the brakes, the gear-shifting sound of the transmission system, and a crash sound when it crashes against another object.
  • the car object may elect to emit one of these sounds in response to some event on the Viewport.
  • the car object makes the engine starting sound. Thereafter, the general whirl of the engine can be heard.
  • Three dimensional sounds can also be associated with the PlayedSound State. Of course, these 3D sounds are associated with location and orientation of both the listener and sound emitter which can provoke other dynamic responses.
  • the different sounds are associated with different versions of the PlayedSound State; that is, the engine object in the car object may be associated with the engine whirl sound in its PlayedSound State.
  • the brake pad of the car object may be associated with the screeching sound in its own PlayedSound State.
  • the Published State is used as a Tag in one embodiment of the present invention. Accordingly, it possesses no internal value so to speak.
  • the Published Tag is used to synchronize the various States across a network. For example, assume that two game players are playing a first person shooter game on an Internet gaming site. In order for each game player to succeed, he must know his (and his opponent's) location and orientation, among other States. After all, in this first person shooter game, each game player is trying to destroy the other game player in some game environment (e.g., a labyrinthian dungeon). Over a network, the Internet game site must keep track of all crucial State changes so that the game players can be synchronized with each other and the game site.
  • the other game player must also know this fact.
  • the system alerts all listeners to this Published Tag, which is the CanBeNetworkSynchronized Trait in this example.
  • the listener can then request that the new location value be written to some variable or memory location, which the listener can read to get the new location information.
  • the CanBeNetworkSynchronized Trait can retrieve this new location information of the other game player and may include in its script a call to update the other game player's location. Thus, the two game players are synchronized. This process can be repeated for every State that is monitored by the Published Tag for network synchronization.
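The publish-and-listen cycle described above can be sketched as follows. The class and function names are illustrative assumptions; a real system would send the change over the network rather than write to a local dictionary.

```python
class PublishedTag:
    """Crucial States are published under this Tag; every change is
    pushed to all registered listeners."""
    def __init__(self):
        self.listeners = []

    def publish(self, state_name, value):
        for listener in self.listeners:
            listener(state_name, value)

remote_view = {}  # the opponent's copy of this player's crucial States

def can_be_network_synchronized(state_name, value):
    # Stand-in for the CanBeNetworkSynchronized Trait: mirror the new
    # State value into the remote player's view.
    remote_view[state_name] = value

tag = PublishedTag()
tag.listeners.append(can_be_network_synchronized)
tag.publish("Location", (3.0, 0.0, 7.0))
print(remote_view["Location"])  # (3.0, 0.0, 7.0)
```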
  • the SeeingQuality State indicates the quality of the seeing level of an object or primitive.
  • This State is typically associated with the CanSee Trait.
  • the data type is float and the property list includes a minimum seeing distance of 0. With this State, the author can give some objects better seeing quality than other objects. Thus, an object with a particular seeing ability can see things located farther away than an object with lesser seeing ability.
  • the SeenObjects State indicates whether or not an object or primitive has seen the designated thing. This State is typically associated with the CanSee Trait. Its data type is Boolean.
  • the ShadingMode State indicates the shading of an object or primitive. Its data type is an integer. The integer value is an index to the type of shading employed: minimum, no lights, flat, Gouraud, and maximum.
  • the ShouldDrag State indicates whether or not an object or primitive can be dragged. This State is typically associated with the IsADraggableObject Trait. Its data type is Boolean. This State relies on other State values for it to be TRUE. For example, the ShouldDrag State is set to TRUE only if an object is pickable and mouse-clicked. However, other implementations may require other conditions before changing the ShouldDrag State.
  • the SpeakText State is similar to the PlayedSound State except that instead of emitting a sound, this State indicates the text that the object or primitive should speak. Its data type is string. This State is typically associated with the CanSpeak Trait.
  • this State depends on the system being able to bind to a text-to-speech engine so that any given text string can be converted into audible speech.
  • the range of values is limited only by its specific implementation. For example, if WAV files are used, the range of its values is only limited to the WAV file format.
  • the TalkSentence State indicates the text sentence that is spoken. This State is typically associated with the CanSpeak Trait. Its data type is a string. While the SpeakText State indicates the text that should be spoken, the TalkSentence State actually invokes the object or primitive to speak a certain sentence from the SpeakText State via the CanSpeak Trait or some other speaking script.
  • the TimeOutHappened State indicates whether or not a time-out event occurred.
  • This State is typically associated with the CanReceiveTimedEvents Trait. Its data type is Boolean. Thus, if a time-out timer has expired (as provided in the TimeOutPeriod), this State notifies (via the system) all listener Traits. So, if 4 minutes is the time-out period and over 4 minutes have gone by, the system changes the value of the TimeOutHappened State to TRUE to alert all listeners of this timed-out event.
  • the TimeOutPeriod State indicates the time-out period. This State is typically associated with the CanReceiveTimedEvents Trait. Its data type is float. Any time-out period can be specified and if this period expires, the system changes the value of the TimeOutHappened State to TRUE to alert all listeners of this timed-out event.
  • the State requires a minimum of 0 and this State is manually reset.
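The TimeOutPeriod/TimeOutHappened pair can be sketched as a minimal timer model. The class and field names are illustrative; a real implementation would be event-driven rather than polled.

```python
class TimedEvents:
    """Once elapsed time exceeds the period, the system flips
    TimeOutHappened to True so that all listener Traits are alerted."""
    def __init__(self, timeout_period: float):
        self.timeout_period = timeout_period  # TimeOutPeriod State (float)
        self.timeout_happened = False         # TimeOutHappened State (Boolean)

    def tick(self, elapsed: float):
        if elapsed > self.timeout_period:
            self.timeout_happened = True      # alert all listeners

timer = TimedEvents(timeout_period=4 * 60)    # 4-minute time-out period
timer.tick(elapsed=3 * 60)
print(timer.timeout_happened)  # False
timer.tick(elapsed=5 * 60)     # over 4 minutes have gone by
print(timer.timeout_happened)  # True
```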
  • the TriggerProximity State indicates the distance of any object from the sensor. This State is typically associated with the IsASensor Trait. Its data type is float. If a designated object is within a certain distance from the point sensor, the TriggerProximity State indicates that distance. Based on this distance, other Traits such as CanBeep or CanChangeColor which listen to TriggerProximity can invoke other actions such as beeping or changing the object's color to red if a certain distance threshold is crossed. In other applications, sensors can be used as a reference point to indicate how far or close an object is to that sensor. In these applications, the sensor need not activate any other action; the sensor merely exists to alert other objects of their proximity.
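The sensor-and-listener behavior just described can be sketched in a few lines. The threshold value and function names are illustrative assumptions.

```python
import math

def trigger_proximity(sensor_pos, object_pos):
    """TriggerProximity State: distance of the designated object from the sensor."""
    return math.dist(sensor_pos, object_pos)

def can_change_color(distance, threshold=5.0):
    # Listener Trait: change the object's color to red once it comes
    # within the distance threshold.
    return "red" if distance <= threshold else "normal"

d = trigger_proximity((0.0, 0.0, 0.0), (3.0, 0.0, 4.0))
print(d)                    # 5.0
print(can_change_color(d))  # red
```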
  • the VerticalTile State indicates the general pattern-type appearance of an object or primitive.
  • An author may want to use a particular pre-defined tile to "cover" or "decorate" his object or primitive. By indicating the number of this tile to be used, the author will alter the appearance of the object or primitive.
  • the entire object is covered by one tile which is sized to fit the object. If two tiles are used, the same object is covered by two tiles where half of the object will be covered by one tile and the other half of the object will be covered by the other tile. Also, by using two tiles, the size of each tile is smaller (by one-half) than the former one-tile design. Stated generally, the number of tiles specified dictates the size of each tile, given that the object's size remains constant.
  • the more tiles that are specified, the smaller each tile becomes in order to cover the object with the specified number of tiles.
  • the tiling effect is vertical; that is, if N number of tiles is specified, these N tiles will be laid out vertically across the surface of the object. See the HorizontalTile State for the horizontal tile layout.
  • the State's data type is integer and its property list requires a minimum of 1 tile.
  • the WalkComplete State indicates whether or not an object or primitive has completed its walk. This State is typically associated with the CanWalk Trait. Its data type is Boolean.
  • the WalkDestination State indicates the object's or primitive's walk destination by specifying the location via the x, y, z coordinates (float data type) or by referring to a Persona Object. This State is typically associated with the CanWalk Trait.
  • the WalkDirection State indicates the angular direction of the object's or primitive's walk. This State is typically associated with the CanWalk Trait. Its data type is float for the range of 0 radian to 2π radians. This State can also reference a Persona Object instead of an angular direction; that is, by referring to a specific Persona Object as the walk direction, the angle can be deduced.
  • the WindSpeed State indicates the speed of the wind so that its effects can be compensated for and perhaps responded to by the object or primitive that is making contact with the wind.
  • This State is typically associated with the HasWind Trait. Its data type is float. For example, an object running against the wind may run slower than normal if the wind speed is high enough to significantly impede its progress. To elaborate further, an object may be running via the CanRun Trait and modifying its RunSpeed State to 5 MPH. Because the wind speed as indicated in the WindSpeed State is large enough to significantly impede the running object, the HasWind Trait may modify the RunSpeed State of the running object to 4.8 MPH. So, when the CanRun Trait reads its RunSpeed State again, it is no longer 5 MPH but rather 4.8 MPH. Based on this State value, the running object may elect to run faster to compensate for its wind-impeded slowdown.
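The WindSpeed/RunSpeed interplay above can be sketched as one Trait writing a value back into a State that another Trait reads. The slowdown formula and threshold are illustrative assumptions, chosen to reproduce the 5 MPH to 4.8 MPH example.

```python
class Runner:
    def __init__(self, run_speed: float):
        self.run_speed = run_speed  # RunSpeed State, in MPH

def has_wind_modify(runner: Runner, wind_speed: float):
    # Stand-in for the HasWind Trait: if the wind is strong enough to
    # significantly impede the runner, reduce the RunSpeed State
    # (here: a 4% slowdown past an assumed 15 MPH threshold).
    if wind_speed > 15.0:
        runner.run_speed = round(runner.run_speed * 0.96, 2)

runner = Runner(run_speed=5.0)  # CanRun Trait set RunSpeed to 5 MPH
has_wind_modify(runner, wind_speed=20.0)
print(runner.run_speed)  # 4.8 -- CanRun now reads the wind-impeded speed
```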
  • system-changeable States are those States that are changeable by the system only.
  • the system-changeable States will now be discussed with reference to Table VII below:
  • Table VII lists the system-changeable States that are exposed by any Trait or action script implemented by the author in his Viewport and listened to by any Trait.
  • the table lists the State names at the leftmost column.
  • Corresponding columns provide a list of Object Types that each State is supported by, the reset information (i.e., reset to default values manually or automatically?), the data type (e.g., string, boolean, integer, float), and the property list which includes the range of possible values. Note that this table lists some of the many States that could be implemented in the authoring tool in accordance with one embodiment of the present invention.
  • the CollidedWith State indicates the Persona Object that the primitive, object, light, or camera has collided with for the purpose of the collision detection scheme.
  • This State is typically associated with the IsAnObject Trait. This information is important for all the solid objects that are subject to the collision detection scheme because each object needs to know what it collided with in order to react or respond appropriately. For example, assume several billiard balls are on a table. If ball1 collides with ball2, ball2 will bounce in the opposite direction from where ball1 came. Ball2 will not know which direction to bounce toward or how fast to bounce without knowing which ball collided with it. By knowing ball1's identity, ball2 can determine ball1's velocity (speed and direction) so that ball2 can then calculate the bounce direction and speed.
  • the HasCollided State indicates whether or not an object, primitive, light, or camera has collided with any other entity on that Viewport. This State is typically associated with the IsAnObject Trait. Its data type is Boolean with automatic reset. In one embodiment, the collision detection scheme uses a bounding cube around the object. In other embodiments, the collision detection scheme uses a more complex bounding shape to more perfectly encompass the object.
  • the HasObstructed State indicates whether or not the object, light, primitive, or camera is obstructing another entity on that Viewport from the field of view of the camera. If an object is obstructing some other entity, this State is TRUE for the obstructing object.
  • This State is typically associated with the IsAnObject Trait. Its data type is Boolean with automatic reset.
  • the IsValid State indicates whether or not an object, primitive, light, camera, world, background, universe, or sensor is valid. This State is particularly applicable to those objects that have just been created and are still being edited. If an object is not ready to interact with other objects on the Viewport (i.e., it is not fully created yet), its IsValid State is FALSE. Accordingly, no other object can modify parameters (State values) associated with this newly created object. Similarly, this newly created object cannot change the parameters of other objects. If the object is fully created and ready to interact with other objects on the Viewport, the IsValid State for this object is TRUE. Its data type is Boolean with manual reset.
  • the MediaReady State indicates whether or not certain attributes of the object, primitive, or background that is being downloaded over a network (e.g., the Internet) are ready for display and interactivity purposes. If ready, these attributes of the object will be shown for that object. Thus, every aspect of the object need not be completely downloaded for the user to see it; rather, the user can see the object step by step as it is being "constructed" so to speak. These attributes include geometry, materials, and texture. The data type is Boolean for each of these attributes. As an object is being downloaded over the network, the MediaReady State may indicate that the geometry of the object is ready but not the texture or materials. Thus, the computer system will show the skeletal geometry of the object.
  • some interactive elements may also be ready as a result of the geometry attribute being ready, such as being pickable and draggable.
  • the material attribute may be ready and the MediaReady State for the materials attribute will be TRUE. The materials attribute will then cover the geometry skeleton of the object, which was previously downloaded and displayed.
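A minimal sketch of the per-attribute MediaReady bookkeeping described above (the class and method names are illustrative assumptions, not the patent's implementation): each attribute carries its own Boolean flag, and the object becomes displayable as soon as its geometry flag is TRUE, even while materials and texture are still downloading.

```python
class MediaReady:
    """Per-attribute readiness flags for an object downloading over a
    network (illustrative sketch; names are assumptions)."""
    ATTRIBUTES = ("geometry", "materials", "texture")

    def __init__(self):
        self.ready = {attr: False for attr in self.ATTRIBUTES}

    def mark_ready(self, attr):
        self.ready[attr] = True

    def displayable(self):
        # The skeletal geometry can be shown as soon as it arrives,
        # before the materials and texture finish downloading.
        return self.ready["geometry"]

state = MediaReady()
assert not state.displayable()
state.mark_ready("geometry")       # geometry downloaded first
assert state.displayable()         # skeleton shown...
assert not state.ready["texture"]  # ...while the texture is still pending
```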
  • the MouseClicked State indicates whether or not the mouse was clicked on an object, primitive, or background.
  • the data type is Boolean with automatic reset. Based on this mouse click, other actions may follow, such as highlighting the object and dragging it if the mouse button was not released. Of course, these further actions (so to speak) are part of those Traits that are listening to the MouseClicked State. After all, the IsADraggableObject Trait, for example, cannot drag the object unless the mouse has clicked and is held down on the object itself.
  • the MouseVector State indicates the direction and magnitude of the mouse movement.
  • the direction component provides mouse position information.
  • a Trait such as IsADraggableObject relies on the ShouldDrag State and the direction component of the MouseVector State, along with the mouse-down event, to enable the user to drag the selected object from one position to another.
  • the magnitude component provides speed information so that the rate of change of the mouse position with respect to time is translated to speed, which can be used by various Traits.
  • a Trait such as CanAccelerate with respect to a race car game can give speed control to the user based on how rapidly the user moves his mouse. If the user moves his mouse slowly, the race car moves slowly. If the user moves his mouse quickly, the race car moves quickly.
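The direction/magnitude split of the MouseVector State can be sketched from two sampled mouse positions (the function name and the sampling scheme are assumptions for illustration): the displacement gives the direction component, and the displacement over elapsed time gives the speed that a Trait such as CanAccelerate could consume.

```python
import math

def mouse_vector(prev_pos, cur_pos, dt):
    """Compute a MouseVector-style (direction, magnitude) pair from two
    mouse positions sampled dt seconds apart."""
    dx, dy = cur_pos[0] - prev_pos[0], cur_pos[1] - prev_pos[1]
    distance = math.hypot(dx, dy)
    speed = distance / dt if dt > 0 else 0.0   # rate of change -> speed
    direction = (dx / distance, dy / distance) if distance else (0.0, 0.0)
    return direction, speed

# a slow mouse (30 pixels in half a second) yields a modest speed,
# so the race car would move slowly
direction, speed = mouse_vector((0, 0), (30, 0), 0.5)
assert direction == (1.0, 0.0) and speed == 60.0
```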
  • the WorldTime State indicates the universal time for all worlds. In some embodiments, different worlds in the universe can have their own distinct WorldTime States. This State is typically associated with the IsAUniverse Trait. Its data type is integer with manual reset. The time reference can be obtained internally from the computer system's clock or the authoring tool's frame time in the score. Thus, whenever time changes, the system updates the WorldTime State for all the worlds in the universe to access.
  • the authoring tool in accordance with one embodiment of the present invention uses the concepts of Traits and States to allow the author to program his project with the benefits of selective reusability of objects and parameters. Additionally, the concept of Actions is also employed, where an Action is any script that does not define, initialize, expose, or listen to a State. Simply put, an Action merely modifies or changes a State value. In Macromedia's Director, these concepts of Traits and States are not recognized directly. So, Macromedia's Director treats Traits and Actions as Behaviors and States as properties.
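The Trait/State/Action division described above can be sketched as a simple observer pattern. This is a minimal illustration under assumed names (`State`, `Trait`, and `change_color_action` are not from the patent): a State notifies its Listeners when its value changes, a Trait registers Listeners on States, and an Action does nothing but modify a State value.

```python
class State:
    """Holds a value and notifies registered Listeners on change."""
    def __init__(self, name, value=None):
        self.name, self.value, self.listeners = name, value, []

    def set(self, value):
        self.value = value
        for callback in self.listeners:   # notify listening Traits
            callback(value)

class Trait:
    """A Trait defines States and registers Listeners on States."""
    def listen(self, state, callback):
        state.listeners.append(callback)

def change_color_action(state):
    # An Action merely changes a State value; it defines/listens to nothing
    state.set("blue")

color = State("surfaceColor", "red")
seen = []
Trait().listen(color, seen.append)
change_color_action(color)
assert color.value == "blue" and seen == ["blue"]
```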
  • the score in Director is used to place the cast member itself (the object) as well as the behavior.
  • a cast member such as a ball may be scored for frame periods 1-100. This frame period represents the ball's lifespan on the stage. If the author wants some behavior associated with this ball, he will designate another channel in the score to instantiate this behavior. For example, the author may want the ball to turn blue during frame periods 50-60 while retaining its original color for the other frame periods during its lifespan. In contrast, the actual object itself (e.g., the ball) does not appear in the score in 3D dreams according to one embodiment of the present invention.
  • the Action does show up on the score in the form of a Persona (i.e., a "container” for Traits and Actions) so that the author can exhibit the behavior at any desired time.
  • the score would contain the Persona, which in turn contains the Action of Change Color to Blue, associated with the ball object.
  • the author can dictate when and how long this particular Action will occur for the ball object in the Viewport.
  • the author can now adjust the Persona in the score to appear only during frame periods 50-60.
  • 3D Texture Animation Action The texture of the Persona Object may exhibit some animation.
  • the Action requires the author to select the "from” texture and a "to” texture.
  • the system includes an animation handler in the script that takes the bit map of the "from” texture to gradually change to the bit map of the "to” texture.
  • the walking animation can be selected from a number of pre-animated bit map files that change over time.
  • Animation Action allows an object to exhibit some animation of its surface.
  • the Action requires the author to select the "from” object and a "to” object.
  • the system includes an animation handler in the script that takes the bit map of the "from” object to gradually change to the bit map of the "to” object.
  • This Action allows an object to exhibit some animation of its surface color.
  • this Action allows the author to make his designated object randomly change its surface color at a particular rate.
  • the Action requires the author to select the rate at which the color changes randomly.
  • Fade-In This Action allows an object to exhibit some animation of the designated object fading into view from a non-visible state to a visible state at a particular rate.
  • the Action requires the author to select the rate at which the object fades into view.
  • Interpolate Color This Action allows an object to exhibit some animation of its surface color. In particular, this Action allows the author to make his designated object change its surface color at a particular rate. The author selects a "from” color and a "to” color and the Interpolate Color Action changes the color of the designated object by cycling through the range of colors in the spectrum between the "from" color and the "to” color. For example, the author may select red as the "from” color and yellow as the "to” color. This Action would then cycle through those colors in the spectrum located between red and yellow. This range excludes the "cold” colors such as blue and green.
  • the Action requires the author to select the rate at which the color changes.
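One plausible reading of the Interpolate Color sweep is a linear walk along the hue axis between the two endpoint colors (this color model is an assumption; the patent does not specify one). With red at 0 degrees and yellow at 60 degrees, every intermediate step stays between them, which is why the "cold" hues such as green (120) and blue (240) are excluded:

```python
def interpolate_hue(from_hue, to_hue, t):
    """Linearly sweep the hue between two colors (t runs from 0 to 1).
    Hues in degrees: red = 0, yellow = 60, green = 120, blue = 240."""
    return from_hue + (to_hue - from_hue) * t

# sweeping red -> yellow passes through orange (30) and never reaches
# the "cold" hues green (120) or blue (240)
assert interpolate_hue(0, 60, 0.5) == 30.0
assert all(0 <= interpolate_hue(0, 60, t / 10) <= 60 for t in range(11))
```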
  • This Action allows an object to exhibit some animation of any property that the author can select from a list.
  • this Action allows the author to make his designated object change its properties at a particular rate. The author selects a "from” property and a "to” property and the Interpolate Properties Action interpolates the properties between these two boundaries. The Action also requires the author to select the rate at which the property changes.
  • Jump To Marker This Action allows the author to create an object that jumps from its present location to a new location dictated by a marker. The author must select the marker in his Viewport for the jumping object to jump to. The author also designates how high the object jumps and the time delay between the movement of the marker and the first jump.
  • a basket of carrots may be designated as a marker.
  • the author creates a rabbit, he drags the Jump to Marker Action into the rabbit's Persona. Thereafter, whenever the user moves the basket of carrots (with the mouse or some other object), the rabbit jumps toward the basket of carrots.
  • This Action allows the author to create an object that looks at a marker.
  • the author must select the marker in his Viewport for the object to look at.
  • the author also designates the time delay between the movement of the marker and the looking action.
  • the object will automatically look at the marker at the new location after some designated time delay has expired.
  • a basket of carrots may be designated as a marker.
  • This Action allows the author to create an object that moves to a marker.
  • the author must select the marker in his Viewport for the object to move to.
  • the author also designates the time delay between the movement of the marker and the moving action.
  • the object will automatically move toward the marker at the new location after some designated time delay has expired.
  • a basket of carrots may be designated as a marker.
  • This Action allows an object to exhibit some 3D animation of its surface.
  • the Action requires the author to select the "from” object and a "to” object.
  • the system includes an animation handler in the script that takes the bit map of the "from” object to gradually change to the bit map of the "to” object.
  • This Action allows the author to create an object that rotates around a pivot point toward a marker.
  • the author must select the marker in his Viewport for the object to rotate toward.
  • the author also designates the time delay between the movement of the marker and the rotating action.
  • the author must specify which part of his object will rotate toward or face the marker.
  • the object will automatically turn toward the marker at the new location after some designated time delay has expired.
  • a sun object may be designated as a marker.
  • This Action makes the designated object into a Parent of a specified child. This creates a parent-child physical hierarchy relationship between the two objects that may affect such states as location and orientation. Thus, if a bed is made a parent of a child pillow that is lying on top of it, the mere movement of the bed (parent) will necessarily move the pillow in a corresponding manner.
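The bed-and-pillow behavior can be sketched as follows (the class and method names are illustrative assumptions, not from the patent): once the parent-child link exists, any movement applied to the parent is propagated to each child.

```python
class SceneObject:
    """Minimal object with a position and optional children."""
    def __init__(self, position):
        self.position = list(position)
        self.children = []

    def set_parent_of(self, child):
        # the Set Parent Action: this object becomes the child's parent
        self.children.append(child)

    def move_by(self, dx, dy, dz):
        self.position = [p + d for p, d in zip(self.position, (dx, dy, dz))]
        for child in self.children:          # children follow the parent
            child.move_by(dx, dy, dz)

bed, pillow = SceneObject([0, 0, 0]), SceneObject([0, 1, 0])
bed.set_parent_of(pillow)
bed.move_by(5, 0, 0)                         # moving the bed...
assert pillow.position == [5, 1, 0]          # ...moves the pillow with it
```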
  • This Action allows the author to create an object that turns toward a marker.
  • the author must select the marker in his Viewport for the object to turn toward.
  • the author also designates the time delay between the movement of the marker and the turning action.
  • the author must specify which part of his object will turn toward or face the marker.
  • a basket of carrots may be designated as a marker.
  • the author creates a rabbit, he drags the Turn To Action into the rabbit's Persona. Thereafter, whenever the user moves the basket of carrots (with the mouse or some other object), the rabbit turns toward the basket of carrots at the new location.
  • this Action does not require the object to look at the marker since the author can designate any part of the object to turn toward or face the marker.
  • the authoring tool resides in a computing system that typically includes at least one microprocessor, memory (RAM), hard disk memory, some input devices (e.g., mouse, keyboard, microphone, camera), and some output devices (e.g., monitor, printer, sound system).
  • the computing system may also have other components such as a 3D graphics accelerator. It may be connected to a local network (e.g., a LAN) via a network adapter card or to a wide area network (e.g., a WAN or the Internet).
  • FIG. 8 generally shows the hierarchical layers of software 530 incorporated into the computer system when one embodiment of the present invention is incorporated into an authoring tool.
  • a particular layer of software typically depends on the software at the layers below it and does not depend on software which is at the same layer.
  • the software 530 is stored in memory or stored in some mass storage unit and then loaded into memory when executed.
  • the software 530 includes an operating system 531 for controlling and coordinating the computer system 100.
  • the invention can be applied to virtually any operating system, but, preferably, the operating system includes the capability to process sound, graphics, video or animation and to provide a windowing environment for display on the display screen of the computer system.
  • the operating system can be, for example, Microsoft Windows on x86 or Pentium-based systems, or the Mac OS on an Apple Macintosh.
  • upon start-up, control passes to the operating system's initialization code to set up necessary data structures and to load and initialize device drivers. Control is then passed to the command line interpreter (CLI), which prompts the user to indicate the program to be run.
  • the operating system determines the amount of memory needed to run the program, locates the block of memory, or allocates a block of memory and accesses the memory either directly or through BIOS.
  • After completion of the memory loading process, the application program begins execution. During the course of its execution, the application program may require numerous services from the operating system, including, but not limited to, reading from and writing to disk files, performing data communications, and interfacing with the display/keyboard/mouse.
  • the software 530 further includes a software development environment 532, a tool 533 and one or more multimedia software titles 534.
  • the software development environment 532 conceptually lies between the operating system 531 and the tool 533, providing an interface between the two.
  • the development environment 532 is typically present during the creation of the tool 533 (or components or extensions of the tool 533), but may or may not be present during execution of the tool, depending on the development environment.
  • environment 532 is a C++ or SmallTalk environment.
  • the tool 533 is typically an authoring tool for producing multimedia products, such as Macromedia's Director, which incorporates the various embodiments of the present invention. Users of the tool 533 can create, manipulate and execute multimedia products 534. Such users may be authors or end-users of multimedia software titles 534.
  • the software title 534 of course contains content 535, which includes all the files and features for the user.
  • the tool 533 preferably varies the functionality it provides to a user of the tool based on a user-specified preference or the task being performed by the user.
  • a plurality of user-selectable modes are provided so that the user can specify to a certain degree the functionality provided by the tool.
  • the tool 533 can provide two modes: an author mode for creating and editing a multimedia product, and a user mode for simply executing a multimedia product. The two modes are provided to the user as selectable or specifiable items in the user interface of the tool 533.
  • any programmer can write extensions to existing software products provided that the software product supports certain extensions and the programmer complies with its interfaces.
  • Macromedia's Director is no exception. Although some embodiments of the present invention can be used as a standalone software product, other embodiments of the present invention are implemented as extensions to Macromedia's Director using the "Macromedia Open Architecture.”
  • Tool Xtras include such additional resources as editing tools and window tools.
  • Asset Xtras include such additional resources as the Viewport and Viewport hierarchy window, Sprite Xtras, and Personas and Persona Objects.
  • the stock library of default Traits and States for the Persona Objects was discussed above.
  • Both the Asset and Lingo Xtras include the various interfaces to allow these extensions to communicate with Director and the 3D Dreams segmentations. These segmentations include the Object Types (e.g., universe, world, object) and the support for 3D graphics and objects.
  • FIG. 9 provides a more detailed view of the layers and components of the software package. Referring now to FIG. 9, at the bottom layers are the Operating System Abstraction Layer 540 and the Graphics Engine 541.
  • the Operating System Abstraction Layer 540 provides the interface between the Operating System 531 (FIG. 8) and Engine Services 542.
  • the Graphics Engine 541 replaces the graphics engine in Macromedia's Director.
  • the Graphics Engine 541 manages the scene and updates all elements of the scene so that they can be rendered (drawn) in the next frame.
  • the Graphics Engine 541 also has a camera.
  • When the Graphics Engine 541 receives a command from an upper layer to render a scene, it sends the command to DirectX, which is the name of a technology designed by Microsoft to make Windows-based computers an ideal platform for running and displaying applications rich in multimedia elements such as full-color graphics, video, 3D animation, and surround sound.
  • DirectX is an integral part of Windows 98, as well as Microsoft's Internet Explorer. DirectX components may also be automatically installed on a system by advanced multimedia games and applications written for Windows 95. With DirectX, developers are given a set of instructions and components that ensure that their multimedia applications will run on any Windows-based PC, regardless of the hardware. DirectX also provides developers with tools that simplify the creation and playback of multimedia content. On the Apple Macintosh, the Graphics Engine sends the render command to QD3D instead.
  • the Graphics Engine 541 is usually OS-specific.
  • Engine Services 542 provides a set of functions 550 that generally includes resource management and communications. Engine Services 542 provides file loading and saving functionality, bitmap and geometry reading and writing, and Internet communications. Engine Services 542 also routes Director-related information to the Director- related Engine Services 563 located at an upper layer. Unlike the Graphics Engine 541, Engine Services is not OS-specific.
  • At the next layer above the Engine Services layer 542 is the Object Engine 543, which includes 3D world management and collision detection functionality 551. The task of 3D world management includes dedicating resources for all objects so that they are implemented in the system. These implemented objects are also identified and treated as three-dimensional entities.
  • the collision detection scheme is implemented in the Object Engine 543 and is exposed by the 3D Dreams Layer 544. In particular, the collision detection is exposed by the IsAnObject Trait through various States (e.g., HasCollided, CollidedWith).
  • the 3D Dreams Layer 544 defines, changes, listens to, and exposes States for all objects and primitives.
  • the objects and primitives are implemented in the Object Engine layer 543, but the States linked to these objects and primitives are exposed in the 3D Dreams Layer 544.
  • a proximity sensor is implemented in the object engine but its relevant States (e.g., TriggerProximity) are exposed using sensor primitives in the 3D Dreams Layer 544.
  • the 3D Dreams Layer 544 includes several components for managing the various State updates (i.e., listening, exposing) and the Traits. These components are organized according to the various Object Types, including Entity 552, Object 553, Camera 554, Lights 555, World 556, Universe 557, Backgrounds 558, Sensor 559, and Primitives 560.
  • the Entity component 552 contains and manages all of the States.
  • the Entity component 552 performs the primary State management duties such as updating (i.e., changing) State values, maintaining Tags, and dedicating resources for States once they have been created (i.e., added or initialized) by a Trait.
  • the other components (e.g., Object 553, Primitives 560) contain and create Traits to give all the various States their meaning. Via the Traits, these components listen to and expose the States that primarily reside in the Entity component 552. These other components also generally send commands to the Entity component 552 so that it can update States.
  • Above the 3D Dreams Layer 544 is the 3D Dreams Xtras layer 545.
  • This layer 545 contains the Director Glue Layer/Viewport 561, Persona 562, and the Director- Related Engine Services 563.
  • For the Director Glue Layer/Viewport, one embodiment of the present invention provides a Viewport which is placed on Director's stage (which, along with the Viewport, enables the author/user to view his Movie).
  • This Viewport is available in both edit mode and playback mode.
  • When a Director cast member is created, it can be placed on the stage, but not in the Viewport.
  • When a Persona Object is created, it can be placed both on the stage and in the Viewport.
  • All interactivity and score action occurs in the Viewport for those movies created using Persona Objects and 3D Dreams. All interactivity and score action for those objects and scenes that were created using Director (not 3D Dreams and Persona) occur on the stage and not the Viewport.
  • the Viewport is basically a window to the 3D world where Persona Objects are populated, behave, and otherwise interact with other objects, its surrounding, and the user.
  • the portion of the stage outside the Viewport is nothing more than an editing scratchpad, where the author can place his Personas and Persona Objects during edit mode prior to placing them in the Viewport.
  • the Director Glue Layer/Viewport 561 manages these functions so that both the Director scenes on the stage and 3D Dreams scenes in the Viewport can be edited and viewed.
  • the Director Glue Layer/Viewport 561 interacts with the Engine Services 542, where resource management occurs. This Engine Services 542, of course, communicates with the Director-Related Engine Services 563.
  • the Persona component 562 interfaces with all Persona Objects in 3D Dreams and can monitor State changes, as can the various components 553-560 in the 3D Dreams Layer 544.
  • the Persona Object and State change information are provided to the Director Glue Layer/Viewport so that they can be properly interpreted in a form understood by Director.
  • This component 562 allows the author/user to transparently use Persona Objects through the Director interface, even though Persona Objects are not directly supported by Director.
  • Director treats Traits as Behaviors and States as properties.
  • the Persona Objects are maintained at a higher level but are implemented in part in the Object Engine 543. For example, collision detection and 3D calculations of these Persona Objects occur in the Object Engine 543.
  • the Director-related Engine services 563 is another glue layer for providing Director- related resource management. These services know how to create Director-recognizable cast members for the Persona Objects. Note that Persona Objects are supported in the Persona component 562 of the 3D Dreams Xtras 545, which also monitors Traits and States in the lower 3D Dreams Layer.
  • the next layer above the 3D Dreams Xtras 545 is the Director layer. This is the Director user interface that is familiar to developers. Of course, some additional windows and features have been provided, such as the Viewport, Viewport hierarchy window, Persona Objects, Traits, and Actions.
  • BUMPER CAR EXAMPLE The following example is intended to illustrate the enhanced reusability provided by the present invention over existing application development environments, such as Director.
  • authors can reuse not just individual Traits, but also more complex collections of Traits (Personas), or even entire Persona Objects (perhaps having multiple Personas).
  • this degree of selective reusability is simply not offered by other existing development environments, object-oriented or otherwise.
  • the ride includes multiple "computer controlled” cars, which drive randomly (i.e., random turn, then drive at fixed speed) and one "user controlled” car, which the user drives (i.e., steers and accelerates/decelerates) by dragging the mouse in a "control pad” section on the screen, and/or typing keyboard commands.
  • both types of cars "bounce back" (i.e., briefly reverse direction), and then resume their initial driving behavior.
  • the two types of cars have both similarities and differences. Each drives differently.
  • the computer-controlled car turns randomly and then drives at a fixed speed; whereas the user-controlled car's direction and speed (i.e., velocity) is determined solely by the user's mouse/keyboard actions.
  • both types of cars briefly exhibit precisely the same "bounce back" behavior, before returning to their different driving behaviors.
  • FIG. 10 illustrates an implementation of this example using certain of the concepts underlying the present invention, such as Traits, States, Actions and Listeners.
  • both types of cars would include the three basic car-related Traits: IsAnObject 1114, HasVelocity 1120 and IsACar 1130 (described below).
  • the computer-controlled car would also include the IsAComputerControlled Car Trait 1140 and the ComputerDrive Action 1150; whereas the user-controlled car would include (in addition to the three basic car-related Traits) the IsAUserControlledCar 1160 Trait and the UserDrive Action 1170.
  • These commands 1106 are issued by the IsAnObject Trait 1110, as illustrated by line 1107, in response to a change in the object's location State 1108.
  • the IsAnObject Trait 1114 defines, and is Listening for changes in, location State 1108 (as illustrated by line 1109).
  • the IsAnObject Trait 1110 issues the Lingo commands (Walk3D, Jump3D) 1106 that direct the engine to move the cars (Persona Objects) 1105 to the new location.
  • the HasVelocity Trait 1120 defines, and is Listening for changes in, the velocity State 1112 (as illustrated by line 1113).
  • the HasVelocity Trait 1120 calculates the new location (based on the current location and the new velocity - i.e., direction and speed) and modifies the location State 1108 accordingly (which, as discussed above, triggers the IsAnObject Trait 1110 to move the cars).
  • the velocity State 1112 can be changed. Either type of car may modify its speed or direction, as discussed below. Moreover, the HasVelocity Trait 1120 itself may modify its direction (and thus its velocity State 1112) in response to a collision. The HasVelocity Trait 1120 is also Listening for changes in the hasCollided State 1185 (i.e., a collision, as illustrated via line 1180). In response to a collision, it implements the "bounce back" functionality described above by changing the car's direction, and thus modifying its velocity State 1112, which in turn triggers its own handler to update the location based on this new velocity 1112.
  • the IsACar Trait 1130 Listens for changes in the driveTo or drivingSpeed States 1135 (i.e., requested changes in direction or speed, simulating the functionality of the steering wheel and accelerator pedal). In response to such changes (illustrated by line 1136), it modifies the velocity State 1112, triggering the handler in the HasVelocity Trait 1120 described above.
  • the three Traits of a "generic" car implement the basic functionality of a car - i.e., responding to steering wheel and accelerator pedal requests (as well as collisions) by making appropriate adjustments in the car's velocity and thus its location.
  • These Traits are reusable in virtually any type of car that "requests" changes in direction and/or speed, i.e., by Listening for such changes (as illustrated by line 1136) in the driveTo or drivingSpeed States 1135.
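The chain just described, in which a driveTo/drivingSpeed change triggers IsACar, which modifies velocity, which triggers HasVelocity to update location, which triggers IsAnObject to move the object, can be sketched with a simple listener table. All names and the dictionary-based State store below are assumptions for illustration; no Trait ever calls another Trait directly:

```python
import math

states = {"driveTo": 0.0, "drivingSpeed": 0.0,
          "velocity": (0.0, 0.0), "location": (0.0, 0.0)}

def set_state(name, value):
    states[name] = value
    for callback in listeners.get(name, []):  # notify listening Traits
        callback()

def is_a_car():
    """Listens: driveTo/drivingSpeed. Modifies: velocity."""
    angle, speed = states["driveTo"], states["drivingSpeed"]
    set_state("velocity", (speed * math.cos(angle), speed * math.sin(angle)))

def has_velocity():
    """Listens: velocity. Modifies: location."""
    (x, y), (vx, vy) = states["location"], states["velocity"]
    set_state("location", (x + vx, y + vy))

moves = []
def is_an_object():
    """Listens: location. Issues the engine's move command."""
    moves.append(states["location"])

listeners = {"driveTo": [is_a_car], "drivingSpeed": [is_a_car],
             "velocity": [has_velocity], "location": [is_an_object]}

set_state("drivingSpeed", 2.0)   # "press the accelerator pedal"
assert states["location"] == (2.0, 0.0) and moves == [(2.0, 0.0)]
```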
  • Each type of car has its own unique driving functionality, which to some extent also is reusable, as will now be discussed.
  • the computer-controlled car has the IsAComputerControlledCar Trait 1140, which Listens (as illustrated by line 1176) for a change in the IsValid State 1175, indicating, in essence, that the car objects are now initialized. It also Listens (as illustrated by line 1178) for a change in the HasCollided State 1185, which is generated when the system detects a collision (illustrated by line 1197) between this car and another car or other obstacle (e.g., the wall).
  • the computer-controlled car responds identically upon initialization and upon a collision.
  • the IsAComputerControlledCar Trait 1140 invokes the ComputerDrive Action 1150 (i.e., whether in response to a change in the isValid State, illustrated by line 1141, or the hasCollided State, illustrated by line 1142). It could perform this response itself (i.e., computerized driving behavior), but by delegating pure Actions (i.e., modifying States), the ComputerDrive Action can more easily be replaced by another Action, or added as one of many parameterized alternative Actions, thereby further enhancing reusability.
  • the ComputerDrive Action 1150 implements a random turn, and then drives at a fixed speed - i.e., it modifies the driveTo and drivingSpeed States 1135 (as illustrated by line 1137) and thus triggers the IsACar Trait 1130 as discussed above.
  • the computer-controlled car is able to share State information with, and effectively communicate indirectly with, the basic car Traits without even being explicitly aware of their existence.
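A sketch of the ComputerDrive Action as described (the names and the degree convention are assumptions): it performs a random turn and then drives at a fixed speed purely by writing the driveTo and drivingSpeed States, never touching the car Traits directly, which is what makes it easy to swap out for a different Action.

```python
import random

def computer_drive(set_state, rng=random.Random(0)):
    """Pure Action: it only modifies States. A listening IsACar Trait
    (not shown here) would pick up these changes and adjust velocity."""
    set_state("driveTo", rng.uniform(0, 360))  # random turn, in degrees
    set_state("drivingSpeed", 5.0)             # then drive at a fixed speed

written = {}
computer_drive(written.__setitem__)
assert written["drivingSpeed"] == 5.0
assert 0 <= written["driveTo"] <= 360
```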
  • the user-controlled car has the IsAUserControlledCar Trait 1160, which Listens (as illustrated by line 1177) for a change in the isValid State 1175, indicating, in essence, that the application has started. It also Listens (as illustrated by line 1179) for a change in the HasCollided State 1185, which is generated when the system detects a collision (illustrated by line 1197) between this car and another car or other obstacle (e.g., the wall).
  • the IsAUserControlledCar Trait 1160 must continue Listening (as illustrated by line 1196) for changes in the mouseVector and/or keyPressed States 1195 (e.g., detected by the system, as illustrated by line 1198), and respond to these user events by calculating a new desired direction and/or speed, and then changing the driveTo and drivingSpeed States 1135 (which will trigger the IsACar Trait 1130, as discussed above).
  • the user-controlled car need not take any action in response to changes in the isValid 1175 and/or hasCollided 1185 States. Thus, in this example, it need not call the UserDrive Action 1170 (via lines 1161 and 1162, respectively, for changes in the hasCollided 1185 and isValid 1175 States), as there is nothing for it to do.
  • the IsAUserControlledCar Trait 1160 could delegate the functionality of calculating the new desired direction and/or speed to a separate Action; but, in this case, because this functionality is dependent upon repeated changes in the mouseVector and/or keyPressed States, it is more efficient to handle this calculation itself (and directly modify the driveTo and drivingSpeed States 1135).
  • the above example illustrates how a number of different and seemingly interdependent Traits can work together (i.e., share State and State-change information, and communicate indirectly with one another) without explicit knowledge of one another's existence, thereby avoiding many of the dependencies created by explicit message-passing.
  • the messages (i.e., callbacks) which are sent to the Traits by the system are those which the Traits themselves explicitly request (via Listeners), and involve changes in known States that have explicitly been shared/exposed, as opposed to private IDs that are dependent on the existence of particular objects.
  • these messages are independent of the physical hierarchy of objects, which relates to relative movement, but not to sharing of information.
  • the scripts would have to communicate the "state-change" information among one another.
  • the shared script would probably implement the common "bounce back" functionality following a collision, but would then have to know the "car type" to determine which of the other two scripts to invoke to resume normal driving behavior.
  • a development system such as mTropolis, which provided some degree of selective reusability by utilizing the author's physical hierarchy for "anonymous messaging," still would have problems in this regard.
  • These different Traits or Behaviors do not bear any physical relationship to the car or to one another. They are not components of a car, like a steering wheel, accelerator pedal, etc.
  • mTropolis bases its messaging system on the author's physical hierarchy. Even if one modeled these elements, e.g., by giving each type of car a "child" steering wheel, accelerator pedal and collision-response mechanism (each with its own behavior), these different elements within a particular type of car still would need to communicate with one another.
  • the steering wheel and accelerator pedal objects would both need to evaluate mouse events to determine whether they were intended for the steering wheel or the accelerator pedal. Whichever component handled the calculation of the new location (based on a change in velocity) would have to communicate with the other components in order to share this functionality within the car.
  • the effort expended to enable the different components to communicate with one another (i.e., workarounds such as creating additional elements to provide an application-specific communications interface) would undermine their reusability.
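The anonymous, State-mediated communication walked through in the bumper-car example (in contrast to explicit script-to-script messaging) can be sketched as follows. This is a minimal illustrative model only, not the patented implementation; the `StateRegistry` class, its method names, and the concrete values are assumptions made for the sketch.

```python
# Minimal sketch of the State/Trait/Listener pattern from the bumper-car
# example. The registry stands in for "the system"; names such as
# StateRegistry, set_state and listen are illustrative assumptions.

class StateRegistry:
    """Holds exposed States and notifies Listeners when a State changes."""
    def __init__(self):
        self.states = {}
        self.listeners = {}          # State name -> list of callbacks

    def listen(self, state, callback):
        self.listeners.setdefault(state, []).append(callback)

    def set_state(self, state, value):
        self.states[state] = value
        for callback in self.listeners.get(state, []):
            callback(value)

system = StateRegistry()
log = []

# ComputerDrive Action: a "pure" Action that only modifies States.
def computer_drive(_value):
    log.append("ComputerDrive")
    system.set_state("driveTo", (1.0, 0.0))     # "random" turn, fixed here
    system.set_state("drivingSpeed", 5.0)

# IsAComputerControlledCar Trait: Listens for isValid and hasCollided,
# and delegates the identical response to the ComputerDrive Action.
system.listen("isValid", computer_drive)
system.listen("hasCollided", computer_drive)

# IsACar Trait: Listens for drivingSpeed changes and performs the driving.
system.listen("drivingSpeed", lambda v: log.append(f"drive at {v}"))

# The system detects a collision: the car responds with no Trait ever
# naming, or holding a reference to, any other Trait.
system.set_state("hasCollided", True)
```

When hasCollided changes, ComputerDrive runs, and its own State modifications in turn trigger the IsACar Trait's Listener; the ripple happens entirely through shared States rather than direct calls.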
  • the concepts described above are incorporated in a client software application that can interface with a server across a network (e.g., WAN, LAN, Internet) so that certain data relating to the user's interaction with the server content can be logged and transmitted to the server (possibly an alternate server as defined by the content).
  • a server can be set up to monitor how many times and when a web page is downloaded.
  • the following example illustrates how the present invention can be used to overcome this obstacle without requiring client software that must be rewritten for each different web site.
  • the web site provides the user with the option to further view certain features for that PC model.
  • the user could view the CPU type and speed, type of video card, CD-ROM drive, memory configuration, 3D accelerator, bundled software, monitor, modem, and DVD player, among other features.
  • the company that sells these PC models will probably be interested in gathering information regarding users' interest in various aspects of this web site.
  • the computer manufacturer may be interested in such information as how long a user stayed at particular areas of the web site, which product(s) the user viewed, and which features were of greatest interest.
  • the computer manufacturer is now armed with potentially valuable information that sheds some light into consumer behavior and motivations. For example, why are certain people interested in one product but not another?
  • the company must first determine the type of information that it desires to track, and define that information as States which the client application can monitor.
  • if the company wants to track how long a user viewed a particular web page, it might define a set of States on the client application, such as "webPageURL" (which identifies the URL of that web page) and "timeSpentOnWebPageURL" (which the client will determine).
  • the company also would define a Listener on the client application to the timeSpentOnWebPageURL State.
  • the client application also could have one or more Traits to detect a match with the "webPageURL" State, and would then start a timer (possibly by invoking an Action).
  • Another Trait might Listen for a State change that indicated that the client had left that web page, and then possibly invoke another Action to store the elapsed time in the timeSpentOnWebPageURL State. Upon detecting a change in this State, it would notify the Listener on the server application of such change, and probably send the updated timeSpentOnWebPageURL State value.
  • the Traits and parameters necessary to enable this feature auditing can be delivered with the content being audited in a platform-independent representation. Such an embodiment would eliminate the need for the user to install a specific client application solely for the purpose of enabling the Traits and States described here.
  • the company could track virtually any number of desired characteristics relating to the user's interest in its web site. Such information could even be stored by the client application across repeated visits to the web site, despite the stateless nature of the server requests.
  • Tag States could be used by the client application to track groups of States, and notify the server application whenever the value of any of such States was changed.
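The auditing flow described above (define webPageURL and timeSpentOnWebPageURL States, start a timer on page entry, store the elapsed time on exit, and notify a Listener of the change) can be sketched as follows. The dict-based registry, the leftWebPageURL State name, and the clock stand-in are illustrative assumptions, not part of the patent's description.

```python
# Sketch of the web-auditing example: client-side Traits track how long a
# page was viewed via the webPageURL and timeSpentOnWebPageURL States.
# The registry, the leftWebPageURL State and the clock are assumptions.

states = {}
listeners = {}                      # State name -> list of callbacks
sent_to_server = []

def listen(state, callback):
    listeners.setdefault(state, []).append(callback)

def set_state(state, value):
    states[state] = value
    for cb in listeners.get(state, []):
        cb(value)

clock = {"now": 0.0}                # stand-in for a real timer

# Trait: on entering an audited page, start the timer (record entry time).
def on_page_enter(url):
    states["_enteredAt"] = clock["now"]

# Trait: on leaving, store the elapsed time in timeSpentOnWebPageURL.
def on_page_leave(_url):
    set_state("timeSpentOnWebPageURL", clock["now"] - states["_enteredAt"])

# Listener notified whenever timeSpentOnWebPageURL changes: in the example
# above this is where the updated State value is sent to the server.
listen("webPageURL", on_page_enter)
listen("leftWebPageURL", on_page_leave)
listen("timeSpentOnWebPageURL", lambda t: sent_to_server.append(t))

set_state("webPageURL", "http://example.com/pc-model")   # hypothetical URL
clock["now"] = 42.0                                      # 42 seconds pass
set_state("leftWebPageURL", "http://example.com/pc-model")
```

Because the elapsed time lives in an exposed State, the same mechanism also supports storing it across visits, despite the stateless nature of the server requests.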
  • a telecommunications network can incorporate the States and Traits concepts at the client side and the telco side without departing from the spirit of the invention.
  • a smart telephone can determine the various telecommunications services offered by the telco so that the telephone unit can subsequently reconfigure itself to support these services.
  • the network can obtain usage pattern information for a given customer.
  • a telecommunications network provides a number of services to its customers.
  • these services include, but are not limited to, call waiting, call forwarding, and voice mail.
  • the services that may be available to any given customer may vary considerably from one telco to another.
  • States are defined for each of these services.
  • a ServiceAvailable Tag groups these States together at the telco.
  • a telephone unit at the customer's location might include some software functionality including a Trait called CanEnableService.
  • Another trait in the telephone unit called CanDetectService Listens to the ServiceAvailable Tag at the telco.
  • the CanDetectService Trait in the telephone unit can retrieve State information about each of the available services from the telco, including which ones are available and additional features associated with each service.
  • the CanDetectService Trait retrieves the State information and updates its own set of corresponding States in the telephone unit.
  • the CanEnableService Trait would Listen for these States that have just been modified, and would now enable these services in the telephone unit, if these services are not already implemented therein.
  • the second application involves obtaining telecommunications usage pattern information for a given customer or an aggregate number of customers.
  • the usage pattern information can be derived from call data including, but not limited to, long-distance call times, long-distance call duration, local call times, and local call duration.
  • States are associated with the call data.
  • the States which the customer is interested in monitoring are grouped together by a Call Tag.
  • these States are updated by a Trait called CanUpdateCallData.
  • CanDetectCallData Listens for a change in any of the States associated with the Call Tag at the telco side. Periodically, the CanDetectCallData Trait of the customer's smart phone detects a change in one of the States associated with the Call Tag, and actually retrieves the State values themselves.
  • additional network software could perform some analysis on the pool of data to present the data in a format that is useful to the telco, which can then prepare various reports to inform the user of his usage patterns, possibly to assist him in optimizing his calling habits, or permit his smart phone or the telco to do so automatically on his behalf.
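The grouping of call-data States under a Call Tag, with the customer's CanDetectCallData Trait Listening to the Tag rather than to each State individually, can be sketched as follows. The specific State names, durations, and data structures are illustrative assumptions.

```python
# Sketch of the telco usage-pattern example: call-data States are grouped
# under a "Call" Tag, and a single Listener on the Tag is notified when
# any member State changes. Names beyond those in the text are assumed.

states = {}
tag_members = {"Call": {"localCallDuration", "longDistanceCallDuration"}}
tag_listeners = {}                  # Tag name -> list of callbacks

def listen_tag(tag, callback):
    tag_listeners.setdefault(tag, []).append(callback)

def set_state(name, value):
    states[name] = value
    # Notify the Listeners of every Tag that groups this State.
    for tag, members in tag_members.items():
        if name in members:
            for cb in tag_listeners.get(tag, []):
                cb(name, value)

retrieved = {}

# CanDetectCallData Trait on the smart phone: detects a change in any
# State under the Call Tag, then retrieves the State value itself.
listen_tag("Call", lambda name, value: retrieved.update({name: value}))

# CanUpdateCallData Trait at the telco updates the call-data States
# (durations here are hypothetical values, in seconds).
set_state("localCallDuration", 180)
set_state("longDistanceCallDuration", 600)
```

A single Tag Listener thus covers the whole group of call-data States, so new kinds of call data can be audited by adding States to the Tag without changing the Listening Trait.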

Abstract

The system provides a mechanism for objects (item 330 of fig. 3) to contain a set of component behaviors (Traits, item 350 of fig. 3) as well as various attributes or properties defined by those behaviors (States, item 340). Moreover, these objects can be linked in a physical (parent-child) hierarchy to enable parent objects to provide a physical frame of reference (i.e., relative coordinate systems) for their children objects, independent of the system-wide communication mechanism. Thus, children objects can 'move with their parents', independent of their own ability to move. As authors of an application add Traits (item 350) to their objects, these Traits (item 350) define or 'expose' States (item 340) that can be monitored by the system (item 310) for changes in value - i.e., if a Trait (item 350) defines a Listener (item 360) associated with one or more of these exposed States (340). These Listeners (360) will be notified automatically when the system detects changes in these exposed States (340). Other object components (Actions) modify exposed States so as to trigger the capabilities of Traits (350) Listening for changes in those exposed States, and provide modular components which Traits (350) can call - to modify other States (340) in response to the triggering of their capabilities, and so on as this condition/response model ripples throughout the objects in an application.

Description

Specification
Authoring System for Selective Reusability of Behaviors
Related U.S. Application Data:
This is a continuation-in-part of U.S. patent application No. 60/120,150, titled "Framework for Autonomous Interactive Objects," filed February 17, 1999.
BACKGROUND OF THE INVENTION Field of the Invention:
This invention relates to application development systems generally, and in particular to systems for authoring interactive applications. Description of Related Art:
Throughout the relatively short history of computer programming, many attempts have been made to simplify the programmer's task of creating complex applications. In one form or another, most application development systems provide a framework that enables programmers to create relatively simple modular components that can be reused (at least to some extent) and combined together to implement a wide variety of complex applications, ranging from word processing, database management and graphics presentation systems to Internet browsers and multi-player gaming environments.
A common example of such a modular component found in most computer programming languages is the subroutine or function call that is written once and can be invoked many times (i.e., reused) by one or more programs. A more modern example found in object-oriented programming languages is the concept of an "object" that encapsulates or hides its internal behavior (i.e., the code and data that implements its functionality), and can be reused in other contexts, and with other objects.
In order to achieve a certain degree of reusability, these modular components or objects isolate their internal implementation from their external interface to other components/objects in the system. For example, a subroutine or function call has an external calling interface that includes its name and the names and types of data on which it operates (i.e., its parameters). If its internal implementation relies on external data (e.g., global variables) or other subroutines or functions, these dependencies may limit its reusability - e.g., in other environments in which such external data and code are not present.
Formal object-oriented systems attempt to eliminate these dependencies by requiring objects to be completely self-contained. In other words, all of the data and code (methods) on which an object relies are encapsulated (i.e., hidden) within the object's internal implementation. The object's external interface to other objects in the system defines the data structures that are passed to and from the object and the methods or functions which that object can perform (without revealing how such functions are performed). Having defined a class of these self-contained objects, an application developer can write a program that instantiates one or more members of that class (e.g., "car" objects with particular attributes and behaviors). Moreover, via the concept of inheritance, a more specialized child class of objects can be defined (e.g., "sports cars") that inherits all of the attributes/behaviors of its parent "car" class, and then adds/modifies (overrides) certain attributes/behaviors. In this manner, another programmer can leverage (reuse) many of the attributes/behaviors created by the author of the "car" class (e.g., the presence of a steering wheel, how the car starts, etc.) without "reinventing the wheel."
One significant problem with inheritance, however, is that the internal implementation of a child class often relies upon the external interface of its parent class. For example, the sports car implementation might rely upon the fact that the car class defined its horn as being part of the steering wheel. If one wanted to replace the car class with another car class that placed its horn on the dashboard, and still reuse the sports car class, this dependency might prevent the sports car class from functioning properly with its new substitute parent car class.
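The fragility described here can be made concrete with a short sketch. All class and attribute names are hypothetical; the point is only that the child class's implementation depends on where the parent happens to place the horn, not just on the parent's nominal interface.

```python
# Sketch of the fragile-base-class problem: SportsCar relies on Car's
# internal layout (horn on the steering wheel), so substituting a parent
# that puts the horn on the dashboard breaks it. All names are illustrative.

class SteeringWheel:
    def __init__(self):
        self.horn = "beep"

class Car:
    def __init__(self):
        self.steering_wheel = SteeringWheel()   # horn lives on the wheel

class SportsCar(Car):
    def honk(self):
        # Depends on Car's internal structure, not just "cars can honk".
        return self.steering_wheel.horn

class DashboardCar:
    """Substitute parent class: the horn has moved to the dashboard."""
    def __init__(self):
        self.dashboard_horn = "beep"

# SportsCar works with its original parent...
assert SportsCar().honk() == "beep"

# ...but the same honk() implementation re-parented onto DashboardCar
# fails, because the attribute path it relied on no longer exists.
class RetargetedSportsCar(DashboardCar):
    def honk(self):
        return self.steering_wheel.horn          # AttributeError at runtime

try:
    RetargetedSportsCar().honk()
    broke = False
except AttributeError:
    broke = True
```

The dependency is invisible in the class interfaces themselves, which is why it only surfaces when the parent class is swapped.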
Moreover, this problem is exacerbated when objects do not bear a "kind of" relationship to one another (e.g., a "sports car" is a kind of "car"). For example, a car and its wheels bear a physical relationship to one another, but they are very different objects. It is far easier for authors to associate these very different types (i.e., different classes) of objects directly with one another (e.g., based on their physical "container" relationship), than to create abstract classes of objects that somehow inherit attributes or behaviors from one another. The "mTropolis" authoring environment from mFactory, Inc. was an example of one such system - see U.S. Pat. No. 5,680,619. Macromedia's "Director" is another popular program that enables authors to "loosely couple" objects (i.e., objects that already have been instantiated) with their component behaviors (also instantiated objects). These objects are linked together dynamically at runtime, such that an object exhibits the behavior with which it currently is associated.
Yet, any complex system requires fairly extensive inter-object communication, both among characters or elements of the application and among the various component behaviors exhibited by such characters or elements. For example, an application that enables the user to drive a car may require many objects, such as the car itself (which may consist of component objects such as a body, wheels, etc.), other cars, roads, obstacles, and so forth. Each car object must exhibit various behaviors, such as steering, accelerating, braking, etc. Without some system-wide mechanism for these objects to communicate with one another (both individually and collectively as groups of more complex objects and behaviors), the author will have to create an application-specific communication mechanism that likely will create many dependencies among the various objects.
For example, Director's "Lingo" scripting language requires authors to send messages from one script to another, or centralize the communication mechanism within a single large script. In either case, objects cannot easily share information without prior knowledge of one another. Even if certain attributes or properties (e.g., an object's "location") are shared globally, objects need to monitor such properties and know when they change or when certain conditions are met. If one object must detect such a condition, and then send a message directly to another object (and hence "know" of the other object's existence at runtime), then dependencies will be created, and such objects will have limited reusability.
Although current application development systems provide various mechanisms for inter-object communication, none seems to offer a sufficient degree of reusability. For example, formal object-oriented systems typically require that objects know about one another's existence and/or external interface. Thus, even if the system detected a collision between a car and a tree, and notified the car and the tree of each other's existence (i.e., by giving each object a pointer to the other), the car would not know whether it had collided with a tree or another car. Without knowing the tree's precise external interface, the car would not be able to communicate with the tree - e.g., to obtain its mass.
This problem is particularly pervasive in authoring environments for generating 3D multimedia content. Consider a 3D object that is being "looked at" from a particular camera angle, and then moves at some later point in time. How does the camera know when the 3D object moved, so it can track this movement? Current 3D authoring systems (e.g., 3D Groove and Virtus OpenSpace 3D) typically require the author to send messages to the camera or know how to control the camera's properties, thereby creating dependencies among cameras and various other objects. None of these systems provides a communications infrastructure that enables objects to know not only certain properties of other objects, but also when the state of those properties is modified, whether individually or in combination.
Various attempts have been made to achieve a greater degree of reusability by sharing particular types of information among objects in object-oriented systems. See, e.g., Gamma, "Design Patterns: Elements of Reusable Object-Oriented Software" (Addison-Wesley Professional Computing Series, 1995), pp. 293-303. Yet, as noted above, these solutions do not address the problem on a systemic level. While it may be possible for objects to share information among themselves, particularly where such objects already have a great deal in common, there remains no mechanism for sharing information, including state-change information, across an entire system, among simple and complex objects, and among their component behaviors. For example, consider the various component behaviors exhibited by a car (e.g., steering, accelerating, braking), which must communicate with one another, even though they may share all of the car's global properties (velocity, location, size, etc.). Authoring systems such as mTropolis attempted to resolve this communications problem by modeling the communications hierarchy based on the author's physical hierarchy of objects. The problem with this approach is that the component behaviors within an object often bear no physical relationship to one another (other than being part of an object, such as a car). Nevertheless, these component behavior objects must communicate with one another to share information reflecting a particular condition or a change in state - e.g., a collision among cars, a change in a car's velocity, etc.
In order to selectively reuse these component objects (e.g., a car's behaviors), whether individually or collectively (as a more complex set of behaviors), there must be some mechanism that enables objects to communicate and share information reflecting the current state of the system (as various attributes and properties are changed over time), without creating significant dependencies of the objects on one another or on their physical hierarchy (e.g., requiring knowledge of one another's existence, or of an application-specific communications protocol).
SUMMARY OF THE INVENTION The present invention provides a solution to the above problems by addressing the need for a system-wide communication mechanism that enables objects to share "state change" information with other objects in the system, and among component behaviors associated with a particular object. Moreover, these objects and components need not be aware of one another's existence to share this information, thereby limiting the dependencies of these objects and components on one another, and enhancing their reusability in other contexts.
The system provides a mechanism for objects to contain a set of component behavioral capabilities (Traits) as well as various attributes or properties defined by those Traits (States). Moreover, these objects can be linked in a physical (parent-child) hierarchy to enable parent objects to provide a physical frame of reference (i.e., relative coordinate system) for their children objects, independent of the system-wide communication mechanism. Thus, children objects can "move with their parents", independent of their own ability to move.
As authors of an application add Traits to their objects, these Traits define or "expose" States that can be monitored by the system for changes in value. The system will monitor changes in an exposed State's value when any Trait defines a Listener associated with that State. These Listeners will be notified automatically when the system detects changes in these exposed States. Other object components (Actions) modify exposed States so as to trigger the capabilities of Traits that are Listening for changes in those exposed States, and provide modular components which Traits can call - to modify other States in response to the triggering of their capabilities, and so on as this condition/response model ripples throughout the objects in an application.
This communications mechanism allows Traits/Listeners/ Actions to communicate indirectly with one another (both within and across their objects), without knowledge of one another's existence. When one Trait modifies an exposed State, the system will automatically notify other Traits (that defined Listeners for that State) of such "state change" information, thereby providing a form of "anonymous" communication that minimizes dependencies among objects, and thus enhances their reusability in other contexts.
Listeners also can impose conditions, such that they will be notified by the system when a designated State changes value only if the specified condition also is met. Moreover, hierarchies of States can be defined within an object, by defining parent "Tag" States associated with other States. States can have multiple parent Tag States, and each Tag State can be a parent to multiple other States.
When the system detects a change in a State, it will notify not only Listeners for that State, but also Listeners for any of that State's "ancestor" (parent, grandparent, etc.) Tag States.
Listeners can therefore be notified not only of the changes in a single State, but of changes in any of a group of States, as well as other specified combinations of States or conditions.
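Conditional Listeners and the ancestor-Tag notification rule described above can be sketched as follows. The data structures and names here are illustrative assumptions, not the patented design: a State maps to its parent Tag States, and notifying a State also notifies Listeners on all of its ancestor Tags, subject to any condition a Listener imposes.

```python
# Sketch of conditional Listeners and the Tag-State hierarchy: Listeners
# on a parent Tag State are notified when any descendant State changes,
# and a Listener's condition can filter those notifications.

parents = {                          # State -> its parent Tag States
    "velocity": ["Motion"],
    "location": ["Motion", "Placement"],
}
listeners = []                       # (State name, condition, callback)

def listen(state, callback, condition=lambda v: True):
    listeners.append((state, condition, callback))

def ancestors(state):
    """A State's parent Tags, grandparent Tags, and so on."""
    found = []
    for tag in parents.get(state, []):
        found.append(tag)
        found.extend(ancestors(tag))
    return found

def set_state(state, value):
    affected = [state] + ancestors(state)
    for name, condition, callback in listeners:
        if name in affected and condition(value):
            callback(state, value)

events = []
listen("Motion", lambda s, v: events.append(("moved", s)))
listen("velocity", lambda s, v: events.append(("fast", v)),
       condition=lambda v: v > 100)   # notified only if the condition holds

set_state("location", (3, 4))   # notifies the Motion Tag Listener
set_state("velocity", 55)       # Motion notified; velocity condition fails
set_state("velocity", 120)      # Motion notified; velocity condition met
```

A camera Trait, for instance, could Listen to a single Motion Tag and be told whenever any tracked object's location or velocity changed, without naming those objects.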
In one embodiment, the system is integrated into Macromedia's Director application development environment, and adds the capability of creating/importing 3D objects with a "drag and drop" interface that enables authors to create complex interactive applications that manipulate 3D objects, while providing selective reusability of the objects and components created by multiple authors across multiple applications.
IN THE DRAWINGS: The above objects and description of the present invention may be better understood with the aid of the following text and accompanying drawings.
FIGs. 1(a)-(d) illustrate the user interface of Macromedia Director, including its Stage, Score, Cast, Library, Message and Control Panel windows, and the creation of sprite objects.
FIGs. 2(a)-(e) illustrate the integration of the 3D Dreams user interface of the present invention into Macromedia's Director authoring environment, including the creation of a Viewport and a hierarchy of Persona Objects having various Traits, States and Actions.
FIG. 3 illustrates the system architecture of the present invention, including the interaction of the system and Persona Objects, Traits, States, Listeners and Actions, as well as their data structures.
FIG. 4 illustrates the State Hierarchy including States and their parent Tag States.
FIG. 5 shows the hierarchy of Object Types in accordance with one embodiment of the present invention.
FIG. 6 shows a screenshot of a Viewport hierarchy window.
FIG. 7 shows a diagram of an object, world, and universe to generally illustrate the concepts of "listening" and "exposing" by Traits of internal States and external States, in accordance with one embodiment of the present invention.
FIG. 8 shows the layers of the software model in accordance with one embodiment of the present invention.
FIG. 9 shows the different layers and components of the software engine in accordance with one embodiment of the present invention.
FIG. 10 illustrates the overall operation of the system and the resulting selective reusability of component objects in the context of a "Bumper Car" example application.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
PROGRAMMING ENVIRONMENT OVERVIEW: MACROMEDIA DIRECTOR
The present invention can be embodied in virtually any new or existing application development environment, as well as in the wide variety of applications or components thereof generated by that environment. Applications could include movies and animations, traditional productivity, educational and entertainment titles, and, in particular, highly interactive games, advertisements and product demonstrations, as well as the interactive characters and other reusable components of those applications.
In one embodiment described herein, the invention is integrated into Macromedia's Director 7 ("Director"), a well-known multimedia authoring system used to create animated, interactive CD-ROM and Internet-based applications, referred to as "Movies." Director enables authors to create Movies ranging from simple cel-based animations to more complex interactive titles in which characters exhibit behaviors and interact with users and with one another. Director is extensible, via an API layer that interfaces with "Xtras," described in greater detail below. In short, one embodiment of the present invention consists of multiple Xtras that interface with Director to provide additional functionality as well as embrace and extend certain aspects of Director's existing functionality, in particular much of its user interface.
FIG. 1(a) illustrates some of Director's major components which an author utilizes to create a Movie. These include the "Cast" window 10 (containing "Cast Members" that represent the appearance, behavior or other aspects of the characters or "sprites" that perform in the
Movie), the "Score" window 20 (containing "Channels" representing the existence over time of the sprites that perform in the Movie, as well as other special effects such as sound, transitions, etc.) and the "Stage" window 30 (that enables the author to preview the Movie as well as control where the sprites will appear). The author of a Movie can control its presentation via the Control Panel 40, e.g., starting, stopping, pausing and rewinding the Movie, stepping it forward or backward one frame at a time, changing its speed (frames/second), turning its sound on or off, or enabling/disabling the loopback control which determines whether the Movie will run only once or indefinitely. Some of these controls, as well as other Director functionality, can be accessed from Director's button bar 50, in addition to relying on Director's menus (not shown) for complete functionality. The "Message" window 60 enables the author to type commands written in Lingo (Director's scripting language) to control the sprites on the Stage.
A Library 70 of built-in Lingo scripts (or programs in other languages), known as "Behaviors," enables an author to add certain common functionality to a sprite (e.g., detecting collisions) merely by dragging the Behavior from the Library 70 onto the sprite (e.g., in the Score 20 or on the Stage 30). As a result of adding a Behavior to a sprite (e.g., associating a Lingo script with that sprite), that sprite will exhibit that Behavior during the Movie. Authors also can create their own Behaviors (whether written in Lingo or in another programming language) and associate them with the sprites in their Movie. If an author desires to utilize a media element in a Movie (such as a bitmap, vector shape or other more complex 2D image, or even a sound or movie), the author either can create that media element using tools within Director (not shown) or import a media element created in another program, such as Adobe Photoshop. As a result, the media element will be added to the Cast window 10. For example, as illustrated in FIG. 1(b), an author has created a vector shape (circle) which therefore was added as a Cast Member (named "Ball") 15 to Cast window 10. By dragging the Ball (Cast Member) 15 onto either the Score 20 or the Stage 30, the author caused Director to create a sprite - i.e., a "ball" that can be animated and interactively controlled in a Movie - that is represented both in the Score 20 and on the Stage 30 as Ball 35. While on the Stage 30 or in the Score 20, sprites represent the animated characters or other elements that actually will perform in the Movie, whereas the Cast Members in the Cast window 10 can be thought of as templates from which one or more sprites can be created.
As a next step, illustrated in FIG. 1(c), the author has added a Behavior to Ball 35 by dragging the "Avoid Mouse" Behavior template 17 from the built-in Library 70 to Ball 35 (either on the Stage 30 or in the Score 20). This Avoid Mouse Behavior template 17 now appears in the Cast window 10, as well as in the Library 70. Moreover, the Ball 35 now has an associated Avoid Mouse Behavior 37, which can be distinguished from the template 17 of that Behavior.
Thus, the author now has two Cast Member templates (Ball 15 and Avoid Mouse 17) from which a single sprite has been created - i.e., a Ball 35 that exhibits the Avoid Mouse Behavior 37 during the Movie. The author can then preview this interactive Movie (e.g., by pushing the "play" button in the button bar 50 or on Control Panel 40) and watch Ball 35 avoid the mouse (i.e., by moving slightly away whenever the author positions his mouse over Ball 35) while the Movie is running.
If the author desires to display a "ball" at different times and/or locations during the Movie, and perhaps exhibiting different "behaviors," then the author could write a more complex Lingo script controlling the timing of the sprite's visibility and behavior. Alternatively, the author could create another "ball" sprite from the same Ball Cast Member 15 - e.g., by dragging Ball 15 onto the Stage 30 or into the Score 20. The result of such an action would be to create a second sprite (Ball 36), as illustrated in FIG. 1(d). Note that, unlike Ball 35 which has an associated Avoid Mouse Behavior 37, Ball 36 does not have any associated Behavior. Thus, unlike Ball 35, Ball 36 will not "avoid the mouse" while the Movie is running. Moreover, by dragging the "bars" representing Ball 35 and Ball 36 in the Score 20, the author can cause Ball 35 to appear throughout the entire 28 frames of the Movie, while Ball 36 will appear only during frames 10-20 (in accordance with the frame timeline 22 in the Score 20). The author also can determine the location where Ball 36 will appear when the Movie is running, either by dragging Ball 36 across the Stage 30 to the desired location, or (for more precise control) by setting the exact coordinates (in addition to size and other attributes) in the data fields 24 of the Score 20.
Despite its flexibility, Director has a number of significant limitations that impair an author's ability to create reusable behaviors, characters and other components of a Movie. For example, when a Cast Member (such as Ball 15 in the above example) is used to create multiple sprites (such as Ball 35 and Ball 36), two separate objects with separate properties (e.g., location, size, etc.) and Behaviors are created, even though the author might desire to portray only a single object - while dictating when and where that object will be displayed and which Behaviors will be exhibited under particular conditions.
This is quite difficult to accomplish with Director (as discussed above) because of the many dependencies created between Behaviors when they interact with one another (whether associated with a single sprite object, or spread across multiple sprites). Any moderately complex object (e.g., a car) will have multiple distinct Behaviors (e.g., accelerating, steering, braking, responding to collisions, opening its doors, etc.). Moreover, these Behaviors often need to communicate with one another (e.g., to stop accelerating before braking) and share the object's common properties (e.g., its location, size, velocity, etc.).
Yet, as noted above, if each of these Behaviors must be aware of the other's presence, the Behaviors will have limited reusability in other contexts. In order to communicate with one another, they either must send messages directly to one another or agree upon a common communications protocol, as none is provided by Director itself. Such communications generally will require knowledge of the other's presence, effectively requiring that the same author write all of the Behaviors or at least collaborate and share implementation details with the authors of the other Behaviors. This problem is exacerbated if, as is often the case, an author desires to reuse Behaviors previously written by another author in a different context, particularly if it is not feasible for the author to have access to the implementation details of the prior Behavior. This was the original promise of object-oriented programming, though (as noted above) it remains unfulfilled due to the lack of a mechanism for relatively simple objects to communicate with one another in a complex system, and be selectively reused in combination with one another.

OBJECTS, PERSONAS AND THE PHYSICAL HIERARCHY OF PERSONA OBJECTS
To address these problems, the present invention provides such a communication mechanism to enable objects to communicate with users and with one another, and to share common properties and other information, without inherent knowledge of one another's presence or existence. It should be emphasized that, although this mechanism is integrated into Director's application development environment in one embodiment described below, it could be integrated into virtually any new or existing application development environment, as well as in the wide variety of applications or components thereof generated by that environment.
The term "object" is used quite broadly herein to encompass not only formal objects implemented in accordance with object-oriented programming techniques, but also virtually any other component of an application or of the application development environment itself, including characters, sprites, and other elements that may or may not be displayed in a Director Movie, as well as their component properties, behaviors and other characteristics. The term "object" is also used to describe one particular Object Type, to be discussed in greater detail further below.
Frequently, however, there is a need to distinguish a particular object, such as a sprite or other element of an application (e.g., a car), from its particular attributes (i.e., what the object "is" - e.g., its size, appearance, location, number of wheels, etc.) or behaviors (i.e., what the object "does" or "can do" - e.g., accelerate, steer, brake, etc.). Therefore, in the context of embodiments of the present invention involving its integration into Director's application development environment, the terms "Persona Object" (and sometimes just "object") and "Persona Cast Member" (i.e., a Persona Object that appears in the Cast window 10) are used interchangeably when distinguishing an element of a Movie (whether it is a simple object, such as a "ball," or a more complex object, such as a "car") from its components, including its "properties" (Director's terminology for an object's attributes such as size, location, etc.) and its Behaviors (Director's terminology for an object's actions or capabilities, typically embodied in Lingo scripts). As will be discussed in greater detail below, other terminology is employed to distinguish aspects of Director from those of the present invention (e.g., "States" as opposed to properties, or "Traits" as opposed to Behaviors). In one embodiment of the present invention, illustrated in FIG. 2(a), the system supports 3D objects, not merely the 2D objects supported by Director. To integrate 3D rendering into Director (discussed in greater detail below), the system utilizes the concept of a "Viewport" through which an author may view rendered 3D objects on the Stage 30. Thus, an author would first create a Viewport 5, whose representation (i.e., icon, symbol) is inserted by the system into the Cast window 10. By dragging the symbol of the Viewport 5 from the Cast window 10 onto either the Stage 30 or into the Score 20, Persona Objects can now be viewed through the Viewport 5 (though none has yet been created).
It should be noted that the Viewport 5, although it is a Persona Cast Member (i.e., it appears in Cast window 10 and can contain "Traits" as discussed below), is a special type of Persona Object used for viewing other Persona Objects in "edit mode" - i.e., during authoring time, as opposed to runtime. It does not exist in the physical hierarchy of Persona Objects (also discussed below). Yet, the Viewport 5 appears in the Score 20 and on the Stage 30, and thus is a Director sprite, in this case a Persona Object, that can perform various functions in addition to its primary function of displaying rendered 3D objects during authoring time. Having created a Viewport 5, the author might next desire to create (or import) a 3D object having a data format that is recognized by the system for rendering 3D objects through the Viewport 5. In one embodiment, the author must explicitly elect to make the object "interactive" (e.g., capable of having "Traits," as discussed below) before it is displayed in the Cast Window 10. In other embodiments, this occurs by default. Unlike Director Cast Members, which are effectively just templates from which multiple sprites can be created and inserted into a Movie (via the Stage 30 or Score 20) as distinct objects, Persona Objects (which also may be represented as Persona Cast Members in the Cast window 10) are already "characters in the Movie" - i.e., they can appear in the Viewport 5 on the Stage 30 and can be visible at runtime (see, e.g., Ball 66). Yet, they do not exhibit any "behavior" until they are added to the Cast window 10 and given "Traits," for example, as discussed below.
Moreover, although multiple Director sprites can be created from a single Persona Cast Member (i.e., multiple "Persona" sprites associated with a single Persona Object), each of these Persona sprites (referred to hereafter as "Personas" - i.e., "personalities" or collections of "Traits" and "States" within a Persona Object) is not a separately controllable Persona Object with distinct properties and behaviors. Instead, a single Persona Object (whether represented on the Stage 30 or as a Persona Cast Member in Cast window 10) contains all of the properties and behaviors (e.g., "States" and "Traits," as discussed below) defined within each Persona. The Persona Object may, however, exhibit only a subset of these "States" and "Traits" at any given point in time, e.g., if a single Persona containing that subset is active at that point in time (in accordance with the frame timeline 22 in the Score 20).
As illustrated in FIG. 2(b), by opening the media editor of the Viewport 5, the author can see a Viewport Hierarchy window 90 that displays the default physical hierarchy of Persona Objects. Before examining the 3D objects that an author can create (such as "Ball" 66 shown in Viewport Hierarchy window 90), it is important to first understand the concept of the physical hierarchy of Persona Objects, as displayed in Viewport Hierarchy window 90.
Viewport Hierarchy window 90 includes other "special" Persona Objects (such as "cameras" and "lights," discussed in greater detail below) which can be viewed/edited through the Viewport 5 or the Viewport Hierarchy window 90 during authoring time, but which perform in the Movie and are viewed through the "active camera" during runtime. In other words, just as a "director" on a movie set can, during rehearsal time, see/modify all elements on the set, including cameras and lights, as well as the actors and props, the author of a Director Movie can perform the same functions during authoring time through the Viewport 5 and Viewport Hierarchy window 90. Yet, during runtime (or while filming a movie), only those objects within the field of view of a camera are displayed as part of the Movie (with the exception of other standard Director objects on the portion of the Stage 30 outside of the Viewport 5).
At the top of the hierarchy in the left pane 97 of Viewport Hierarchy window 90 is the "Universe" 91, which is the highest-level "parent" object in the Persona Object physical hierarchy. The Universe 91 can contain one or more child "Worlds" 92, which (in one embodiment) are independent and do not interact with one another. By default (i.e., upon creation of Viewport 5), each World 92 contains a "Camera" 93 (for displaying Persona Objects within its field of view during runtime, as opposed to the author's view of the Universe 91 through the Viewport 5 during authoring time), a "Directional Light" 94 (for illuminating Persona Objects directly in front of it within a specified range) and an "Ambient Light" 95 (for illuminating all Persona Objects within the World 92). The author may remove these Persona Objects from a World 92, but nothing will be visible during runtime without a Camera 93 and some type of lighting (e.g., Directional Light 94). Nevertheless, Persona Objects could exist in such a World 92 (and even collide with one another and generate sounds), though the viewer of the Movie might not see anything. Moreover, even with a Camera 93 and lighting such as Directional Light 94, certain Persona Objects may be obstructed or outside of the field of view of the Camera 93 or the range of the Directional Light 94, and thus not visible at certain times during the Movie.
In one embodiment (for better runtime performance), only one Camera 93 can be "active" at any given time, and thus only one World 92 can be displayed at a time (a Movie could switch among multiple Worlds 92 over time, though the Worlds cannot interact with one another in this embodiment). In other embodiments, multiple Cameras 93 could be active simultaneously, revealing a "split screen" view of the Universe 91, and perhaps across multiple Worlds 92 that could interact with one another. The decision is merely an implementation tradeoff of flexibility versus performance.
With this physical hierarchy infrastructure, the author can create Persona Objects within a World 92, or perhaps within "sub-world" Persona Objects that the author creates to extend the physical hierarchy to additional levels. As noted above, an author typically would create (or import) a 3D object which has a data format that the system can recognize and render through the Viewport 5 at authoring time, and through a Camera 93 at runtime. The hierarchy of the different types of Persona Objects is discussed in greater detail below. As a consequence of this physical hierarchy of Persona Objects, "parent" objects become a physical frame of reference for their "child" objects. In other words, even at runtime, if the parent object moves, the child object will move relative to the parent object, even if the child object does not have an explicit movement-related behavior. This models the real world in which objects "attached" to other objects (e.g., a phone on top of a desk, or a moon within a planet's gravitational field) move relative to those objects, as well as under their own power. An example of such an object is the "Ball" 66 (i.e., a sphere) shown in Viewport Hierarchy window 90 of FIG. 2(b). Certain properties of the Persona Objects in the physical hierarchy (such as x-y-z location coordinates, yaw-pitch-roll orientation coordinates and width-height-depth size coordinates) are displayed in the right pane 96 of Viewport Hierarchy window 90, while the hierarchy of names of the Persona Objects is displayed in the left pane 97, as noted above.
As with Director, an author can utilize certain primitive objects (e.g., spheres, cones, etc.) that are already built into the system, as well as import more complex 3D objects created with other programs, such as 3D Studio MAX from Kinetix, the multimedia business unit of AutoDesk. For example, the author may create a Persona Object, such as Ball 66, which appears in the Cast window 10 as a Persona Cast Member with which one or more Personas can be associated.
Ball 66, illustrated in the Cast window 10 of FIG. 2(c), is a Persona Cast Member. As noted above, the Persona Object on the Stage 30 is essentially the same object as the Persona Cast Member with which it is associated. They are, in essence, two different representations of the same object. Yet, by dragging the Persona Cast Member into the Score 20, the author can create one or more Personas or "personalities," each of which is associated with the same Ball 66, but possibly during different/overlapping periods of time in accordance with frame timeline 22. Thus, by dragging Ball 66 from Cast window 10 into the Score 20, the author creates a Persona 66a (a sprite from Director's perspective) that appears both in the Score 20 and on the Stage 30 (though effectively in a "scratch editing area" outside of the Viewport 5, to enable the author to add "Traits" as discussed below). At runtime, Ball 66 will appear only once in the Movie, unless, e.g., a particular "Trait" causes it to clone itself. It should be noted that not all Cast Members are Persona Objects. Some, for example, may be "Traits" that can be associated with a Persona Object (just as Director can put Behaviors and other scripts in the Cast window 10). For example, as illustrated in FIG. 2(d), the author can add a Behavior (in this case, an "Action" as described below) to a Persona Object by dragging it from the Library 70 into the Persona 66a on the Stage 30. In this case, by dragging "Color Fade Action" 73 into the Persona 66a on the Stage 30, it is added to the Cast 10 as a Cast Member (and can later be associated with virtually any Persona Object the author creates) as well as to the Persona 66a, where it is identified as Color Fade Action 74 to distinguish it from Cast Member 73, which is not necessarily associated with Persona Object 66. In other words, Ball 66 now has a Persona 66a that consists of a special type of behavior (referred to as an "Action" and discussed below), namely Color Fade Action 74. If, as illustrated in FIG. 2(e), the author adds another Persona by dragging the Persona Cast Member (Ball 66) into the Score 20 or onto the Stage 30 (e.g., Persona 66b, which currently is empty), the single Persona Object (Ball 66) would then have two Personas 66a and 66b (i.e., two personalities). Thus, during all 28 frames of the Movie, Ball 66 would exhibit Persona 66a (i.e., the Color Fade Action 74). During frames 10-20 of the Movie, the same Ball 66 also would exhibit the behavior of Persona 66b (which, in this example, is still empty, but could contain any number of "Traits" or "Actions," as described below).
It should be noted that, if Director provided the concept of multiple Score 20 windows (not shown), an author could associate one or more Personas (e.g., Personas 66a and 66b) with particular Channels of a particular Score window, such as Score 20, and dynamically enable that Score 20 based upon some runtime condition. The Score 20 windows might operate simultaneously and hierarchically to provide even greater flexibility.
For example, an author could invoke a particular Score 20 window (and thus certain behaviors) from within the main Score 20 window. Yet, the duration of this "child" Score 20 window might be determined by runtime conditions (to enable and disable this "child" Score 20 window) rather than merely global time as reflected in the main Score 20 window.
In any event, having created a Persona Object (Ball 66), and perhaps multiple Personas 66a and 66b (personalities) to control Ball 66 at different/overlapping times, the author can now add functionality to Ball 66 to "bring it to life." It is at this point in particular that the present invention diverges sharply from the Director paradigm of merely adding Behaviors (i.e., Lingo scripts) to objects.
STATES, TRAITS AND LISTENERS OVERVIEW

To facilitate the selective reusability of Persona Objects and their components in different contexts (individually and as collections of objects and their component properties and behaviors), the system provides a global communication mechanism to enable these behavioral capabilities (referred to as "Traits") to share automatically the common properties of the Persona Object with which they are associated (and even properties of other Persona Objects). The different Traits of a Persona Object need not communicate directly with one another (or even be aware of one another's existence) in order to share these common properties (referred to as "States").
By defining States (i.e., attributes or properties that can be monitored by the system for changes in value) that are meant to be "exposed" to the system, or by "Listening" for a State (i.e., getting a callback from the system when it detects a change in the value of that State), a "Trait" (that defined that State or is Listening to that State) of a Persona Object can share this State (e.g., an object's "location") with any other Trait that elects to Listen for a change in that State's value. By Listening to exposed States, individual Traits can effectively communicate with one another and with the user of an application, and share various characteristics of their Persona Objects, even though these Traits are unaware of one another's existence. For example, a "CanBeep" Trait in a Persona Object ("ball") could Listen for changes in the ball's "location" State and, in response, could generate a "beep" sound. Thus, the ball would beep whenever it was moving, and would stop beeping whenever the ball stopped. Yet, other Traits may be responsible for moving the ball (e.g., UserDraggable, CollisionResponse, etc.); but the CanBeep Trait need not know about these other Traits in order to communicate with them and share this State change information. The CanBeep Trait, therefore, is reusable in other contexts in which various other Traits may alter the ball's location.
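The expose/Listen mechanism just described can be sketched roughly as follows. This is an illustrative Python approximation, not the patent's actual implementation (which extends Director's Lingo environment); the names StateTable, expose, listen, set and the CanBeep class are invented for the sketch.

```python
class StateTable:
    """Minimal sketch: holds exposed States and calls back Listeners
    whenever a State's value actually changes."""

    def __init__(self):
        self._states = {}
        self._listeners = {}   # State name -> list of callbacks

    def expose(self, name, value=None):
        self._states.setdefault(name, value)

    def listen(self, name, callback):
        self._listeners.setdefault(name, []).append(callback)

    def set(self, name, value):
        old = self._states.get(name)
        self._states[name] = value
        if old != value:       # notify Listeners only on a real change
            for cb in self._listeners.get(name, []):
                cb(name, old, value)


class CanBeep:
    """Beeps whenever the ball's "location" State changes, without knowing
    which other Trait (UserDraggable, CollisionResponse, ...) moved it."""

    def __init__(self, states):
        self.beeps = 0
        states.listen("location", self.on_moved)

    def on_moved(self, name, old, new):
        self.beeps += 1        # stand-in for generating a "beep" sound


ball = StateTable()
ball.expose("location", (0, 0, 0))
beeper = CanBeep(ball)
ball.set("location", (1, 0, 0))    # some other Trait moves the ball
```

Because CanBeep depends only on the exposed "location" State, the same class could be attached to any object that exposes that State, which is the reusability property the text describes.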
As an example of how Persona Objects may develop greater complexity over time, consider a "car" object. Initially, an author might create a car for a particular application or Movie that requires only the function of steering. The user must maneuver the car along a continuous road that twists and turns but never ends. Although the user can set the car's initial speed, this speed will remain constant throughout the Movie.
The car, therefore, must have the capability of steering in response to the user's actions. So, the author might create a "CanSteer" Trait which monitors the user's mouse/keyboard actions and causes the car to turn in response. This Trait might define and expose certain States, such as the angle at which the user is turning the steering wheel ("wheelAngle") and the car's current location ("location") and velocity ("velocity"). The car's velocity would consist of two components - the car's current direction ("direction") and its rate of speed ("speed"). The speed, as noted above, will remain constant in this application.
In response to the user's mouse/keyboard actions, the CanSteer Trait might modify the wheelAngle State. By Listening for changes in the wheelAngle State (in case another object also modified that State), it could then calculate a new "orientation," which would in turn result in a new direction (either via this or some other Trait), thus modifying the car's velocity State. By also Listening for changes in the velocity State (again because another object might modify that State), it could then calculate the car's ultimate location, which would depend upon its current location and its new direction (e.g., based on its orientation) and speed (even though constant). Well after this Movie has been completed, the same or another author might desire to reuse this CanSteer Trait in a more complex Movie in which the user also can make the car accelerate and decelerate - i.e., simulating the car's accelerator pedal with a "CanAccelerateandDecelerate" Trait. Because determining the car's location depends on knowing when the user either turns the steering wheel to change the car's direction (implemented via the CanSteer Trait) or adjusts the car's accelerator pedal to change the car's speed (implemented via the CanAccelerateandDecelerate Trait), these two Traits must communicate with each other in order to properly update the car's location.
The CanAccelerateandDecelerate Trait could define and modify a State reflecting the car's rate of acceleration or deceleration ("rateOfAcceleration") in response to the user's mouse/keyboard actions. By Listening for changes in this State, it could then modify the car's current velocity State (i.e., its speed) based upon this new rateOfAcceleration. It must then, however, update the car's location based upon its current location and new velocity.
Yet, the CanSteer Trait already implements the function of updating the car's location in response to a change in its velocity. It is Listening for changes in velocity because it is changing the car's direction as the user "turns the steering wheel." Nevertheless, the CanSteer Trait will automatically be notified by the system when the CanAccelerateandDecelerate Trait modifies the car's velocity State; and it will update the car's location correctly (i.e., based on both its direction and speed), even though it was expecting only a change in direction, not a change in speed.
Thus, the CanSteer Trait, which assumed a constant speed for its initial simple application, can be reused in a more complex application in which the car can accelerate and decelerate. It is reusable because it can respond to changes in the velocity State due not only to its own changes in direction, but also due to changes in speed made by the subsequently developed CanAccelerateandDecelerate Trait. The CanSteer and CanAccelerateandDecelerate Traits are thus able not only to share the car's State information, but also to communicate with each other indirectly in this new Movie, even though the author of the CanSteer Trait may never have contemplated the existence of a CanAccelerateandDecelerate Trait.
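This reuse scenario can be sketched as follows. This is an illustrative Python approximation of the described mechanism, not the actual Lingo/Director implementation; all class and method names are invented, and the motion arithmetic is deliberately simplified to a single location update per velocity change.

```python
class States:
    """Sketch of shared State storage with change-notification callbacks."""

    def __init__(self, **initial):
        self._values = dict(initial)
        self._listeners = {}   # State name -> list of callbacks

    def listen(self, name, callback):
        self._listeners.setdefault(name, []).append(callback)

    def get(self, name):
        return self._values[name]

    def set(self, name, value):
        if self._values.get(name) != value:
            self._values[name] = value
            for cb in list(self._listeners.get(name, [])):
                cb(value)


class CanSteer:
    """Written for the original Movie: Listens to "velocity" and updates
    "location," originally expecting only direction changes."""

    def __init__(self, car):
        self.car = car
        car.listen("velocity", self.on_velocity)   # velocity = (direction, speed)

    def on_velocity(self, velocity):
        direction, speed = velocity
        x, y = self.car.get("location")
        # Advance the car one "tick" along its current velocity.
        self.car.set("location", (x + direction[0] * speed,
                                  y + direction[1] * speed))


class CanAccelerateAndDecelerate:
    """Added later: changes speed only, reusing CanSteer's location update
    without knowing that CanSteer exists."""

    def __init__(self, car):
        self.car = car

    def accelerate(self, delta):
        direction, speed = self.car.get("velocity")
        self.car.set("velocity", (direction, speed + delta))


car = States(location=(0.0, 0.0), velocity=((1.0, 0.0), 0.0))
CanSteer(car)
pedal = CanAccelerateAndDecelerate(car)
pedal.accelerate(2.0)   # CanSteer is notified via "velocity" and moves the car
```

Neither Trait references the other; their only coupling is the agreed-upon "velocity" State, which is what makes CanSteer reusable in the later, more complex Movie.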
The system's Listening mechanism facilitates this communication by notifying a Listening Trait of changes in States (even across Persona Objects or a network), regardless of which Trait, for example, made such changes. As will become apparent, the reusability of Traits is enhanced significantly as authors of Traits "expose" their States to the system, which can automatically monitor changes in those States (whether made by the Trait that defined/exposed them, or by any other Trait or entity that is allowed to modify them) and notify other interested Traits (Listeners) of such changes.
As will be explained below, this sharing of State changes can even occur across Persona Objects. For example, if one car is notified by the system that it has collided with another car (e.g., its "hasCollided" and "collidedWith" States have changed), it can then access known States of the other car (such as its current "velocity" and "mass") to determine how to respond to the collision. Moreover, Traits could even Listen for "conditional" State-change information - e.g., Listen for changes in the "hasCollided" State if the Persona Object identified by the "collidedWith" State is a car (e.g., has an "IsACar" Trait). This could greatly enhance system performance by minimizing the number of system callbacks.
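Conditional Listening of this kind might be sketched as follows. This is an illustrative Python approximation (the predicate-based listen_if API and the is_a_car stand-in for an "IsACar" Trait are invented for the sketch); it shows how the system could evaluate a condition before issuing a callback, so uninteresting State changes never reach the Listener.

```python
class ConditionalStates:
    """Sketch of State storage where a Listener attaches a predicate,
    so the callback fires only when the condition holds."""

    def __init__(self):
        self._states = {}
        self._listeners = {}   # State name -> list of (predicate, callback)

    def listen_if(self, name, predicate, callback):
        self._listeners.setdefault(name, []).append((predicate, callback))

    def set(self, name, value):
        old = self._states.get(name)
        self._states[name] = value
        if old != value:
            for pred, cb in self._listeners.get(name, []):
                if pred(self._states):     # filter before the callback
                    cb(name, old, value)


class OtherObject:
    """Stand-in for another Persona Object; is_a_car models an IsACar Trait."""
    def __init__(self, is_a_car):
        self.is_a_car = is_a_car


car = ConditionalStates()
collisions = []

# Respond to "hasCollided" only when the object in "collidedWith" is a car:
car.listen_if(
    "hasCollided",
    lambda states: getattr(states.get("collidedWith"), "is_a_car", False),
    lambda name, old, new: collisions.append(new),
)

car.set("collidedWith", OtherObject(is_a_car=False))
car.set("hasCollided", True)        # filtered out: the other object is not a car
car.set("hasCollided", False)       # reset
car.set("collidedWith", OtherObject(is_a_car=True))
car.set("hasCollided", True)        # passes the condition; callback fires
```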
Using prior methods, such as Director's Behaviors, state-change information cannot easily be shared, even using global properties, because the Behaviors need to know when other Behaviors change these properties (e.g., the direction or speed of the car) so that they can respond appropriately. Mere global access to these properties is insufficient. Though Behaviors could send messages to one another to communicate this State-change information, this quickly becomes unwieldy as more and more Behaviors are added, and the implementation of many Behaviors must be modified to a greater extent each time a new Behavior is added.
Instead of requiring Traits to communicate directly by sending messages to one another (requiring knowledge of one another, and thus creating dependencies), the present invention enables Traits to communicate with one another indirectly by exposing (sharing) their States to the system, which can monitor and share changes in such States with any other interested Trait that desires to Listen for such State-change information. The communications interface requires little more than knowledge of States that have been exposed to the system. For example, knowing only that some Trait within a car exposes the "velocity" State, another Trait (or even another authoring environment that could interact with the system) could implement sound with a tempo that matched the car's velocity (e.g., engine sound). It would merely Listen for changes in velocity, and make corresponding changes in tempo.
Having the system notify only interested Listeners of a change in the value of a State is a far more efficient method of communication than, for example, broadcasting a message to all objects; and it creates far fewer dependencies than explicitly sending a message to a particular object.
DEFINING STATES, TRAITS AND LISTENERS

As noted above, Director objects typically have properties that define what the object "is" (e.g., size and location) and Behaviors that determine what the object "does" or "can do" (e.g., AvoidMouse or Draggable). Persona Objects that an author creates for a Movie also have special properties, referred to as States. As noted above, States are special in that they can be "exposed" to the system, which can monitor their changes and notify interested parties (Listeners).
Persona Objects attain their States by containing Traits (not mere Behaviors) that define/expose one or more States. Traits also can define "private" properties that are never exposed to the system. As will be explained below, these Traits also can Listen for changes in these and other States, and respond accordingly - i.e., by performing functions that require the modification of other States, which other Traits may Listen and respond to, and so forth as this "condition/response" process continues to ripple across all of the Persona Objects in a Movie.
As will be explained in greater detail below, in one embodiment of the present invention, certain States are inherent in certain types of Persona Objects by default (e.g., every Persona Object has a "location" in 3D space). In other words, certain types of Persona Objects have default Traits that define/expose these default States. The system provides additional "built-in" Traits and States (also described in greater detail below) that the author may add to the Persona Objects in a Movie. As noted above, and illustrated in FIGs. 2(a) - 2(e), an author utilizing one embodiment of the present invention creates a new Movie by first creating a Viewport 5 and dragging it onto the Stage 30. The author then creates or imports one or more 3D objects into the Persona Object physical hierarchy (e.g., Ball 66 in Viewport Hierarchy window 90 of FIG. 2(b)), and then drags it onto the Stage 30 or Score 20 to create one or more Personas (e.g., Personas 66a and 66b in FIG. 2(e)) associated with that Persona Object (Ball 66).
To bring the Persona Objects in a Movie "to life," the author can add Traits to each Persona Object (in addition to the default Traits automatically associated with that Persona Object, based on its Object Type, as discussed below). The author may write a Trait from "scratch" (e.g., by creating a Lingo script and associating it with a Persona), or drag a previously written Trait from the Library window 70 onto one of the Persona sprites (e.g., 66a) associated with the Persona Object.
As noted above, at any given point in time, a Persona Object will exhibit only those Traits that are associated with a Persona that is "enabled" at that time (in accordance with the frame timeline 22 in the Score 20). This enables an author to associate distinct collections of Traits (i.e., "personalities") with a Persona Object at different/overlapping times. Thus, Traits are added to a Persona Object by associating them directly with one of the Personas associated with that Persona Object.
In one embodiment of the present invention, these Traits are treated by Director as Behaviors, while the States defined by these Traits are treated as Director properties. In this embodiment, however, the system of the present invention manages these Traits and States quite differently from the manner in which Director maintains Behaviors and properties.
FIG. 3 illustrates the manner in which the system 300 interacts with Persona Objects (e.g., Persona Object 330, as well as other Persona Objects 380 and 390), and with their associated Traits 350 and States 340, and in particular the mechanism 310 by which the system monitors States 340 for changes and notifies Traits (e.g., Trait T1 351) that have added Listeners 360 (e.g., Listener L1 361) for such State changes. System 300, of course, includes many other "core" functions, including those relating to its engine for rendering 3D objects and its integration with Director (all described in greater detail below in connection with the core components of system 300). System 300 maintains three major types of data structures 320 (discussed in greater detail below) for an author's Movie - one for the Persona Objects themselves, one for the Traits associated with the Persona Objects, and one for the hierarchy of States defined/exposed by the Traits. When the author first creates a Persona Object, such as Persona Object 330, the system 300 adds data identifying this object (e.g., an object ID) to data structure 321, which is subsequently modified as the author adds/removes features, such as new Traits (which may define new States as well as Listeners for changes in other States).
Persona Object 330 is shown with Traits T1 351, T2 352, T3 353 . . . TN 354. What distinguishes these Traits from standard Director Behaviors (or other Lingo scripts, C++ code, etc.) is that they can define/expose States and/or add Listeners, associated with particular States, that are notified (i.e., by a system "callback") when the system detects, via mechanism 310, that the value of any of those designated States has changed. This mechanism is illustrated conceptually within Persona Object 330.
As noted above, States are defined from within Traits. An author can define standard Director properties (maintained by Director) from within a Trait, e.g., for purely internal use by the Trait. If, however, the author desires to have the system 300 monitor changes in value, and notify Listeners 360 when such changes occur, the author will define a State.
For example, Trait T1 351 defines two States (S4 and S5). To define a State, this embodiment extends Director's existing "addProp" Lingo command. The command identifies various characteristics of the State, including the object ID of the Persona Object with which the State will be associated (i.e., the Persona Object associated with the defining Trait), the name of the State, a description of the State, its attributes (e.g., "read only" so that only this Trait may modify it; "internal" to prevent other Listeners; and "reset" after callbacks have been made in response to a change in value), and a min/max range of values. States can also be defined implicitly (e.g., by setting the value of, or adding a Listener to, a previously undefined State), though (in one embodiment) no description or range of values may be specified for such an implicitly defined State.
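The State-definition mechanism described above can be sketched in outline as follows. This is an illustrative Python model, not the actual Lingo/Director implementation; the class name and all field names are assumptions made for the sketch:

```python
# Illustrative model of the State record created by the extended "addProp"
# command; field names are assumptions, not the actual implementation.
class State:
    def __init__(self, object_id, name, description="", read_only=False,
                 internal=False, reset=False, value_range=None):
        self.object_id = object_id      # Persona Object this State belongs to
        self.name = name                # name used to reference the State
        self.description = description  # textual description of the State
        self.read_only = read_only      # only the defining Trait may modify it
        self.internal = internal        # only the defining Trait may Listen
        self.reset = reset              # value is reset after callbacks fire
        self.value_range = value_range  # optional (min, max) validity range
        self.value = None

    def set(self, value):
        # The system enforces the declared range of values, if any.
        if self.value_range is not None:
            lo, hi = self.value_range
            if not lo <= value <= hi:
                raise ValueError("value outside declared range")
        self.value = value

speed = State(object_id=330, name="speed", value_range=(0, 200))
speed.set(60)       # accepted: within the declared range
```

An attempt to set a value outside the declared range (e.g., `speed.set(500)`) would raise an error, mirroring the validity checking the system performs on behalf of Traits.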
States can be of virtually any data type (e.g., boolean, integer, floating point, list, etc.) and, as noted above, are treated by Director as mere properties. Yet, by defining a State, the author of a Trait effectively "exposes" that State - i.e., directs the system 300 to monitor the State, via mechanism 310, for changes in the value of that State (if any Trait has added a Listener associated with that State), and notify Listeners 360 when such changes occur. If the State is defined as "read only," then only the Trait which defined the State can modify its value. Moreover, a State can also be defined as "internal" such that only the defining Trait can access, and Listen for changes in, that State. Otherwise, any other Trait can add a Listener for that State, whether or not associated with the same Persona Object.
Finally, some States are defined to "reset" their value after notifying Listeners of the change in value. For example, in a system-wide collision detection scheme (as is present in one embodiment of this invention), the system notifies Listeners of changes in the "hasCollided" and "collidedWith" States. This lets Traits know when, and with whom, their Persona Object has collided. Yet, in order to notify Listeners of subsequent collisions, this "hasCollided" State, in particular, must be reset. Rather than require individual Traits or Persona Objects to reset their own State, the system can easily perform this task. Thus, certain States would include this "reset" attribute.
Having defined States S4 and S5, the author of Trait T1 351 also defines two Listeners - LS1 361 and LS4 362, which cause the system 300 to issue "callbacks" to a specified handler in this Trait when the values of States S1 341 and S4, respectively, have changed. To define a Listener, a new "addListener" (as well as "removeListener") Lingo command has been added to Director. The command identifies various characteristics of the Listener, including the object ID of the Persona Object with which the State (that this Trait wants to Listen to) is associated, the name of that State, the particular Persona (i.e., sprite) to which Director will refer the callback, and the name of the callback handler.
As noted above, the "addListener" command could also include a "condition" (e.g., callback when State X changes only if the value of State X more than doubled). The system 300 (via mechanism 310) would not only monitor States for changes in value, but would also evaluate the specified condition to determine whether to issue a callback to a particular "conditional Listener."
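The monitoring mechanism 310 and its "conditional Listeners," as described above, can be sketched as follows. This is a hedged Python model; the class, method and condition names are illustrative assumptions:

```python
# Hedged sketch of mechanism 310: the system tracks State values and issues
# callbacks to Listeners, optionally gated by a Listener-supplied condition.
class StateMonitor:
    def __init__(self):
        self.values = {}         # State name -> current value
        self.listeners = {}      # State name -> list of (handler, condition)

    def add_listener(self, state, handler, condition=None):
        self.listeners.setdefault(state, []).append((handler, condition))

    def set_state(self, state, value):
        old = self.values.get(state)
        self.values[state] = value
        if value == old:
            return                      # no change in value, no callbacks
        for handler, condition in self.listeners.get(state, []):
            # A "conditional Listener" receives a callback only if its
            # condition holds for the observed change.
            if condition is None or condition(old, value):
                handler(state, value)

fired = []
m = StateMonitor()
m.add_listener("x", lambda s, v: fired.append(v))
m.add_listener("x", lambda s, v: fired.append(("doubled", v)),
               condition=lambda old, new: old is not None and new > 2 * old)
m.set_state("x", 3)     # only the unconditional Listener fires
m.set_state("x", 10)    # both fire: 10 more than doubles 3
```

After the two `set_state` calls, `fired` holds `[3, 10, ("doubled", 10)]`: the unconditional Listener saw both changes, while the conditional one saw only the change that more than doubled the value.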
The author of Trait T1 351 also will include two handlers in this Trait to handle the callbacks for Listeners LS1 361 and LS4 362. As a result of the callback to LS4 362, for example, or as just another part of its functionality, this Trait T1 351 issues a Lingo command that changes the value of State S1 341 (which in turn will result in a callback to the handler for Listener LS1 361). Finally, this Trait also calls an "Action" (discussed below), A1 371, that changes the values of two other States - S4 and S5.
Similarly, Trait T2 352 also defines two States (S6 and S9), and changes the value of each of those States, in addition to defining three Listeners - LS4 363, LS 364 and LS 365. Thus, the Traits associated with Persona Object 330 (including those not shown) define the various States S1 341, S2 342, S3 343 . . . SN 344. These Traits also include various Listeners 360 (including Listeners LS1 361 and LS4 362 in Trait T1 351, and Listeners LS4 363, LS 364 and LS 365 in Trait T2 352) that Listen for changes in the specified States and receive callbacks from the system 300 when it detects those changes (via mechanism 310).
It should be noted that Listeners can receive callbacks that identify other Persona Objects (e.g., "collidedWith"), and then access known States (e.g., "velocity") in those other Persona Objects. Even if a Trait does not know whether the Persona Object that it "collidedWith" has a particular State, the Trait could query that object for its list of Traits and States, and then (after examining this list) conditionally access the desired State information. In this manner, Persona Objects can communicate with one another without prior knowledge of one another's existence, making them far more reusable.
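The query-then-access pattern described above can be sketched as follows. This is an illustrative Python model; the object table, Trait names and the "velocity" State come from the examples in the text, while the function names are assumptions:

```python
# Hedged sketch: after a "collidedWith" callback identifies another Persona
# Object, a Trait queries that object's States before accessing "velocity".
objects = {
    42: {"traits": ["IsAnObject", "CanMove"], "states": {"velocity": 7.0}},
    43: {"traits": ["IsAnObject"], "states": {}},
}

def velocity_of(collided_with):
    other = objects[collided_with]
    # Examine the other object's list of States, then access conditionally,
    # so no prior knowledge of that object's existence or design is needed.
    if "velocity" in other["states"]:
        return other["states"]["velocity"]
    return None     # the object does not expose that State

velocity_of(42)   # the colliding object exposes a velocity
velocity_of(43)   # this one does not; the Trait degrades gracefully
```

This is what makes the objects reusable: the Trait never assumes the other object's capabilities, it discovers them at runtime.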
TRAITS AND ACTIONS

Persona Object 330 in FIG. 3 also includes various "Actions" 370 (including Actions A1 371, A2 372, A3 373 . . . AN 374). Director also treats Actions as Behaviors, but they too are "special" in that they modify States (not merely Director properties), possibly over time. Actions can be distinguished from Traits in that they do not define States or add Listeners. Traits, on the other hand, do define States and/or add Listeners, and may also modify States. Actions serve two primary purposes, both closely related to Traits. On the one hand,
Actions can invoke Traits implicitly - i.e., by modifying States to which other Traits are Listening. On the other hand, Actions can be invoked by a Trait explicitly - i.e., as a consequence of the Trait Listening to a State, receiving a callback upon a change in the value of that State, and then responding by explicitly invoking one or more Actions. A Trait can be thought of conceptually as a "capability" of doing something - e.g., CanAccelerateAndDecelerate - whereas an Action does something directly that often triggers a Trait's capability (e.g., modifying a State such as "moveAcceleratorPedal," to which the CanAccelerateAndDecelerate Trait may be Listening) and/or implements that capability (e.g., modifying a State such as "speed" upon being invoked explicitly by the CanAccelerateAndDecelerate Trait).
For example, in FIG. 3 Action A2 372 modifies the value of State S1 341 (as well as States S4 and S8). This change in value is detected by the system 300 (via mechanism 310), which issues a callback to the handler in Trait T1 351 that is defined by Listener LS1 361. Various Actions and Traits may modify States. By Listening for such changes, another Trait can perform its functionality (i.e., be reused) in a variety of different contexts, and communicate indirectly and share information with other Traits without even being aware of their existence (as noted in our "car" example above).
In addition to invoking Traits indirectly, Actions also can serve to break up the response of a Trait to a callback from the system into modular reusable components. For example, Traits can be "parameterized" - i.e., separating their Listening functionality (waiting for a State-change "condition" to occur) from their "response" to a triggering of their capability (i.e., changing other States, that may trigger other Traits that are Listening to those States). Particularly where such responses themselves do not require additional parameters, Traits can delegate such responses to separate Actions, making the Trait even more modular and reusable, while allowing Actions to be replaced (or selected as alternative parameters) to optimize existing functionality or even add new or alternative functionality.
For example, a "Draggable" Trait for 3D objects might want to offer alternative functionality in response to the user dragging the object with a mouse - "move" or "rotate" the object. By creating two Actions (MoveObject and RotateObject), and making them parameters selectable by another author reusing the Trait (or even the user at runtime), the Trait could Listen for the user's mouse dragging events and then delegate the response to whichever Action parameter was selected.
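The parameterized "Draggable" example can be sketched as follows. This is a hedged Python model; the function and field names are illustrative, and the drag-to-rotation mapping is a stand-in for whatever the real Actions would do:

```python
# Sketch of a Trait whose response to a drag event is delegated to a
# selectable Action parameter; names here are illustrative assumptions.
def move_object(obj, dx, dy):
    obj["x"] += dx
    obj["y"] += dy

def rotate_object(obj, dx, dy):
    obj["yaw"] += dx          # crude mapping of drag distance to rotation

class DraggableTrait:
    def __init__(self, response_action=move_object):
        # The Action is a parameter, so an author reusing the Trait (or the
        # user at runtime) can substitute RotateObject for MoveObject.
        self.response_action = response_action

    def on_drag(self, obj, dx, dy):
        # The Trait Listens for drag events and delegates the response.
        self.response_action(obj, dx, dy)

ball = {"x": 0, "y": 0, "yaw": 0}
DraggableTrait(move_object).on_drag(ball, 5, 2)      # drag moves the ball
DraggableTrait(rotate_object).on_drag(ball, 30, 0)   # drag rotates it
```

The Trait's Listening logic is written once; only the pluggable Action changes, which is the separation of "condition" from "response" the text describes.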
Returning to FIG. 3, Trait T1 351 might want to change two States (S4 and S5) in response to a callback to its Listener LS1 361. By delegating this response to an Action (A1), and making the call to A1 a parameter, another author could reuse this Trait and substitute an alternative Action in its place to perform a slightly different function. Actions too can be parameterized to provide even further reusability. For example, our example "canBeep" Trait might call an Action that enables the user to select the desired "beep" sound. Moreover, Actions may modify State values "over time" - e.g., changing color gradually over one minute from one color to another. As the value of this State (color) changes, the Action may control the gradual shift from one color to another, while another Trait may be Listening for all color changes, or only those meeting a certain condition.
Thus, authors can increase reusability dramatically by creating Traits as "capabilities" dependent upon certain conditions (i.e., changes in State values by Traits and/or Actions), and modularizing the "responses" to those conditions being met into discrete Actions that can be called separately by the Trait and often parameterized into component (alternative and/or substitute) Actions. Such Traits can then be reused as well as enhanced by other authors.
TAGS AND THE STATE HIERARCHY
Although the ability of a Trait to Listen to exposed States, and thus indirectly communicate and share information with a wide array of other Traits and Actions (within and among Persona Objects) provides a significant degree of reusability, this concept can be extended even further. As noted above, the Listener's trigger (i.e., the change in a State) can be made conditional - i.e., adding almost any other State or combination of States, or even Director properties, as a condition which must be met before the Listener will receive a callback.
Moreover, States can be grouped together (via a concept referred to as "Tags") to enable a Listener to receive a callback if any of the "Tagged States" changes in value. This concept could be completely genericized, enabling the Listener to specify any condition (combination of States, Director properties, etc.) under which a callback is to be received; though the Trait also could evaluate such condition itself, creating a separate Listener for each of the component States.
In any event, in one embodiment, the concept of a Tag is employed to enable authors to group any set of States together (even across Persona Objects, if desired), and receive a callback if any of these States changes its value. The Tag is itself a State. Thus, an author of a Trait can define a new State, and then add it as a Tag on one or more other existing States; or simply add a State that already has been defined (e.g., by another Trait) as a Tag onto another State. Moreover, Tags can be added to Tag States, so as to extend the hierarchy to additional levels. If one considers a Tag State to be a "parent" of its child State, then a Listener to the parent Tag State will receive a callback from the system when the value of the child (or any "descendant") State changes. The parent Tag State may be a parent to multiple child States, and each child State may have multiple parent Tag States, each of which may itself have multiple parent Tag States at higher levels of this "State Hierarchy." Each Persona Object will therefore have its own State Hierarchy, starting with the "root" (non-Tag) States defined by Traits associated with that Persona Object.
When the system detects a change in any particular State, it will issue callbacks not only to Listeners to that State, but also to any of its parent, grandparent and other "ancestor" Tag States. Moreover, the callback will include a list or "chain" of States from the initial State that changed up through the State Hierarchy until it reaches the Tag State specified by the Listener. For example, looking at FIG. 4, consider the following 2 "root" (i.e., non-Tag) States -
"orientation" 410 and "location" 420. If the author of a Trait was interested in knowing whether its associated Persona Object moved (whether in location or merely in orientation), it could create a separate Listener for each of these States (orientation 410 and location 420) and, upon receiving a callback for either one, check the value of the other. Instead, by using Tags, it could simply define a State called "movement" 480 (which could itself contain data, though this is probably unnecessary in this example) and add it as a parent Tag to both the orientation 410 (illustrated by line 415) and location 420 (illustrated by line 425) States for the Persona Object.
By adding a Listener to this movement Tag State 480, the specified handler in the author's Trait would receive a callback when the system detected a change in the value of either the orientation State 410 (in which case the callback data would include the list of identifiers, "orientation, movement") or the location State 420 (in which case the callback data would include the list of identifiers, "location, movement"). The Listener's callback handler would thus know which State changed, and could respond accordingly.
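The upward walk through the State Hierarchy, and the resulting chain of identifiers delivered to the Listener's handler, can be sketched as follows. This is an illustrative Python model of the "movement" example; the data-structure shapes and function name are assumptions:

```python
# Hedged sketch of the Tag/State Hierarchy walk: each State points to its
# parent Tag States (unidirectional child-to-parent links, as in the text).
parents = {
    "orientation": ["movement"],    # line 415 in FIG. 4
    "location": ["movement"],       # line 425 in FIG. 4
}
listeners = {"movement": []}        # Tag State -> callback handlers

def notify(state, chain=None):
    chain = (chain or []) + [state]
    for handler in listeners.get(state, []):
        handler(chain)                    # e.g. ["orientation", "movement"]
    for tag in parents.get(state, []):    # walk upward to ancestor Tag States
        notify(tag, chain)

received = []
listeners["movement"].append(received.append)
notify("orientation")   # handler receives the chain "orientation, movement"
```

The handler inspects the first element of the chain to learn which root State actually changed, exactly as the "orientation, movement" / "location, movement" callbacks above describe.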
Consider the example of a network game, in which the individual copies of the game on each player's computer need to be synchronized with one another throughout the game. Assume that the various States that need to be synchronized consist of the 5 "root" (non-Tag) States illustrated in FIG. 4 - i.e., "orientation" 410, "location" 420, "color" 430, "weight" 440, and "temperature" 450.
By creating a "NetworkSync" Trait that is responsible for synchronizing these States, and adding a "published" parent Tag State 460 to each of them (illustrated by lines 411, 421, 431, 441 and 451), the author of the NetworkSync Trait can simply add a Listener over the network to the published State 460 on each of the other player's copies of the application, and receive the appropriate callback whenever the system detects a change in value of any of these States on any player's machine (e.g., "color, published, Player 1" when the color State 430 changes on Player 1's machine). The handler in each NetworkSync Trait could simply update the local copy of the State corresponding to the one that changed (after querying the corresponding game over the network for the changed State value, if it was not sent along with the callback).
Alternatively, the Listeners could operate locally, and then exchange packets over the network, requiring each copy of the game to unpack the network packets and understand the State changes from the other players' games. In either case, the network overhead is substantially reduced, and the network synchronization task made significantly easier, by having each copy of the game made aware of any change in value of its many local States. Moreover, even if the complexity of the game increases in subsequent versions, each new State merely need be given a "published" Tag 460 to maintain network synchronization. Moreover, the "published" Tag 460 itself could contain data that aids in the synchronization task. For example, it might contain data corresponding to instructions for handling certain "child" States differently. Or it might contain the changed State value information to simplify the task of providing this information over the network.
Finally, "audit" Tag State 470 might be used as a parent Tag State to published Tag State 460 (as illustrated by line 461). Assuming that the value of the published Tag 460 did not change, but that the value of one of its child States did change, then a Listener to the audit Tag 470 would receive virtually the same information (e.g., "color, published, audit") as would the Listener to the published Tag 460 (e.g., "color, published"). Yet, the Listeners to the audit Tag 470 might not need "real-time" synchronization information, but merely accurate data sent periodically, as opposed to Listeners of the published Tag 460, which must synchronize the actual games in real time. In such a case, the system might handle the audit Tag 470 differently (whether inherently, through a condition specified in the Tag itself or through a condition specified by the Listeners), and provide only periodic (though accurate) callbacks.
Because Tag States can be conceptual groups of other existing States, there is virtually no limit to their use. For example, one could define a "physicalProperties" Tag State and add it to various other physical States, such as size, width, color, height, etc. As noted above, the Tag State data may or may not have independent meaning.
The manner in which the system identifies the Listeners to a Tag State that are to receive a callback when the value of a child State changes will become apparent from the discussion of the State Hierarchy data structures below. In essence, the data structure for each State identifies ("points to") each of its "parent" Tag States; so the system merely "walks the hierarchy" (which may branch off in many directions) until no more Tag States with Listeners are present. As will become apparent below, in one embodiment, the links between child and parent (Tag) States are unidirectional, though they could be made bidirectional in another embodiment to optimize performance.
PERSONA OBJECT, TRAIT AND STATE DATA STRUCTURES
As noted above, when an author creates a Persona Object (e.g., by adding it as a physical "child" of a World), the system adds information relating to that Persona Object to a Persona Object data structure, as illustrated in Table I below (for one embodiment of the present invention). This data structure enables the system to maintain various characteristics of the Persona Objects necessary for the performance of certain system functions.
For example, the "Persona Object ID" field provides a unique identifier that enables the system, as well as Traits and other objects, to access the States of a Persona Object, among other characteristics (via the State Hierarchy data structures, discussed below). In order to modify a State, one must know the associated Persona Object ID of the Persona Object containing that State.
The "Persona Object Description" field provides a textual description of the Persona Object that is accessible, e.g., by another Trait within another Persona Object, and could be used to infer information about this Persona Object's capabilities. The "List of Active Trait Names" and "The State Hierarchy Storage" fields provide the system, as well as other interested Traits, with a list of the Traits (with parameters) and States (with hierarchy information and data) exhibited by a Persona Object at any given point in time during runtime. Traits can query a Persona Object for this information in order to determine whether particular Traits or States are supported and, if so, can then query the Persona Object for these values or make other decisions dependent upon this information. The system utilizes this information, for example, to determine which Traits should be exhibited by a particular Persona Object at runtime, as well as to access (read and write, and monitor changes in) the values of the States of the Persona Object.
It should be noted that the fields identified in Table I are supported for each Persona Object created by the author for a particular Movie. The precise data structure format itself is merely one embodiment of an aspect of the present invention. Similar data could of course be maintained in a variety of different formats, and still support the concepts underlying the present invention.
TABLE I: PERSONA OBJECT Data Structure
Persona Object ID: a unique identifier that enables the system, Traits and other objects to access the States and other characteristics of the Persona Object.
Persona Object Description: a textual description of the Persona Object.
List of Active Trait Names: the Traits (with parameters) exhibited by the Persona Object at any given point in time.
State Hierarchy Storage: the States (with hierarchy information and data) defined/exposed by the Persona Object's Traits.
Although multiple Persona Objects may contain the same Trait at runtime, the code implementing the Trait is stored only once, accessible via the Trait data structure illustrated in Table II below. As noted above, the Persona Object data structure maintains the list of Traits and their parameters contained within (or associated with) each Persona Object, as well as complete State Hierarchy information, as discussed more fully below.
Each distinct Trait (regardless of how many Persona Objects contain that Trait) consists of a "Trait Name" field and a "Script ID" field that identifies the Trait (by name) and its associated script, which contains, for example, the Lingo code that implements the Trait (defines/exposes States, adds Listeners, modifies States, etc.). The "Trait Description" field provides a textual description of the Trait that is accessible, e.g., by other Traits, and could be used to infer information about the Trait's capabilities.
Each Trait may also have a list of "Trait Dependencies" that identify the Traits which must also be present in the same Persona Object that contains this Trait in order for this Trait to function properly. For example, the "IsACamera" Trait (present by default only in Camera Persona Objects) depends upon the "IsAnObject" Trait, which is present in practically every Persona Object, including Cameras (e.g., because Cameras have a "location" in a World).
TABLE II: TRAIT Data Structure
Trait Name: the name identifying the Trait.
Script ID: identifies the script (e.g., Lingo code) that implements the Trait.
Trait Description: a textual description of the Trait.
Trait Dependencies: the Traits that must also be present in the same Persona Object for this Trait to function properly.
Finally, the State Hierarchy data structure maintains all of the States for each Persona Object (including Tag States that enable the hierarchy). Thus, unlike Traits, if a State (e.g., "location") is present in multiple different Persona Objects, the system will maintain multiple separate data structures for that State, one for each Persona Object containing that State.
Each State has a "State Name" field, which both the system and authors use to reference the State (e.g., from within a Trait's Lingo script). In addition, each State has a "Persona Object ID" (as noted above) which uniquely identifies the Persona Object containing the particular "instance" of that State. The "Trait Name" field identifies the particular Trait that defined/exposed the State - e.g., so that the system can limit changes to a "read-only" State to that Trait. The "State Description" field provides a textual description of the State that is accessible, e.g., by other Traits, and could be used to infer information about the State.
The "DATA" field contains the actual data values for this "instance" of the State in a particular Persona Object. As noted above, this data is "variant" and can thus be of virtually any data type. The system monitors changes to this DATA, i.e., by controlling this data structure and making all changes on behalf of other Traits, etc. Thus, the system also can enforce (via validity checking) the "Range of Values" specified by the author of the Trait that defined/exposed this State.

The next few fields relate to this State's list of Listeners (defined, for example, from within Traits that want to be informed of changes to this State). When a Trait defines such a Listener, the system updates this data structure to include that Trait. As noted above, the "addListener" command can include, in addition to the name of the desired State, a "Callback Name" (maintained in the "List of Callback Names of each Trait/Listener" field) which the system uses to issue a "callback" message (when the value of the State changes) that will invoke the Listening Trait's handler of the same name. If no callback handler name is given, a default name (stateNameChanged) is assumed, and the author is expected to implement that handler. The system also needs to know the "Script ID" of the Trait that is to receive the callback, and the "List of Conditions" that must be satisfied for the Listener to receive a callback (in addition to the change in the State's value).
The final few fields support the State Hierarchy (i.e., the "parent" Tags). If the author of a Trait adds a Tag State to a particular State, the "Tag State Names" and "Persona Object IDs" fields are updated. In other words, the system adds the name of the desired Tag State to this list (because States can have multiple "parent" Tag States), as well as the Persona Object ID that is associated with that Tag State (which need not necessarily be the same Persona Object that contains the child State). For example, a Trait could add a Tag State (e.g., "published"), associated with Persona
Object X, to the "location" State of Persona Object Y. Thus, when Persona Object Y's location State changes, the system will notify Listeners of the "published" State of Persona Object X (enabling a Trait in Persona Object X to monitor the location of another object, Persona Object Y). In that case, the system would update the data structure for the "instance" of the location State in Persona Object Y to include a "Tag State Name" of "published" and a "Persona Object ID" pointing to Persona Object X.
As noted above, this data structure maintains "child-parent links" only in one direction, but could be optimized to also include "parent-child links" (i.e., bidirectional links), e.g., for better performance.

TABLE III: STATE HIERARCHY Data Structure
State Name: the name used by the system and authors to reference the State.
Persona Object ID: uniquely identifies the Persona Object containing this "instance" of the State.
Trait Name: the Trait that defined/exposed the State.
State Description: a textual description of the State.
DATA: the actual (variant) data value of this instance of the State.
Range of Values: the valid range specified by the defining Trait.
List of Listening Traits: for each Listener, the Callback Name, the Script ID of the Trait to receive the callback, and the List of Conditions that must be satisfied.
Tag State Names and Persona Object IDs: the "parent" Tag States added to this State, and the Persona Objects with which those Tag States are associated.
The above description of the various concepts underlying embodiments of the present invention (e.g., the physical hierarchy of Persona Objects, Traits, States, Listeners, Tags, etc.) illustrates how authors can develop applications incorporating these concepts. Like any application development environment, however (including Director), the system of the present invention provides certain "built-in" functionality, both to support the underlying architecture of the system and to provide authors with "building blocks" that they can use to create more complex applications without "reinventing the wheel." It should be noted that this "built-in" functionality is particularly useful in this context, due to the high degree of reusability of the Traits, Actions and States provided for use by authors.
The following discussion will describe much of this "built-in" functionality in greater detail, including the hierarchy of Persona Object Types, default "special" Persona Objects (e.g., Universe, World, Cameras and Lights), primitive 3D objects, default Traits and States, an additional library of Traits and States that authors may add to their Persona Objects, and a description of how the various components of the "core engine" work together to implement the above-described concepts. Any references below to "3D Dreams" are references to the portion of the authoring tool (or the authoring tool itself) that contains an embodiment of the present invention.

OBJECT TYPES AND SPECIFIC DEFAULT TRAITS
As mentioned above, one embodiment of the present invention broadly categorizes the various author-created entities in the authoring environment into Object Types. Briefly, Object Types are built into the authoring tool in one embodiment and used to categorize at a very high level all the potential entities or "things" that are created for the stage or Viewport of the authoring tool. Thus, an object such as a dog is associated with the same Object Type (i.e., object) as another dog object. Similarly, this dog object is also categorized under the same Object Type as a cat object. The following Object Types are supported in one embodiment of the present invention:
objects
primitives
backgrounds
universe
worlds
lights
cameras
sensors
Thus, when an author (i.e., user of the authoring tool) creates an entity of a particular Object Type, this categorization allows the authoring tool to facilitate the rapid development of the author's project - because certain features are automatically incorporated into his newly created entity by virtue of its categorization as an "object" or a "light" or a "primitive" or any of the aforementioned Object Types. As a result, the author need not concern himself with basic inherent Traits associated with that object, since the system has included these basic features by default into the author's newly created entity.
An example will illustrate the benefits of this feature. Let's say the author created a box as a new cast member in an authoring tool, such as Macromedia's Director. He selects that box from one of the many primitives available to him in the authoring tool, which are categorized as the Object Type "object." By doing so, a Trait called IsAnObject is associated with this box. This Trait monitors and exposes several States that are inherently associated with all Object Types of "object." Some of these States include Visible, Color, IsSolid, Location, Orientation, and Dimensions. Is the box visible or not? What is its color or colors? Is it a solid for the purposes of collision detection? How is it oriented now? When it is moved or rotated, how is it oriented then? What are the box's dimensions at the time of creation? Thus, by merely creating an object of a particular Object Type (e.g., "object"), these and a whole set of other States are monitored and exposed by the IsAnObject Trait, which itself was automatically associated with the box by the system.
The Object Type is hierarchical. In other words, one Object Type is not necessarily at the same level as another Object Type. Referring to FIG. 5, the "entity" Object Type 500 is at the top. Although the "entity" Object Type is an Object Type, it is not visible to the user and accordingly, none of the objects created by the user will be categorized as an "entity" Object Type. This "entity" Object Type provides for State management for all the Object Types. Immediately below the "entity" layer are the "world" 501, "universe" 502, "background" 503, "object" 504, and "sensor" 509 Object Types. Immediately below "object" 504 are the "light" 505, "camera" 506, other "objects" 507 (e.g., imported objects from another software package), and "primitive" 508 Object Types.
The hierarchical nature of these Object Types implies that an Object Type necessarily includes the Traits associated with that Object Type as well as the Object Type directly above it. For example, a light source is categorized as a "light" Object Type 505. The system automatically associates the Trait IsALight with this cast member without any intervention by the user. This IsALight Trait exposes States LightType, IsLightOn, and LightRange, among others, in one embodiment of the present invention. The "light" Object Type is also directly below the Object Type of "object" 504. Accordingly, the system also automatically associates the Trait IsAnObject with the light source and all the States (e.g., Visible, Color, IsSolid, Location, Orientation, and Dimensions) exposed by this Trait. Thus, the light source is associated by default with the IsALight and IsAnObject Traits upon creation.
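The accumulation of default Traits down the Object Type hierarchy can be sketched as follows. This is a hedged Python model based on FIG. 5 and the IsALight/IsAnObject example above; the table contents and function name are illustrative assumptions:

```python
# Illustrative model: creating an entity of a given Object Type pulls in its
# own default Traits plus those of every ancestor Object Type (per FIG. 5).
parent_type = {           # Object Type -> parent Object Type
    "object": "entity", "world": "entity", "universe": "entity",
    "light": "object", "camera": "object", "primitive": "object",
}
default_traits = {        # per-Type default Traits (partial, illustrative)
    "object": ["IsAnObject"],
    "light": ["IsALight"],
    "camera": ["IsACamera"],
}

def traits_for(object_type):
    traits = []
    while object_type is not None:
        # Ancestor Traits come first, so IsAnObject precedes IsALight.
        traits = default_traits.get(object_type, []) + traits
        object_type = parent_type.get(object_type)
    return traits

traits_for("light")    # a light source gets IsAnObject plus IsALight
```

A newly created light source thus receives both Traits automatically, which is why it exposes Location and Orientation (from IsAnObject) as well as LightType and IsLightOn (from IsALight) without any author intervention.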
In one embodiment of the present invention, a Viewport hierarchy window (named "3D Dreams") of the authoring tool is shown in FIG. 6. Below the title bar are the set of drop-down menus. Below the drop-down menus is the edit-related toolbar. On the left side of the window, the primitives tool bar is shown.
In the left pane of the window, a hierarchical directory structure of the objects in the Viewport is shown. A world is created under one universe. Within this world, several objects are created - a box, a camera, and two different lights (a directional light and an ambient light). The box is categorized as an "object" Object Type. On the other hand, the camera is a "camera" Object Type and the directional light and ambient light are "light" Object Types. The directory-like structure of this left pane view should not be construed to indicate in any manner the Trait inheritances among the Object Types. Rather, the contents of this left pane merely show the various cast members within each Object Type. Thus, the fact that the box is shown under the "world" Object Type in this directory should not be construed to suggest that the Traits automatically and generally associated with the "world" Object Type are also associated with this box of Object Type "object." What this directory does suggest, however, is that the box is located in this "world" in this Viewport, so that if another "world" is created, the objects within this other "world" can be separated from this "world."
In the right pane of the window, some features of each of the objects are listed in tabular format. Here, the box has a particular location, as indicated by the location coordinates (x, y, z). The box also has an orientation, as indicated by the yaw, roll, and pitch. Finally, the box also has dimensions, as indicated by the width, height, and depth. These features represent individual values for the States called Location, Orientation, and Dimensions.
Before a discussion of each of the Object Types is presented, a brief overview of Traits and States will now be provided. A detailed discussion of Traits and States was provided above. As described above, one embodiment of the present invention defines an object in the context of Traits and States; that is, an object is associated with one or more Traits that "listen" to changes in, or "expose," one or more State values. States can be either internal or external. Internal States are those States whose values (and changes therein) are only used and useful to a particular Trait which is associated with this internal State. This particular Trait can "listen" to this State but the Trait does not notify (via the system) other Traits of any changes in this State. External States are those States whose values (and changes therein) are used and useful to any other Trait such that they can be deemed to be commonly shared. Thus, a Trait can inform (via the system) any other Trait that may be interested of the change in value of this State. FIG. 7 shows a diagram generally illustrating this concept. Note that this diagram is an object-based diagram rather than an actually implemented data structure diagram. In FIG. 7, the universe 510 contains one or more worlds, such as world 511. World 511 contains one or more objects, such as object 512. Object 512 contains or is associated with any number of Traits (i.e., behavior, such as "can have color") and any number of internal and external States (i.e., parameters that give some bound and meaning to the behavior, such as "the color is red"). Here, object 512 contains Traitl 513 and TraitN 514. Traitl 513 is associated with internal Statel 515 and external Statel 517. Analogously, TraitN 514 is associated with internal StateN 516 and external StateN 518.
Traitl 513 "listens" to State value changes in internal Statel 515. This internal Statel 515 is unique to Traitl 513 and accordingly, the values or changes in value in internal Statel 515 are not provided to TraitN 514. Only Traitl 513 can "listen" to internal Statel 515, as indicated by line 519. Similarly, internal StateN 516 is unique to TraitN 514 and accordingly, the values or changes in value in internal StateN 516 are not provided to Traitl 513. Only TraitN 514 can "listen" to internal StateN 516, as indicated by line 520. These internal States are for the exclusive use of their respective Traits. However, external States are available to any Trait that might care to listen. Thus, external Statel 517 can be "exposed" by Traitl 513 to TraitN 514, as indicated by line 522. Similarly, external StateN 518 can be "exposed" by TraitN 514 to Traitl 513, as indicated by line 523. The exposure is not limited to other Traits within the same object. Traits in other objects may also "listen" to these exposed States, or conversely, the Traits in one object can expose their respective external States to other Traits in other objects. Thus, Traitl 513 can expose external Statel 517 to the other Traits via line 524, while TraitN 514 can expose external StateN 518 to the other Traits via line 525.
For example, an object of a dog can have a Trait called HasPhysicalCharacteristics, which as the name implies, is associated with the physical characteristics of that dog. This Trait called HasPhysicalCharacteristics exposes a State called Mass, to name but one of many possible States. Of course, this State called Mass is associated with the mass of that dog and can vary from one dog to the next depending on how massive (i.e., in grams, for example) that dog is. The world (or some object) is associated with a Trait called HasGravity, which as the name implies, provides some gravitational force on the objects in this world. Thus, when the dog jumps up, gravity pulls it back to the world. To affect the gravitational force, the HasGravity Trait in the world must know the mass of the dog as provided in the dog's Mass State. The Mass State is exposed by the HasPhysicalCharacteristics Trait in the dog object and the HasGravity Trait in the world (or some other object) is one of many possible Traits that listens to it. As described more fully below, the mechanism by which "listening" and "exposing" are accomplished is via Trait data structures, State data structures, and object data structures. When a State value has changed, all listeners (i.e., Traits) to this State (as referenced in the State data structure) are alerted as the system delivers the chain of States information to the listeners. Thereafter, the listeners can elect to obtain the actual State values or not. Furthermore, by merely defining or initializing an external State, the Trait has "exposed" that State. Thereafter, this State's hierarchical structure lists all of its listeners (i.e., Traits).
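The expose/listen mechanism just described can be sketched in a few lines. This is a minimal illustrative sketch, not the actual Trait, State, and object data structures of the system; the class names, method names, and the example Mass value are assumptions.

```python
class State:
    """An exposed (external) State: holds a value and a list of listeners."""
    def __init__(self, name, value=None):
        self.name = name
        self.value = value
        self.listeners = []              # Traits listening to this State

    def set(self, value):
        self.value = value
        # Alert every listener of the change; each listener may then
        # elect to obtain the actual State value or do nothing.
        for trait in self.listeners:
            trait.notify(self)

class Trait:
    def __init__(self, name):
        self.name = name
        self.received = []               # notifications this Trait acted on

    def listen(self, state):
        state.listeners.append(self)

    def notify(self, state):
        # Here the listener chooses to read the new value.
        self.received.append((state.name, state.value))

# The dog's HasPhysicalCharacteristics Trait exposes Mass; the world's
# HasGravity Trait listens to it.
mass = State("Mass", 12000)              # grams; arbitrary example value
has_gravity = Trait("HasGravity")
has_gravity.listen(mass)
mass.set(15000)                          # HasGravity is alerted of the change
```

An internal State, under this sketch, is simply one whose listener list never contains any Trait other than its owner.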
A discussion of each Object Type will now be provided with respect to the following Table IV, which lists the default Traits in one embodiment:
TABLE IV: DEFAULT TRAITS
Table IV lists the default Traits that are automatically implemented in an author's newly created entity having one of the high level Object Types - object, background, primitive, universe, world, sensor, camera, and light. The table lists the default Trait names at the leftmost column. Corresponding columns provide an exemplary list of States that each default Trait exposes, the Object Type that each default Trait is supported by, and any other Traits that each default Trait depends from. Although this table only lists some exemplary States that each Trait exposes, the States that each Trait listens to can vary from one implementation to the next.
The Trait IsALight is automatically associated with an entity when the user creates an object of Object Type "light." This Trait exposes such States as LightType, IsLightOn, and LightRange. As explained more fully below, LightType indicates the type of light, including point light, spot light, directional light, and parallel point light. IsLightOn is a State that holds a Boolean value to indicate whether the light is on or off. LightRange indicates the maximum distance that the light will travel for this light source. Following the Object Type hierarchy, the IsALight Trait depends on the IsAnObject Trait from the Object Type of "object."

The Trait IsACamera is automatically associated with an entity when the user creates an object of Object Type "camera." This Trait exposes such States as FieldOfView and FarClip. As explained more fully below, FieldOfView indicates the field of view of the camera, which ranges from 0 to π radians in one embodiment. For those skilled in the art, 0 radians represents 0 degrees and π radians represents 180 degrees. FarClip indicates the maximum distance from the camera within which objects will be viewable (and 3D calculations performed on them). Any objects beyond this distance from the camera will not be viewable until the camera's movements place these objects within its range. Like the IsALight Trait, the IsACamera Trait depends on the IsAnObject Trait from the Object Type of "object."

The Trait IsAnObject is automatically associated with an entity when the user creates an object of Object Type "object." This Trait is also automatically associated with those entities whose Object Type is designated as "primitive," "camera," or "light." 
The IsAnObject Trait is the most widely used Trait in the authoring tool because most cast members are, in one form or another, objects that have either been created internally with the authoring tool's edit/drawing tools or externally with some third party software package to be imported into the authoring tool. Accordingly, the States that it exposes far outnumber those of the other Traits. The IsAnObject Trait exposes such representative States as Parent, Visible, Color, IsSolid, Location, Orientation, and Dimensions. These States are self-explanatory but will be discussed in greater detail below. The Trait IsABackground is automatically associated with an entity when the user creates an object of Object Type "background" or "world." It exposes the State Appearance.
Appearance is a string value that references another object's appearance. In one embodiment of the present invention, only one background can be viewable for a given Viewport even though multiple backgrounds can be saved. In other embodiments, multiple backgrounds can be saved and viewable so that the background is a combination of different background files.

The Trait IsASensor is automatically associated with an entity when the user creates an object of Object Type "sensor." In one embodiment, sensors are implemented as point sensors so that if any designated object impinges on a sensor's trigger range, the sensor provides an indication. One such indication is the State TriggerProximity, which is exposed by the IsASensor Trait. Thus, if a designated object is within a certain distance from the sensor, TriggerProximity indicates that distance. Based on this distance, other Traits such as CanBeep or CanChangeColor, which listen to TriggerProximity, can invoke other actions such as beeping or changing the object's color to red if a certain distance threshold is crossed. Because sensors are implemented as point sensors, multiple sensors will have to be created and placed appropriately if, for example, an entire wall is designated as the sensor. Depending on how large the object to be detected is, the exact placement and relative spacing of the point sensors along the surface area of this exemplary wall will vary. Of course, in other embodiments, sensors can be made directional so that trip wires can be implemented. In further embodiments, sensors can be made into variable-sized spheres. In other applications, sensors can be used as reference points to indicate how far or close an object is to a given sensor. In these applications, the sensor need not activate any other action; the sensor merely exists to alert other objects of their proximity.
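The point-sensor behavior just described can be sketched as follows. The class layout, the 3D distance computation, and the listener function are illustrative assumptions; only the TriggerProximity behavior follows the text.

```python
import math

def distance(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

class PointSensor:
    """Illustrative IsASensor sketch exposing a TriggerProximity State."""
    def __init__(self, position, trigger_range):
        self.position = position
        self.trigger_range = trigger_range
        self.trigger_proximity = None    # exposed State value, or None

    def update(self, object_position):
        d = distance(self.position, object_position)
        # Only an object within the trigger range yields an indication.
        self.trigger_proximity = d if d <= self.trigger_range else None
        return self.trigger_proximity

def can_change_color(proximity, threshold):
    """A listener such as CanChangeColor: turn red inside the threshold."""
    if proximity is not None and proximity < threshold:
        return "red"
    return "unchanged"

sensor = PointSensor((0.0, 0.0, 0.0), trigger_range=10.0)
prox = sensor.update((3.0, 0.0, 4.0))    # distance 5.0, inside range
```

Covering a wall would then amount to instantiating several such sensors spaced along its surface, as the text notes.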
The Trait IsAUniverse is automatically associated with an entity when the user creates an object of Object Type "universe." The IsAUniverse Trait keeps track of the WorldTime State which is referenced by many different Traits. As described more fully below, the WorldTime State is an integer value that changes as time changes. The IsAUniverse Trait also exposes children States because the universe may support one or more worlds.
The Trait IsAWorld is automatically associated with an entity when the user creates an object of Object Type "world." In one embodiment, a Viewport can have multiple worlds but these worlds are mutually exclusive and cannot interact with each other. Thus, Traits in one world cannot listen to Traits in another world. The world Object Type is primarily used to provide authors with some flexibility in their project design. For example, a Viewport could be designed to present a multiple story building. To prevent objects on one floor from hearing sounds emitted by objects in another floor, one implementation may involve creating a world for each floor. The world's parent is the universe and its children are other Object Types within that world. As mentioned above, in addition to default Traits, non-default Traits that the author may elect to directly associate with his objects exist. These non-default Traits will now be discussed.
SPECIFIC NON-DEFAULT TRAITS
The Traits listed in Table V are of the non-default variety. Although these Traits are built into the authoring tool, the fact that these Traits are not automatically associated with an author's newly created object makes them non-default. The author must purposely associate any of these Traits with his newly created object in order for the object to acquire the corresponding behaviors. This is in stark contrast to the default Traits (Table IV above) where the mere creation of an object into one of the eight Object Types automatically associates certain Traits with that object.
Table V below lists the non-default Traits:
TABLE V: ADDITIONAL TRAITS (NON-DEFAULT)
A discussion of each of these non-default Traits will now be provided. Table V lists the non-default Traits that are implemented in an author's newly created entity only when he affirmatively associates them with his entity, in contrast to default Traits which are automatically associated with his entity upon creation. The table lists the non-default Trait names at the leftmost column. Corresponding columns provide an exemplary list of States that each non-default Trait exposes, the Object Type that each non-default Trait is supported by, and any other Traits that each non-default Trait depends from. Although this table only lists some exemplary States that each Trait exposes, the States that each Trait listens to can vary from one implementation to the next.

The CanBeKeyboardControlled Trait provides the user with the ability to control certain objects with the keyboard instead of the mouse. In some instances where synchronization over the network is necessary (e.g., Internet gameplay), bandwidth limitations may be a concern especially for fast game environments that require rapid changes in States. In these cases, the author can program his score so that the CanBeKeyboardControlled Trait modifies the IsAnObject Trait. For optimum network synchronization, the program may transfer or provide some gameplay control to the keyboard so that the user can continue to play his game at a fast and furious pace while waiting for the mouse-control related Traits to also be synchronized. Accordingly, the CanBeKeyboardControlled Trait listens to these network synchronization Tags (e.g., Published, Audit, Licensed). In most cases, this CanBeKeyboardControlled Trait is provided for all object control functions as a default modifier of the IsAnObject Trait.
The CanBeNetworkSynchronized Trait listens to Tags for the purpose of network synchronization. As discussed elsewhere in this patent specification, Tags are essentially parents of States so that some sort of grouping of States can be employed. Whenever a child State changes, the system alerts all listeners of the parent State (the Tag) so that the listener (usually the CanBeNetworkSynchronized Trait) can obtain the new State information, if desired. For example, the Published State is used as a Tag in one embodiment to synchronize the various States across a network. Published is a parent or Tag to the location, orientation, and size States. Whenever any one of these States changes value, the system alerts the CanBeNetworkSynchronized Trait because this Trait is a listener of the Published State. Having received the chain of States information, the CanBeNetworkSynchronized Trait can retrieve the changed State information for synchronization across the network. For a more detailed treatment, refer below to the Published Tag description. This process can be repeated for every State that is monitored by the Published Tag for network synchronization.
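The Tag grouping just described can be sketched as follows. A rough illustrative sketch only: the Tag itself holds no value, and a change to any child State alerts the Tag's listeners, which may then fetch the new value. Class names and the listener callback are assumptions.

```python
class Tag:
    """A parent State used only for grouping; it holds no value itself."""
    def __init__(self, name):
        self.name = name
        self.listeners = []

class ChildState:
    """A State grouped under a parent Tag (e.g., location under Published)."""
    def __init__(self, name, tag, value=None):
        self.name = name
        self.tag = tag
        self.value = value

    def set(self, value):
        self.value = value
        # The Tag does not change value; its listeners are alerted and may
        # then retrieve the changed child State's information.
        for listener in self.tag.listeners:
            listener(self)

published = Tag("Published")
synced = []    # stands in for the CanBeNetworkSynchronized Trait's work queue
published.listeners.append(lambda st: synced.append((st.name, st.value)))

location = ChildState("location", published)
orientation = ChildState("orientation", published)
location.set((1, 2, 3))
orientation.set((0, 90, 0))
```

Each `set` call here models one round of the synchronization process described above.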
The CanEmitSounds Trait provides the object with the ability to play a predetermined sound. The particular sound is found in the State PlayedSound, described below. Thus, a lion object has a Trait called CanEmitSounds, which is invoked along with the animation that shows the lion's mouth move. The particular lion's roar sound is found in its PlayedSound State. If the lion's CanEmitSounds Trait listens to its PlayedSound State, which happens to contain the sound file of a kitten, the lion will make a kitten's sound. Because this is a sound emitting Trait, no sensory prerequisites are required of the object, unlike the CanHear Trait.
The CanHear Trait provides an object with the ability to hear sounds. This Trait depends on two requirements - the world (or other objects in the world) should provide sounds for the object to hear, and those sounds should be of the kind that the object is capable of hearing. First, in order for the object to hear any sounds, the world in which the object resides must provide those sounds for the object to hear. Second, the object's sensory capability must be compatible with the particular sounds that the world provides. For example, if the object can only hear sounds in the frequency range of 0 Hz to 3,000 Hz, a world that only emits sounds in the 4,000 Hz to 5,000 Hz range is not providing any sounds that the object can hear. The world is providing sounds but these sounds are beyond the sensory capabilities of the object. This second requirement is described in greater detail in the HearingQuality State discussion below.
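The second, compatibility requirement can be sketched as a simple frequency-range filter. This is a toy sketch; the function name and the representation of sounds as bare frequencies are assumptions (cf. the HearingQuality State).

```python
def audible_sounds(world_sounds_hz, hearing_range):
    """Return only those world sounds inside the object's hearing range."""
    lo, hi = hearing_range
    return [f for f in world_sounds_hz if lo <= f <= hi]

# The object hears 0-3,000 Hz; the world only emits 4,000-5,000 Hz sounds,
# so nothing is audible despite the world providing sounds.
heard = audible_sounds([4000, 4500, 5000], (0, 3000))
```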
The CanReceiveTimedEvents Trait provides an object, primitive, world, or universe with the ability to receive timed events, such as WorldTime. In conjunction with the States TimeOutHappened and TimeOutPeriod, the CanReceiveTimedEvents Trait can invoke other Traits to perform some function. In one application, the TimeOutPeriod could be set for some time period that is triggered at the last mouse click or mouse movement. If the time out period expires, the system interprets this as an idle State because the user has not used the mouse for some specified time period. Accordingly, some Trait such as RunAnimation that runs some animation for the object can be activated in response to the CanReceiveTimedEvents Trait that is triggered by the time out elapsing. Thus, during idle time, an animation of a clock ticking may appear, or some objects on the Viewport could be pacing back and forth while looking at their respective watches, or the screen may be animated in some other way to indicate the idleness of the program (much like a screen saver) in accordance with some pre-scripted animation. Whatever the application, the CanReceiveTimedEvents Trait provides the object with the ability to receive timed events, such as WorldTime, so that some other action may be invoked.
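The idle time-out application just described can be sketched as follows. An illustrative sketch only; the class, the tick-based delivery of WorldTime, and the callback standing in for RunAnimation are assumptions.

```python
class CanReceiveTimedEvents:
    """Sketch of a Trait using TimeOutPeriod and TimeOutHappened States."""
    def __init__(self, timeout_period, on_timeout):
        self.timeout_period = timeout_period   # TimeOutPeriod State
        self.timeout_happened = False          # TimeOutHappened State
        self.last_input_time = 0
        self.on_timeout = on_timeout           # e.g., invokes RunAnimation

    def input_event(self, world_time):
        # Any mouse click or movement restarts the time-out period.
        self.last_input_time = world_time
        self.timeout_happened = False

    def tick(self, world_time):
        # Called as the WorldTime State advances.
        if world_time - self.last_input_time >= self.timeout_period:
            if not self.timeout_happened:
                self.timeout_happened = True
                self.on_timeout()              # run the idle animation once

ran = []
events = CanReceiveTimedEvents(100, lambda: ran.append("RunAnimation"))
events.input_event(world_time=0)
events.tick(world_time=50)     # not idle yet
events.tick(world_time=150)    # idle: animation invoked
events.tick(world_time=200)    # still idle: not re-invoked
```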
The CanRemember Trait, like many of the Traits and States described in this patent specification, provides an object with the ability to remember things. The actual memory itself can be implemented via a simple database table or a sophisticated neural network in the MemoryCell State. Regardless of the implementation, the CanRemember Trait allows an object to learn and remember data that it may have collected some time ago to govern future behavior. The CanSee Trait provides an object with the ability to see other objects in the world. Like the CanHear Trait, this Trait depends on two requirements - the world (or other objects in the world) should provide some light source for the object to see and the world (or other objects in the world) should provide the particular visual stimulation that the object is capable of seeing. For the first requirement, the lack of any light may prevent the object from seeing anything even though the object may have the ability to see. For the second requirement, the world (or other objects in that world) must be capable of providing the particular visual stimulation that the object is capable of seeing. For example, if an object cannot see the color red, a world filled with red-colored objects and backgrounds will be invisible to the object despite the fact that it is illuminated with light. This Trait listens to (and exposes) the SeenObjects and SeeingQuality States.
The CanSpeak Trait provides an object or primitive with the ability to speak. This Trait relies on the SpeakText, FinishedTalking, and TalkSentence States. Accordingly, an appropriate text-to-speech engine is necessary to fully enable this Trait to function.

The CanWalk Trait provides an object with the ability to walk. This Trait is augmented by the physical characteristics of the object. If the object has no legs, it cannot walk. Furthermore, the walking can be implemented in different ways. If inverse kinematics is involved, the CanWalk Trait will invoke other functions that will provide the procedural steps necessary for the object to walk; that is, movements of the knee with respect to the torso and feet are activated to enable the object to progress forward and thus "walk." If an animation is involved, the mechanics of walking becomes less procedural and more pre-determined animation without regard to individually controlling the knees, feet, torso, upper leg, and lower leg.
The HasGoals Trait provides a mechanism by which a series of individual prioritized goals can be set up for an object. Each goal is associated with some priority and some behavior action. The priority would be used by the system to resolve potential conflicts. The behavior action is used to ensure that the object with that particular goal is motivated to take steps to achieve that goal. In order to accomplish a goal, these steps (or conditions) must be satisfied first. By integrating these goals into the object, the author has in effect built in some personality for that object that explains its behavior. These goals are parameterized instances of an instruction set that is part of the HasGoals Trait. These goals may also have one or more listeners which parameterize a CanBeAchieved Trait according to the States to which they listen.
The following is an example of conflict resolution using the HasGoals Trait. Assume that an author set up two goals for his deer object — "must survive" and "must eat periodically." The author programmed these goals so that the "must survive" goal has a higher priority than the "must eat periodically" goal. Note that in most cases, the eating activity will allow the deer to survive and accordingly, any potential conflict between the two goals is not readily apparent. Let's say that the deer is preparing to eat some vegetation out in the wilderness because it is now time to eat. A predator object such as a lion is lurking on the horizon and sees the deer eating. He then chases after it. Obviously, if the deer continues to eat to satisfy its "must eat periodically" goal, it will be devoured by the lion and it will not satisfy its "must survive" goal. If the deer has detected the presence of the lion, not to mention the fact that the lion is now running toward the deer, the deer has to resolve the potential conflict of the "must survive" and "must eat periodically" goals. The deer has not eaten yet even though it is now the period to eat. However, in light of the lion's chase and the higher priority set for "must survive," the deer will attempt to satisfy the "must survive" goal without now fulfilling the "must eat periodically" goal. The deer now runs for its life.
The following is an example of conditions precedent that must be satisfied before a goal can be accomplished. Assume that a human object has a goal called "must fly." Before he accomplishes this goal, he must know how to fly and have access to a plane, helicopter, or other aircraft. This access may also imply that an airport or heliport must be found. Thus, the simple existence of the "must fly" goal in the human object makes that human behave in ways that allow it to fly an aircraft, such as locating and finding an airport or heliport, taking flying lessons, finding an aircraft, and finally flying that aircraft. These sets of behavior would certainly be absent in another human object whose sole goal is "must sleep."

Many times, both the conflict resolution aspect and the achievable-condition aspect of the HasGoals Trait come into play. Assume that the score in Director requires that the human object on the Viewport drive to San Diego and fly to San Diego during the same frame period. This human object obviously cannot accomplish both goals during this overlapping frame period. If, however, the human object does not know how to fly an aircraft or did not find an airport or did not find an aircraft, these unattained conditions prevent the system from labeling the "fly to San Diego" goal as achievable. As a result, the human object will resolve this conflict easily by driving to San Diego instead.
Thus, when an object has a goal, it determines whether this goal is achievable or not. The HasGoals Trait that has this particular goal (e.g., "must fly") listens to various States (e.g., "IsAircraftPilot", "FoundAirport", "FoundAircraft") to determine if this goal is achievable or not. If these various States indicate that the achievement thresholds for these States are satisfied, then this goal is deemed achievable. If this goal is deemed achievable, it attempts to accomplish this goal now (e.g., the human object will now fly the aircraft).
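The combined achievability check and priority-based conflict resolution can be sketched as follows. This is a hypothetical sketch; the goal tuples, the State dictionary, and the selection function are assumptions for illustration, with the State names taken from the example above.

```python
def select_goal(goals, states):
    """Pick the highest-priority achievable goal, or None.

    goals:  list of (name, priority, [required State names]) tuples
    states: mapping of State name to a truthy/falsy achievement value
    """
    # A goal is achievable only if every precondition State is satisfied.
    achievable = [g for g in goals if all(states.get(s) for s in g[2])]
    if not achievable:
        return None
    # Priority resolves conflicts between simultaneously active goals.
    return max(achievable, key=lambda g: g[1])[0]

goals = [
    ("fly to San Diego", 2, ["IsAircraftPilot", "FoundAirport", "FoundAircraft"]),
    ("drive to San Diego", 1, []),
]
# The human object never learned to fly, so "fly" is not achievable and
# the conflict resolves to driving.
states = {"IsAircraftPilot": False, "FoundAirport": True, "FoundAircraft": False}
chosen = select_goal(goals, states)
```

Were all three precondition States satisfied, the higher-priority "fly to San Diego" goal would be selected instead.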
The HasGravity Trait provides some simulated gravitational force in the world or in an object. If an object is selected as having gravity, the actual gravitational force depends on the mass of the object. The more massive the object, the greater the gravitational force, in accordance with the laws of physics. Thus, all other objects are pulled toward the object with the HasGravity Trait. When a Trait such as CanJump is invoked, an object would jump up but its return to the ground is not governed by the CanJump Trait; rather, its return to the ground after jumping up is governed by the HasGravity Trait.
The CanTranslate Trait allows the system to provide an author with a consistent or coherent definition of a State, given that different authors may initialize or define the same State in an inconsistent manner. When a program is authored, many different Traits can define or initialize the same State. Normally, this is not a problem because the same author will take certain precautions to define the same State consistently. However, in cases of carelessness or the intervention of a different author working on the same program, the same States can be defined in an inconsistent manner. The CanTranslate Trait provides an author with the ability to listen to a particular State (any State) and obtain a coherent value or range for future use. The translator typically includes a transformation function that enables the integration or combination of different systems. In each system, the original authors may have had a particular perspective about State definitions in that system which may not be appropriate when that system is integrated with or combined with another system. The inconsistent States are put in another namespace. The translator then performs the transformation of the relevant States for the first system that is incorporating or integrating another system. The translator can even listen to a Tag that is attached to multiple States that share the same convention, decreasing the work needed to attach the translator to each State implemented in the systems that use inconsistent State definitions.
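A minimal sketch of such a translator follows. The unit mismatch (an imported system defining Mass in pounds while the host system expects grams), the class layout, and the conversion factor's use here are illustrative assumptions; only the listen-and-transform behavior follows the text.

```python
class Translator:
    """Sketch of CanTranslate: listens to an inconsistent State and
    exposes a coherent value via a transformation function."""
    def __init__(self, transform):
        self.transform = transform       # the transformation function
        self.value = None                # the coherent, translated value

    def on_state_change(self, raw_value):
        # Invoked by the system whenever the listened-to State changes.
        self.value = self.transform(raw_value)

# The inconsistent State lives in its own namespace (e.g., the imported
# system's "Mass" in pounds); the host system works in grams.
def pounds_to_grams(lb):
    return lb * 453.59237

translator = Translator(pounds_to_grams)
translator.on_state_change(10)           # imported system sets Mass = 10 lb
```

Attaching one such translator to a Tag shared by several same-convention States would, as the text notes, avoid wiring it to each State individually.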
The HasPhysicalCharacteristics Trait provides an object with physical characteristics, such as mass, density, elasticity, and volume. This Trait depends on the IsAnObject Trait and accordingly, collision detection (if solid) and gravity will affect it.
The HasWind Trait provides for wind to exist in and interact with the world. It exposes the State WindSpeed.
The IsADraggableObject Trait provides the user with the ability to drag a selected object. This Trait exposes the States ShouldDrag and MouseVector. Typically, this Trait depends on the IsAPickableObject Trait and listens for the MouseDown State being TRUE for the selected object. Thereafter, the dragging occurs by following the MouseVector.
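The drag behavior can be sketched as follows. An illustrative sketch only; the vector arithmetic and the method names are assumptions, with the ShouldDrag and MouseVector State names taken from the text.

```python
class DraggableObject:
    """Sketch of IsADraggableObject: follow the MouseVector while dragging."""
    def __init__(self, location):
        self.location = location
        self.should_drag = False           # ShouldDrag State

    def mouse_down(self, picked):
        # Depends on IsAPickableObject: drag only begins for the
        # selected (picked) object with MouseDown TRUE.
        self.should_drag = picked

    def mouse_move(self, mouse_vector):
        if self.should_drag:
            # Displace the object's Location State by the MouseVector.
            self.location = tuple(c + d for c, d in zip(self.location, mouse_vector))

box = DraggableObject((0.0, 0.0, 0.0))
box.mouse_down(picked=True)
box.mouse_move((2.0, 1.0, 0.0))
```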
Note that these are a sampling of the non-default Traits that are delivered to the developer as an extension to an authoring tool in accordance with one embodiment of the present invention. The list of Traits described above is not meant to be exhaustive. Other Traits can be written for and be used with this authoring tool so long as they are compliant with the 3D Dreams format. More than likely, the developer will find most of the Traits herein useful for his particular project (e.g., his Movie) but he may have to write new Traits optimized for his particular application. Having discussed some specific Traits, some specific States will now be discussed.
SPECIFIC STATES
States can be classified as user-changeable States or system-changeable States, regardless of whether they are internal or external. Thus, both user-changeable States and system- changeable States contain internal and external States. User-changeable States are those States that can be changed by the user (via a Trait or some script) at the user's initiation. System- changeable States are those States that can be changed by the system only. In both cases, various Traits can listen for State changes and receive their actual State values via the system.
Table VI below lists the user-changeable States:
TABLE VI: DEFAULT USER-CHANGEABLE STATES
A discussion of each of these user-changeable States will now be provided. Table VI lists the user-changeable States that are exposed by any Trait or action script implemented by the author in his Viewport and listened to by any Trait. The table lists the State names at the leftmost column. Corresponding columns provide a list of Object Types that each State is supported by, the reset information (i.e., reset to default values manually or automatically), the data type (e.g., string, boolean, integer, float), and the property list which includes the range of possible values. Note that this table lists some of the many States that could be implemented in the authoring tool in accordance with one embodiment of the present invention.

The ActiveBackground State indicates the particular background file (via a Persona Object) that is active in the Viewport. This State is supported by the world Object Type and hence, the IsAWorld Trait, with manual reset, and a property list which references a Persona Object. As briefly discussed above, only one background can be viewable for a given Viewport even though multiple backgrounds can be saved by the authoring tool, in accordance with one embodiment of the present invention. In other embodiments, multiple backgrounds can be saved and viewable so that the background is a combination of different background files. Note that backgrounds are located infinitely far away so that no object on the Viewport can collide with them, nor are any 3D calculations necessary. The background is also viewable within the field of view of the camera even though it is located infinitely far away and accordingly, it is an exception to the FarClip State, which only acts on objects and primitives and not backgrounds.
The Appearance State indicates the general appearance of an object, primitive, or background. This State is typically associated with the IsABackground Trait. Specifically, Appearance represents the geometry that describes the object. Its reset type is manual and its data type is a string value. Thus, when the author already has an object that looks a certain way, that look can be given a string name, and the look can then be replicated in the background by referencing the string. When the appearance of an object is changed, it should look different. For example, the pseudo-code "member("dog").appearance = member("elephant").appearance" will make one object ("dog") look like another object ("elephant").
The Audit State is used as a Tag in one embodiment of the present invention. Accordingly, it possesses no internal value so to speak. This State is typically associated with the CanBeNetworkSynchronized Trait. As described more fully above, a Tag represents a parent State to one or more child States. The Tag could also be a child State of another Tag parent State. As one of its children States changes value, the Tag itself does not change value. Rather, the system detects the change in State value and sends the chain of States information (which is analogous to a pointer) to all of the listeners listed in that changed State's hierarchical data structure. The listeners (i.e., Traits) can then decide to retrieve the actual value of that changed State via the system or do nothing. The Audit State itself can be used to monitor changes in the Published Tag. In one example, the Audit Tag can be used in a network gaming environment along with the Published Tag. Player1 and Player2 are located across the network from each other and are playing the same game against each other. Although the Published Tag is used to keep track of all changes (e.g., location, orientation, size of the players) in the gaming environment to facilitate network synchronization, the Audit Tag can be used to receive State change notification only when a player moves from one level to the next. Thus, the Audit Tag does not require high bandwidth resources to receive notifications of every single State change; rather, only the particular State change information is delivered to the Audit Tag.
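The Tag notification scheme described above can be sketched as follows. This is a minimal illustration assuming a simple listener-callback design; the class and method names are not the patent's actual implementation.

```python
# Minimal sketch of the Tag notification scheme: when a child State
# changes, the system walks the chain of parent Tags and notifies each
# listener with the chain identifying the changed State (analogous to
# a pointer), not the value itself. Listeners that care retrieve the
# actual value separately.

class State:
    def __init__(self, name, value=None, parent=None):
        self.name = name
        self._value = value
        self.parent = parent          # parent Tag, if any
        self.listeners = []           # Traits listening to this State

    def set(self, value):
        self._value = value
        chain = []
        node = self
        while node is not None:
            chain.append(node.name)
            for listener in node.listeners:
                listener(chain[:])    # deliver the chain, not the value
            node = node.parent

    def get(self):
        # A listener that cares retrieves the actual value via the system.
        return self._value

published = State("Published")                    # a Tag: no value of its own
level = State("Level", value=1, parent=published)

events = []
published.listeners.append(lambda chain: events.append(chain))

level.set(2)
print(events)  # [['Level', 'Published']]: the Tag's listener was notified
```

Note that the Tag itself never changes value; only the notification travels up the chain, which is what keeps Tags such as Audit cheap in bandwidth.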
The Children State lists the name or names of one or more children Persona Objects of a primitive, object, universe, or world. As discussed above, a Persona Object is an object that is capable of having a Trait; that is, the object is Persona-ized. For example, a Persona Object of a dog in a particular world is considered a child of that world, where the world is considered the parent. Similarly, a Persona Object of a bed is considered a parent of a Persona Object of a pillow that is located on top of that bed. The Children State for the bed lists the pillow as one of its Children. Thus, a Trait associated with the bed may listen to the Children State for information on its physical hierarchy.

The Color State indicates the color of an object, light, background, or primitive via the standard color components of red, green, blue, and alpha (i.e., translucency). This State is typically associated with the IsAnObject Trait. For each of these color components, the data type is float. For objects of Object Type "light," the Color State indicates the color of the light.

The Density State indicates the density (e.g., mass per unit volume) of an object or primitive. This State is typically associated with the IsAnObject Trait. Its data type is float with a minimum of 0.
The Dimensions State indicates the dimensions of an object or primitive. This State is typically associated with the IsAnObject Trait. Its data type is float. The property list for this State includes width, height, and depth.
The Elasticity State indicates the solidness or softness of an object or primitive. This State is typically associated with the IsAnObject Trait. Its data type is float. Thus, a solid rock may have a very hard surface that is represented by some elasticity index. A piece of rubber will have a different elasticity index because it can return to its original shape after being forcefully deformed (i.e., stretched, pressed). The Elasticity State has a minimum value of 0.
The FarClip State indicates the maximum distance that is viewable by the user with the "camera" Object Type. In particular, FarClip represents the maximum distance from the camera within which the system will render (i.e., calculate and image) objects. Thus, if FarClip is 100 yards, the system will render only those objects that are within a range of 100 yards from the camera. All other objects beyond this range will not be rendered. Of course, backgrounds will be shown, but this is because the background is not considered an object which needs 3D resources; backgrounds are persistently constant in appearance. With such a State, the system can reserve resources for calculating 3D data only for those objects within the viewing distance of the user. This State is typically associated with the IsACamera Trait. The data type is float and the range has a minimum of 0.
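The FarClip culling rule described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the function name and data layout are assumptions.

```python
import math

# Sketch of FarClip culling: render only objects whose distance from
# the camera is within FarClip. Backgrounds are exempt (they are not
# objects), so only objects appear in the input here.

def objects_to_render(camera_pos, far_clip, objects):
    """Return the names of objects within far_clip units of the camera."""
    visible = []
    for name, pos in objects.items():
        if math.dist(camera_pos, pos) <= far_clip:
            visible.append(name)
    return visible

objs = {"tree": (10, 0, 0), "hill": (150, 0, 0)}
print(objects_to_render((0, 0, 0), 100.0, objs))  # ['tree']: the hill is beyond FarClip
```

The 3D pipeline then spends resources only on the returned objects, which is the resource-reservation benefit the text describes.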
The FieldOfView State indicates the field of view of the camera; that is, how narrow or wide the angle of view is for the user. This State is typically associated with the IsACamera Trait. The State value is float with a range of 0 radians to π radians. In other embodiments, more than π radians is provided to the user as an option.

The FinishedTalking State indicates whether the primitive or object is finished talking (or emitting some designated sound). This State is typically associated with the CanSpeak Trait. It is a Boolean value to represent finished talking or not finished talking. Thus, during the talking process, this State would indicate that the object or primitive has not finished talking. The reset type is automatic since the system knows whether an object has finished talking or not and changes the State value accordingly.
The Gravity State indicates the gravitational force of the object associated with the HasGravity Trait. The magnitude of the gravitational force may depend on the mass of the object as well as this object's interaction with any other nearby objects that also possess gravity. Thus, if a large object with gravity is located near a smaller and less massive object with gravity, another object may be gravitationally "pulled" more toward the large object than the smaller object. Because of the interaction of these two gravity-laden objects, the gravitational force of the smaller and less massive object is less than it would otherwise be had the larger object not been located nearby. This State is typically associated with the IsAnObject and HasGravity Traits. The Gravity State has a data type of float and has a minimum value of 0.

The HearingQuality State indicates the quality of the hearing level of the object, primitive, world, or sensor. This State is typically associated with the CanHear Trait. With this State, the author can give some objects better hearing quality than other objects. This State uses the float data type. Its property list includes frequency range (i.e., minimum Hertz to maximum Hertz) and hearing distance. Thus, those objects with better hearing capabilities can hear sounds at a wider frequency range than those objects with lesser hearing capabilities, all things being equal (i.e., same volume, same hearing distance). Analogously, those objects with better hearing capabilities would tend to hear sounds coming from a sound source located farther away than those objects with lesser hearing capabilities, all things being equal (i.e., same sound volume, same frequency range). Of course, the author may program one object to have a wider hearing frequency range than another object, even though this other object can hear farther.
Although one cannot absolutely state that the one object has better hearing quality than the other object, one can say that the hearing qualities of the two objects are different. Thus, the author may create a dog object and a human object, where the dog object has a wider hearing frequency range and a greater hearing distance than those of the human object. Their respective hearing qualities will enable the dog object and the human object to respond differently to different sounds; accordingly, the author will be able to accurately simulate a real-world environment in his Viewport.
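The dog-versus-human example above can be sketched as follows. The field names, threshold values, and the `can_hear` predicate are illustrative assumptions, not the patent's implementation.

```python
from dataclasses import dataclass

# Sketch of how a HearSound-style result could follow from the
# HearingQuality property list (frequency range plus hearing distance):
# a sound is heard only if it falls inside the frequency range AND the
# source is within hearing distance.

@dataclass
class HearingQuality:
    min_hz: float
    max_hz: float
    hearing_distance: float

def can_hear(quality, sound_hz, distance_to_source):
    in_range = quality.min_hz <= sound_hz <= quality.max_hz
    close_enough = distance_to_source <= quality.hearing_distance
    return in_range and close_enough

# Illustrative values: a dog object with wider range and greater
# hearing distance than a human object.
dog = HearingQuality(min_hz=40, max_hz=60000, hearing_distance=400)
human = HearingQuality(min_hz=20, max_hz=20000, hearing_distance=100)

whistle = 30000  # an ultrasonic whistle, in Hz
print(can_hear(dog, whistle, 200))   # True
print(can_hear(human, whistle, 50))  # False: outside the human frequency range
```

The two objects respond differently to the same sound, which is the simulation effect the text describes.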
The HearSound State indicates whether or not a sound that had been emitted was heard by a given object. This State is typically associated with the CanHear Trait. Its data type is Boolean and the reset type is automatic. If a given object has heard or is hearing the sound, the system will change the State value accordingly. The HearSound State is dependent on the HearingQuality State because an object's hearing quality (frequency range, hearing distance) determines whether the object has heard a sound or not.
The HorizontalTile State indicates the general pattern-type appearance of an object or primitive. An author may want to use a particular pre-defined tile to "cover" or "decorate" his object or primitive. By indicating the number of this tile to be used, the author will alter the appearance of the object or primitive. When using only one tile, the entire object is covered by one tile which is sized to fit the object. If two tiles are used, the same object is covered by two tiles, where half of the object will be covered by one tile and the other half of the object will be covered by the other tile. Also, by using two tiles, the size of each tile is smaller (by one-half) than in the former one-tile design. Stated generally, the number of tiles specified dictates the size of each tile, given that the object's size remains constant. To carry this example further, as more and more tiles are used, each tile becomes smaller and smaller in order to cover the object with the specified number of tiles. Note that the tiling effect is horizontal; that is, if N tiles are specified, these N tiles will be laid out horizontally across the surface of the object. See the VerticalTile State for the vertical tile layout. The State's data type is integer and its property list requires a minimum of 1 tile.
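The tile-sizing arithmetic above can be made concrete as follows; a minimal sketch, assuming a fixed object width (the function name is illustrative).

```python
# Sketch of the HorizontalTile rule: with the object's width held
# constant, specifying N tiles makes each tile 1/N of that width.

def tile_width(object_width, n_tiles):
    if n_tiles < 1:
        raise ValueError("HorizontalTile requires a minimum of 1 tile")
    return object_width / n_tiles

print(tile_width(8.0, 1))  # 8.0: one tile sized to fit the object
print(tile_width(8.0, 2))  # 4.0: each tile is half the former size
print(tile_width(8.0, 4))  # 2.0: more tiles, smaller tiles
```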
The IsLightOn State indicates whether or not the light is on. This State is typically associated with the IsALight Trait. Its data type is Boolean. The reset type is manual, although in other embodiments, the reset type is automatic.
The IsPerspectiveCorrected State indicates whether or not perspective correction is enabled for the objects and primitives. Its data type is Boolean. Its primary use is as a quality flag. Because 3D graphics is supported in the authoring tool, perspective projections are also implemented. Although geometric shapes may be altered to fit the perspective projection model, textures on the geometric shapes are not so easily alterable. Thus, when this State indicates that perspective correction is enabled, it is indicating that the textures are not corrupted by the perspective projections and have been appropriately corrected.
The IsPickable State indicates whether or not an object or primitive is pickable with the mouse or keyboard. Its data type is Boolean with manual reset. As anyone familiar with computer systems knows, if an object is pickable and is picked (typically with a mouse), that object is normally highlighted for further action by the user (e.g., edit, drag). These other actions or behaviors are controlled by other Traits and action scripts that dictate how that object should behave once it is picked, given that the IsPickable State is TRUE (or some other equivalent binary logic value).

The IsSolid State indicates whether an object, primitive, or camera is solid or not for the purpose of the collision detection scheme implemented in the authoring tool. Its data type is Boolean. This State is typically associated with the IsAnObject Trait. If the object is solid, the collision detection scheme will detect whether this solid object has collided with another object (usually another solid object). In one embodiment of the present invention, the collision detection scheme is C++ code that exposes itself using several States (e.g., HasCollided, CollidedWith). Accordingly, it is considered a rudimentary part of the IsAnObject and IsAWorld Traits. It is implemented in the object engine layer (to be discussed further below with respect to the engine), and the associated States are exposed by the IsAnObject Trait.
As discussed above, the collision detection scheme will dictate whether a collision has occurred in light of the IsSolid State. If an object that is solid makes "contact" with an object that is not solid, the collision detection scheme will dictate whether a collision has occurred at all (requiring a response from both objects), or whether only one of the objects collided while the other object did not (requiring a response from only one of the objects but not the other). The camera also uses this State in some cases and is therefore subject to collision detection. For example, a first person shooter game places the game player in the gaming environment as the camera; that is, he sees what the camera sees. When he moves about this environment, the camera moves with him, showing him what his eyes see. If he approaches a wall too closely, he may make contact with that wall and therefore his camera makes contact with that wall as well. In this environment, the game player is not represented by a primitive or some object; rather, he is represented by the camera. For a discussion of the distinctions among the IsSolid, IsTransparent, and IsVisible States, see the IsVisible State description below.
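One way the collision scheme could consult IsSolid is sketched below. This is an illustrative sketch using sphere overlap; the dictionary keys and the sphere model are assumptions, not the patent's C++ implementation.

```python
import math

# Sketch of collision detection in light of IsSolid: contact is
# detected geometrically (spheres here, for simplicity), but each
# object registers a collision only if it is itself solid. A solid
# player contacting a non-solid ghost thus yields a response from
# only one of the two objects.

def check_collision(a, b):
    """a, b: dicts with 'pos', 'radius', 'is_solid'.
    Returns (a_collided, b_collided)."""
    overlapping = math.dist(a["pos"], b["pos"]) < a["radius"] + b["radius"]
    if not overlapping:
        return (False, False)
    return (a["is_solid"], b["is_solid"])

player = {"pos": (0, 0, 0), "radius": 1.0, "is_solid": True}
ghost = {"pos": (1, 0, 0), "radius": 1.0, "is_solid": False}
print(check_collision(player, ghost))  # (True, False): only the solid object responds
```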
The IsTransparent State indicates whether an object or primitive is transparent or not. This State is typically associated with the IsAnObject Trait. Its data type is Boolean. If the camera can "see through" an object, much like a transparent glass window can be seen through, then it is transparent. Similarly, an opaque glass may not be transparent because the camera cannot "see through" it. For a discussion on the distinctions among the IsSolid, IsTransparent, and IsVisible States, see below in the IsVisible State description.
The IsVisible State indicates whether or not an object or primitive is visible within the field of view of the camera. This State is typically associated with the IsAnObject Trait. Its data type is Boolean. If an object is in the field of view of the camera and not obstructed by any other object, it is considered to be visible. If an object is partially obstructed by another object and is in the field of view of the camera, it is considered to be visible. If an object is completely obstructed by another object but is otherwise in the field of view of the camera, this obstructed object is not considered to be visible. If an object is not in the field of view of the camera, regardless of whether or not it is obstructed by another object, it is not considered to be visible.

The IsTransparent, IsVisible, and IsSolid States should be distinguished from each other. If the camera can "see through" an object, much like a transparent glass window can be seen through, then it is transparent. This object, whether transparent or not, can be solid for the purposes of collision detection. Thus, a transparent glass window can be both transparent and solid because any other solid object can collide with the glass window. An object of a ghost or apparition, however, can be transparent but not solid because the author may want other objects to go through this ghost when they are co-located. Similarly, a ghost that shows its form may not be transparent because the camera cannot "see through" it. The IsVisible State is only relevant to the field of view of the camera. An object may be on the Viewport but it may not be visible because of some obstruction that hides its presence. A ghost that is transparent can also be visible if it is on the Viewport, in the field of view of the camera, and not obstructed by any other object.
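The four visibility cases above reduce to a simple rule, sketched here. Obstruction is taken as a given input; a real engine would compute it from the scene geometry. The function and parameter names are illustrative.

```python
# Sketch of the IsVisible rule: an object is visible iff it is in the
# camera's field of view and not completely obstructed. Partial
# obstruction still counts as visible.

def is_visible(in_field_of_view, obstruction):
    """obstruction: 'none', 'partial', or 'complete'."""
    if not in_field_of_view:
        return False
    return obstruction in ("none", "partial")

print(is_visible(True, "none"))      # True
print(is_visible(True, "partial"))   # True: partially obstructed is still visible
print(is_visible(True, "complete"))  # False: completely obstructed
print(is_visible(False, "none"))     # False: outside the field of view
```

Note that IsTransparent and IsSolid play no role here; as the text explains, transparency and solidness are independent of visibility.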
The Licensed State is used as a Tag in one embodiment of the present invention. Accordingly, it possesses no internal value so to speak. This State is typically associated with the CanBeNetworkSynchronized Trait. Refer above to the description on the Audit State as well as the general Tag discussion. The Licensed State itself can be used to enable users to have access to certain behaviors or features of an object.
For example, let's say a race car driving simulation game is provided on an Internet gaming site. This racing site has 10 different tracks and 10 different cars. Of course, each track features different terrain, scenery, and driving difficulty levels. The same applies to cars, where each car "handles" differently (i.e., speed, acceleration, road handling). Initially, the game player can access only the first 3 tracks and the first 3 cars. By racing each track with whichever car the game player selects and finishing in the top 3 in each track, the Internet game site unlocks some of the remaining tracks and cars. These additional tracks and cars are analogous to bonus tracks and cars. The Licensed Tag is a Tag or parent State for any number of States, such as HasPaidForBonusTrack and FinalRaceResults. The CanRaceBonusTrack Trait listens to changes in these two States via the Licensed Tag to keep track of certain performance criteria (e.g., the final race results) and of whether some payment has been provided. If payment has been made for the bonus cars and tracks (via a CanPay Trait which exposes a State called HasPaidForBonusTrack) and the performance criteria have been met (e.g., finishing in the top 3 in each track), the system notifies all listeners of the Licensed Tag of the achieved performance goals. A Trait such as CanRaceBonusTrack listens to the Licensed State to invoke some other action that allows the game player to race on that bonus track across the network, if desired. The Licensed Tag can also be used to license reusable objects, such as movie or game characters, across the network. If a user pays for the right to use that object such that a CanPay Trait changes a State called HasPaid, the Licensed Tag can be used and monitored by a Trait called CanLicense across a network to deliver the licensed object to the user.
For example, a basketball game site allows any game player to play NBA basketball on its site with any team or combination of known NBA basketball player objects or newly created basketball objects. A particular game player wants to use Michael Jordan in his team to play a game against another team. By paying for the use of the Michael Jordan object, the CanPay Trait changes the State HasPaid to some value indicating that payment has been made for this object. The game player now has control and use of his Michael Jordan object. This object comes with Michael Jordan's image, uniform, certain basketball moves that are unique to Michael Jordan, expressions such as the wagging tongue, and other physical skill attributes such as strength, dunking ability, shooting ability, defensive steal ability, and rebounding ability. When playing the game on this Internet gaming site, the game player will have the impression that he is playing with Michael Jordan on his team. Other applications exist that involve licensing an object, such as a character, for use in games, advertising, and other entertainment fields.
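The unlock condition in the racing example above can be sketched as a single predicate. The State names (FinalRaceResults, HasPaidForBonusTrack) follow the text; the function itself and the "top 3" encoding are illustrative assumptions.

```python
# Sketch of the CanRaceBonusTrack condition from the racing example:
# the bonus track unlocks only when the performance goal has been met
# (top 3 on every track) AND payment has been made.

def can_race_bonus_track(final_race_results, has_paid_for_bonus_track):
    """final_race_results: list of finishing positions, one per track raced."""
    performance_goal_met = all(place <= 3 for place in final_race_results)
    return performance_goal_met and has_paid_for_bonus_track

print(can_race_bonus_track([1, 3, 2], True))   # True: top 3 everywhere, paid
print(can_race_bonus_track([1, 4, 2], True))   # False: missed the goal on one track
print(can_race_bonus_track([1, 1, 1], False))  # False: HasPaidForBonusTrack not set
```

In the patent's scheme this check would live in a Trait listening to the Licensed Tag rather than a free function, but the logic is the same.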
The LightInnerAngles State indicates the angle of the inner light for spot lights. This State is typically associated with the IsALight Trait. When a spot light is illuminated against a wall, a couple of concentric circles appear - an outer circle and an inner circle. Most of the light's intensity is centered in the inner circle. The area between the perimeter of the inner circle and the perimeter of the outer circle also has some light, but at a much lower intensity. The LightInnerAngles State represents the inner angle of the light projected from the spot light source. The data type is float with a range from 0 radians to 2π radians. Thus, if the angle is set at 0 radians, the light comes out of the spot light source relatively parallel and the size of the inner circle is relatively small. If the angle is set at π/4 radians, the inner circle is larger and the inner light comes out of the spot light source at an angle of 45 degrees, or π/4 radians.

The LightOuterAngles State indicates the angle of the outer light for spot lights. This State is typically associated with the IsALight Trait. As described above for the LightInnerAngles State, when a spot light is illuminated against a wall, a couple of concentric circles appear - an outer circle and an inner circle. This State is applicable to the outer circle, where the lower intensity light appears. The LightOuterAngles State represents the outer angle of the light projected from the spot light source. The data type is float with a range from 0 radians to 2π radians. Thus, if the angle is set at 0 radians, the light comes out of the spot light source relatively parallel and the size of the outer circle is relatively small and no different from the inner circle. If the angle is set at π/4 radians, the outer circle is larger and the outer light comes out of the spot light source at an angle of 45 degrees, or π/4 radians. The inner circle, as defined by the LightInnerAngles State, should be smaller than the outer circle.
The LightRange State indicates the maximum distance that the light travels; that is, how far out does this light shine. This State is typically associated with the IsALight Trait. The data type is float with a minimum value of 0. Thus, if this State indicates 100 yards, the light will only shine 100 yards from the light source. Any object beyond this distance is not illuminated at all by this light source.
The LightType State indicates the type of the light source. This State is typically associated with the IsALight Trait. The data type is an integer enumerator, where an integer value represents the particular light source type. The various light types include unknown, point light, spot light, directional light, parallel point light, and ambient light. The "unknown" type is merely a default setting until the proper light type is selected. Thus, for all implementations, a light type of "unknown" has no meaning; the author should eventually set it to one of the known light types. A discussion of these different types of light will now be provided. A point light provides a light source that illuminates anything that is within a specified spherical radius from that point light. In contrast, an ambient light illuminates everything in the world without any range limitations. The point light could be used to illuminate a room in a world by limiting its range to a sphere about the room, allowing for some light to spill over out of a door or window of that room.
A spot light is like a flashlight which emits a directional light of a specified diameter (the cross-section of the light at the source) and a specified angle of emission. The diameter allows the author to specify the magnitude of his spot light (e.g., pocket flashlight v. car headlights v. spotlight at a movie premiere). The angle of emission is measured from the center of the light's cross-section for all points around the perimeter of the light's cross-section. The angle of emission allows the author to specify how large the illuminated area on a surface will be, given a constant distance from that surface. So, the illuminated area from a spot light having an emission angle of 0 radians is smaller than the illuminated area from a spot light having an emission angle of π/4 radians, given that the distances to the surface are the same. As implied above, a directional light emits parallel light (angle of emission is 0 radians) having a specified diameter for the light's cross-section. A parallel point light emits a directional parallel beam of light having minimal cross-sectional diameter.
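The relationship between the angle of emission and the illuminated area can be worked through numerically. The formula below is an illustrative geometric model (spot radius grows as distance times the tangent of the emission angle, on top of the source's own radius), not a formula given in the text.

```python
import math

# Geometry sketch for the spot light: for a fixed distance to the
# surface, a larger angle of emission yields a larger illuminated
# spot. At angle 0 the light is parallel and the spot stays the size
# of the source's cross-section.

def illuminated_radius(source_diameter, emission_angle_rad, distance):
    return source_diameter / 2 + distance * math.tan(emission_angle_rad)

# Parallel light (angle 0): spot radius equals the source radius.
print(illuminated_radius(0.1, 0.0, 10.0))          # 0.05
# pi/4 radians (45 degrees): a much larger spot at the same distance.
print(illuminated_radius(0.1, math.pi / 4, 10.0))  # roughly 10.05
```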
The Location State indicates the present location of the object, primitive, light, or camera. This State is typically associated with the IsAnObject Trait. The data type is float for each of the x, y, and z coordinates. Another property element of the property list, beyond the float values for the x, y, z position coordinates, is the Persona Object reference. To set the location of an object relative to another object, the Persona Object can be used. Thus, the location of an object relative to a referenced Persona Object is determined by calculating x, y, and z units in 3D space from this Persona Object. Note that the reference to a Persona Object does not necessarily create a parent-child relationship automatically; the author would have to explicitly do this to form that relationship.

The Mass State indicates the mass (e.g., in grams) of the primitive or object. This State is typically associated with the IsAnObject and HasGravity Traits. Its data type is float and the property range is a minimum of 0.
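The relative-location calculation described above for the Location State can be sketched as a simple offset. The function name and the bed/pillow values are illustrative.

```python
# Sketch of a Location set relative to a referenced Persona Object:
# the absolute position is the reference object's position offset by
# (x, y, z) units in 3D space.

def absolute_location(reference_location, offset):
    return tuple(r + o for r, o in zip(reference_location, offset))

bed = (5.0, 0.0, 2.0)             # the referenced Persona Object's location
pillow_offset = (0.0, 1.0, -0.5)  # pillow placed relative to the bed
print(absolute_location(bed, pillow_offset))  # (5.0, 1.0, 1.5)
```

As the text notes, referencing the bed this way does not by itself make the bed the pillow's parent; that relationship would have to be created explicitly.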
The MemoryCell State indicates the kind of memory, and the memory itself, of the primitive, object, world, universe, sensor, camera, or light. This State is typically associated with the CanRemember Trait. Thus, if an object has a CanRemember Trait, it keeps data in the MemoryCell State. The data type and property list vary depending on the specific implementation. In one embodiment, the kind of memory can vary from a simple database to a sophisticated neural network. The memory itself consists of the values in the table or, in the neural network case, the network itself of processors and local memories. As known to those skilled in the art, a neural network is a network of many processor units, each possibly having a small amount of local memory. The processor units are connected together via communication channels which usually carry encoded numeric data. The processor units operate only on their local data and on the inputs they receive via the channels. Most neural networks have some sort of "training" rule whereby the weights of the channels are adjusted on the basis of the input data; that is, neural networks "learn" from examples and exhibit some capability for generalization beyond the training data.
The Orientation State indicates the orientation of the light, primitive, object, or camera. This State is typically associated with the IsAnObject Trait. Its data type is float for each of yaw, roll, and pitch. Another property element of the property list beyond the float values for the yaw, roll, and pitch is the Persona Object reference. To set the orientation of an object relative to another object, a reference to a Persona Object can be used. Thus, the orientation of an object relative to a referenced Persona Object is determined by calculating yaw, roll, and pitch units in 3D space from this Persona Object. Note also that like the Location State, the reference to a Persona Object does not necessarily create a parent-child relationship automatically; the author would have to explicitly do this to form that relationship.

The Parent State indicates the Persona Object name of the parent of the object, primitive, camera, or light. Thus, if a pillow object (child) is lying on top of a bed object (parent), the pillow object will list the bed object as a parent in its data structure. Similarly, a Trait associated with the pillow may listen to the Parent State for information on its physical hierarchy.
The Pivot State indicates the pivot point, or the point of rotation, of the object, primitive, light, or camera. This State is typically associated with the IsAnObject Trait. Its data type is float for each of yaw, roll, and pitch. Additionally, the Pivot State can also reference a Persona Object. Thus, the Pivot of an object relative to a referenced Persona Object is determined by calculating yaw, roll, and pitch units in 3D space from this Persona Object.
The PlayedSound State indicates the particular sound that is to be played upon invocation by the object or primitive (via a Trait). This State is typically associated with the CanEmitSounds Trait. The property list includes the sound itself (whether mono or 3D) and the volume. Normally, an object emits certain unique sounds when invoked. For example, an object of a car can emit one of several different sounds including the engine starting sound, the general whirl of the engine, a high pitched screech due to depression of the brakes, the gear-shifting sound of the transmission system, and a crash sound when it crashes against another object. At any given point in time, the car object may elect to emit one of these sounds in response to some event on the Viewport. Thus, when the user starts the engine, the car object makes the engine starting sound. Thereafter, the general whirl of the engine can be heard. If the user presses on the brakes suddenly at a high speed, the car tries to come to a stop with a loud high pitched screeching sound. Three dimensional sounds can also be associated with the PlayedSound State. Of course, these 3D sounds are associated with the location and orientation of both the listener and the sound emitter, which can provoke other dynamic responses. Usually, the different sounds are associated with different versions of the PlayedSound State; that is, the engine object in the car object may be associated with the engine whirl sound in its PlayedSound State, while the brake pad of the car object may be associated with the screeching sound in its own PlayedSound State.

The Published State is used as a Tag in one embodiment of the present invention. Accordingly, it possesses no internal value so to speak. This State is typically associated with the CanBeNetworkSynchronized Trait. Refer to the description of the Audit State as well as the general Tag discussion above. In one embodiment, the Published Tag is used to synchronize the various States across a network. For example, assume that two game players are playing a first person shooter game on an Internet gaming site.
In order for each game player to succeed, he must know his (and his opponent's) location, orientation, and size. After all, in this first person shooter game, each game player is trying to destroy the other game player in some game environment (e.g., a labyrinthian dungeon). Over a network, the Internet game site must keep track of all crucial State changes so that each game player can be synchronized with the other and with the game site. Thus, if the first game player's location (or orientation or size) changes, the other game player must also know this fact. As the location changes, the system alerts all listeners to this Published Tag, which is the CanBeNetworkSynchronized Trait in this example. The listener can then request that the new location value be written to some variable or memory location, which the listener can read to get the new location information. The CanBeNetworkSynchronized Trait can retrieve this new location information of that other game player and may include in its script some script that calls for updating the other game player's location. Thus, the two game players are synchronized. This process can be repeated for every State that is monitored by the Published Tag for network synchronization.

The SeeingQuality State indicates the quality of the seeing level of an object or primitive. This State is typically associated with the CanSee Trait. The data type is float and the property list includes a minimum seeing distance of 0. With this State, the author can give some objects better seeing quality than other objects. Thus, an object with a particular seeing ability can see things located farther away than an object with lesser seeing ability.

The SeenObjects State indicates whether or not an object or primitive has seen the designated thing. This State is typically associated with the CanSee Trait. Its data type is Boolean.
The ShadingMode State indicates the shading of an object or primitive. Its data type is an integer. The integer value is an index to the type of shading employed: minimum, no lights, flat, Gouraud, and maximum.
The ShouldDrag State indicates whether or not an object or primitive can be dragged. This State is typically associated with the IsADraggableObject Trait. Its data type is Boolean. This State relies on other State values for it to be TRUE. For example, the ShouldDrag State is set to TRUE only if an object is pickable and mouse-clicked. However, other implementations may require other conditions before changing the ShouldDrag State. The SpeakText State is similar to the PlayedSound State except that instead of emitting a sound, this State indicates the text that the object or primitive should speak. Its data type is string. This State is typically associated with the CanSpeak Trait. When implemented, this State depends on the system being able to bind to a text-to-speech engine so that any given text string can be converted into audible speech. The range of values is limited only by the specific implementation. For example, if WAV files are used, the range of values is limited only by the WAV file format.
The TalkSentence State indicates the text sentence that is spoken. This State is typically associated with the CanSpeak Trait. Its data type is a string. While the SpeakText State indicates the text that should be spoken, the TalkSentence State actually invokes the object or primitive to speak a certain sentence from the SpeakText State via the CanSpeak Trait or some other speaking script.
The TimeOutHappened State indicates whether or not a time-out event occurred. This State is typically associated with the CanReceiveTimedEvents Trait. Its data type is Boolean. Thus, if a time-out timer has expired (as provided in the TimeOutPeriod), this State notifies (via the system) all listener Traits. So, if 4 minutes is the time-out period and over 4 minutes have gone by, the system changes the value of the TimeOutHappened State to TRUE to alert all listeners of this timed-out event.
The TimeOutPeriod State indicates the time-out period. This State is typically associated with the CanReceiveTimedEvents Trait. Its data type is float. Any time-out period can be specified and if this period expires, the system changes the value of the TimeOutHappened State to TRUE to alert all listeners of this timed-out event. The State requires a minimum of 0 and this State is manually reset.
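The interaction between the TimeOutPeriod and TimeOutHappened States can be sketched as follows. The class and method names here are hypothetical, and the per-frame `tick` callback is an assumed mechanism rather than one disclosed above.

```python
# Illustrative sketch of the TimeOutPeriod/TimeOutHappened pair: the
# system compares elapsed time against the period and flips the
# Boolean State, alerting all listeners.

class TimedEventReceiver:
    def __init__(self, timeout_period):
        assert timeout_period >= 0            # property list: minimum of 0
        self.timeout_period = timeout_period  # TimeOutPeriod State (float)
        self.timeout_happened = False         # TimeOutHappened State (Boolean)
        self._listeners = []

    def listen(self, callback):
        self._listeners.append(callback)

    def tick(self, elapsed_seconds):
        # Assumed to be called by the system with total elapsed time.
        if not self.timeout_happened and elapsed_seconds >= self.timeout_period:
            self.timeout_happened = True      # manual reset: stays TRUE
            for callback in self._listeners:
                callback()

fired = []
receiver = TimedEventReceiver(timeout_period=240.0)  # 4 minutes
receiver.listen(lambda: fired.append("timed out"))
receiver.tick(250.0)   # over 4 minutes have gone by
```

Because the State is manually reset, it remains TRUE after firing until some Trait explicitly resets it.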
The TriggerProximity State indicates the distance of any object from the sensor. This State is typically associated with the IsASensor Trait. Its data type is float. If a designated object is within a certain distance from the point sensor, the TriggerProximity State indicates that distance. Based on this distance, other Traits such as CanBeep or CanChangeColor, which listen to TriggerProximity, can invoke other actions such as beeping or changing the object's color to red if a certain distance threshold is crossed. In other applications, sensors can be used as a reference point to indicate how far or close an object is to that sensor. In these applications, the sensor need not activate any other action; the sensor merely exists to alert other objects of their proximity.
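The point-sensor behavior described above can be sketched as follows; the sensor computes the distance and alerts a listening color-changing Trait when a threshold is crossed. All names and the 2-unit threshold are illustrative only.

```python
import math

# Hypothetical sketch of a point sensor exposing TriggerProximity and a
# CanChangeColor-style Trait that listens to it.

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class ProximitySensor:
    def __init__(self, position):
        self.position = position
        self._listeners = []

    def listen(self, callback):
        self._listeners.append(callback)

    def update(self, object_position):
        trigger_proximity = distance(self.position, object_position)
        for callback in self._listeners:
            callback(trigger_proximity)   # expose the distance to listeners

# Listener: turn the object red inside an assumed 2-unit threshold.
color = ["white"]
def on_proximity(d):
    color[0] = "red" if d < 2.0 else "white"

sensor = ProximitySensor((0.0, 0.0, 0.0))
sensor.listen(on_proximity)
sensor.update((1.0, 1.0, 1.0))   # distance is about 1.73, inside threshold
```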
The VerticalTile State indicates the general pattern-type appearance of an object or primitive. An author may want to use a particular pre-defined tile to "cover" or "decorate" his object or primitive. By indicating the number of this tile to be used, the author will alter the appearance of the object or primitive. When only one tile is used, the entire object is covered by one tile which is sized to fit the object. If two tiles are used, the same object is covered by two tiles, where half of the object is covered by one tile and the other half by the other tile. Also, with two tiles, each tile is half the size of the former one-tile design. Stated generally, the number of tiles specified dictates the size of each tile, given that the object's size remains constant: as more tiles are used, each tile becomes smaller so that the specified number of tiles covers the object. Note that the tiling effect is vertical; that is, if N tiles are specified, these N tiles will be laid out vertically across the surface of the object. See the HorizontalTile State for the horizontal tile layout. The State's data type is float and its property list requires a minimum of 1 tile.
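The tile-sizing arithmetic described above can be stated directly: under the assumption that the object's extent stays constant, N vertically stacked tiles each receive 1/N of the object's height. The helper name below is illustrative.

```python
# Arithmetic behind the VerticalTile State: N tiles laid out vertically
# each cover 1/N of the object's (constant) height.

def vertical_tile_height(object_height, num_tiles):
    assert num_tiles >= 1        # property list: minimum of 1 tile
    return object_height / num_tiles

# One tile covers the whole 10-unit object; two tiles are half as tall.
vertical_tile_height(10.0, 1)    # -> 10.0
vertical_tile_height(10.0, 2)    # -> 5.0
```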
The WalkComplete State indicates whether or not an object or primitive has completed its walk. This State is typically associated with the CanWalk Trait. Its data type is Boolean. The WalkDestination State indicates the object's or primitive's walk destination by specifying the location via the x, y, z coordinates (float data type) or by referring to a Persona Object. This State is typically associated with the CanWalk Trait.
The WalkDirection State indicates the angular direction of the object's or primitive's walk. This State is typically associated with the CanWalk Trait. Its data type is float for the range of 0 radians to 2π radians. This State can also reference a Persona Object instead of an angular direction; that is, by referring to a specific Persona Object as the walk direction, the angle can be deduced.
The WindSpeed State indicates the speed of the wind so that its effects can be compensated for and perhaps responded to by the object or primitive that is making contact with the wind. This State is typically associated with the HasWind Trait. Its data type is float. For example, an object running against the wind may run slower than normal if the wind speed is high enough to significantly impede its progress. To elaborate further, an object may be running via the CanRun Trait and modifying its RunSpeed State to 5 MPH. Because the wind speed as indicated in the WindSpeed State is large enough to significantly impede the running object, the HasWind Trait may modify the RunSpeed State of the running object to 4.8 MPH. So, when the CanRun Trait reads its RunSpeed State again, it is no longer 5 MPH but rather 4.8 MPH. Based on this State value, the running object may elect to run faster to compensate for its wind-impeded slowdown.
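The cross-Trait State modification in the wind example can be sketched as follows. The class names, the 10 MPH wind threshold, and the 4% slowdown factor are all illustrative assumptions chosen to reproduce the 5 MPH to 4.8 MPH figure above.

```python
# Sketch of one Trait writing back a State that another Trait reads:
# a HasWind-style Trait reduces RunSpeed, which CanRun later reads.

class Entity:
    def __init__(self):
        self.states = {}

class HasWindTrait:
    def __init__(self, wind_speed):
        self.wind_speed = wind_speed      # WindSpeed State (float)

    def apply(self, runner):
        # Assumed model: winds above 10 MPH impede the runner by 4%.
        if self.wind_speed > 10.0:
            runner.states["RunSpeed"] *= 0.96   # 5 MPH -> 4.8 MPH

runner = Entity()
runner.states["RunSpeed"] = 5.0       # set via the CanRun Trait
wind = HasWindTrait(wind_speed=15.0)
wind.apply(runner)
# When CanRun reads RunSpeed again it sees 4.8, not 5.0
```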
As described above, system-changeable States are those States that are changeable by the system only. The system-changeable States will now be discussed with reference to Table VII below:
TABLE VII: DEFAULT SYSTEM-CHANGEABLE STATES
[Table VII appears as an image in the original document; the system-changeable States it lists — CollidedWith, HasCollided, HasObstructed, IsValid, MediaReady, MouseClicked, MouseVector, and WorldTime — are each described individually below.]
A discussion of each of these system-changeable States will now be provided. Table VII lists the system-changeable States that are exposed by the system and can be listened to by any Trait implemented by the author in his Viewport. The table lists the State names in the leftmost column. Corresponding columns provide the Object Types that support each State, the reset information (i.e., whether the State resets to its default value manually or automatically), the data type (e.g., string, Boolean, integer, float), and the property list, which includes the range of possible values. Note that this table lists only some of the many States that could be implemented in the authoring tool in accordance with one embodiment of the present invention.
The CollidedWith State indicates the Persona Object that the primitive, object, light, or camera has collided with for the purpose of the collision detection scheme. This State is typically associated with the IsAnObject Trait. This information is important for all the solid objects that are subject to the collision detection scheme because each object needs to know what it collided with in order to react or respond appropriately. For example, assume several billiard balls are on a table. If ball1 collides with ball2, ball2 will bounce in the direction opposite from where ball1 came from. Ball2 will not know which direction to bounce toward or how fast to bounce without knowing which ball collided with it. By knowing ball1's identity, ball2 can determine ball1's velocity (speed and direction) so that ball2 can then calculate the bounce direction and speed.
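The billiard-ball example can be sketched as follows: ball2 uses the identity stored in its CollidedWith State to look up ball1's velocity and derive a response. Equal masses and a simple head-on velocity exchange are assumed purely for illustration; the class and function names are hypothetical.

```python
# Sketch of why CollidedWith matters: the struck ball needs the
# collider's identity to look up its velocity and respond.

class Ball:
    def __init__(self, name, velocity):
        self.name = name
        self.velocity = velocity          # (vx, vy)
        self.collided_with = None         # CollidedWith State

def respond_to_collision(ball, others):
    # Without CollidedWith, the ball could not tell whose velocity to use.
    other = others[ball.collided_with]
    ball.velocity = other.velocity        # head-on, equal-mass exchange

others = {"ball1": Ball("ball1", (2.0, 0.0))}
ball2 = Ball("ball2", (0.0, 0.0))
ball2.collided_with = "ball1"             # set by the collision scheme
respond_to_collision(ball2, others)
# ball2 now moves in ball1's direction of travel
```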
The HasCollided State indicates whether or not an object, primitive, light, or camera has collided with any other entity on that Viewport. This State is typically associated with the IsAnObject Trait. Its data type is Boolean with automatic reset. In one embodiment, the collision detection scheme uses a bounding cube around the object. In other embodiments, the collision detection scheme uses a more complex bounding shape to more closely encompass the object.

The HasObstructed State indicates whether or not the object, light, primitive, or camera is obstructing another entity on that Viewport from the field of view of the camera. If an obstructing object is obstructing some other object, this State is TRUE for the obstructing object. This State is typically associated with the IsAnObject Trait. Its data type is Boolean with automatic reset.

The IsValid State indicates whether or not an object, primitive, light, camera, world, background, universe, or sensor is valid. This State is particularly applicable to those objects that have just been created and are still being edited. If an object is not ready to interact with other objects on the Viewport (i.e., it is not fully created yet), its IsValid State is FALSE. Accordingly, no other object can modify parameters (State values) associated with this newly created object. Similarly, this newly created object cannot change the parameters of other objects. If the object is fully created and ready to interact with other objects on the Viewport, the IsValid State for this object is TRUE. Its data type is Boolean with manual reset.
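The bounding-cube test mentioned above for the HasCollided State can be sketched in a few lines: two axis-aligned cubes overlap exactly when they overlap on every axis. The function name and parameterization are illustrative.

```python
# Minimal sketch of a bounding-cube collision test: axis-aligned cubes
# collide iff their centers are close enough along every axis.

def cubes_collide(center_a, half_a, center_b, half_b):
    """center_*: (x, y, z); half_*: half the cube's edge length."""
    return all(abs(ca - cb) <= half_a + half_b
               for ca, cb in zip(center_a, center_b))

# Cubes of edge 2 whose centers are 1.5 units apart on x overlap;
# moved 3 units apart, they no longer touch.
cubes_collide((0, 0, 0), 1.0, (1.5, 0, 0), 1.0)   # -> True
cubes_collide((0, 0, 0), 1.0, (3.0, 0, 0), 1.0)   # -> False
```

When such a test returns true, the system would set HasCollided to TRUE and record the other entity in CollidedWith.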
The MediaReady State indicates whether or not certain attributes of the object, primitive, or background that is being downloaded over a network (e.g., the Internet) are ready for display and interactivity purposes. If ready, these attributes of the object will be shown for that object. Thus, every aspect of the object need not be completely downloaded for the user to see it; rather, the user can see the object step by step as it is being "constructed," so to speak. These attributes include geometry, materials, and texture. The data type is Boolean for each of these attributes. As an object is being downloaded over the network, the MediaReady State may indicate that the geometry of the object is ready but not the texture or materials. Thus, the computer system will show the skeletal geometry of the object. Indeed, some interactive elements may also be ready as a result of the geometry attribute being ready, such as being pickable and draggable. At a later time, the material attribute may be ready and the MediaReady State for the materials attribute will become TRUE. The materials attribute will then cover the geometry skeleton of the object, which was previously downloaded and displayed.
The MouseClicked State indicates whether or not the mouse was clicked on an object, primitive, or background. The data type is Boolean with automatic reset. Based on this mouse click, other actions may follow, such as highlighting the object and dragging it if the mouse button was not released. Of course, these further actions (so to speak) are part of those Traits that are listening to the MouseClicked State. After all, the IsADraggableObject Trait, for example, cannot drag the object unless the mouse has clicked and is held down on the object itself.
The MouseVector State indicates the direction and magnitude of the mouse movement. The direction component provides mouse position information. A Trait such as IsADraggableObject relies on the ShouldDrag State and the direction component of the MouseVector, along with the mouse-down condition, to enable the user to drag the selected object from one position to another. The magnitude component provides speed information: the rate of change of the mouse position with respect to time is translated to a speed, which can be used by various Traits. A Trait such as CanAccelerate, with respect to a race car game, can give speed control to the user based on how rapidly the user moves his mouse. If the user moves his mouse slowly, the race car moves slowly. If the user moves his mouse quickly, the race car moves quickly.
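The derivation of such a State from raw mouse samples can be sketched as follows: the direction comes from the position delta, and the magnitude from the rate of change with respect to time. The function name and units are illustrative assumptions.

```python
import math

# Illustrative derivation of a MouseVector-style State from two
# successive mouse positions sampled dt seconds apart.

def mouse_vector(prev_pos, cur_pos, dt):
    dx = cur_pos[0] - prev_pos[0]
    dy = cur_pos[1] - prev_pos[1]
    direction = (dx, dy)                  # position-change component
    magnitude = math.hypot(dx, dy) / dt   # speed, in pixels per second
    return direction, magnitude

# Moving 30 px right and 40 px down over 0.5 s gives a speed of
# 100 px/s (50 px of travel divided by 0.5 s).
direction, speed = mouse_vector((100, 100), (130, 140), 0.5)
```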
The WorldTime State indicates the universal time for all worlds. In some embodiments, different worlds in the universe can have their own distinct WorldTime States. This State is typically associated with the IsAUniverse Trait. Its data type is integer with manual reset. The time reference can be obtained internally from the computer system's clock or the authoring tool's frame time in the score. Thus, whenever time changes, the system updates the WorldTime State for all the worlds in the universe to access.
Having discussed some specific Traits and States, some specific Actions will now be discussed. SPECIFIC ACTIONS
The authoring tool in accordance with one embodiment of the present invention uses the concepts of Traits and States to allow the author to program his project with the benefits of selective reusability of objects and parameters. Additionally, the concept of Actions is also employed, where an action is any script that does not define, initialize, expose, or listen to a State. Simply put, an Action merely modifies or changes a State value. In Macromedia's Director, these concepts of Traits and States are not recognized directly. So, Macromedia's Director treats Traits and Actions as behavior and States as properties.
As described above, the score in Director is used to place the cast member itself (the object) as well as the behavior. So, for example, a cast member such as a ball may be scored for frame periods 1-100. This frame period represents the ball's lifespan on the stage. If the author wants some behavior associated with this ball, he will designate another channel in the score to instantiate this behavior. For example, the author may want the ball to turn blue during frame periods 50-60 while retaining its original color for the other frame periods during its lifespan. In contrast, the actual object itself (e.g., the ball) does not appear in the score in 3D Dreams according to one embodiment of the present invention; rather, it is found in the Viewport. The Action (as well as the Trait), however, does show up on the score in the form of a Persona (i.e., a "container" for Traits and Actions) so that the author can exhibit the behavior at any desired time. So, to use the color-changing ball example, the score would contain the Persona, which in turn contains the Action of Change Color to Blue, associated with the ball object. By adjusting the frame period of this Persona in the score, the author can dictate when and how long this particular Action will occur for the ball object in the Viewport. The author can thus adjust the Persona in the score to appear only during frame periods 50-60.
These Actions will now be discussed in some detail. In order to access these Actions, the author/user merely selects one of the drop-down menus under "Actions," selects the desired
Action, and drags it into his Persona on the stage. By dragging it into his Persona, the Persona is now reflected in the score and the author can now dictate the life of this Action. If the author wants multiple Actions to be associated with his object, he may drag multiple Actions into that one Persona, or he may create multiple Personas (by dragging them onto the stage multiple times) and then drag the desired Actions into any combination of Personas.
Some default Actions, in the form of 3D Dreams extensions, are provided in Director via one of its libraries. These Actions are as follows:
3D Texture Animation Action
3D Walk Action
Animation Action
Color Fade
Fade-In
Interpolate Color
Interpolate Properties
Jump To Marker
Look At
Move To
Persona Animation
Rotate To
Set Parent
Turn To
Each of these Actions will now be discussed in some detail. Note that these Actions are the initial default Actions that are provided in the 3D Dreams package in accordance with one embodiment of the present invention. The present invention should not be limited to the Actions listed herein, since other analogous Actions may be readily apparent to those skilled in the art.
3D Texture Animation Action: The texture of the Persona Object may exhibit some animation. The Action requires the author to select the "from" texture and a "to" texture. The system includes an animation handler in the script that takes the bit map of the "from" texture to gradually change to the bit map of the "to" texture.
3D Walk Action: The walking animation can be selected from a number of pre-animated bit map files that change over time.
Animation Action: This Action allows an object to exhibit some animation of its surface. The Action requires the author to select the "from" object and a "to" object. The system includes an animation handler in the script that takes the bit map of the "from" object to gradually change to the bit map of the "to" object.
Color Fade: This Action allows an object to exhibit some animation of its surface color. In particular, this Action allows the author to make his designated object randomly change its surface color at a particular rate. The Action requires the author to select the rate at which the color changes randomly.
Fade-In: This Action allows an object to exhibit some animation of the designated object fading into view from a non-visible state to a visible state at a particular rate. The Action requires the author to select the rate at which the object fades in.

Interpolate Color: This Action allows an object to exhibit some animation of its surface color. In particular, this Action allows the author to make his designated object change its surface color at a particular rate. The author selects a "from" color and a "to" color, and the Interpolate Color Action changes the color of the designated object by cycling through the range of colors in the spectrum between the "from" color and the "to" color. For example, the author may select red as the "from" color and yellow as the "to" color. This Action would then cycle through those colors in the spectrum located between red and yellow. This range excludes the "cold" colors such as blue and green. The Action requires the author to select the rate at which the color changes.
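A simple form of the Interpolate Color idea can be sketched as a channel-wise step between the two boundary colors. Channel-wise RGB interpolation is an assumption for illustration; the text above describes cycling through the spectrum between the two colors, which an implementation might instead do in a hue-based color space.

```python
# Sketch of stepping a color from a "from" RGB value toward a "to"
# RGB value; t would advance each frame at the author-selected rate.

def interpolate_color(from_rgb, to_rgb, t):
    """t in [0, 1]: 0 yields from_rgb, 1 yields to_rgb."""
    return tuple(round(f + (g - f) * t) for f, g in zip(from_rgb, to_rgb))

red = (255, 0, 0)
yellow = (255, 255, 0)
interpolate_color(red, yellow, 0.0)   # -> (255, 0, 0)
interpolate_color(red, yellow, 0.5)   # -> (255, 128, 0), an orange midpoint
interpolate_color(red, yellow, 1.0)   # -> (255, 255, 0)
```

Note that interpolating from red toward yellow passes only through the warm oranges, never through blue or green, matching the spectrum-range behavior described above.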
Interpolate Properties: This Action allows an object to exhibit some animation of any property that the author can select from a list. In particular, this Action allows the author to make his designated object change its properties at a particular rate. The author selects a "from" property and a "to" property and the Interpolate Properties Action interpolates the properties between these two boundaries. The Action also requires the author to select the rate at which the property changes.

Jump To Marker: This Action allows the author to create an object that jumps from its present location to a new location dictated by a marker. The author must select the marker in his Viewport for the jumping object to jump to. The author also designates how high the object jumps and the time delay between the movement of the marker and the first jump. Thus, as the user moves his marker, the object will automatically jump toward the marker at a certain height after some designated time period has expired. For example, a basket of carrots may be designated as a marker. When the author creates a rabbit, he drags the Jump To Marker Action into the rabbit's Persona. Thereafter, whenever the user moves the basket of carrots (with the mouse or some other object), the rabbit jumps toward the basket of carrots.
Look At: This Action allows the author to create an object that looks at a marker. The author must select the marker in his Viewport for the object to look at. The author also designates the time delay between the movement of the marker and the looking action. Thus, as the user moves his marker, the object will automatically look at the marker at the new location after some designated time delay has expired. For example, a basket of carrots may be designated as a marker. When the author creates a rabbit, he drags the Look At Action into the rabbit's Persona. Thereafter, whenever the user moves the basket of carrots (with the mouse or some other object), the rabbit looks at the basket of carrots at the new location. In contrast to the Turn To Action, this Action makes the object look at the marker so that the CanSee Trait can provide further behavior for the object to modify some relevant states. Also, even if the marker is beyond the vision range of the object, the object will still "look at" or "look toward" the marker even though it cannot see it.

Move To: This Action allows the author to create an object that moves to a marker. The author must select the marker in his Viewport for the object to move to. The author also designates the time delay between the movement of the marker and the moving action. Thus, as the user moves his marker, the object will automatically move toward the marker at the new location after some designated time delay has expired. For example, a basket of carrots may be designated as a marker. When the author creates a rabbit, he drags the Move To Action into the rabbit's Persona. Thereafter, whenever the user moves the basket of carrots (with the mouse or some other object), the rabbit moves toward the basket of carrots at the new location.
Persona Animation: This Action allows an object to exhibit some 3D animation of its surface. The Action requires the author to select the "from" object and a "to" object. The system includes an animation handler in the script that takes the bit map of the "from" object to gradually change to the bit map of the "to" object.
Rotate To: This Action allows the author to create an object that rotates around a pivot point toward a marker. The author must select the marker in his Viewport for the object to rotate toward. The author also designates the time delay between the movement of the marker and the rotating action. Furthermore, the author must specify which part of his object will rotate toward or face the marker. Thus, as the user moves his marker, the object will automatically turn toward the marker at the new location after some designated time delay has expired. For example, a sun object may be designated as a marker. When the author creates a plant, he drags the Rotate To Action into the plant's Persona. Thereafter, whenever the user moves the sun (with the mouse, some other object, or pre-programmed to respond to some timer), the plant rotates around its root toward the sun at the new location.
Set Parent: This Action makes the designated object into a Parent of a specified child. This creates a parent-child physical hierarchy relationship between the two objects that may affect such states as location and orientation. Thus, if a bed is made a parent of a child pillow that is lying on top of it, the mere movement of the bed (parent) will necessarily move the pillow in a corresponding manner.
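The parent-child relationship the Set Parent Action creates can be sketched as a transform hierarchy in which a parent's movement propagates to its children. A pure translation hierarchy is assumed for simplicity; a full implementation would also propagate orientation, as noted above. All names are illustrative.

```python
# Sketch of Set Parent: moving the parent moves every child by the
# same offset, recursively through the hierarchy.

class Node:
    def __init__(self, position):
        self.position = position      # (x, y, z)
        self.children = []

    def set_parent_of(self, child):
        self.children.append(child)   # the Set Parent relationship

    def move_by(self, dx, dy, dz):
        x, y, z = self.position
        self.position = (x + dx, y + dy, z + dz)
        for child in self.children:
            child.move_by(dx, dy, dz)     # propagate to children

bed = Node((0.0, 0.0, 0.0))
pillow = Node((0.0, 1.0, -0.5))
bed.set_parent_of(pillow)
bed.move_by(2.0, 0.0, 0.0)
# The pillow follows the bed, keeping its relative offset
```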
Turn To: This Action allows the author to create an object that turns toward a marker. The author must select the marker in his Viewport for the object to turn toward. The author also designates the time delay between the movement of the marker and the turning action. Furthermore, the author must specify which part of his object will turn toward or face the marker. Thus, as the user moves his marker, the object will automatically turn toward the marker at the new location after some designated time delay has expired. For example, a basket of carrots may be designated as a marker. When the author creates a rabbit, he drags the Turn To Action into the rabbit's Persona. Thereafter, whenever the user moves the basket of carrots (with the mouse or some other object), the rabbit turns toward the basket of carrots at the new location. In contrast to the Look At Action, this Action does not require the object to look at the marker since the author can designate any part of the object to turn toward or face the marker.
COMPUTER SYSTEM AND ITS ENVIRONMENT
In accordance with one embodiment of the present invention, the authoring tool resides in a computing system that typically includes at least one microprocessor, memory (RAM), hard disk memory, some input devices (e.g., mouse, keyboard, microphone, camera), and some output devices (e.g., monitor, printer, sound system). The computing system may also have other components such as a 3D graphics accelerator. It may be connected to a local area network (e.g., a LAN) via a network adapter card or to a wide area network (e.g., a WAN or the Internet).
The software components will now be discussed. FIG. 8 generally shows the hierarchical layers of software 530 incorporated into the computer system when one embodiment of the present invention is incorporated into an authoring tool. A particular layer of software typically depends on the software at the layers below it and does not depend on software which is at the same layer. The software 530 is stored in memory or stored in some mass storage unit and then loaded into memory when executed.
The software 530 includes an operating system 531 for controlling and coordinating the computer system 100. The invention can be applied to virtually any operating system, but, preferably, the operating system includes the capability to process sound, graphics, video, or animation and to provide a windowing environment for display on the display screen of the computer system. The operating system can be, for example, Microsoft Windows on x86 or Pentium-based systems, or the operating system of an Apple Macintosh. As known to those ordinarily skilled in the art, after the operating system is loaded into the memory of the computing system by the start-up firmware, control passes to its initialization code to set up necessary data structures and to load and initialize device drivers. Control is then passed to the command line interpreter (CLI), which prompts the user to indicate the program to be run. The operating system then determines the amount of memory needed to run the program, locates or allocates a block of memory, and accesses the memory either directly or through the BIOS. After completion of the memory loading process, the application program begins execution. During the course of its execution, the application program may require numerous services from the operating system, including, but not limited to, reading from and writing to disk files, performing data communications, and interfacing with the display/keyboard/mouse.
The software 530 further includes a software development environment 532, a tool 533, and one or more multimedia software titles 534. The software development environment 532 conceptually lies between the operating system 531 and the tool 533, providing an interface between the two. The development environment 532 is typically present during the creation of the tool 533 (or components or extensions of the tool 533), but may or may not be present during execution of the tool, depending on the development environment. In one embodiment, environment 532 is a C++ or SmallTalk environment.
The tool 533 is typically an authoring tool for producing multimedia products, such as Macromedia's Director, which incorporates the various embodiments of the present invention. Users of the tool 533 can create, manipulate and execute multimedia products 534. Such users may be authors or end-users of multimedia software titles 534. The software title 534 of course contains content 535, which includes all the files and features for the user.
The tool 533 preferably varies the functionality it provides to a user of the tool based on a user-specified preference or the task being performed by the user. In one embodiment of the invention, a plurality of user-selectable modes are provided so that the user can specify to a certain degree the functionality provided by the tool. For example, the tool 533 can provide two modes: an author mode for creating and editing a multimedia product, and a user mode for simply executing a multimedia product. The two modes are provided to the user as selectable or specifiable items in the user interface of the tool 533.
DIRECTOR EXTENSIONS

As known to those skilled in the art, any programmer can write extensions to existing software products provided that the software product supports certain extensions and the programmer complies with its interfaces. Macromedia's Director is no exception. Although some embodiments of the present invention can be used as a standalone software product, other embodiments of the present invention are implemented as extensions to Macromedia's Director using the "Macromedia Open Architecture."
These extensions include Tool, Asset, Sprite, and Lingo Xtras (the Lingo Xtras are provided for backward compatibility, since Lingo does not support States). Tool Xtras include such additional resources as editing tools and window tools. Asset Xtras include such additional resources as the Viewport and Viewport hierarchy window, Sprite Xtras, and Personas and Persona Objects. The stock library of default Traits and States for the Persona Objects was discussed above. Both the Asset and Lingo Xtras include the various interfaces that allow these extensions to communicate with Director and the 3D Dreams segmentations. These segmentations include the Object Types (e.g., universe, world, object) and the support for 3D graphics and objects.
THE SOFTWARE CORE ENGINE
The core engine in accordance with one embodiment of the present invention will now be discussed. Referring again to FIG. 8, one embodiment of the present invention resides in the tool 533, software title 534, and content 535. FIG. 9 provides a more detailed view of the layers and components of the software package. Referring now to FIG. 9, at the bottom layers are the Operating System Abstraction
Layer 540 and the Graphics Engine 541. The Operating System Abstraction Layer 540 provides the interface between the Operating System 531 (FIG. 8) and Engine Services 542. The Graphics Engine 541 replaces the graphics engine in Macromedia's Director. The Graphics Engine 541 manages the scene and updates all elements of the scene so that they can be rendered (drawn) in the next frame. The Graphics Engine 541 also has a camera.
When the Graphics Engine 541 receives a command from an upper layer to render a scene, it sends the command to DirectX, a technology designed by Microsoft to make Windows-based computers an ideal platform for running and displaying applications rich in multimedia elements such as full-color graphics, video, 3D animation, and surround sound. Built directly into the Microsoft family of operating systems, DirectX is an integral part of Windows 98, as well as Microsoft's Internet Explorer. DirectX components may also be automatically installed on a system by advanced multimedia games and applications written for Windows 95. With DirectX, developers are given a set of instructions and components that ensure that their multimedia applications will run on any Windows-based PC, regardless of the hardware. DirectX also provides developers with tools that simplify the creation and playback of multimedia content. On the Apple Macintosh, the Graphics Engine sends the render command to QD3D instead. The Graphics Engine 541 is usually OS-specific.
At the next layer above the OS Abstraction Layer 540 and the Graphics Engine 541 is Engine Services 542. Engine Services 542 provides a set of functions 550 that generally includes resource management and communications. Engine Services 542 provides file loading and saving functionality, bitmap and geometry reading and writing, and Internet communications. Engine Services 542 also routes Director-related information to the Director-Related Engine Services 563 located at an upper layer. Unlike the Graphics Engine 541, Engine Services 542 is not OS-specific.

At the next layer above the Engine Services layer 542 is the Object Engine 543, which includes 3D world management and collision detection functionality 551. The task of 3D world management includes dedicating resources for all objects so that they are implemented in the system. These implemented objects are also identified and treated as three-dimensional entities. The collision detection scheme is implemented in the Object Engine 543 and is exposed by the 3D Dreams Layer 544. In particular, the collision detection is exposed by the IsAnObject Trait through various States (e.g., HasCollided, CollidedWith).
At the next layer above the Object Engine layer 543 is the 3D Dreams Layer 544. This layer defines, changes, listens to, and exposes States for all objects and primitives. The objects and primitives are implemented in the Object Engine layer 543, but the States linked to these objects and primitives are exposed in the 3D Dreams Layer 544. When any State value changes, the 3D world (i.e., objects, primitives) tied to that State changes accordingly. Aside from the collision detection example above, another example is a proximity sensor. The proximity sensor is implemented in the Object Engine but its relevant States (e.g., TriggerProximity) are exposed using sensor primitives in the 3D Dreams Layer 544. The 3D Dreams Layer 544 includes several components for managing the Traits and the various State updates (i.e., listening, exposing). These components are organized according to the various Object Types, including Entity 552, Object 553, Camera 554, Lights 555, World 556, Universe 557, Backgrounds 558, Sensor 559, and Primitives 560. The Entity component 552 contains and manages all of the States. The Entity component 552 performs the primary State management duties such as updating (i.e., changing) State values, maintaining Tags, and dedicating resources for States once they have been created (i.e., added or initialized) by a Trait. The other components (e.g., Object 553, Primitives 560) contain and create Traits to give all the various States their meaning. Via the Traits, these components listen to and expose the States that primarily reside in the Entity component 552. These other components also generally send commands to the Entity component 552 so that the Entity component 552 can update States. At the next layer above the 3D Dreams Layer 544 is the 3D Dreams Xtras layer 545. This layer 545 contains the Director Glue Layer/Viewport 561, Persona 562, and the Director-Related Engine Services 563.
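The Entity component's role as a central State holder, with Traits registering Listeners and receiving callbacks on State changes, can be sketched as follows. This is a minimal illustration only, not the patent's actual implementation; the class and method names (`Entity`, `add_state`, `listen`, `set_state`) are hypothetical.

```python
# Hypothetical sketch of the Entity component: Traits create States,
# register Listeners, and are called back whenever a State value changes.
class Entity:
    def __init__(self):
        self._states = {}
        self._listeners = {}  # State name -> list of callbacks

    def add_state(self, name, initial=None):
        # A Trait "creates" a State; the Entity dedicates storage for it
        self._states[name] = initial

    def listen(self, name, callback):
        # A Trait registers a Listener on a named State
        self._listeners.setdefault(name, []).append(callback)

    def set_state(self, name, value):
        # Updating a State triggers a callback to every registered Listener
        self._states[name] = value
        for cb in self._listeners.get(name, []):
            cb(name, value)


entity = Entity()
entity.add_state("location", (0.0, 0.0, 0.0))
moved = []
entity.listen("location", lambda name, value: moved.append(value))
entity.set_state("location", (1.0, 2.0, 3.0))
```

Note that the Trait that changes the State and the Trait that reacts to the change never reference one another; they share only the State name, which is the decoupling the layers above rely on.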
With the Director Glue Layer/Viewport, one embodiment of the present invention provides a Viewport which is placed on Director's stage (which, along with the Viewport, enables the author/user to view his Movie). This Viewport is available in both edit mode and playback mode. When a Director cast member is created, it can be placed on the stage, but not in the Viewport. When a Persona Object is created, it can be placed both on the stage and in the Viewport. However, all interactivity and score action occurs in the Viewport for those movies created using Persona Objects and 3D Dreams. All interactivity and score action for those objects and scenes that were created using Director (not 3D Dreams and Persona) occur on the stage and not in the Viewport. The Viewport is basically a window to the 3D world where Persona Objects are populated, behave, and otherwise interact with other objects, their surroundings, and the user. The portion of the stage outside the Viewport is nothing more than an editing scratchpad, where the author can place his Personas and Persona Objects during edit mode prior to placing them in the Viewport. The Director Glue Layer/Viewport 561 manages these functions so that both the Director scenes on the stage and the 3D Dreams scenes in the Viewport can be edited and viewed. As noted in FIG. 9, the Director Glue Layer/Viewport 561 interacts with the Engine Services 542, where resource management occurs. Engine Services 542, in turn, communicates with the Director-Related Engine Services 563. The Persona component 562 interfaces with all Persona Objects in 3D Dreams and can monitor State changes, as can the various components 553-560 in the 3D Dreams Layer 544. The Persona Object and State change information are provided to the Director Glue Layer/Viewport so that they can be properly interpreted in a form understood by Director.
This component 562 allows the author/user to transparently use Persona Objects through the Director interface, even though Persona Objects are not directly supported by Director. As discussed above, Director treats Traits as Behaviors and States as properties. The Persona Objects are maintained at a higher level but are implemented in part in the Object Engine 543. For example, collision detection and 3D calculations of these Persona Objects occur in the Object Engine 543. The Director-Related Engine Services 563 is another glue layer for providing Director-related resource management. These services know how to create Director-recognizable cast members for the Persona Objects. Note that Persona Objects are supported in the Persona component 562 of the 3D Dreams Xtras 545, which also monitors Traits and States in the lower 3D Dreams Layer 544.
The next layer above the 3D Dreams Xtras 545 is the Director layer. This is the Director user interface that is familiar to developers. Of course, some additional windows and features have been provided, such as the Viewport, Viewport hierarchy window, Persona Objects, Traits, and Actions.
BUMPER CAR EXAMPLE

The following example is intended to illustrate the enhanced reusability provided by the present invention over existing application development environments, such as Director. In particular, by making components (e.g., Traits) selectively reusable, authors can reuse not just individual Traits, but also more complex collections of Traits (Personas), or even entire Persona Objects (perhaps having multiple Personas). As will become apparent, this degree of selective reusability is simply not offered by other existing development environments, object-oriented or otherwise.
Consider the example of the classic "bumper car" ride, found in amusement parks for many years. In this computerized example, the ride includes multiple "computer controlled" cars, which drive randomly (i.e., random turn, then drive at fixed speed) and one "user controlled" car, which the user drives (i.e., steers and accelerates/decelerates) by dragging the mouse in a "control pad" section on the screen, and/or typing keyboard commands. Upon a collision, both types of cars "bounce back" (i.e., briefly reverse direction), and then resume their initial driving behavior.
Thus, the two types of cars have both similarities and differences. Each drives differently. The computer-controlled car turns randomly and then drives at a fixed speed; whereas the user-controlled car's direction and speed (i.e., velocity) is determined solely by the user's mouse/keyboard actions. Yet, both types of cars briefly exhibit precisely the same "bounce back" behavior, before returning to their different driving behaviors.
FIG. 10 illustrates an implementation of this example using certain of the concepts underlying the present invention, such as Traits, States, Actions and Listeners. In this example, both types of cars (not shown) would include the three basic car-related Traits: IsAnObject 1110, HasVelocity 1120 and IsACar 1130 (described below). The computer-controlled car would also include the IsAComputerControlledCar Trait 1140 and the ComputerDrive Action 1150; whereas the user-controlled car would include (in addition to the three basic car-related Traits) the IsAUserControlledCar Trait 1160 and the UserDrive Action 1170.
Looking "bottom up," it can be seen that the end result of this example is the actual movement of the cars 1105. Although the cars themselves are not shown, their underlying Traits, States and Actions are shown, as is their interaction with one another. At a relatively low level, movement of the cars is accomplished by Lingo commands (e.g., Walk3D and Jump3D) 1106 that have been added to Director to control the movement of 3D objects. These commands direct the underlying "engine" to move the Persona Objects to particular points (x,y,z) in a 3D coordinate system.
These commands 1106 are issued by the IsAnObject Trait 1110, as illustrated by line 1107, in response to a change in the object's location State 1108. The IsAnObject Trait 1110, as noted above, defines, and is Listening for changes in, the location State 1108 (as illustrated by line 1109). In response to a callback from the system when the location State has been changed (regardless of who changed the location State), the IsAnObject Trait 1110 issues the Lingo commands (Walk3D, Jump3D) 1106 that direct the engine to move the cars (Persona Objects) 1105 to the new location. The HasVelocity Trait 1120 defines, and is Listening for changes in, the velocity State 1112 (as illustrated by line 1113). In response to a callback from the system when the velocity State has been changed (regardless of who changed the velocity State), the HasVelocity Trait 1120 calculates the new location (based on the current location and the new velocity, i.e., direction and speed) and modifies the location State 1108 accordingly (which, as discussed above, triggers the IsAnObject Trait 1110 to move the cars).
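The chain of callbacks just described (a velocity change updates the location, which in turn issues the engine movement command) can be sketched as follows. This is a simplified, hypothetical illustration: a single unit time step is assumed, and the `"Walk3D"` tuple merely stands in for the real Lingo command sent to the engine.

```python
# Hypothetical sketch of the FIG. 10 callback chain.
states = {"location": (0.0, 0.0, 0.0), "velocity": (0.0, 0.0, 0.0)}
listeners = {}
commands = []  # stands in for Walk3D/Jump3D commands sent to the engine


def set_state(name, value):
    # Any State change calls back every Listener on that State
    states[name] = value
    for cb in listeners.get(name, []):
        cb(value)


def on_location_changed(loc):
    # IsAnObject: direct the engine to move the object to the new point
    commands.append(("Walk3D", loc))


def on_velocity_changed(vel):
    # HasVelocity: new location = current location + velocity (one time step)
    x, y, z = states["location"]
    set_state("location", (x + vel[0], y + vel[1], z + vel[2]))


listeners["location"] = [on_location_changed]
listeners["velocity"] = [on_velocity_changed]

# e.g. IsACar responding to a driveTo request by setting the velocity State
set_state("velocity", (1.0, 0.0, 0.0))
```

Notice that `on_velocity_changed` never calls `on_location_changed` directly; it only modifies the location State, and the system's callback mechanism does the rest.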
It should be noted that there are actually a number of different ways in which the velocity State 1112 can be changed. Either type of car may modify its speed or direction, as discussed below. Moreover, the HasVelocity Trait 1120 itself may modify the direction (and thus the velocity State 1112) in response to a collision. The HasVelocity Trait 1120 also is Listening for changes in the hasCollided State 1185 (i.e., a collision, as illustrated via line 1180). In response to a collision, it implements the "bounce back" functionality described above by changing the car's direction, and thus modifying its velocity State 1112, which in turn triggers its own handler to update the location based on this new velocity 1112.
Continuing up the chain, the IsACar Trait 1130 Listens for changes in the driveTo or drivingSpeed States 1135 (i.e., requested changes in direction or speed, simulating the functionality of the steering wheel and accelerator pedal). In response to such changes (illustrated by line 1136), it modifies the velocity State 1112, triggering the handler in the HasVelocity Trait 1120 described above.
Thus, the three Traits of a "generic" car (IsACar 1130, HasVelocity 1120 and IsAnObject 1110) implement the basic functionality of a car - i.e., responding to steering wheel and accelerator pedal requests (as well as collisions) by making appropriate adjustments in the car's velocity and thus its location. These Traits are reusable in virtually any type of car that "requests" changes in direction and/or speed, i.e., by Listening for such changes (as illustrated by line 1136) in the driveTo or drivingSpeed States 1135. Each type of car has its own unique driving functionality, which to some extent also is reusable, as will now be discussed. The computer-controlled car has the IsAComputerControlledCar Trait 1140, which Listens (as illustrated by line 1176) for a change in the isValid State 1175, indicating, in essence, that the car objects are now initialized. It also Listens (as illustrated by line 1178) for a change in the hasCollided State 1185, which is generated when the system detects a collision (illustrated by line 1197) between this car and another car or other obstacle (e.g., the wall).
In this example, the computer-controlled car responds identically upon initialization and upon a collision. Thus, in either case, the IsAComputerControlledCar Trait 1140 invokes the ComputerDrive Action 1150 (i.e., whether in response to a change in the isValid State, illustrated by line 1141, or the hasCollided State, illustrated by line 1142). It could perform this response itself (i.e., computerized driving behavior), but by delegating to a pure Action (i.e., one that merely modifies States), the ComputerDrive Action can more easily be replaced by another Action, or added as one of many parameterized alternative Actions, thereby further enhancing reusability. In this case, the ComputerDrive Action 1150 implements a random turn, and then drives at a fixed speed; i.e., it modifies the driveTo and drivingSpeed States 1135 (as illustrated by line 1137) and thus triggers the IsACar Trait 1130 as discussed above. Thus, without having to communicate directly with the user-controlled car, the computer-controlled car is able to share State information with, and effectively communicate indirectly with, the basic car Traits without even being explicitly aware of their existence.
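A minimal sketch of such a delegated ComputerDrive Action might look like the following. The State names follow the example above, but the fixed speed value and the representation of a "turn" as a heading in degrees are arbitrary assumptions for illustration.

```python
import random

# Hypothetical States modified by the ComputerDrive Action
states = {"driveTo": 0.0, "drivingSpeed": 0.0}


def computer_drive(states, rng=random):
    # ComputerDrive: make a random turn, then drive at a fixed speed.
    # It only modifies States; the IsACar Trait reacts via its Listener.
    states["driveTo"] = rng.uniform(0.0, 360.0)  # random heading (degrees)
    states["drivingSpeed"] = 5.0                 # fixed speed (arbitrary units)


computer_drive(states, random.Random(0))  # seeded rng for reproducibility
```

Because the Action touches nothing but the two States, swapping in a different driving Action (e.g., one that follows a path) requires no change to the IsAComputerControlledCar Trait that invokes it.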
The user-controlled car has the IsAUserControlledCar Trait 1160, which Listens (as illustrated by line 1177) for a change in the isValid State 1175, indicating, in essence, that the application has started. It also Listens (as illustrated by line 1179) for a change in the hasCollided State 1185, which is generated when the system detects a collision (illustrated by line 1197) between this car and another car or other obstacle (e.g., the wall).
As did the computer-controlled car, this user-controlled car responds identically upon initialization and upon a collision. Thus, in either case, the IsAUserControlledCar Trait 1160 must continue Listening (as illustrated by line 1196) for changes in the mouseVector and/or keyPressed States 1195 (e.g., detected by the system, as illustrated by line 1198), and respond to these user events by calculating a new desired direction and/or speed, and then changing the driveTo and drivingSpeed States 1135 (which will trigger the IsACar Trait 1130, as discussed above).
Note that the user-controlled car need not take any action in response to changes in the isValid 1175 and/or hasCollided 1185 States. Thus, in this example, it need not call the UserDrive Action 1170 (via lines 1161 and 1162, respectively, for changes in the hasCollided 1185 and isValid 1175 States), as there is nothing for it to do.
The IsAUserControlledCar Trait 1160 could delegate the functionality of calculating the new desired direction and/or speed to a separate Action; but, in this case, because this functionality is dependent upon repeated changes in the mouseVector and/or keyPressed States, it is more efficient to handle this calculation itself (and directly modify the driveTo and drivingSpeed States 1135).
The above example illustrates how a number of different and seemingly interdependent Traits can work together (i.e., share State and State-change information, and communicate indirectly with one another) without explicit knowledge of one another's existence, thereby avoiding many of the dependencies created by explicit message-passing. The messages (i.e., callbacks) which are sent to the Traits by the system are those which the Traits themselves explicitly request (via Listeners), and involve changes in known States that have explicitly been shared/exposed, as opposed to private IDs that are dependent on the existence of particular objects. Moreover, these messages are independent of the physical hierarchy of objects, which relates to relative movement, but not to sharing of information.
It would be far more difficult to implement this example in a reusable fashion using prior/existing authoring systems, such as mTropolis or Director, as various dependencies would be difficult to avoid. For example, in Director, there exists no system-wide mechanism for Behaviors within an object (much less across objects) to share "state-change" information. If implemented in a single script, the script would have to know the "car type" (thereby creating dependencies) in order to know which type of "driving behavior" to perform when the application starts, or following a collision. If implemented in multiple scripts (e.g., one for each type of car's driving behavior, and a shared script for changing the velocity/location in response to a desired change in speed/direction), the scripts would have to communicate the "state-change" information among one another. For example, the shared script would probably implement the common "bounce back" functionality following a collision, but would then have to know the "car type" to determine which of the other two scripts to invoke to resume normal driving behavior.
A development system such as mTropolis, which provided some degree of selective reusability by utilizing the author's physical hierarchy for "anonymous messaging," still would have problems in this regard. These different Traits or Behaviors do not bear any physical relationship to the car or to one another. They are not components of a car, like a steering wheel, accelerator pedal, etc. Yet, mTropolis bases its messaging system on the author's physical hierarchy. Even if one modeled these elements, e.g., by giving each type of car a "child" steering wheel, accelerator pedal and collision-response mechanism (each with its own behavior), these different elements within a particular type of car still would need to communicate with one another. For example, the steering wheel and accelerator pedal objects would both need to evaluate mouse events to determine whether they were intended for the steering wheel or the accelerator pedal. Whichever component handled the calculation of the new location (based on a change in velocity) would have to communicate with the other components in order to share this functionality within the car. Although an author could create a working application, the effort expended to enable the different components to communicate with one another (i.e., workarounds by creating additional elements to provide an application-specific communications interface) would be significant, and may well limit reusability.
By sharing "state-change" information with only those components of an application that need such information, independent of the physical hierarchy of those components, the present invention facilitates far greater selective reusability of those components, both individually and collectively.

ADDITIONAL EXAMPLE APPLICATIONS
It should be emphasized that the present invention can be utilized to develop applications well beyond the field of interactive 3D multimedia applications. Illustrated below are examples in such diverse fields as electronic commerce and telecommunications. From these examples, it should be apparent that the advantages of the present invention are equally applicable to a wide variety of applications and components thereof.

Feature Auditing
In accordance with one embodiment of the present invention, the concepts described above are incorporated in a client software application that can interface with a server across a network (e.g., WAN, LAN, Internet) so that certain data relating to the user's interaction with the server content can be logged and transmitted to the server (possibly an alternate server as defined by the content). Because of the stateless nature of the world wide web, for example, it is difficult for web servers to monitor user activity on their site without requiring prior user registration and login, which many users are hesitant to accept. Current web servers have only the limited capability of logging the isolated instances of requests sent from a remote client. For example, a web server can be set up to monitor how many times and when a web page is downloaded. Even extending this function using a "cookie," the deepest type of information a web server typically can monitor is that same type of isolated request data, identified by the individual clients making the requests. There is no convenient method in the art permitting a web server to monitor a steady stream of data from a client regarding the client's use of data, as opposed to the transfer of data. Even if such a method were possible in the current state of the art, a web server typically would have no efficient way of screening such voluminous usage data for the items that it "needs to know."
The following example illustrates how the present invention can be used to overcome this obstacle without requiring client software that must be rewritten for each different web site.
For example, consider a commercial web site that displays a number of retail products. The products could be different models of a particular personal computer. When the user selects a particular PC model, the web site provides the user with the option to further view certain features for that PC model. For the PC model, the user could view the CPU type and speed, type of video card, CD-ROM drive, memory configuration, 3D accelerator, bundled software, monitor, modem, and DVD player, among other features.
Understandably, the company that sells these PC models will probably be interested in gathering information regarding users' interest in various aspects of this web site. In particular, the computer manufacturer may be interested in such information as how long a user stayed at particular areas of the web site, which product(s) the user viewed, and which features were of greatest interest. By accumulating this data for all users across the Internet who accessed this web site, the computer manufacturer is now armed with potentially valuable information that sheds some light into consumer behavior and motivations. For example, why are certain people interested in one product but not another? To implement this application on the server side, the company must first determine the type of information that it desires to track, and define that information as States which the client application can monitor. For example, if the company wanted to track how long a user viewed a particular web page, it might define a set of States on the client application, such as "webPageURL" (which identifies the URL of that web page) and "timeSpentOnWebPageURL" (which the client will determine). The company also would define a Listener on the client application to the timeSpentOnWebPageURL State.
The client application also could have one or more Traits to detect a match with the "webPageURL" State, and would then start a timer (possibly by invoking an Action). Another Trait might Listen for a State change that indicated that the client had left that web page, and then possibly invoke another Action to store the elapsed time in the timeSpentOnWebPageURL State. Upon detecting a change in this State, it would notify the Listener on the server application of such change, and probably send the updated timeSpentOnWebPageURL State value.
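The page-timing behavior described above might be sketched as follows. All names are hypothetical, and timestamps are passed in explicitly (rather than read from a clock) so the elapsed-time calculation is visible; the `notify_server` callback stands in for the notification sent to the server's Listener.

```python
# Hypothetical sketch of the page-timing Traits described above.
class PageTimerTrait:
    """Tracks time spent on a page and notifies a server-side
    Listener when the timeSpentOnWebPageURL State changes."""

    def __init__(self, notify_server):
        self.states = {"webPageURL": None, "timeSpentOnWebPageURL": 0.0}
        self._entered_at = None
        self._notify = notify_server

    def on_page_enter(self, url, now):
        # Matching webPageURL detected: record the URL and start the timer
        self.states["webPageURL"] = url
        self._entered_at = now

    def on_page_leave(self, now):
        # Client left the page: store the elapsed time in the State,
        # then notify the server's Listener of the State change
        elapsed = now - self._entered_at
        self.states["timeSpentOnWebPageURL"] = elapsed
        self._notify(self.states["webPageURL"], elapsed)


events = []
trait = PageTimerTrait(lambda url, secs: events.append((url, secs)))
trait.on_page_enter("http://example.com/pc-model", now=100.0)
trait.on_page_leave(now=107.5)
```

The server never polls: it learns of the elapsed time only because the client Trait changed a State it had asked to be notified about.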
Note that, in some embodiments, the Traits and parameters necessary to enable this feature auditing can be delivered with the content being audited in a platform-independent representation. Such an embodiment would eliminate the need for the user to install a specific client application solely for the purpose of enabling the Traits and States described here.
In either case, the company could track virtually any number of desired characteristics relating to the user's interest in its web site. Such information could even be stored by the client application across repeated visits to the web site, despite the stateless nature of the server requests.
Accordingly, the same software could be used for different web sites run by companies who might want such market information. Moreover, one or more Tag States could be used by the client application to track groups of States, and notify the server application whenever the value of any of such States was changed.
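A Tag that groups several States under a single notification could be sketched as follows; the State names and class shape are hypothetical, chosen only to illustrate that one Listener covers every member of the group.

```python
# Hypothetical sketch of a Tag grouping several States: a change to
# any member State fires the single Listener registered on the Tag.
class TaggedStates:
    def __init__(self, tag_members, on_any_change):
        self.tag = set(tag_members)          # States grouped by the Tag
        self.values = {n: None for n in tag_members}
        self._notify = on_any_change

    def set_state(self, name, value):
        self.values[name] = value
        if name in self.tag:                 # any member change fires the Tag
            self._notify(name, value)


changes = []
audit = TaggedStates({"productViewed", "featureViewed"},
                     lambda name, value: changes.append((name, value)))
audit.set_state("productViewed", "PC-Model-A")
```

This is what lets the client notify the server "whenever the value of any of such States was changed" without a separate Listener per State.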
Once the server has collected the necessary data for the aggregate of all users who visited the web site, additional server software (well known in the art) could perform some "data mining" analysis on the pool of data to present the data in a format that is useful to marketers, advertisers, company executives, and other interested persons.

Telecommunications Network
In another embodiment of the present invention, the concepts described herein can be applied to a technology that has virtually no relationship to 3D multimedia. In particular, a telecommunications network can incorporate the States and Traits concepts at the client side and the telco side without departing from the spirit of the invention. In one application, a smart telephone can determine the various telecommunications services offered by the telco so that the telephone unit can subsequently reconfigure itself to support these services. In another application, the network can obtain usage pattern information for a given customer.
In the former case, a telecommunications network provides a number of services to its customers. As known to those skilled in the art, these services include, but are not limited to, call waiting, call forwarding, and voice mail. However, the services that may be available to any given customer may vary considerably from one telco to another.
To enable the ability of a telephone unit to retrieve these available services so that it can reconfigure itself to support those services, the Traits and States concepts can be utilized. In one embodiment, States are defined for each of these services. A ServiceAvailable Tag groups these States together at the telco.
A telephone unit at the customer's location might include some software functionality, including a Trait called CanEnableService. Another Trait in the telephone unit, called CanDetectService, Listens to the ServiceAvailable Tag at the telco. When desired, the CanDetectService Trait in the telephone unit can retrieve State information about each of the available services from the telco, including which ones are available and additional features associated with each service. The CanDetectService Trait retrieves the State information and updates its own set of corresponding States in the telephone unit. The CanEnableService Trait would Listen for these States that have just been modified, and would then enable these services in the telephone unit, if these services are not already implemented therein.
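The CanDetectService/CanEnableService interaction might be sketched as follows. This is a hypothetical illustration only: the service names, and the dictionary standing in for the telco's ServiceAvailable Tag, are assumptions made for the example.

```python
# Hypothetical States grouped at the telco by the ServiceAvailable Tag
telco_states = {"callWaiting": True, "callForwarding": True, "voiceMail": False}


class Phone:
    def __init__(self):
        self.states = {}      # mirrored service States in the telephone unit
        self.enabled = set()  # services currently enabled on the unit

    def can_detect_service(self, telco_states):
        # CanDetectService: retrieve the telco's State information and
        # update the unit's own corresponding States
        self.states.update(telco_states)
        self.can_enable_service()  # Listener fires on the modified States

    def can_enable_service(self):
        # CanEnableService: enable each available service that is not
        # already enabled in the telephone unit
        for service, available in self.states.items():
            if available and service not in self.enabled:
                self.enabled.add(service)


phone = Phone()
phone.can_detect_service(telco_states)
```

A new service offered by the telco then requires only a new State under the Tag; the telephone-unit Traits need no modification.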
The second application, much like the feature auditing example described above, involves obtaining telecommunications usage pattern information for a given customer or an aggregate number of customers. The usage pattern information can be derived from call data including, but not limited to, long-distance call times, long-distance call duration, local call times, and local call duration. At the telco side, States are associated with the call data. The States which the customer is interested in monitoring are grouped together by a Call Tag. As a customer makes a call, these States are updated by a Trait called CanUpdateCallData.
At the customer side, a Trait called CanDetectCallData Listens for a change in any of the States associated with the Call Tag at the telco side. Periodically, the CanDetectCallData Trait of the customer's smart phone detects a change in one of the States associated with the Call Tag, and actually retrieves the State values themselves.
Once the network has collected the necessary data for a customer or the aggregate of all customers, additional network software could perform some analysis on the pool of data to present the data in a format that is useful to the telco, which can then prepare various reports to inform the user of his usage patterns, possibly to assist him in optimizing his calling habits, or permit his smart phone or the telco to do so automatically on his behalf.
The type of system described above can also be used to detect the maintenance status of various telecommunications equipment in the network.

The foregoing description of a preferred embodiment of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations will be apparent to practitioners skilled in this art. One skilled in the art will readily appreciate that other applications may be substituted for those set forth herein without departing from the spirit and scope of the present invention. Accordingly, the invention should only be limited by the Claims included below.
What is claimed is:


CLAIMS

1. A system for developing interactive applications, the system comprising: (a) a first state having a value including an initial value; (b) a first trait that can listen to the first state, and can perform a first function upon being notified by the system of a change in the value of the first state; and (c) a first action that can modify the first state, whereby the first trait can perform the first function when the first action modifies the first state.
2. The system of claim 1 wherein the first state is implemented as an extension to a Macromedia Director property and the first trait and first action both are implemented as extensions to Macromedia Director behaviors.
3. A method for developing interactive applications having a plurality of objects, the method comprising the following steps: (a) creating a first object having a first state with a value including an initial value; (b) associating with the first object a first trait that can listen to the first state, and can cause the first object to perform a first function upon being notified by the system of a change in the value of the first state; and (c) associating with a second object a first action that can modify the first state, whereby the first object can perform the first function when the first action modifies the first state.
4. The method of claim 3 wherein the first state is implemented as an extension to a Macromedia Director property and the first trait and first action both are implemented as extensions to Macromedia Director behaviors.
5. An interactive application, comprising: (a) a first state having a value including an initial value; (b) a first trait that can listen to the first state, and can perform a first function upon being notified by the system of a change in the value of the first state; and (c) a second trait that can modify the first state, whereby the first trait can perform the first function when the second trait modifies the first state.
6. The application of claim 5 wherein the first state is implemented as an extension to a Macromedia Director property and the first trait is implemented as an extension to a Macromedia Director behavior.
Similar Documents

Publication Publication Date Title
US6377263B1 (en) Intelligent software components for virtual worlds
Goldstone Unity game development essentials
JP4155691B2 (en) 3D interactive game system and advertising system using the same
CN102681657B (en) Interactive content creates
WO2007130689A2 (en) Character animation framework
Brackeen et al. Developing games in Java
CN113082721B (en) Resource management method and device for application program of integrated game module, electronic equipment and storage medium
CN118176484A (en) Virtual object structure and interrelationships
Peters Foundation ActionScript Animation: making things move!
Davison Pro Java 6 3D Game Development: Java 3D, JOGL, JInput and JOAL APIs
Pape et al. XP: An authoring system for immersive art exhibitions
Thorn Learn unity for 2d game development
Thorn Moving from Unity to Godot
WO2000049478A2 (en) Authoring system for selective reusability of behaviors
Takoordyal Beginning Unity Android Game Development
KR100987654B1 (en) System for presenting interactive content
Anstey et al. Building a VR narrative
Hillmann Unreal for Mobile and Standalone VR
Hillmann Unreal for mobile and standalone VR: Create Professional VR apps without coding
Odom HoloLens Beginner's Guide
Goodwill et al. Beginning Swift games development for iOS
Thorn Unity 5. x by Example
Mendelowitz The Emergence Engine: A behavior based agent development environment for artists
Barsan Software architecture for AR art exhibitions in unreal engine
Zirkle et al. iPhone game development: developing 2D & 3D games in Objective-C

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 EP: The EPO has been informed by WIPO that EP was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (PCT application filed before 20040101)
AK Designated states

Kind code of ref document: A3

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 EP: PCT application non-entry in European phase