WO1998035320A1 - Animation system and method - Google Patents

Animation system and method

Info

Publication number
WO1998035320A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
events
animation
objects
data set
Prior art date
Application number
PCT/GB1998/000372
Other languages
English (en)
Inventor
James Steven Ramsden
Christopher John Mills
Godfrey Maybin Parkin
Karen Olivia Parkin
Original Assignee
Peppers Ghost Productions Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peppers Ghost Productions Limited filed Critical Peppers Ghost Productions Limited
Priority to AU62206/98A priority Critical patent/AU6220698A/en
Publication of WO1998035320A1 publication Critical patent/WO1998035320A1/fr


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

Definitions

  • the present invention relates to an apparatus for managing and generating animation and to a method of generating an animation sequence.
  • the invention relates particularly but not exclusively to photo-realistic three-dimensional animation, i.e. to animation which represents three-dimensional movement in a three-dimensional environment scene on a display in a manner approaching photographic realism.
  • An object of the present invention is to provide an animation system and method which overcomes or alleviates the above problems.
  • the invention provides a system for managing animation comprising a processor arranged to process a dynamically evolving data set including object data and event data, said object data including character data representing avatars, sequencing means arranged to read data from and write data to said dynamically evolving data set in a programmed sequence, said sequence of reading and writing taking place within a frame period and said written data being generated from said read data by stored interactions between said object data and event data, and output means arranged to generate from said dynamically evolving data set a displayable animated output.
  • the system is therefore advantageously capable of providing a real-time animated output which enables the managing of a non-linear interactive storyline, e.g. for the playing of an interactive computer game.
  • a non-linear interactive storyline, also known as a branching script, develops according to choices made by the user or player.
  • the system is also capable of managing a linear storyline, such as is used with a non-interactive computer game, where there are no significant time constraints for the generation of each frame of the storyline.
  • said data set includes stored events, each event comprising a set of one or more preconditions associated with a set of one or more results
  • the system includes an events manager controlled by said sequencing means which is arranged to detect said set of preconditions and, on detection of said set of preconditions, to generate said set of results, said output means being responsive to said results to generate said displayable animated output.
  • an event may comprise a set of preconditions associated with a result; such a result could, for example, be implemented by a stored animation routine which would move the avatar "Bob" to room 3.
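By way of illustration, a minimal sketch of an event as a set of precondition tests associated with result commands. All names here (the world dictionary, move_avatar, the room numbering) are invented for the example and are not taken from the specification:

```python
# Minimal sketch of an event as preconditions associated with results.
# All names (world, move_avatar, the room numbers) are illustrative only.

world = {"Bob": {"room": 1}, "player": {"room": 3}}

def bob_is_idle():
    return True  # placeholder precondition

def player_in_room_3():
    return world["player"]["room"] == 3

def move_avatar(name, room):
    # Stands in for a stored animation routine that would move the avatar.
    world[name]["room"] = room
    print(f"{name} moves to room {room}")

event = {
    "preconditions": [bob_is_idle, player_in_room_3],
    "results": [lambda: move_avatar("Bob", 3)],
}

if all(test() for test in event["preconditions"]):
    for result in event["results"]:
        result()
```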
  • the system preferably comprises animation routines arranged to be triggered by said generated sets of results, said output means being arranged to run said triggered animation routines to generate said displayable animated output.
  • the system comprises rendering means arranged to render a scene composed of objects associated with said generated sets of results.
  • the object data represent a three-dimensional world.
  • the invention provides a method of generating an animation sequence comprising the steps of generating a dynamically evolving data set including object data and event data, said object data including character data representing avatars, reading data from and writing data to said dynamically evolving data set in a programmed sequence, said sequence of reading and writing taking place within a frame period and said written data being generated from said read data by stored interactions between said object data and event data, and generating a displayable animated output from said dynamically evolving data set.
  • a program containing stored input instructions in a high level language and governing the animation sequence is run and is arranged to modify said dynamically evolving data set.
  • the invention also provides a computer programmed with a computer game containing an animation sequence generated by a method in accordance with the above aspect of the invention.
  • Figure 1 is a block diagram showing the architecture of one embodiment of the invention from the standpoint of running a previously created animated game
  • Figure 2 is a block diagram showing a hierarchy of object types which can be used in the above embodiment
  • Figure 3 is a block diagram of one example of a set of objects which can be used in the above embodiment.
  • Figure 4 is a block diagram showing the architecture of the above embodiment from the standpoint of the creator of an animated computer game.
  • the above embodiment is based on a Pentium®-equipped personal computer running the Windows 95® operating system as indicated by reference numeral 2 (Figure 1).
  • the arrangement comprises a conversation engine 3, an events manager 4 and a renderer 5 which are sampled sequentially in that order during each frame period of 1/12 second by a scheduler 1.
  • Conversation engine 3, events manager 4 and renderer 5 are all software modules in this embodiment but could in principle be implemented by dedicated hardware.
  • sampling order could be modified by an output of any of the above modules 3 to 5 and accordingly there is provision for two-way data flow (indicated throughout by double-headed arrows) between these modules and scheduler 1. There is also provision for two-way data flow between each of the above modules and a data structure block 7, which contains a dynamically evolving data set including object data and event data, the object data including character data representing avatars.
  • each of the above modules 3 to 5 reads appropriate portions of the above data set in response to a signal from scheduler 1, processes the above data and then writes the result in the data set.
  • Each module 3 to 5 reads and writes in the above fashion during each frame.
  • events manager 4 reads what conversation engine 3 has written at the beginning of the frame, and renderer 5 subsequently reads what events manager 4 has written in the same frame and generates a rendered output (which is sent to the DirectX, DirectDraw and Direct3D modules of Windows 95® and then output through a port of the computer) before the frame ends.
  • the above sequence is then repeated for subsequent frames.
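A hedged sketch of the per-frame sequence described above: the scheduler samples the conversation engine, events manager and renderer in that order within each 1/12 second frame, each module reading from and writing to a shared data structure. The internal module behaviour shown is placeholder logic, not the patented implementation:

```python
# Sketch of the frame loop: scheduler samples the three modules in order,
# each reading from and writing to a shared data structure within one frame.
import time

FRAME_PERIOD = 1.0 / 12.0  # frame period of 1/12 second

data_structure = {"spoken": [], "results": [], "render_queue": []}

def conversation_engine(data):
    data["spoken"].append("hello")                 # writes precondition components

def events_manager(data):
    if data["spoken"]:                             # reads what the conversation engine wrote
        data["results"].append("nod")              # writes results for the renderer

def renderer(data):
    data["render_queue"] = list(data["results"])   # reads results, builds the frame

def run_frames(n_frames=3):
    for frame in range(n_frames):
        start = time.monotonic()
        for module in (conversation_engine, events_manager, renderer):
            module(data_structure)                 # programmed sequence within one frame
        print(f"frame {frame}: {data_structure['render_queue']}")
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, FRAME_PERIOD - elapsed))

run_frames()
```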
  • the conversation engine 3 controls responses of the avatars (non-player characters) to the conversation of the (human) player of the game and has provision for two-way data flow with the DirectSound component of Windows 95®, enabling speech input and sound output via an appropriate sound card, microphone and speaker(s) (not shown). It can generate, and write to the data structure block 7, the precondition components of events which are handled by the events manager 4.
  • the conversation engine 3 operates on a conversational "net" of sentence units, nodes in the net representing pauses in a conversation, where the participants pause and consider what to say next.
  • the branches from a node represent the possible sentences which could be spoken in response to the sentence which has led to that node (having just been spoken).
  • the sentences in the net model not only what is spoken but also events (particularly preconditions to which the events manager 4 is responsive).
  • Sentence ID: integer
  • the 'node' value is set to zero, (this identifies where the conversation is logically), and the 'Current speaker' is set from the conversation data.
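A small sketch of such a conversation "net". The data layout, the sentences and the traversal policy are all assumptions for illustration; only the ideas of nodes as pauses and branches as possible next sentences come from the text above:

```python
# Sketch of a conversation "net": nodes are pauses, branches are the sentences
# that may follow the sentence which led to the node. Layout is assumed.
conversation_net = {
    0: {"speaker": "NPC",    "sentences": {"How can I help?": 1}},
    1: {"speaker": "Player", "sentences": {"Where is the MD's office?": 2,
                                           "Goodbye.": None}},
    2: {"speaker": "NPC",    "sentences": {"Down the corridor, room 3.": None}},
}

node = 0                                   # 'node' value set to zero at the start
current_speaker = conversation_net[node]["speaker"]
while node is not None:
    sentences = conversation_net[node]["sentences"]
    sentence, node = next(iter(sentences.items()))   # pick the first branch
    print(f"{current_speaker}: {sentence}")
    if node is not None:
        current_speaker = conversation_net[node]["speaker"]
```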
  • An "event" is defined as a set of preconditions associated with a set of results, and the function of the events manager 4 is essentially to recognise the appropriate sets of preconditions and to generate the appropriate sets of results, which are written to the data structure block 7.
  • a precondition is some sort of test, to which the result can be TRUE or FALSE.
  • a result is any command which any of the various engines 3 to 5 can understand.
  • a typical event may look like this:
  • Each event is associated with an object in the world.
  • When an object receives an 'update' command, it will examine the list of events associated with it and execute the event which has all TRUE preconditions and the highest priority. The execution of an event's results is known as firing.
  • suppose, for example, that the head of marketing decides to dynamite the MD's office.
  • the dynamite object has an 'explode' event, with a precondition that it has been in the room for 10 seconds. Until the player is in the MD's office, the dynamite will not be updated; this is potentially a lethal situation for the player.
  • the following information defines an event:
  • one item of this information is a tick box which is activated if the event is expected to fire only once.
  • An event can have any number of preconditions in its preconditions section. These determine if the event is able to fire. If there is more than one precondition in an event then they are treated with Boolean AND logic, i.e. they must all be true if the event is to fire. There could be other logical operations available, such as OR, NOT and XOR. If there are no preconditions specified, then the event may always fire.
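A hedged sketch of the firing rule just described: preconditions are combined with Boolean AND, the highest-priority eligible event fires on 'update', and a fire-once flag retires it afterwards. The class and field names are illustrative, not drawn from the specification:

```python
# Sketch of event firing: preconditions ANDed, highest-priority eligible event
# fires on 'update', fire_once retires it. Names are illustrative only.
class Event:
    def __init__(self, name, preconditions, results, priority=0, fire_once=False):
        self.name = name
        self.preconditions = preconditions   # callables returning True/False
        self.results = results               # callables (commands)
        self.priority = priority
        self.fire_once = fire_once
        self.spent = False

    def can_fire(self):
        return not self.spent and all(test() for test in self.preconditions)

    def fire(self):
        for result in self.results:
            result()
        if self.fire_once:
            self.spent = True

class GameObject:
    def __init__(self, name, events):
        self.name = name
        self.events = events

    def update(self):
        candidates = [e for e in self.events if e.can_fire()]
        if candidates:
            max(candidates, key=lambda e: e.priority).fire()

dynamite = GameObject("dynamite", [
    Event("explode",
          preconditions=[lambda: True],      # stands in for "in the room for 10 seconds"
          results=[lambda: print("BOOM")],
          priority=10, fire_once=True),
])
dynamite.update()   # fires once
dynamite.update()   # spent: nothing happens
```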
  • the COLLIDED precondition will return TRUE if the given object has collided with something; it is then up to the gameplanner to decide what to do next. A good use would be to combine COLLIDED with the new RANDOMPLAY result to get various wincing reactions.
  • a user interface 6 is provided to enable two-way data flow to and from events manager 4.
  • the renderer 5 reads the necessary data from block 7 and then generates a rendered video output.
  • the renderer 5 can be as described in our co-pending application referred to above and incorporated herein by reference.
  • the described embodiment utilises objects which interact with the events handled by the events manager 4.
  • Players, rooms, non-player characters, items of furniture, synthetic displays, and anything else held in the model of the game world are all objects.
  • the system needs to know what is in the world, what each object is in terms of name, location, associated graphics and behaviours, and gameplan events linked to that object.
  • the events manager 4 can poll each object to determine whether it has an associated event to execute, namely whether it has an event to fire. When an event is fired, the events manager 4 routes the results to the appropriate engines 3 to 5 for execution or executes those results itself. The events manager 4 ensures that the correct action is carried out in the correct manner at the correct time by the correct object or engine.
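A small sketch of this routing step. The command verbs and the routing table are invented for illustration; only the idea that each result is sent to whichever engine understands it comes from the text:

```python
# Sketch of result routing: fired results are dispatched to the engine that
# understands them. Command names and the routing table are made up.
def conversation_engine(cmd):  print(f"conversation engine handles: {cmd}")
def renderer(cmd):             print(f"renderer handles: {cmd}")
def events_manager_self(cmd):  print(f"events manager executes: {cmd}")

ROUTING = {                       # assumed mapping from command verb to engine
    "SAY":        conversation_engine,
    "PLAYSCRIPT": events_manager_self,
    "SHOWSCENE":  renderer,
}

def route_results(fired_results):
    for command in fired_results:
        verb = command.split()[0]
        ROUTING.get(verb, events_manager_self)(command)

route_results(["SAY hello", "SHOWSCENE room3", "PLAYSCRIPT wince_start wince_end"])
```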
  • Each object has an associated script file which describes in detail the object's graphics, behavioural attributes, and initialisation and cleanup functionality.
  • the basic object type is an "artefact" 8. This divides into "container" 9 and camera 10 categories.
  • a container 9 could be a room 11 or a person 12, and the latter could be a player 13.
  • a person is represented as a container 9 plus any person-specific functionality.
  • the container 9 in turn inherits much from artefact 8, notably the basic ability to have a name, a location, and be associated with gameplan events and 3D graphics.
  • the container 9 adds the functionality of knowing how to store other objects from the gameworld within itself.
  • Room and Player have some common functionality and attributes and some distinctive functionality and attributes. For example, only rooms have doors, and this is an integral part of being a room. Despite this, both people and rooms can contain things (think of a person containing something as a person 'carrying' something). So at this point it is logical to have two separate objects for Room 11 and Person 12, both of which inherit the functionality of containers but redefine their own distinct functionality.
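A hedged sketch of the Figure 2 hierarchy: an artefact with a name, location, linked events and graphics; a container that can store other objects; Room and Person specialising the container; Player specialising Person; Camera as a separate artefact. The attribute names are illustrative:

```python
# Sketch of the object hierarchy of Figure 2. Attribute names are illustrative.
class Artefact:
    def __init__(self, name, location=None):
        self.name = name
        self.location = location
        self.events = []          # gameplan events linked to the object
        self.graphics = None      # associated 3D graphics

class Container(Artefact):
    def __init__(self, name, location=None):
        super().__init__(name, location)
        self.contents = []        # knows how to store other objects

    def store(self, obj):
        self.contents.append(obj)
        obj.location = self

class Room(Container):
    def __init__(self, name):
        super().__init__(name)
        self.doors = []           # only rooms have doors

class Person(Container):
    pass                          # a person "contains" what it is carrying

class Player(Person):
    pass

class Camera(Artefact):
    pass

office = Room("MD's office")
bob = Person("Bob")
office.store(bob)                 # the room contains Bob
bob.store(Artefact("dynamite"))   # Bob carries the dynamite
print(office.contents[0].name, bob.contents[0].name)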
  • Cameras 10 can be set anywhere in the 3D environment to capture a view of a scene.
  • a stand-alone camera 10 may provide a view from vantage points; a moving camera may be directed using movie commands; a POV camera may be fixed to a moving object such as a player for first-person or third-person tracking. Multiple cameras may be used; gameplan event results and movie commands allow complex cinematography.
  • the root object 14 is an abstract object which is used to link a camera 15 and its associated light 16 with a table 17 supporting a vase 18 which contains flowers 19.
  • the links between these objects mean that they can behave as a group; for example if the table 17 moves the vase 18 moves with it, as do the flowers 19.
  • the table 17 will always be stationary with respect to the depicted camera 15 (which in turn will be lit in an unchanging manner by light 16) but in principle the table can be viewed by another camera (not shown).
  • a result from an event can be applied to the root object 14 and will then automatically affect all the objects 14 to 19 depending from it. In principle a result of an event could act on the vase 18, for example, to knock it off the table 17, in which case the flowers 19 would be knocked off too.
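A minimal sketch of this grouping behaviour from Figure 3: applying a move to a parent node also moves everything linked beneath it. The node class, positions and link structure are simplifications assumed for the example:

```python
# Sketch of the Figure 3 grouping: a move applied to a parent propagates down
# the links to everything depending from it. Positions are simplified.
class Node:
    def __init__(self, name, x=0.0, y=0.0, z=0.0):
        self.name = name
        self.position = [x, y, z]
        self.children = []

    def link(self, child):
        self.children.append(child)
        return child

    def move(self, dx, dy, dz):
        self.position = [p + d for p, d in zip(self.position, (dx, dy, dz))]
        for child in self.children:      # the effect propagates down the links
            child.move(dx, dy, dz)

root = Node("root")
camera = root.link(Node("camera"))
light = camera.link(Node("light"))
table = root.link(Node("table"))
vase = table.link(Node("vase"))
flowers = vase.link(Node("flowers"))

table.move(1.0, 0.0, 0.0)                # the vase and flowers move with the table
print(vase.position, flowers.position, camera.position)
```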
  • referring to Figure 4, which shows the preferred system from the point of view of the creator of the game (the gameplanner), the user populates the data structure 7 with objects and events via an interface 6 which preferably includes a graphical user interface with a pointing device as well as a keyboard.
  • the objects are stored hierarchically as described above with reference to Figure 3 and the events held in the system represent the gameplan.
  • Each object has one script file associated with it in addition to the object attributes entered into the object's property pages.
  • Script files are held as text data separate from the program. They must have the same name as the object they refer to, they must reside in the SCRIPTS directory and must only be edited with a plain-text editor.
  • a script file is divided into several sections, with certain sections reserved to do certain things and custom sections designed by the gameplanners.
  • the sections are as follows:
  • Script file sections may contain behaviour attributes, event results, movie commands or internal script control commands.
  • Script file sections may be generic for all objects, such as initialisation (describing initial settings); common for a particular type of object, e.g. walking for players; or specific to a particular individual object.
  • Custom sections may be designed for the object and are then referenced from within the script or from gameplan event results. This allows for the creation of object-specific behaviours and attributes under the control of the gameplan or object script.
  • Script file sections may contain movie commands or event results, which allow for 3D manipulation of the environment or more complex gameplan related actions to be triggered from within a script.
  • Scripts can be called from the gameplan by two gameplan event results: Playscript and Runscript.
  • the former will execute all the lines of code between a specified start-marker and end-marker in one sweep.
  • Runscript will execute the first line of a script (presumed to be a SELECT), then execute one line per update. This gives the gameplanner flexible access to the lowest-level properties of an object as the game progresses.
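A hedged sketch of the two results just described: Playscript executes every line between a start marker and an end marker in one sweep, while Runscript executes one line per update. The script text, marker names and line format are invented for illustration:

```python
# Sketch of Playscript (all lines between the markers in one sweep) versus
# Runscript (one line per update). Script text and markers are invented.
SCRIPT = [
    "WINCE_START",          # start marker
    "RAISE eyebrows",
    "COMPRESS lips",
    "WINCE_END",            # end marker
]

def execute(line):
    print("exec:", line)

def playscript(script, start, end):
    for line in script[script.index(start) + 1:script.index(end)]:
        execute(line)                  # one sweep

class Runscript:
    def __init__(self, script, start, end):
        self.lines = iter(script[script.index(start) + 1:script.index(end)])

    def update(self):                  # one line per update
        line = next(self.lines, None)
        if line is not None:
            execute(line)

playscript(SCRIPT, "WINCE_START", "WINCE_END")
running = Runscript(SCRIPT, "WINCE_START", "WINCE_END")
running.update()
running.update()
```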
  • Behaviour attributes are flags set in the script file which specify attributes of object behaviour. Whilst the same effects could be duplicated with events, the repetition of creating these events would drive the writers to desperation. In cases of common default behaviours or available behaviours such as blinking, actual code has been provided within the system which can be triggered by the appropriate behaviour attribute flag in the script file, in this case TYPEBLINKER.
  • TYPEBLINKER: Player avatar has the ability to blink, and does so according to the initialised or currently set behavioural parameters.
  • AAFPLAYER: Specifies the script-playing object to be the player's object.
  • ISSTATIC: Specifies that the script-playing object will not move, so it only needs to be rendered once when in a third-person point of view.
  • An object of this type will randomly glide around the screen, bouncing off other objects and the sides of the screen. Implemented for screen savers.
  • TYPE BLINKER1 [N]: Specifies that the object will blink randomly. If no value is supplied then a default value is used.
  • 'blinking' can be any random movement as defined by the actual section in the script file, such as the raising of eyebrows or the swinging of a tail.
  • TYPE NOBLINKER1: Stops an object from blinking. Same as TYPE BLINKER1 0.
  • FLOOR: Specifies the object is the floor of a room. If the player collides with the floor, the collision is ignored.
  • ADDSCRIPT start end: Adds the new script part, as specified by the start and end identifiers, to the controlling object's list of active script parts.
  • KILLSCRIPT start: Removes the script specified by the start identifier from the controlling object's list of active script parts.
  • command [N]: Repeats the specified command N times; not compatible with command +. command +: Executes the command, then the command on the next line; not compatible with [N], and should be used with caution when nearing the end of a script section.
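A small sketch of how a behaviour attribute flag read from a script file might trigger code built into the system, in the spirit of TYPEBLINKER above. The handlers, parsing and parameter meaning are placeholders, not the system's actual code:

```python
# Sketch of behaviour attribute flags: a flag read from a script file triggers
# built-in code (e.g. blinking). Handlers and parsing are placeholders.
def enable_blinking(obj, interval=3):
    obj["blink_interval"] = interval
    print(f"{obj['name']}: blinks roughly every {interval} updates")

def disable_blinking(obj):
    obj["blink_interval"] = None
    print(f"{obj['name']}: blinking stopped")

FLAG_HANDLERS = {
    "TYPEBLINKER": enable_blinking,
    "TYPENOBLINKER": disable_blinking,
}

def apply_script_flags(obj, script_lines):
    for line in script_lines:
        parts = line.split()
        handler = FLAG_HANDLERS.get(parts[0])
        if handler:
            handler(obj, *map(int, parts[1:]))

bob = {"name": "Bob"}
apply_script_flags(bob, ["TYPEBLINKER 4", "ISSTATIC"])   # flags without handlers are skipped here
```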
  • a movie engine 20 is provided which is operated by movie commands from interface 6 and runs corresponding pre-stored animation sequences, which could have been captured previously from a motion capture device 22.
  • Non-programmers can perform actions in 3D environments using English-like commands.
  • the required movie commands can be executed from object script files or from event results as generated by the events manager described above.
  • the motion capture can be as described in our co-pending earlier application referred to above.
  • a lip synch engine 21 generates timing files from sampled speech (.WAV) files (which are in turn associated with sentences in the previously described conversation net) in response to certain types of sound waveform associated with specific lip movements, such as the sounds A, EE, O, OO and hard sounds such as T. These timing files are used to call appropriate lip animation routines.
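A hedged sketch of how such a timing file might drive lip animation routines. The timing-file format (time in seconds plus a sound class per line) and the routine names are invented; the real engine derives the entries from the .WAV waveform itself:

```python
# Sketch of lip synchronisation driven by a timing file. The file format and
# routine names are assumed for illustration.
TIMING_FILE = """0.00 A
0.25 EE
0.55 O
0.80 T
"""

LIP_ROUTINES = {
    "A":  "mouth_open_wide",
    "EE": "mouth_stretched",
    "O":  "mouth_rounded",
    "OO": "mouth_small_round",
    "T":  "mouth_closed_hard",
}

def play_lip_sync(timing_text):
    for line in timing_text.strip().splitlines():
        stamp, sound = line.split()
        routine = LIP_ROUTINES.get(sound, "mouth_neutral")
        print(f"t={float(stamp):.2f}s  call lip animation routine: {routine}")

play_lip_sync(TIMING_FILE)
```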
  • An alternative preferred embodiment of the present invention which is described below, is similar to the above described embodiment except for the different way in which the events manager 4 operates. Accordingly, to prevent unnecessary repetition, only the differences between the two embodiments are described below.
  • the events manager 4 polls each object to determine whether any event associated with the object has been fired.
  • the events manager 4 incorporates a state machine map (not shown) for each object to establish how each object is to react to changes in its environmental conditions.
  • the state machine map establishes a set of different states in which the object can be.
  • the object resides in one state at a time and moves to a different state by the action of an event on the object. Movement from one state to another is initiated by a predetermined change in a set of environmental conditions occurring locally to the object. Other non-predetermined changes in the local conditions have no effect on the object and the object remains in its current state. This is best illustrated by way of the following example:
  • New state: Bob is having a conversation on the telephone.
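A minimal sketch of such a per-object state machine map. The states, events and transitions are illustrative, echoing the telephone example; only the rule that non-predetermined changes leave the object in its current state comes from the text:

```python
# Sketch of a per-object state machine map. Only transitions listed in the map
# change the object's state; any other local event leaves it unchanged.
STATE_MAP = {
    ("idle", "telephone rings"): "having a conversation on the telephone",
    ("having a conversation on the telephone", "call ends"): "idle",
}

class Avatar:
    def __init__(self, name, state="idle"):
        self.name = name
        self.state = state

    def react(self, event):
        new_state = STATE_MAP.get((self.state, event))
        if new_state is not None:          # predetermined change: move to the new state
            self.state = new_state
        # non-predetermined changes are ignored and the current state is kept

bob = Avatar("Bob")
bob.react("door slams")          # no effect
bob.react("telephone rings")     # Bob is having a conversation on the telephone
print(bob.state)
```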
  • the present invention relates to a method and apparatus for generating moving characters. More particularly the invention relates, in preferred embodiments, to a method and apparatus (particularly a programmed personal computer) for creating real-time-rendered virtual actors (avatars).
  • An object of the present invention is to overcome or alleviate some or all of the above disadvantages.
  • the invention provides a method of generating a moving character for a display, comprising the steps of i) providing:
  • processing means arranged to combine said representations a), b) and c) to generate a combined representation of the moving character.
  • the present invention enables the creation of fully three-dimensional characters that are capable of moving and behaving in a convincingly human fashion, interacting with their environment on a data-driven level, supporting a variety of traditional photographic and cinematic techniques, such as moving from long-shots to close-ups without break or transition, and which provide lip-synchronisation and sufficient facial expression to enable a wide range of conversation and mood, all within the existing constraints of real-time rendering on PCs.
  • the rapidly changing area of the character conveys speech and facial expression
  • emulation of a range of phonemes and a range of eye and forehead movements is provided for. It has been found that by subdividing the face into upper and lower areas and substituting texture maps accordingly, an acceptably wide range of expressions and mouth movements can be conveyed in an economical fashion.
  • the two-dimensional representations c) are for example of eye, mouth and preferably also forehead regions of the face. These are preferably photographic in origin. Consequently even slight, simple movements in the head convey a feeling of realism, as the viewer reads their own visual experience into the character in a way that cannot be done with any other illustrative technique.
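A hedged sketch of the texture substitution idea: the face is divided into frequently changing regions (forehead, eyes, mouth) and the two-dimensional map for each region is swapped per frame to convey expression and mouth movement. The region names, map filenames and selection logic are assumptions for illustration:

```python
# Sketch of texture substitution for the frequently changing facial regions.
# Region names and map filenames are illustrative only.
EXPRESSION_LIBRARY = {
    "forehead": {"neutral": "forehead_neutral.bmp", "frown": "forehead_frown.bmp"},
    "eyes":     {"ahead": "eyes_ahead.bmp",         "left":  "eyes_left.bmp"},
    "mouth":    {"closed": "mouth_closed.bmp",      "EE":    "mouth_ee.bmp"},
}

def compose_face(state):
    """Return the texture map chosen for each frequently changing region."""
    return {region: EXPRESSION_LIBRARY[region][choice]
            for region, choice in state.items()}

# e.g. the game requests a frown with eyes left and lips compressed
frame_textures = compose_face({"forehead": "frown", "eyes": "left", "mouth": "closed"})
for region, texture in frame_textures.items():
    print(f"map {texture} onto the {region} region of the head geometry")
```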
  • Photograph, in 35 mm format, the subject's head from front and side viewpoints, for geometry creation, placing a physical right-angled calibration target (a pre-marked set square) horizontally within the photographed image.
  • Photograph the subject head-on running through a fixed list of phonemes and facial expressions.
  • a suitable 3D engine, e.g. MUMM-E, runs under, for instance, Argonaut's BRender real-time 3D engine.
  • the result is believably human pseudo-video rendered on the fly at frame rates of 12 to 16 frames per second on an ordinary Pentium PC.
  • Figure 1 shows the generation of a structure representation mapped onto a surface representation of a human character shown A) in front view and B) in profile;
  • Figure 2 shows the above character in front view with the frequently changing portions (forehead, eyes and mouth) used for facial expression shown in different states, and
  • Figure 3 is a diagrammatic representation of the apparatus used for the capture of motion of the body of the above character.
  • Lens: the focal length of the lens should ideally be around 100 mm; this allows perspective flattening effects to be made use of.
  • Lighting should be diffuse and evenly balanced across both sides of the face; the most pleasing results have been obtained with warmer images. Photographing subjects under tungsten lighting using 100-400 ASA daylight-balanced film has proven effective.
  • Triangulation and calibration: the subject is photographed with a calibration target S as shown in Figures 1A and 1B.
  • the calibration target works well when it runs under the subject's chin, with about 200mm (8 inches) vertical separation.
  • In order for the face to be mirrored correctly, the front of the target must be parallel to the face. The Z axis of the target should move back over the subject's left shoulder.
  • the resulting photographs 1 are subsequently used to generate a three-dimensional representation of the surface of the subject, onto which is mapped a polygonal model 2 of the underlying structure of the subject which, in use, is manipulated in three dimensions by the computer running the finished video game. This process will be explained in detail subsequently.
  • V Angles: best results are obtained with photographs taken at right angles, with profile (0 degrees) and full-face (90 degrees). The profile shot should cover the entire head.
  • VI Mouth movements: the following movements and expressions should be included:
  • Typical expressions are illustrated in Figures 2A and 2B.
  • the forehead, eyes and mouth regions 3, 4 and 5 are recorded two-dimensionally and their different states are stored for use by the finished game program.
  • Such use can involve appropriate combinations of states (e.g. frown, eyes left, lips compressed) according to the game play, e.g. in response to commands from the user of the game.
  • V Next, the hairline should be followed down to the base of the left ear, and along the jaw. Again, 10-12 points are likely to be needed to keep this line smooth. This line should join the profile at the bottom of the chin.
  • the first complex component is the eye. Plot this as a flat plane, with a point in the centre - on the iris. This can be moved back and forth in the profile view to adjust for curvature. Plot open paths along the eyebrow, and down the eyelid. Connect these with faces.
  • the second complex component is the nose, particularly the underside. Make frequent use of the perspective view tool to ensure that the line of the profile is not distorted. This is easy to do, and results in caricature-like geometry in the final model. Be especially careful that the points under the nose in the two images relate.
  • XIII The next component is the side and back of head. The entire side and back can be extracted from the profile shot if careful. The first step is to start a new project, and then reverse the position of the photographs, so that the full-face is in the left window, and the profile is in the right.
  • XIV Begin by plotting from the centre of the hairline around to the base of the jaw, as close as possible to the ending point of the face's jawline. Finish this line with a point buried within the centre of the head.
  • XV Plot the outline of the ear. Use one, or if necessary two, points inside the ear to form faces in the same way as has been done with the eye.
  • XVI Build transverse ribbons through the hair, from the centre to the left edge of the head.
  • XVIII The final component is the neck and shoulders. The point at which you stop building out from the head will be a decision made in accordance with the structure for the rest of the model. It may be appropriate to limit this to the collar line, or extend the neck further into the throat and clavicle area.
  • XIX Start a new project, with the profile in the left window now, and the fullface in the right hand window.
  • the neck is essentially a cylinder, with sufficient modelling to fit into the base of the head and extend into the head. Three lines of 6 points running around the neck should suffice. Once completed, close faces as before.
  • Mirroring involves a DOS command-line utility called, unsurprisingly, mirror.
  • the syntax for this is very simple: mirror filename.3dp newfilename.3dp. Perform this function on each component.
  • .3dp files are ASCII based text files, with a structure that can be defined as a space-delimited variable. Excel will import this structure, with some effort.
  • V Open the file in Excel. Specify the file type as a text file (with the extensions .txt, .csv and .prn). Add *.3dp to the file extensions list, and select the face file.
  • X At the top of the file is the points database. This gives the values for the equivalent location, in x-y pixel co-ordinates, of the vertices in the left and right pictures. There are four columns (apart from the numbering of the points); they read x-left hand, y-left hand, x-right hand, y-right hand. The column to be concerned with is the x-right hand. Again, locate the mirror portion by noting the discrepancies in values, and identify the number given to the weld line. The same process as with the points database needs to be performed, but in this case the formula is SUM(y-value+y), where y is the value in pixels from the welding line.
  • XII The next step is to convert the 3dp files to 3D Studio files and assign texture vertices. Again a command-line tool, called convert, is used. The syntax is convert filename.3dp -r name of right-hand photograph.bmp matx.tga.
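A hedged sketch of the mirroring step applied to a space-delimited points table of the kind described above (x-left, y-left, x-right, y-right per vertex). The exact .3dp layout is not given here, so the four-column format and the reflection about the weld line are assumptions:

```python
# Sketch of mirroring a space-delimited points table (x-left, y-left, x-right,
# y-right per vertex). The real .3dp files carry more sections than this.
def mirror_points(lines, weld_x):
    """Reflect the x-right column about the weld line at x = weld_x."""
    mirrored = []
    for line in lines:
        x_l, y_l, x_r, y_r = map(float, line.split())
        x_r = weld_x + (weld_x - x_r)        # reflect about the weld line
        mirrored.append(f"{x_l} {y_l} {x_r} {y_r}")
    return mirrored

points = ["10 20 100 22", "12 25 104 27"]
for row in mirror_points(points, weld_x=98.0):
    print(row)
```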
  • XII It may well be necessary to work on the texture map files; the hair may need to be cloned outwards, the same for the neck in the neck texture, and also clean hair off the forehead by cloning.
  • the flic file should then contain all of the images from the original photoshoot of seventeen expressions, and if necessary any in-between frames that are required to smooth the facial movement. From this flic file, a render in 3D Studio can ensure that none of the fluid facial movements exceed the limits of the static facial geometry. For final export into BRender or RenderMorphics, individual frames can be generated.
  • the body of the character can be animated and combined with the head by standard techniques.
  • One way of capturing motion of the body is illustrated in Figure 3, wherein magnets are mounted at the joints of the subject's body 8 and their positions are detected by an electromagnetic position sensor 8 of standard type.
  • the avatar derived from such a technique can be controlled and displayed by a computer game.
  • the invention extends to any novel combination or sub-combination disclosed herein.
  • any photographic representation of the rapidly changing areas of the character may be employed.
  • a method of generating a moving character for a display comprising the steps of i) providing: a) a three-dimensional representation of the structure of the character, the structure representation changing in real time to represent movement of the character, b) a three-dimensional representation of the surface of the character which is mapped onto said structure representation, and c) a two-dimensional representation of frequently changing portions of said surface,
  • said three-dimensional structure representation a) comprises a multiplicity of polygons.
  • step ii) is carried out by a program running in a PC and the character is displayed in real time on the display of the PC.
  • processing means arranged to combine said representations a), b) and c) to generate a combined representation of the moving character.
  • An apparatus as claimed in any preceding claim wherein said three-dimensional structure representation a) comprises coordinates defining a multiplicity of polygons.
  • An apparatus as claimed in any of claims 9 to 15 which is a PC arranged to display the character in real time on its display.
  • An avatar comprising: a) an animated three-dimensional structure representation, b) a three-dimensional surface representation which is mapped onto said structure representation, and c) a two-dimensional representation of frequently changing portions of the surface of the avatar.
  • An avatar is generated by i) providing: a) a three-dimensional representation (2) of the structure of the character, the structure representation changing in real time to represent movement of the character, b) a three-dimensional representation of the surface (1) of the character which is mapped onto said structure representation, and c) a two-dimensional representation of frequently changing portions of said surface, eg the portions used to generate facial expressions and ii) combining said representations a), b) and c) to generate a combined representation of the moving character. Because the frequently changing portions are represented only two-dimensionally, less processing power and ROM is needed to display the avatar.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention concerns an animation system comprising a scheduler (1) which sequentially triggers a conversation engine (3), an events manager (4) and a renderer (5) for each complete frame period. Each of the above modules (3) to (5) reads and writes data in a data structure (7) which contains a dynamically evolving data set comprising hierarchical object data and event data determining the behaviour of the objects. The objects include avatars interacting with a game in a three-dimensional environment. The animation system is capable of managing a non-linear interactive storyline such as those used in interactive computer games.
PCT/GB1998/000372 1997-02-07 1998-02-06 Systeme et procede d'animation WO1998035320A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU62206/98A AU6220698A (en) 1997-02-07 1998-02-06 Animation system and method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB9703016.7A GB9703016D0 (en) 1997-02-07 1997-02-07 Animation system and method
GB9703016.7 1997-02-07

Publications (1)

Publication Number Publication Date
WO1998035320A1 true WO1998035320A1 (fr) 1998-08-13

Family

ID=10807616

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB1998/000372 WO1998035320A1 (fr) 1997-02-07 1998-02-06 Systeme et procede d'animation

Country Status (3)

Country Link
AU (1) AU6220698A (fr)
GB (1) GB9703016D0 (fr)
WO (1) WO1998035320A1 (fr)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000043975A1 (fr) * 1999-01-26 2000-07-27 Microsoft Corporation Systeme de defi virtuel et procede d'enseignement d'une langue
WO2000069244A2 (fr) * 1999-05-14 2000-11-23 Graphic Gems Procede et appareil d'installation d'un monde partage virtuel
GB2351426A (en) * 1999-06-24 2000-12-27 Stephen James Crampton Method and apparatus for the generation of computer graphic representations of individuals
WO2001026058A1 (fr) * 1999-10-07 2001-04-12 Virtools Procede et systeme pour creer, sur une interface graphique, des images animees en trois dimensions, interactives en temps reel
WO2001037221A1 (fr) * 1999-11-16 2001-05-25 Possibleworlds, Inc. Procede et systeme de manipulation d'image
WO2003102720A2 (fr) * 2002-05-31 2003-12-11 Isioux B.V. Procede et systeme destines a produire une animation
US7184047B1 (en) 1996-12-24 2007-02-27 Stephen James Crampton Method and apparatus for the generation of computer graphic representations of individuals
US7554542B1 (en) 1999-11-16 2009-06-30 Possible Worlds, Inc. Image manipulation method and system
US10282897B2 (en) 2017-02-22 2019-05-07 Microsoft Technology Licensing, Llc Automatic generation of three-dimensional entities
CN110841293A (zh) * 2019-10-30 2020-02-28 珠海西山居移动游戏科技有限公司 一种自动动态输出游戏贴图合适度的方法和系统

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4821220A (en) * 1986-07-25 1989-04-11 Tektronix, Inc. System for animating program operation and displaying time-based relationships
EP0597316A2 (fr) * 1992-11-09 1994-05-18 Virtual Prototypes, Inc. Système de simulation par ordinateur et méthode pour spécifier le comportement d'interfaces d'opérateur graphiques
WO1996023280A1 (fr) * 1995-01-25 1996-08-01 University College Of London Modelisation et analyse de systemes d'entites d'objets en trois dimensions
US5596695A (en) * 1991-07-12 1997-01-21 Matsushita Electric Industrial Co., Ltd. Interactive multi-media event-driven inheritable object oriented programming apparatus and method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4821220A (en) * 1986-07-25 1989-04-11 Tektronix, Inc. System for animating program operation and displaying time-based relationships
US5596695A (en) * 1991-07-12 1997-01-21 Matsushita Electric Industrial Co., Ltd. Interactive multi-media event-driven inheritable object oriented programming apparatus and method
EP0597316A2 (fr) * 1992-11-09 1994-05-18 Virtual Prototypes, Inc. Système de simulation par ordinateur et méthode pour spécifier le comportement d'interfaces d'opérateur graphiques
WO1996023280A1 (fr) * 1995-01-25 1996-08-01 University College Of London Modelisation et analyse de systemes d'entites d'objets en trois dimensions

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GAILDRAT V ET AL: "DECLARATIVE SCENES MODELING WITH DYNAMIC LINKS AND DECISION RULES DISTRIBUTED AMONG THE OBJECTS", IFIP TRANSACTIONS B. APPLICATIONS IN TECHNOLOGY, vol. 9, 1 January 1993 (1993-01-01), pages 165 - 178, XP000569107 *
STRAUSS P S ET AL: "AN OBJECT-ORIENTED 3D GRAPHICS TOOLKIT", COMPUTER GRAPHICS, vol. 26, no. 2, 1 July 1992 (1992-07-01), pages 341 - 349, XP000569106 *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7184047B1 (en) 1996-12-24 2007-02-27 Stephen James Crampton Method and apparatus for the generation of computer graphic representations of individuals
US6234802B1 (en) 1999-01-26 2001-05-22 Microsoft Corporation Virtual challenge system and method for teaching a language
WO2000043975A1 (fr) * 1999-01-26 2000-07-27 Microsoft Corporation Systeme de defi virtuel et procede d'enseignement d'une langue
WO2000069244A2 (fr) * 1999-05-14 2000-11-23 Graphic Gems Procede et appareil d'installation d'un monde partage virtuel
WO2000069244A3 (fr) * 1999-05-14 2001-02-01 Graphic Gems Procede et appareil d'installation d'un monde partage virtuel
GB2351426A (en) * 1999-06-24 2000-12-27 Stephen James Crampton Method and apparatus for the generation of computer graphic representations of individuals
WO2001026058A1 (fr) * 1999-10-07 2001-04-12 Virtools Procede et systeme pour creer, sur une interface graphique, des images animees en trois dimensions, interactives en temps reel
US6972765B1 (en) 1999-10-07 2005-12-06 Virtools Method and a system for producing, on a graphic interface, three-dimensional animated images, interactive in real time
FR2799562A1 (fr) * 1999-10-07 2001-04-13 Nemo Procede pour creer et animer de maniere interactive, sur une interface graphique, des images en trois dimensions
WO2001037221A1 (fr) * 1999-11-16 2001-05-25 Possibleworlds, Inc. Procede et systeme de manipulation d'image
US7554542B1 (en) 1999-11-16 2009-06-30 Possible Worlds, Inc. Image manipulation method and system
WO2003102720A2 (fr) * 2002-05-31 2003-12-11 Isioux B.V. Procede et systeme destines a produire une animation
NL1020733C2 (nl) * 2002-05-31 2004-01-13 Isioux B V Werkwijze en systeem voor het vervaardigen van een animatie, alsmede een computerprogramma voor het vervaardigen en het afspelen van een animatie gemaakt volgens de werkwijze.
WO2003102720A3 (fr) * 2002-05-31 2004-10-21 Isioux B V Procede et systeme destines a produire une animation
US10282897B2 (en) 2017-02-22 2019-05-07 Microsoft Technology Licensing, Llc Automatic generation of three-dimensional entities
CN110841293A (zh) * 2019-10-30 2020-02-28 珠海西山居移动游戏科技有限公司 一种自动动态输出游戏贴图合适度的方法和系统
CN110841293B (zh) * 2019-10-30 2024-04-26 珠海西山居数字科技有限公司 一种自动动态输出游戏贴图合适度的方法和系统

Also Published As

Publication number Publication date
AU6220698A (en) 1998-08-26
GB9703016D0 (en) 1997-04-02

Similar Documents

Publication Publication Date Title
US9639974B2 (en) Image transformation systems and methods
EP0807902A2 (fr) Méthode et appareil pour la génération de caractères mobiles
AU718608B2 (en) Programmable computer graphic objects
CN107274466A (zh) 一种实时全身动作捕捉的方法、装置和系统
EP1059614A2 (fr) Système et procédé de génération d'animations 3d
CN111724457A (zh) 基于ue4的真实感虚拟人多模态交互实现方法
WO1998035320A1 (fr) Systeme et procede d'animation
Maraffi Maya character creation: modeling and animation controls
CN1628327B (zh) 自动三维建模系统和方法
Liu An analysis of the current and future state of 3D facial animation techniques and systems
Perng et al. Image talk: a real time synthetic talking head using one single image with chinese text-to-speech capability
Ward Game character development with maya
Ballin et al. Personal virtual humans—inhabiting the TalkZone and beyond
Doroski Thoughts of spirits in madness: Virtual production animation and digital technologies for the expansion of independent storytelling
Bibliowicz An automated rigging system for facial animation
Morishima Dive into the Movie
Santos Virtual Avatars: creating expressive embodied characters for virtual reality
CN117793409A (zh) 视频的生成方法及装置、电子设备和可读存储介质
AU2010221494B2 (en) Image transformation systems and methods
Beskow et al. Expressive Robot Performance Based on Facial Motion Capture.
Magnenat-Thalmann Living in both the real and virtual worlds
Stüvel et al. Mass population: Plausible and practical crowd simulation
Erol Modeling and Animating Personalized Faces
Zhou Application of 3D facial animation techniques for Chinese opera
Magnenat-Thalmann et al. Real-time individualized virtual humans

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH GM GW HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 1998534002

Format of ref document f/p: F

122 Ep: pct application non-entry in european phase