WO2005091198A1 - Method and system for on-screen navigation of digital characters or the likes - Google Patents


Info

Publication number
WO2005091198A1
WO2005091198A1 (PCT/CA2005/000426)
Authority
WO
WIPO (PCT)
Prior art keywords
digital
entity
cell
recited
movable
Application number
PCT/CA2005/000426
Other languages
French (fr)
Inventor
Paul Kruszewski
Greg Labute
Cory Kumm
Fred Dorosh
Original Assignee
Bgt Biographic Technologies Inc.
Application filed by Bgt Biographic Technologies Inc.
Priority to JP2007503169A (published as JP2007529796A)
Priority to EP05714659A (published as EP1725966A4)
Priority to CA002558971A (published as CA2558971A1)
Publication of WO2005091198A1

Classifications

    • A63F13/10
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment, by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/45 Controlling the progress of the video game
    • A63F13/577 Simulating properties, behaviour or motion of objects in the game world using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
    • A63F2300/643 Methods for processing data by generating or executing the game program for computing dynamical parameters of game objects by determining the impact between objects, e.g. collision detection
    • A63F2300/6607 Methods for processing data by generating or executing the game program for rendering three dimensional images for animating game characters, e.g. skeleton kinematics

Definitions

  • the present invention relates to the digital entertainment industry and to computer simulation. More specifically, the present invention concerns a method and system for on-screen navigation of digital objects or characters.
  • FIG. 1 of the appended drawings illustrates a generic 3D application from the prior art which can be in the form of an animation package, a video/computer game, a trainer or a simulator for example.
  • the 3D application shares the basic high-level architecture of objects, which can be more generally referred to as digital entities, being manipulated by controllers via input devices or physics and artificial intelligence (AI) systems and made real to the user by synthesizers, including visual rendering and audio.
  • One level deeper, 3D applications are typically broken down into two components: the simulator and the image generator. As illustrated in Figures 2 and 3, the simulator takes in many inputs from both human operators and CGFs; the simulator then modifies the world database accordingly and outputs the changes to the image generator for visualization.
  • a typical simulation/animation loop structure is illustrated in Figure 4.
  • the world state manager first loads up the initialisation data from the world database. For each frame/tick of the simulation, the world state manager updates the controllers; the controllers act accordingly and send back object updates to the manager.
  • the world state manager then resolves all the object updates into a new world state in the world database (WDB) and passes this to the image generator (IG).
  • the IG updates the characters' body limb positions and other objects and renders them out to the screen.
  • Game AI makes games more immersive. Typically, game AI is used in the following situations:
  • NPCs: intelligent non-player characters;
  • PPCs: non-player characters;
  • AI can be used to fill sporting arenas with animated spectators or to add a flock of bats to a dungeon scene;
  • to create opponents when there are none. Many games are designed for two or more players; however, if there is no one to play against, intelligent AI opponents are needed; or
  • to create team members when there are not enough. Some games require team play, and game AI can fill the gap when there are not enough players.
  • CGFs: Computer Generated Forces
  • SAFs: Semi-Automated Forces
  • Vehicle drivers and pilots: although vehicles have very complex models for physics (e.g., helicopters will wobble realistically as they bank into turns and tanks will bounce as they jump ditches) and weapon/communication systems (e.g., line-of-sight radios will not work through hills), they tend to have simplistic line-of-sight navigation systems that fail in the 3D concrete canyons of MOUT (e.g., helicopters fly straight through skyscrapers rather than around them, and tanks get confused and stuck in the twisty, garbage-filled streets of the third world).
  • AI can be used to simulate the brain of the human driver in the vehicle (with or without the actual body being simulated).
  • Groups of individual doctrinal combatants: United States government-created SAFs of groups of individual doctrinal combatants are limited in their usefulness to MS&T applications, since they are restricted to US installations only and are unable to navigate properly in urban environments. While aggregate SAFs operate on a strategic and often abstract level, individual combatant simulators operate on a tactical and often 3D physical level. Groups of individual irregular combatants: by definition, irregular combatants such as armormen and terrorists are non-doctrinal, and hence it is difficult to express their personalities and tactics with a traditional SAF architecture. Crowds of individual non-combatants (clutter): one of the most difficult restrictions of MOUT is how to conduct military operations in an environment that is populated with non-combatants. These large civilian populations can affect a mission in ways ranging from acting merely as operational "clutter" to actually affecting the outcome of the battle.
  • One of the more specific aspects of game AI and MS&T is real-time intelligent navigation of agents per se, for example in the context of crowd simulation.
  • Reece develops a system for modelling crowds within the Dismounted Infantry Semi-Automated Forces (DISAF) system (Reece,
  • Movement is fundamental to all entities in a simulation whether bipedal, wheeled, tracked or aerial.
  • an AI system should allow an entity to navigate in a realistic fashion from point X to point Y.
  • traffic rules, such as staying in lane and stopping at traffic lights;
  • military operations: avoiding roadblocks and trying not to run over civilian bystanders.
  • Intelligent navigation can be broken down into two basic levels: dynamically finding a path from X to Y and avoiding dynamic obstacles along that path.
  • An example of agent navigation is following a pre- determined path (e.g., guards on a patrol path).
  • a predetermined path is an ordered set of waypoints that digital characters may be instructed to follow.
  • a path around a racetrack would consist of waypoints at the turns of the track.
  • Path following consists of locating the nearest waypoint on the path, navigating to it by direct line of sight and then navigating to the next point on the path. The character looks for the next waypoint when it has arrived at the current waypoint (i.e., within the waypoint's bounding sphere).
  • Figure 5 shows how a path can be used to instruct a character to patrol an area in a certain way. It has been found that path following works well as a navigation mechanism when both the start and the destination are known and the path itself is explicitly known.
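By way of illustration only, the following minimal Python sketch shows path following as just described: the character first seeks the nearest waypoint, then advances to the next one once it enters the current waypoint's bounding sphere. The function names, tuple representation of points, and fixed step size are illustrative assumptions, not part of the present disclosure.

```python
import math

def follow_path(position, waypoints, radius, step):
    """Yield successive positions along an ordered set of waypoints."""
    # Locate the nearest waypoint on the path (assumed starting rule).
    idx = min(range(len(waypoints)),
              key=lambda i: math.dist(position, waypoints[i]))
    while idx < len(waypoints):
        target = waypoints[idx]
        d = math.dist(position, target)
        if d <= radius:            # arrived: inside the waypoint's bounding sphere
            idx += 1               # look for the next waypoint on the path
            continue
        t = min(step / d, 1.0)     # move by direct line of sight toward the target
        position = tuple(p + t * (q - p) for p, q in zip(position, target))
        yield position

# Example: patrol the corners of a square, as on a guard patrol path.
for pos in follow_path((0.0, 0.0), [(10, 0), (10, 10), (0, 10), (0, 0)],
                       radius=0.5, step=1.0):
    pass  # each pos is one simulation tick of movement
```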
  • An object of the present invention is therefore to provide an improved navigation method for a digital entity in a digital world.
  • Another object of the invention is to provide a method for automatically moving a digital entity on-screen from start to end points.
  • a method in a computer system for moving at least one digital entity on-screen from a starting point to an end point in a digital world, comprising: i) providing respective positions of obstacles for the at least one movable digital entity in the digital world and defining at least portions of the digital world without obstacles as reachable space for the at least one movable digital entity; ii) creating a navigation mesh for the at least one movable digital entity by dividing the reachable space into at least one convex cell; iii) locating a start cell and an end cell among the at least one convex cell including respectively the start and end points; and iv) verifying whether the start cell corresponds to the end cell; if the start cell corresponds to the end cell, then: iv)a) moving the at least one movable digital entity from the starting point to the end point; if the start cell does not correspond to the end cell, then iv)b) i) determining a sequence
  • a system for moving a digital entity on-screen from starting to end points in a digital world comprising: a world database for storing information about the digital world and for providing respective positions of obstacles for the movable digital entity in the digital world; a navigation module i) for defining at least portions of the digital world without obstacles as reachable space for the movable digital entity; ii) for creating a navigation mesh for the movable digital entity by dividing the reachable space into at least one convex cell; iii) for locating a start cell and an end cell among the at least one convex cell including respectively the start and end points; and iv) for verifying whether the start cell corresponds to the end cell; and if the start cell does not correspond to the end cell, for further v) determining a sequence of cells among the at least one convex cell from the start cell to the end cell, and vi) determining at least one intermediary point located on a respective boundary between consecutive cells in the
  • Figure 1, which is labeled "prior art", is a block diagram illustrating the first level of a generic three-dimensional (3D) application;
  • Figure 2, which is labeled "prior art", is a block diagram illustrating the second level of the generic 3D application from Figure 1;
  • Figure 3, which is labeled "prior art", is an expanded view of the block diagram from Figure 2;
  • Figure 4, which is labeled "prior art", is a flowchart illustrating the flow of data from the generic 3D application from Figure 1 to the image generator part of the 3D application from Figure 1;
  • Figure 6 is a block diagram illustrating a system for on-screen animation of digital entities including a navigation module embodying a system for moving a digital entity on-screen from starting to end points in a digital world according to an illustrative embodiment of the present invention;
  • Figure 7 is a flowchart illustrating a method for moving a digital entity on-screen from starting to end points in a digital world according to an illustrative embodiment of the present invention;
  • Figure 8 is a schematic view illustrating a two-dimensional barrier used with the method of Figure 7;
  • Figure 9 is a schematic view illustrating a three-dimensional barrier used with the method of Figure 7;
  • Figure 10 is a schematic view illustrating a co-ordinate system used with the method of Figure 7;
  • Figure 11 is a top plan schematic view of a two-dimensional world in the form of a one-floor building according to a first example of the reachable space for a specific movable digital entity according to the method from Figure 7;
  • Figure 12 is a top plan schematic view of the one-floor building from Figure 11 illustrating a navigation mesh created through the method from Figure 7;
  • Figure 13 is a schematic view of a connectivity graph obtained from the navigation mesh from Figure 12;
  • Figure 14 is a top plan schematic view of a two-dimensional world according to a second example of the reachable space for a specific movable digital entity according to the method from Figure 7;
  • Figure 15 is a top plan schematic view of the world from Figure 14 illustrating a navigation mesh created through the method from Figure 7;
  • Figure 16 is a top plan schematic view similar to Figure 15, illustrating steps from the method from Figure 7;
  • Figure 17 is a top plan schematic view similar to Figure 15, illustrating a path resulting from the method from Figure 7;
  • Figure 18 is a top plan schematic view similar to Figure 17, illustrating a first alternative path to the path illustrated in Figure 17, resulting from the blocking of a first passage;
  • Figure 19 is a top plan schematic view similar to Figure 18, illustrating a second alternative path to the path illustrated in Figures 17 and 18, further resulting from the blocking of a second passage;
  • Figure 20 is a top plan schematic view similar to Figure 17, illustrating a third alternative path to the path illustrated in Figure 17, resulting from new starting and end points;
  • Figure 21 is a top plan schematic view similar to Figure 20, illustrating an alternative path to the path illustrated in Figure 20, resulting from doubling the width of the movable digital entity;
  • Figure 22 is a perspective view of the output of a floor plan generator on a small city part of a simulator;
  • Figure 23 is a perspective view illustrating the navigation mesh created from the floor plan generator from Figure 22 using the method from Figure 7;
  • Figure 24 is a perspective view of a digital world in the form of a city street;
  • Figure 25 is a perspective view of the city street from Figure 24, illustrating the navigation mesh creation step according to the method from Figure 7, including the use of blind data to characterize the resulting cells;
  • Figure 26 is a cut-out perspective view of a digital world in the form of a building;
  • Figure 27 is a perspective view of the navigation mesh resulting from the building from Figure 26 using the method from Figure 7;
  • Figure 28 is a flowchart of a collision avoidance method for a digital entity moving on-screen from starting to end points in a digital world according to a specific illustrative embodiment of the present invention;
  • Figure 29 is a perspective view of an entity in a 3D application, in the form of a character, illustrating the character's sensors according to the present invention;
  • Figure 30 is a top plan view of an entity in a digital world illustrating the entity's vision sensor according to the present invention, and more specifically illustrating the field of view provided by the sensor;
  • Figure 31 is a perspective view of an entity in a digital world similar to Figure 30, illustrating the selection of a sub-path to avoid obstacles according to a specific embodiment of the method from Figure 7;
  • Figures 32A-32C are schematic views illustrating avoidance strategies (Figures 32B-32C) that can be used by a movable digital entity according to a specific embodiment of the method from Figure 7 in the case of a stationary obstacle (Figure 32A);
  • Figures 33A-33C are schematic views illustrating avoidance strategies (Figures 33B-33C) that can be used by a movable digital entity according to a specific embodiment of the method from Figure 7 in the case of an incoming obstacle (Figure 33A);
  • Figures 34A-34C are schematic views illustrating avoidance strategies (Figures 34B-34C) that can be used by a movable digital entity according to a specific embodiment of the method from Figure 7 in the case of an outgoing obstacle (Figure 34A);
  • Figures 35A-35C are schematic views illustrating avoidance strategies (Figures 35B-35C) that can be used by a movable digital entity according to a specific embodiment of the method from Figure 7 in the case of a sideswiping obstacle (Figure 35A); and
  • Figures 36A-36E are schematic views illustrating paths for simultaneously moving five movable digital entities using the method from Figure 7 and applying a group-based movement modifier.
  • a system 10 for on-screen animation of digital image entities, including a navigation module 12 embodying a method for moving a digital entity on-screen from starting to end points in a digital world according to an illustrative embodiment of the present invention, will now be described with reference to Figure 6.
  • the system 10 comprises a simulator 14, a world database (WDB) 16 coupled to the simulator 14, a plurality of image generators (IG) 18 (three shown) coupled to both the world DB 16 and to the simulator 14, a navigation module 12 according to an illustrative embodiment of the present invention coupled to the simulator 14, a decision-making module, also coupled to the simulator 14, and a plurality (three shown) of animation control modules, each coupled to a respective IG 18.
  • WDB: world database
  • IG: image generators
  • the number of IG 18 may of course vary depending on the application. For example, in the case wherein the system 10 is embodied in a 3D animation application, the number of IG 18 may affect the rendering time.
  • the simulator 14 and IG 18 may be in the form of a single computer.
  • the world database 16 is stored on any suitable memory means, such as, but not limited to, a hard drive, a dvd or cd-rom disk to be read on a corresponding drive, or a random-access memory part of the computer 14.
  • the simulator 14 and IG 18 are in the form of computers or of any processing machines provided with processing units, which are programmed with instructions for animating, simulating or gaming as will be explained hereinbelow in more detail.
  • the simulator 14, IG 18 and world DB 16 can be remotely coupled via a computer network (not shown) such as the Internet.
  • the simulator 14 can take another form such as a game engine or a 3D animation system.
  • the modules 12, 20 and 22 are in the form of sub-routines or dedicated instructions programmed in the simulator 14 for example.
  • the characteristics and functions of the modules 20, 22 and more specifically of module 12 will become more apparent upon reading the following non- restrictive description of a method 100 for moving a digital entity on-screen from a starting point to an end point in a digital world according to an illustrative embodiment of the present invention.
  • the method 100 which is illustrated in Figure 7, comprises the following steps:
  • 102 - providing respective positions of obstacles for the movable digital entity in the digital world and defining at least portions of the digital world without obstacles as reachable space for the movable digital entity;
  • 104 - creating a navigation mesh for the movable digital entity by dividing the reachable space into convex cells;
  • 106 - locating a starting cell and an end cell among the convex cells including respectively the starting and end points;
  • 108 - verifying whether the starting cell corresponds to the end cell; if the starting cell corresponds to the end cell, then:
  • 110 - moving the digital entity from the starting point to the end point and stopping the method; if the starting cell does not correspond to the end cell, then:
  • 112 - determining a sequence of cells among the convex cells from the starting cell to the end cell;
  • 114 - determining intermediary points located on a respective boundary between consecutive cells in the sequence of cells; and
  • 116 - moving the digital entity from the starting point through each consecutive intermediary point to the end point.
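A minimal Python sketch of steps 102-116 is given below for illustration only, under simplifying assumptions that are not part of the method as claimed: cells are convex polygons with counter-clockwise vertices, the adjacency map and shared edges of the navigation mesh are given explicitly, the cell sequence of step 112 is found with a breadth-first search, and the intermediary points of step 114 are edge midpoints.

```python
from collections import deque

def inside(cell, p):
    """True if point p lies inside a convex cell (counter-clockwise vertices)."""
    n = len(cell)
    for i in range(n):
        (x1, y1), (x2, y2) = cell[i], cell[(i + 1) % n]
        if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) < 0:
            return False
    return True

def locate(cells, p):
    """Step 106: find the cell containing p (p is assumed to lie in the mesh)."""
    return next(cid for cid, poly in cells.items() if inside(poly, p))

def cell_sequence(adjacency, start_cell, end_cell):
    """Step 112: breadth-first search over the connectivity graph."""
    queue, seen = deque([[start_cell]]), {start_cell}
    while queue:
        path = queue.popleft()
        if path[-1] == end_cell:
            return path
        for nxt in adjacency[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no passable sequence of cells exists

def plan(cells, adjacency, edges, start, end):
    a, b = locate(cells, start), locate(cells, end)
    if a == b:                      # step 110: same cell, move in a straight line
        return [start, end]
    seq = cell_sequence(adjacency, a, b)
    points = [start]
    for u, v in zip(seq, seq[1:]):  # step 114: midpoint of each shared boundary
        (x1, y1), (x2, y2) = edges[frozenset((u, v))]
        points.append(((x1 + x2) / 2, (y1 + y2) / 2))
    return points + [end]           # step 116: follow these points in order

# Two square cells sharing one edge:
cells = {"A": [(0, 0), (2, 0), (2, 2), (0, 2)],
         "B": [(2, 0), (4, 0), (4, 2), (2, 2)]}
adjacency = {"A": ["B"], "B": ["A"]}
edges = {frozenset(("A", "B")): ((2, 0), (2, 2))}
print(plan(cells, adjacency, edges, (1, 1), (3, 1)))
# [(1, 1), (2.0, 1.0), (3, 1)]
```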
  • In step 102, the respective positions of obstacles for the movable digital entity in the digital world are defined, yielding the portions of the digital world without obstacles as reachable space for the movable digital entity.
  • the reachable space can be defined as regions of the digital world enclosed by barriers.
  • the digital world may have been previously defined including any autonomous or non-autonomous entity with which the movable digital entity may interact.
  • the concept of digital world and of digital entity will now be described according to an illustrative embodiment of the present invention.
  • the digital world model includes image object elements.
  • the image object elements include two- or three-dimensional (2D or 3D) graphical representations of objects, autonomous and non-autonomous characters, buildings, animals, trees, etc. They also include barriers, terrains, and surfaces.
  • 2D or 3D: two- or three-dimensional
  • the movable entity that is to be moved using the method 100 can be either autonomous or non-autonomous.
  • the concepts of autonomous and non-autonomous characters and objects will be described hereinbelow in more detail.
  • the graphical representation of objects and characters can be displayed, animated or not, on a computer screen or on another display device, but can also inhabit and interact in the virtual world without being displayed on the display device.
  • Barriers are triangular planes that can be used to build walls, moving doors, tunnels, etc., or any obstacles for any movable entity in the digital world.
  • Terrains are 2D height-fields to which entities can be automatically bound (e.g. keep soldier characters marching over a hill).
  • Surfaces are triangular planes that may be combined to form fully 3D shapes to which autonomous characters can also be constrained.
  • these elements are used to describe the world that the characters inhabit. They are stored in the world DB 16.
  • the digital world model includes a solver, which allows managing entities, including autonomous characters, and other objects in the digital world.
  • the solver can have a 3D configuration, to provide the entities with complete freedom of movement, or a 2D configuration, which is more computationally efficient, and allows an operator to insert a greater number of movable entities in a scene without affecting performance of the animation system.
  • a 2D solver is computationally more efficient than a 3D solver since the solver does not consider the vertical (y) co-ordinate of an image object element or of an entity.
  • the choice between the 2D and 3D configuration depends on the movements that are allowed in the virtual world by the movable entities and other objects. If they do not move in the vertical plane then there is no requirement to solve for in 3D and a 2D solver can be used. However, if any entity requires complete freedom of movement, a 3D solver is used. It is to be noted that the choice of a 2D solver does not limit the dimensions of the virtual world, which may be 2D or 3D.
  • Non-autonomous characters are objects in the digital world that, even though they may potentially interact with the digital world, are not driven by the solver. These can range from traditionally animated characters (e.g. the leader of a group) to player characters to objects (e.g. flying debris) driven by other components of the simulator.
  • Barriers are used to represent obstacles for movable entities, and are equivalent to one-way walls, i.e. an object or a digital entity inhabiting the digital world can pass through them in one direction but not in the other.
  • spikes: forward orientation vectors
  • an object or an entity can pass from the non-spiked side to the spiked side, but not vice-versa.
  • a specific avoidance constraint can be defined and activated for a digital entity to attempt to avoid the barriers in the digital world. The concept of behaviours and constraints will be described hereinbelow in more detail.
  • a barrier is represented in a 2D solver by a line and by a triangle in a 3D solver.
  • the direction of the spike for 2D and 3D barriers is also shown in Figures 8-9 (see arrows 24 and 26 respectively), where P1-P3 refers to the order in which the points of the barrier are drawn. Since barriers are unidirectional, two-sided barriers are made by superimposing two barriers and setting their spikes opposite to each other.
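For illustration only, a Python sketch of a 2D one-way barrier test follows. It assumes that the spiked side is the left side of the directed segment P1→P2 (the disclosure defines the spike via the drawing order of the points, so this orientation convention is an assumption), and, for brevity, it tests against the infinite barrier line rather than the finite segment.

```python
def side(p1, p2, q):
    """>0 if q is left of the directed line p1->p2, <0 if right, 0 if on it."""
    return (p2[0] - p1[0]) * (q[1] - p1[1]) - (p2[1] - p1[1]) * (q[0] - p1[0])

def crossing_allowed(p1, p2, frm, to):
    """Passing is allowed from the non-spiked side to the spiked (left) side only."""
    s_from, s_to = side(p1, p2, frm), side(p1, p2, to)
    if s_from * s_to >= 0:
        return True          # the barrier line is not crossed at all
    return s_to > 0          # crossing allowed only toward the spiked side

# A two-sided barrier is two superimposed one-way barriers with opposite
# spikes; a move must then satisfy both tests:
# crossing_allowed(p1, p2, a, b) and crossing_allowed(p2, p1, a, b)
```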
  • Each barrier can be defined by the following parameters:
  • a bounding box is a rectilinear box that encapsulates and bounds a 3D object.
  • the solver of the digital world model may include subsolvers, which are the various engines of the solver that are used to run the simulation. Each subsolver manages a particular aspect of objects and of the simulation in order to optimize computations.
  • each animated digital entity is associated with animation clips allowing the entity to be represented in movement in the digital world.
  • virtual sensors are assigned to and used by some entities to allow them to gather data information about image object elements or other entities within the digital world. Decision trees can also be used for processing the data information, resulting in selecting and triggering one of the animation cycles or selecting a new behaviour.
  • an animation cycle, which will also be referred to herein as an "animation clip", is a unit of animation that typically can be repeated.
  • the animator creates a "walk cycle". This walk cycle makes the character walk one iteration. In order to have the character walk farther, more iterations of the cycle are played. If the character speeds up or slows down over time, the cycle is "scaled" accordingly so that the cycle speed matches the character's displacement and there is no slippage (i.e., the character does not appear to slip on the ground).
  • the autonomous image entities are tied to transform nodes of the animating engine (or platform).
  • the nodes can be in the form of locators, cubes or models of animals, vehicles, etc. Since animation clips and transform nodes are believed to be well known in the art, they will not be described herein in more detail.
  • Figure 10 shows a co-ordinate system for moving the image entity (IE) and used by the solver.
  • IEs from the present invention can also be characterized by behaviours.
  • the behaviours are the low-level thinking apparatus of an IE. They take raw input from the digital world using virtual sensors, process it, and change the IE's condition accordingly.
  • Behaviours can be categorized, for example, as locomotive behaviours allowing an IE to move. These locomotive behaviours generate steering forces that can affect any or all of an IE's direction of motion, speed, and orientation (i.e. which way the IE is facing), for example.
  • a locomotive behaviour can be seen as a force that acts on the IE.
  • This force is a behavioural force, and is analogous to a physical force (such as gravity), with the difference that the force seems to come from within the IE itself.
  • behavioural forces can be additive.
  • an autonomous character may simultaneously have more than one active behaviour.
  • the solver calculates the resulting motion of the character by combining the component behavioural forces, in accordance with behaviour's priority and intensity.
  • the resultant behavioural force is then applied to the character, which may impose its own limits and constraints (specified by the character's turning radius attributes, etc.) on the final motion.
  • Behaviours can be divided into subgroups: simple behaviours, targeted behaviours, and group behaviours.
  • Targeted behaviours apply to an IE and a target object, which can be any other object in the digital world (including groups of objects).
  • Group behaviours allow IEs to act and move as a group, where the individual IEs included in the group will maintain approximately the same speed and orientation as each other.
  • Avoid Barriers: the Avoid Barriers behaviour allows a character to avoid colliding with barriers.
  • Parameters specific to this behaviour may include, for example:
  • the Avoid Obstacles behaviour allows an IE to avoid colliding with obstacles, which can be other autonomous and non-autonomous image entities. Parameters similar to those detailed for the Avoid Barriers behaviour can also be used to define this behaviour.
  • the Accelerate At behaviour attempts to accelerate the IE by the specified amount. For example, if the amount is a negative value, the IE will decelerate by the specified amount.
  • the actual acceleration/deceleration may be limited by max acceleration and max deceleration attributes of the IE.
  • Acceleration, which represents the change in speed (distance units/frame²) that the IE will attempt to maintain.
  • the Maintain Speed At behaviour attempts to set the target lE's speed to a specified value. This can be used to keep a character at rest or moving at a constant speed. If the desired speed is greater than the character's maximum speed attribute, then this behaviour will only attempt to maintain the character's speed equal to its maximum speed. Similarly, if the desired speed is less than the character's minimum speed attribute, this behaviour will attempt to maintain the character's speed equal to its minimum speed.
  • a parameter allowing this behaviour to be defined is the desired speed (distance units/frame) that the character will attempt to maintain.
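As an illustrative sketch of the clamping rule just described (function and parameter names are assumed, not from the disclosure):

```python
def maintained_speed(desired, min_speed, max_speed):
    # The behaviour only attempts speeds within the character's own attributes.
    return max(min_speed, min(desired, max_speed))

print(maintained_speed(12.0, 0.0, 10.0))  # 10.0: clamped to the maximum speed
print(maintained_speed(0.5, 1.0, 10.0))   # 1.0: clamped to the minimum speed
```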
  • the Wander Around behaviour applies random steering forces to the IE to ensure that it moves in a random fashion within the solver area.
  • Parameters allowing defining this behaviour may be for example:
  • the Orient To behaviour allows an IE to attempt to face a specific direction.
  • Targeted Behaviours The following behaviours apply to an IE (the source) and another object in the world (the target).
  • Target objects can be any object in the world such as autonomous or non-autonomous image entities, paths, groups and data. If the target is a group, then the behaviour applies only to the nearest member of the group at any one time. If the target is a datum, then it is assumed that this datum is of type ID and points to the true target of the behaviour. An ID is a value used to uniquely identify objects in the world. The concept of datum will be described in more detail hereinbelow.
  • the following parameters, shared by all targeted behaviours, are:
  • the Seek To behaviour allows an IE to move towards another IE or towards a group of IEs. If an IE seeks a group, it will seek the nearest member of the group at any time.
  • a Seek To behaviour may be programmed according to the navigation method 100.
  • Look Ahead Time (attribute): this parameter instructs the IE to move towards a projected future point of the object being sought. Increasing the amount of look-ahead time does not necessarily make the Seek To behaviour any "smarter", since it simply makes a linear interpolation based on the target's current speed and position. Using this parameter gives the behaviour
  • the Flee From behaviour allows an IE to flee from another IE or from a group of IEs. When an IE flees from a group, it will flee from the nearest member of the group at any time.
  • the Flee From behaviour has the same attributes as the Seek To behaviour, however, it produces the opposite steering force. Since the parameters allowing defining the Flee From behaviour are very similar to those of the Seek To behaviour, they will not be described herein in more detail.
  • the Look At behaviour allows an IE to face another IE or a group of IEs. If the target of the behaviour is a group, the IE attempts to look at the nearest member of the group.
  • the Strafe behaviour causes the IE to "orbit" its target, in other words to move in a direction perpendicular to its line of sight to the target.
  • a probability parameter determines how likely it is at each frame that the IE will turn around and start orbiting in the other direction. This can be used, for instance, to make a moth orbit a flame.
  • the effect of a guard walking sideways while looking or shooting at its target can be achieved by turning off the guard's
  • a parameter specific to this behaviour may be, for example, the Probability, which may take a value between 0 and 1 and determines how often the IE changes direction of orbit. For example, at 24 frames per second, a value of 0.04 will trigger a random direction change on average every second, whereas a value of 0.01 will trigger a change on average every four seconds.
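The figures above can be checked with a short calculation: with a per-frame change probability p, the expected number of frames between direction changes is 1/p.

```python
fps = 24
for p in (0.04, 0.01):
    print(f"p={p}: every {1 / p:.0f} frames, i.e. about {1 / (p * fps):.1f} s")
# p=0.04: every 25 frames, i.e. about 1.0 s  (roughly every second)
# p=0.01: every 100 frames, i.e. about 4.2 s (roughly every four seconds)
```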
  • the Go Between behaviour allows an IE to get in-between the first target and a second target.
  • this behaviour can be used to enable a bodyguard character to protect a character from a group of enemies.
  • the following parameter allows specifying this behaviour: a value between 0 and 1 that determines how close to the second target one wishes the entity to go.
  • the Follow Path behaviour allows an IE to follow a path.
  • this behaviour can be used to enable a race car to move around a racetrack.
  • Group behaviours allow grouping individual IEs so that they act as a group while still maintaining individuality. Examples include a school of fish, a flock of birds, etc.
  • the Align With behaviour allows an IE to maintain the same orientation and speed as other members of a group.
  • the IE may or may not be a member of the group.
  • the Join With behaviour allows an IE to stay close to members of a group.
  • the IE may or may not be a member of the group.
  • Join Distance is similar to the "contact radius" in targeted behaviours. Each member of the group within the neighbourhood radius and outside the join distance is taken into account when calculating the steering force of the behaviour.
  • the join distance is the external distance between the characters (i.e. the distance between the outsides of the bounding spheres of the characters). The value of this parameter determines the closeness that members of the group attempt to maintain.
  • the Separate From behaviour allows an IE to keep a certain distance away from members of a group. For example, this can be used to prevent a school of fish from becoming too crowded.
  • the IE to which the behaviour is applied may or may not be a member of the group.
  • the Separation Distance is an example of a parameter that can be used to define this behaviour. Each member of the group within the neighbourhood radius and inside the separation distance will be taken into account when calculating the steering force of the behaviour.
  • the separation distance is the external distance between the IEs (i.e. the distance between the outsides of the bounding spheres of the IEs). The value of this parameter determines the external separation distance that members of the group will attempt to maintain.
  • An IE can have multiple active behaviours associated thereto at any given time. Therefore, means can be provided to assign importance to a given behaviour.
  • a first means to achieve this is by assigning intensity and priority to a behaviour.
  • the assigned intensity of a behaviour affects how strong the steering force generated by the behaviour will be. The higher the intensity, the greater the generated behavioural steering forces.
  • the priority of a behaviour defines the precedence the behaviour should have over other behaviours. When a behaviour of a higher priority is activated, those of lower priority are effectively ignored.
  • the animator informs the solver which behaviours are more important in which situations in order to produce a more realistic animation.
  • the solver calculates the desired motion of all behaviours, sums up these motions based on each behaviour's intensity, while ignoring those with lower priority, and enforces the maximum speed, acceleration, deceleration, and turning radii defined in the IE's attributes. Finally, braking due to turning may be taken into account. Indeed, based on the values of the character's Braking Softness and Brake Padding attributes, the character may slow down in order to turn.
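A hypothetical Python sketch of this combination rule follows: behaviours below the highest active priority are ignored, the remaining steering forces are summed weighted by intensity, and the resultant is clamped to an entity-imposed limit. The tuple shape and the single `max_force` limit are illustrative assumptions, not the solver's actual interface.

```python
def combine(behaviours, max_force):
    """behaviours: non-empty list of (priority, intensity, (fx, fy)) tuples."""
    top = max(p for p, _, _ in behaviours)        # lower priorities are ignored
    fx = sum(i * f[0] for p, i, f in behaviours if p == top)
    fy = sum(i * f[1] for p, i, f in behaviours if p == top)
    mag = (fx * fx + fy * fy) ** 0.5
    if mag > max_force:                           # enforce the entity's own limit
        fx, fy = fx * max_force / mag, fy * max_force / mag
    return fx, fy

# A high-priority avoidance force masks a low-priority wander force:
print(combine([(2, 1.0, (0.0, 3.0)), (1, 1.0, (5.0, 0.0))], max_force=2.0))
# (0.0, 2.0)
```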
  • a navigation mesh 35 is created for the movable digital entity (not shown). This is achieved by dividing or converting the reachable space 34 into convex cells 36, as illustrated in Figure 12 for the example of the one-floor building from Figure 11.
  • the navigation mesh 35 can be created either manually or automatically using, for example, the collision layer or the rendering geometry.
  • a collision layer is a geometric mesh that is a simplification of the rendering geometry for the purposes of physics collision detection/resolution.
  • the navigation mesh is the subset of the collision layer upon which the movable entity can move (this is typically the floors and not the walls).
  • Deriving the navigation mesh from the rendering geometry requires simplifying the geometry as much as possible and fusing the geometry into as seamless a mesh as possible (e.g., removal of intersecting polygons, etc.).
  • A 3D operator, typically a 3D artist, inspects the input geometry, fuses the polygons correctly and strips out the non-reachable space. It is to be noted that algorithms exist that can automatically handle this to a high degree. Convex polygons are used as cells in the creation of the navigation mesh 35 since any point within such a cell is directly reachable in a straight line from any other point in the cell.
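The convexity property can be checked programmatically; the sketch below is illustrative only and assumes a simple polygon given as a list of 2D vertices.

```python
def is_convex(poly):
    """True if every interior point can reach every other in a straight line."""
    n, sign = len(poly), 0
    for i in range(n):
        (x1, y1), (x2, y2), (x3, y3) = poly[i], poly[(i + 1) % n], poly[(i + 2) % n]
        cross = (x2 - x1) * (y3 - y2) - (y2 - y1) * (x3 - x2)
        if cross != 0:
            if sign != 0 and (cross > 0) != (sign > 0):
                return False   # a turn in the opposite direction: concave corner
            sign = cross
    return True

print(is_convex([(0, 0), (2, 0), (2, 2), (0, 2)]))          # True: a square
print(is_convex([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]))  # False: notched
```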
  • An edge Exy connecting cells Cx and Cy in the navigation mesh will be considered "passable" if the entity can pass from cell Cx to Cy via Exy.
  • In step 106, the starting and end points (not shown) are located and the corresponding cells that include each of those two points are identified.
  • the expressions "starting point" and "end point" should not be construed herein in a limited way. Indeed, unless the digital movable entity is pixel-sized, the starting and end points will refer to a location or a zone in the virtual world.
  • In step 108, a first verification is done as to whether the starting and end points are both located in the same cell. If this is the case, the method 100 proceeds with step 110, wherein the digital entity is moved from the starting to the end point before the method stops.
  • a method 100 may yield a movable digital entity with such an adaptive behaviour.
  • Step 112 can be achieved first by constructing a connectivity graph 38, which is obtained by replacing each cell 36 by a node 40 and connecting each pair of passable cells (nodes) by a line 42.
  • An example of a connectivity graph 38 is illustrated in Figure 13 for the example illustrated in Figures 11 and 12. Of course, such a graph 38 is purely virtual and is not actually graphically created.
  • the resulting graph 38 is searched to find a path between the two nodes 40 representing respectively the starting and end points.
  • Many known techniques can be used to solve such a graph searching problem so as to yield a path between these two corresponding nodes.
  • the path, if it exists, is returned as a sequence of corresponding cells.
  • a breadth first search can be used to search the graph 38.
  • the well-known BFS method provides the path of lowest cost but can be very expensive in terms of the number of nodes explored.
  • a depth-first search (DFS), which can also be used, is significantly less expensive in terms of nodes explored but does not guarantee the lowest-cost path.
  • Heuristics can be placed on the DFS to try to improve path quality while maintaining computational efficiency.
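For illustration, a Python sketch of one such heuristic follows: a depth-first search that visits the neighbour whose cell centroid is nearest to the goal first. The `centroid` mapping is an assumed helper, not part of the disclosure.

```python
import math

def guided_dfs(adjacency, centroid, start, goal):
    """DFS that explores the most goal-ward neighbour of each cell first."""
    stack, seen = [[start]], {start}
    while stack:
        path = stack.pop()
        cell = path[-1]
        if cell == goal:
            return path
        frontier = [n for n in adjacency[cell] if n not in seen]
        # Push the most promising neighbour last so it is explored first.
        frontier.sort(key=lambda n: math.dist(centroid[n], centroid[goal]),
                      reverse=True)
        for n in frontier:
            seen.add(n)
            stack.append(path + [n])
    return None  # the goal cell is unreachable
```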
  • the centerpoint of each edge can be selected as intermediary point. Of course, other points can alternatively be chosen.
  • the point on the cell edge (cells interface) can be chosen so as to reduce the distance traveled between cells and then further smooth the path.
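One simple heuristic of this kind is sketched below, for illustration only: the point chosen on the shared edge is the point of the edge closest to the midpoint of the previous point and the goal, clamped to the edge's extent. The function and its arguments are assumptions, not the disclosed smoothing rule.

```python
def point_on_edge(a, b, prev, goal):
    """Pick a crossing point on edge a-b that shortens the path prev -> goal."""
    mx, my = (prev[0] + goal[0]) / 2, (prev[1] + goal[1]) / 2
    ax, ay = a
    bx, by = b
    dx, dy = bx - ax, by - ay
    # Project the midpoint onto the edge and clamp to the segment.
    t = ((mx - ax) * dx + (my - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return ax + t * dx, ay + t * dy

print(point_on_edge((2, 0), (2, 2), (1, 1), (3, 1)))  # (2.0, 1.0)
```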
  • the digital entity is then moved from the starting point to each selected consecutive intermediary point, and finally to the end point (step 116).
  • the method 100 will be illustrated with reference to another simple world 44 (see Figure 14) delimited by walls 46.
  • a navigation mesh 47 is created and the world 44 is divided into convex cells 48, identified from A to Z for reference purposes.
  • the starting and end points 50 and 52 are shown in Figure 15. In step 106 of the method 100, they are found in cells 'Z' and 'J' respectively. Since they are not located in the same cell, the method continues with step 112 with the determination of a sequence of cells between the points 50 and 52, yielding the following sequence as illustrated in Figure 16: 'Z', ⁇, 'O', 'X', 'W', 'V', 'U', 'T', ⁇, and 'J'.
  • After determining intermediary points on the respective boundaries between consecutive cells in the sequence of cells (step 114), the method continues with the entity moving along the determined path 54, as illustrated in Figure 17. As can be seen in Figure 17, the path can be smoothed to yield a more realistic trail.
  • the navigation mesh can be dynamically modified at run-time (step 104).
  • cells can be turned off via blind data to simulate road blocks or congestion due to excess people or physics-driven debris, or turned on to simulate a door opening, congestion ending or a passage through a destroyed wall.
  • in Figure 18, former cell ⁇ has been turned off, resulting in a first alternative path 56;
  • in Figure 19, both former cells 'B' and ⁇ have been turned off, resulting in a second alternative path 58.
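For illustration only, a sketch of such run-time toggling follows: each cell carries a blind-data flag, and path finding simply skips cells that are turned off, so a road block or an opened door only requires flipping a flag and re-planning. The dictionary shapes are assumptions.

```python
blind = {"A": True, "B": True, "C": True}          # cell id -> passable flag
adjacency = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}

def passable_neighbours(cell):
    return [n for n in adjacency[cell] if blind[n]]

blind["B"] = False                                 # simulate a road block in cell B
assert passable_neighbours("A") == []              # A can no longer reach C via B
blind["B"] = True                                  # the passage reopens
assert passable_neighbours("A") == ["B"]
```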
  • the method 100 also allows taking into consideration the dimensions of the movable digital entity, and more specifically its transverse dimension relative to its moving direction, such as its width.
  • the creation of the navigation mesh in step 104 may take into account such characteristics of the movable entity so that the method 100 outputs a path that the entity can pass through. This is illustrated in Figures 20 and 21.
  • Figure 20 illustrates a path 60 obtained from the method 100 to move a digital entity from a starting point 62 to an end point 64.
  • when the width of the movable entity is doubled (Figure 21), the former cell 'L' is no longer part of the navigation mesh 68 and a new path 70 is provided by the method 100.
  • the method 100 is not limited to two-dimensional digital worlds. As illustrated in Figures 22 and 23, the method 100 can be used to determine the path from a starting point to an end point and then move a digital movable entity, such as an animated character, between those two points in a three-dimensional digital world.
  • Figure 22 illustrates the output of the floor plan generator on a small city part of a simulator, a game or an animation.
  • Figure 23 illustrates the navigation mesh resulting from step 104 from the method 100.
  • the method 100 can be adapted for outdoor and indoor environments.
  • An outdoor environment typically consists of buildings and open spaces such as market spaces, parks with trees, separated by sidewalks, roads and rivers.
  • a floor plan generator uses the exterior building walls to cut out holes in the navigation mesh.
  • blind data are used to characterize different parts of the reachable space.
  • blind data can then be associated to the cells of the navigation mesh to specify the differences in navigable surfaces (e.g., roads and sidewalks) and have the entities navigate accordingly (e.g., keep vehicles on the road and humans on the sidewalks).
  • Figure 24 illustrates a digital world in the form of a city street.
  • Figure 25 illustrates a navigation mesh obtained from step 104 of method 100. Blind data are used to differentiate between roadway (white cells) and sidewalk (grey cells).
  • an indoor environment is typically multi-layer and consists of floors divided into rooms via inner walls and doors; and connected by stairways.
  • a floor plan generator calculates the navigation mesh for each floor using the walls as barriers and then links the navigation surfaces by the cells that correspond to the stairways. This results in a 3D navigation mesh in which cells may be on top of one another. Path finding is now modified to determine which surface cell the digital movable entity is on rather than which cell the character is in.
  • a navigation mesh is created for each of the levels and two consecutive navigation meshes are interconnected by connecting cells.
  • the method 100 allows the movable entity to move from a predetermined intermediary point to the next in a straight line. Therefore, the only things that can prevent the entity from going in a straight line are dynamic obstacles. To cope with this situation, the movable entity may be provided with sensors.
  • Before describing in more detail a dynamic collision avoidance method according to the present invention, the concept of sensors and other relevant concepts such as data information, commands, decisions and decision trees will first be described briefly. It is to be noted, however, that neither the collision avoidance method according to the present invention nor the navigation method 100 is to be construed as being limited to a specific embodiment of sensors or decision rules for the digital movable entity, etc.
  • An entity's data information can be thought of as its internal memory.
  • Each datum is an element of information stored in the entity's internal memory.
  • a datum could hold information such as whether or not an enemy is seen or who is the weakest ally.
  • a Datum can also be used as a state variable for an IE.
  • Data are written to by an entity's Sensors, or by Commands within a Decision Tree.
  • the Datum's value is used by the Decision Tree to activate and deactivate behaviours and animations, or to test the entity's state. Sensors and Decision trees will be described hereinbelow in more detail.
  • Sensors Entities use sensors to gain information about the world.
  • a sensor will store its sensed information in a datum belonging to the entity.
  • a parameter can be used to trigger the activation of a sensor. If a sensor is turned off, it will be ignored by the solver and will not store information in any datum.
  • the vision sensor is the eyes and ears of a character and allows the character to sense other physical objects or movable entities in the virtual world, which can be autonomous or non-autonomous characters, barriers, and waypoints, for example.
  • the following parameters allow, for example, defining the vision sensor:
  • Decision trees are used to process the data information gathered using sensors.
  • a command is used to activate a behaviour or an animation, or to modify an lE's internal memory. Commands are invoked by decisions. A single Decision includes a conditional expression and a list of commands to invoke.
  • a decision tree includes a root decision node, which can own child decision nodes. Each of those children may in turn own children of their own, each of which may own more children, etc.
  • a parameter indicative of whether or not the decision tree is to be evaluated can be used in defining the decision tree. Whenever the command corresponds to activating an animation and a transition is defined between the current animation and the new one, then that transition is first activated. Similarly, whenever the command corresponds to activating a behaviour, a blend time can be provided between the current animation and the new one. Moreover, whenever the command corresponds to activating a behaviour, the target is changed to the object specified by a datum.
  • obstacle avoidance can also be seen as a two-step method 200, which is illustrated in Figure 28: 202 - assessing threats of potential collisions between the movable digital entity and a moving obstacle; and 204 - if there is such a threat, the movable digital entity responding accordingly by adopting a strategy to avoid the moving obstacle.
  • each entity 72 uses its sensor (see Figure 29) to detect what potential obstacles are in its vicinity and decide which of those obstacles poses the greatest threat of collision.
  • the sensor is configured as a field of view 74 around the entity 72, characterized by a depth of field, defining how far the entity can see.
  • the field of view of each movable entity can be defined as a pie (or a sphere depending on the application) surrounding the entity.
  • each obstacle removes a piece of this pie. The size of the piece removed depends on the obstacle's size and its distance from the entity.
  • Unobstructed sections of the pie will be referred to herein as holes 76-78 (see Figure 31).
  • the entity 72 searches for the best hole to continue through.
  • the best hole can be determined in several ways. The typical way is as follows: • The holes 76-78 are sorted in order of increasing radial distance from the desired direction of the entity 72; • The first hole that is large enough for the entity to pass through is chosen; • If there is no hole, the agent is completely blocked and will stop moving. Depending on the chosen hole, the movable entity can move into that hole by either turning or reversing.
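The hole-selection rule above can be sketched as follows, for illustration only. Holes are assumed to be given as (centre angle, angular width) pairs measured from the entity's desired direction, and the clearance needed by the entity is expressed as an angle; the angular piece removed by an obstacle of radius r at distance d can be approximated as 2·atan(r/d).

```python
import math

def blocked_width(r, d):
    """Approximate angular width (radians) removed from the pie by an obstacle."""
    return 2 * math.atan2(r, d)

def choose_hole(holes, needed):
    """holes: list of (centre_angle, width) pairs; 0 is the desired direction."""
    # Sort holes by increasing radial distance from the desired direction.
    for centre, width in sorted(holes, key=lambda h: abs(h[0])):
        if width >= needed:      # first hole large enough to pass through
            return centre        # steer toward this hole (turn or reverse)
    return None                  # completely blocked: the entity stops moving

print(choose_hole([(0.8, 0.3), (-0.4, 0.6)], needed=0.5))  # -0.4
```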
  • the obstacles can be characterized as having different avoidance importance.
  • a Humvee may consider avoiding other vehicles as its highest priority, pedestrians as a secondary priority and small animals such as dogs as a very low priority; a civilian pedestrian may consider vehicles as its highest priority and other pedestrians as a secondary priority.
  • extreme situations such as a riot may dynamically change these priorities.
  • different obstacle groups have different avoidance constraints.
  • the most basic constraint is designating an obstacle as a threat. Accordingly, for each obstacle group, there is an associated awareness radius. As the character moves through its world, its sensor sweeps around it; every obstacle detected in its sweep that is within its awareness radius is flagged as a potential collision threat.
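By way of illustration, threat flagging with per-group awareness radii can be sketched as below; the group names, radii and data shapes are assumptions, echoing the Humvee example above.

```python
import math

awareness = {"vehicle": 30.0, "pedestrian": 10.0, "animal": 3.0}

def collision_threats(me, obstacles):
    """obstacles: iterable of (position, group) pairs from the sensor sweep."""
    return [(pos, grp) for pos, grp in obstacles
            if math.dist(me, pos) <= awareness[grp]]

print(collision_threats((0.0, 0.0),
                        [((20.0, 0.0), "vehicle"),      # flagged (within 30)
                         ((20.0, 0.0), "pedestrian")])) # ignored (outside 10)
```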
  • For each collision threat, there are two kinds of avoidance strategies (step 204):
  • circumvention: try avoiding the collision by going around the obstacle; and
  • queuing: try avoiding the collision by slowing down (and potentially stopping) until the obstacle exits the collision path.
  • Figures 32A-32C, 33A-33C, 34A-34C, and 35A-35C illustrate four examples of collision threats (Figures 32A, 33A, 34A, and 35A), each with both corresponding avoidance strategies.
  • Figures 32A-35C show how circumvention is useful for going around stationary obstacles and getting out of the way of incoming obstacles. However, it can cause a lot of jostling with outgoing obstacles. Nonetheless, circumvention has the advantage of minimizing gridlock and eventually finding a way around.
  • The group movement modifier that is most readily identified with computer-graphics artificial intelligence is flocking, made famous by Reynolds, who modelled flocks of birds called boids as super particles. Reynolds identified three basic elements of flocking:
  • alignment: the tendency of group members to harmonize their motion by aligning themselves in the same direction with the same speed;
  • separation: the tendency of group members to maintain a certain amount of space between them; and
  • joining: the tendency of group members to maintain a certain proximity with one another.
  • Considering now a group of friends walking down the street: slower members of the group will speed up to catch up to the others, and the fastest members (assuming they are polite) will slow down slightly to allow the stragglers to catch up. Depending on the cultural background of the group, more or less space is required or tolerated between the friends (cf. urban dwellers to rural dwellers).
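A minimal boids-style sketch of these three elements follows, for illustration only; practical systems weight each term (and Reynolds' original formulation differs in detail), so the unweighted sum below is an assumption.

```python
import math

def flock_force(me_pos, me_vel, neighbours, sep_dist):
    """neighbours: (position, velocity) pairs inside the neighbourhood radius."""
    n = len(neighbours)
    if n == 0:
        return (0.0, 0.0)
    # Alignment: steer toward the average velocity of the neighbours.
    ax = sum(v[0] for _, v in neighbours) / n - me_vel[0]
    ay = sum(v[1] for _, v in neighbours) / n - me_vel[1]
    # Joining (cohesion): steer toward the average position of the neighbours.
    jx = sum(p[0] for p, _ in neighbours) / n - me_pos[0]
    jy = sum(p[1] for p, _ in neighbours) / n - me_pos[1]
    # Separation: steer away from neighbours that are too close.
    sx = sy = 0.0
    for p, _ in neighbours:
        d = math.dist(me_pos, p)
        if 0 < d < sep_dist:
            sx += (me_pos[0] - p[0]) / d
            sy += (me_pos[1] - p[1]) / d
    return ax + jx + sx, ay + jy + sy
```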
  • Figures 36A-36E show the effects of different flocking strategies on a group of five characters following a leader character.
  • Such a group-based modifier can be used to yield a more natural effect when the method 100 is used to simultaneously move a group of entities in a digital world between starting and end points.
  • Although the method and system for moving a digital entity on-screen from starting to end points in a digital world have been described as being included in a specific illustrative embodiment of a 3D application, they can be included in any 3D application requiring the autonomous on-screen displacement of image elements.
  • a navigation method according to the present invention can be used to move digital entities not characterized by behaviours such as those described hereinabove.
  • a navigation method according to the present invention can be used to navigate any number of entities and is not limited to any type or configuration of digital world.
  • the present method and system can be used to plan the displacement of a movable object or entity in a virtual world without further movement of the object or entity.

Abstract

A method for moving a digital entity, such as a character or an object, on-screen from a starting point to an end point in a digital world by: providing the positions of the obstacles for the digital entity and defining the portion of the digital world without obstacles as reachable space; creating a navigation mesh for the digital entity by dividing the reachable space into convex cells; and locating the starting and ending cells among the convex cells. If the starting cell corresponds to the ending cell, the digital entity is moved from the starting point to the ending point. If the starting cell does not correspond to the end cell, the method determines intermediary points located on the boundary between consecutive cells in a sequence of cells among the convex cells from the starting cell to the end cell, and moves the digital entity from the starting point through each consecutive intermediary point to the end point.

Description

TITLE OF THE INVENTION
METHOD AND SYSTEM FOR ON-SCREEN NAVIGATION OF DIGITAL CHARACTERS OR THE LIKES
FIELD OF THE INVENTION
The present invention relates to the digital entertainment industry and to computer simulation. More specifically, the present invention concerns a method and system for on-screen navigation of digital objects or characters.
BACKGROUND OF THE INVENTION
For many years, three-dimensional (3D) computer graphics has been a well-established field with many applications, including movie animation and digital effects, gaming and simulations.
Figure 1 of the appended drawings illustrates a generic 3D application from the prior art, which can be in the form of an animation package, a video/computer game, a trainer or a simulator, for example. The 3D application shares the basic high-level architecture of objects, which can be more generally referred to as digital entities, being manipulated by controllers via input devices or physics and artificial intelligence (AI) systems and made real to the user by synthesizers, including visual rendering and audio. One level deeper, 3D applications are typically broken down into two components: the simulator and the image generator. As illustrated in Figures 2 and 3, the simulator takes in many inputs from both human operators and CGE; the simulator then modifies the world database accordingly and outputs the changes to the image generator for visualization.
A typical simulation/animation loop structure is illustrated in Figure 4. The world state manager first loads up the initialisation data from the world database. For each frame/tick of the simulation, the world state manager updates the controllers; the controllers act accordingly and send back object updates to the manager. The world state manager then resolves all the object updates into a new world state in the world database (WDB) and passes this to the image generator (IG). The IG updates the characters' body limb positions and other objects and renders them out to the screen.
More recently, procedural animation, which is driven by physics and artificial intelligence (AI) techniques, has been introduced in the fields of 3D animation, visual effects and gaming. AI animation allows augmenting the abilities of digital entertainers and simulators across disciplines. It gives game designers and simulators the breadth, independence and tactics of film actors. Film-makers get the temporal performance of game characters and the behavioural realism of simulation entities. For over twenty years, the visual effects departments of film studios have increasingly relied on computer graphics whenever a visual effect is too expensive, too dangerous or just impossible to create any other way than via a computer. Unsurprisingly, the demands on an animator's artistic talent to produce ever more stunning and realistic visual effects have also increased. It is currently not uncommon that the computer animation team is just as important to the success of a film as the lead actors. Large crowd scenes, in particular battle scenes, are ideal candidates for computer graphics techniques since the sheer number of extras required makes them extremely expensive, their violent nature makes them very dangerous, and the use of fantastic elements such as beast warriors makes them impractical, if not impossible, to film with human extras. Given the complexity, expense, and danger of such scenes, it is clear that an effective artificial intelligence (AI) animation solution is preferable to actually staging and filming such battles with real human actors. However, despite the clear need for a practical commercial method to generate digital crowd scenes, a satisfactory solution has been a long time in coming.
Commercial animation packages such as Maya™ by Alias Systems have made great progress in the last twenty years, to the point that virtually all 3D production studios rely on them to form the basis of their production pipelines. These packages are excellent for producing special effects and individual characters. However, automatic/crowd animation remains a significant problem. In the computer game field, game AI has been in existence since the dawn of video games in the 1970s. However, it has come a long way since the creation of Pong™ and Pac-Man™. Nowadays, game AI is increasingly becoming a critical factor in a game's success, and game developers are demanding more and more from their AI. Today's AIs need to be able to seemingly think for themselves and act according to their environment and their experience, giving the impression of intelligent behaviour, i.e. they need to be autonomous.
Game AI makes games more immersive. Typically, game AI is used in the following situations:
• to create intelligent non-player characters (NPCs), which can be friend or foe to the player-controlled characters;
• to add realism to the world: simply adding some non-gameplay-related game AI that reacts to the changing game world can increase realism and enhance the game experience; for example, AI can be used to fill sporting arenas with animated spectators or to add a flock of bats to a dungeon scene;
• to create opponents when there are none: many games are designed for two or more players; however, if there is no one to play against, intelligent AI opponents are needed; or
• to create team members when there are not enough: some games require team play, and game AI can fill the gap when there are not enough players.
Another application for real-time intelligent 3D agents (such as vehicles and people that can interact in 3D cities, for example) is military simulation and training (MS&T), and more specifically the context of Military Operations Urban Terrain (MOUT). Indeed, all conflicts within the last 15 years involving western forces have been urban in nature (e.g., Mogadishu, Bosnia, and Iraq). Nonetheless, real-time crowd simulation in complex urban terrain has been neglected for a multitude of reasons:
• until recently, the hardware simply was not powerful enough;
• the military establishment moves very slowly to acknowledge change, so it has been correspondingly slow to emphasize it as a need; and
• application developers have avoided human simulation, as it is much more difficult than machine simulation.
Artificial intelligence's role is to simulate the people in the battle (not only the ground forces but also the drivers of vehicles): the combatants (blue and red forces) and, increasingly, the civilians (green forces). In the context of MS&T, these are often called Computer Generated Forces (CGFs) or Semi-Automated Forces (SAFs). The types of possible simulated agents are now described. Vehicle drivers and pilots: although vehicles have very complex models for physics (e.g., helicopters will wobble realistically as they bank into turns and tanks will bounce as they jump ditches) and weapon/communication systems (e.g., line-of-sight radios will not work through hills), they tend to have simplistic line-of-sight navigation systems that fail in the 3D concrete canyons of MOUT (e.g., helicopters fly straight through skyscrapers rather than around them, and tanks get confused and stuck in the twisty, garbage-filled streets of the third world). AI can be used to simulate the brain of the human driver in the vehicle (with or without the actual body being simulated).
Groups of individual doctrinal combatants: United States government-created SAFs of groups of individual doctrinal combatants are limited in their usefulness to MS&T applications since they are limited to US installations only and are unable to navigate properly in urban environments. While aggregate SAFs operate on a strategic and often abstract level, individual combatant simulators operate on a tactical and often 3D physical level. Groups of individual irregular combatants: by definition, irregular combatants such as militiamen and terrorists are non-doctrinal, and hence it is difficult to express their personalities and tactics with a traditional SAF architecture. Crowds of individual non-combatants (clutter): one of the most difficult restrictions of MOUT is how to conduct military operations in an environment that is populated with non-combatants. These large civilian populations can affect a mission by acting merely as operational "clutter" or by actually affecting the outcome of the battle.
One of the more specific aspects of game AI and MS&T is real-time intelligent navigation of agents per se, for example in the context of crowd simulation.
Computer graphics crowd simulation has been pioneered by C.W. Reynolds in "Flocks, Herds, and Schools: A Distributed Behavioural Model", In Computer Graphics, 21 (4) (SIGGRAPH' 87 Conference Proceedings), 25-34, 1987. Reynolds established the core idea that movement is a fundamental goal of intelligence and as such intelligent characters can be represented by a simple vehicle model. This system was extremely effective at producing simple "flocking" behaviours and has been used with great success in the simulation of animals such as bats and birds in non-real-time applications such as film special effects.
Musse and Thalmann in "Hierarchical model for real time simulation of virtual human crowds", in IEEE Transactions on Visualization and Computer Graphics 7 (No. 2, Apr.-Jun. 2001), 152-164, developed a hierarchical model for real-time crowd simulation called ViCrowd in which humans are controlled not only at the individual but also at the group level.
By representing crowds as a particle system and using clever pre-computed alpha-channel rendering techniques, Tecchia et al. (Tecchia, F., Loscos, C., Chrysanthou, Y. "Visualizing crowds in real-time", In Computer Graphics Forum 21:1, Eurographics, 2002.) succeeded in simulating and rendering large numbers of very simple milling crowds in pseudo-real-time on a PC. However, the crowds could not react to stimuli.
Reece developed a system for modelling crowds within the Dismounted Infantry Semi-Automated Forces (DISAF) system (Reece, D.A. "Crowd Modeling in DISAF", In Proceedings of the Eleventh Conference on Computer-Generated Forces and Behavior Representation (Orlando, FL, May 7-9, 2002), 87-95.). Sung et al. expand on the notion of crowd members as intelligent particles by allowing the author to paint information for the crowd's behaviour directly onto the world geometry. This authoring optimization allows the crowd members to dynamically change between varied behaviours, such as transitioning from milling to panicking (Sung, M., Gleicher, M., and Chenney, S. "Scalable behaviors for Crowd Simulation", In Computer Graphics Forum 23:3, Eurographics, 2004.).
Movement is fundamental to all entities in a simulation, whether bipedal, wheeled, tracked or aerial. Hence, an AI system should allow an entity to navigate in a realistic fashion from point X to point Y. For example, for a vehicle this means, in normal operations, driving on the road and obeying traffic rules such as staying in lane and stopping at traffic lights; and, in military operations, avoiding roadblocks and trying not to run over civilian bystanders. For a militiaman, this means, in normal operations, walking down the sidewalk in an inconspicuous fashion but, in military operations, quickly running into a building, up its staircase and onto the rooftop to launch an RPG. Intelligent navigation can be broken down into two basic levels: dynamically finding a path from X to Y and avoiding dynamic obstacles along that path.
An example of agent navigation is following a predetermined path (e.g., guards on a patrol path). Such a predetermined path is an ordered set of waypoints that digital characters may be instructed to follow. For example, a path around a racetrack would consist of waypoints at the turns of the track. Path following consists of locating the nearest waypoint on the path, navigating to it by direct line of sight and then navigating to the next point on the path. The character looks for the next waypoint when it has arrived at the current waypoint (i.e., is within the waypoint's bounding sphere). For example, Figure 5 shows how a path can be used to instruct a character to patrol an area in a certain way. It has been found that path following as a navigation mechanism works well when not only both the start and destination are known but also the path is explicitly known.
However, this is the exception rather than the rule as normally we want to tell the character the destination and let it find its own path around barriers based on its knowledge of itself and its world.
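A minimal sketch of the waypoint-following mechanism just described might look as follows in Python; the data layout and the scalar `radius` standing in for the waypoint's bounding sphere are assumptions of this illustration only.

def follow_path(position, waypoints, index, radius):
    """One tick of waypoint following.

    Returns the (possibly advanced) waypoint index and the unit direction
    to steer in by direct line of sight.
    """
    tx, ty = waypoints[index]
    dx, dy = tx - position[0], ty - position[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= radius and index + 1 < len(waypoints):
        index += 1                    # arrived: look for the next waypoint
        tx, ty = waypoints[index]
        dx, dy = tx - position[0], ty - position[1]
        dist = (dx * dx + dy * dy) ** 0.5
    direction = (dx / dist, dy / dist) if dist > 0 else (0.0, 0.0)
    return index, direction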
A method for automatically moving a digital entity on-screen from starting to end points in a digital world is therefore desirable.
OBJECTS OF THE INVENTION
An object of the present invention is therefore to provide an improved navigation method for a digital entity in a digital world.
Another object of the invention is to provide a method for automatically moving a digital entity on-screen from start to end points.
SUMMARY OF THE INVENTION
More specifically, in accordance with a first aspect of the present invention, there is provided a method in a computer system for moving at least one digital entity on-screen from starting to end points in a digital world, comprising: i) providing respective positions of obstacles for the at least one movable digital entity in the digital world; defining at least portions of the digital world without obstacles as reachable space for the at least one movable digital entity; ii) creating a navigation mesh for the at least one movable digital entity by dividing the reachable space into at least one convex cell; iii) locating a start cell and an end cell among the at least one convex cell including respectively the start and end points; and iv) verifying whether the starting cell corresponds to the end cell; if the starting cell corresponds to the end cell, then: iv)a) moving the at least one movable digital entity from the starting point to the end point; if the starting cell does not correspond to the end cell, then iv)b)i) determining a sequence of cells among the at least one convex cell from the starting cell to the end cell, and iv)b)ii) determining at least one intermediary point located on a respective boundary between consecutive cells in the sequence of cells, and iv)b)iii) moving the at least one movable digital entity from the starting point to each consecutive the at least one intermediary point to the end point.
In accordance with a second aspect of the present invention, there is provided a system for moving a digital entity on-screen from starting to end points in a digital world, comprising: a world database for storing information about the digital world and for providing respective positions of obstacles for the movable digital entity in the digital world; a navigation module i) for defining at least portions of the digital world without obstacles as reachable space for the movable digital entity; ii) for creating a navigation mesh for the movable digital entity by dividing the reachable space into at least one convex cell; iii) for locating a start cell and an end cell among the at least one convex cell including respectively the start and end points; and iv) for verifying whether the starting cell corresponds to the end cell; and if the starting cell does not correspond to the end cell, for further v) determining a sequence of cells among the at least one convex cell from the starting cell to the end cell, and vi) determining at least one intermediary point located on a respective boundary between consecutive cells in the sequence of cells; and a simulator coupled to the navigation module and to the world database for moving the digital entity on-screen, via an image generator coupled to the simulator, from the starting point to the end point if the starting cell corresponds to the end cell as verified in iv); or for moving the digital entity from the starting point to each consecutive the at least one intermediary point to the end point if the starting cell does not correspond to the end cell.
It is to be noted that the expression "character" should be construed herein as broadly as "entity". Other objects, advantages and features of the present invention will become more apparent upon reading the following non-restrictive description of preferred embodiments thereof, given by way of example only with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
In the appended drawings:
Figure 1, which is labeled "prior art", is a block diagram illustrating the first level of a generic three-dimensional (3D) application;
Figure 2, which is labeled "prior art", is a block diagram illustrating the second level of the generic 3D application from Figure 1;
Figure 3, which is labeled "prior art", is an expanded view of the block diagram from Figure 2;
Figure 4, which is labeled "prior art", is a flowchart illustrating the flow of data from the generic 3D application from Figure 1 to the image generator part of the 3D application from Figure 1;
Figure 5, which is labeled "prior art", is a schematic view illustrating the use of waypoints to navigate a digital entity in a digital world;
Figure 6 is a block diagram illustrating a system for on-screen animation of digital entities including a navigation module embodying a system for moving a digital entity on-screen from starting to end points in a digital world according to an illustrative embodiment of the present invention;
Figure 7 is a flowchart illustrating a method for moving a digital entity on-screen from starting to end points in a digital world according to an illustrative embodiment of the present invention;
Figure 8 is a schematic view illustrating a two-dimensional barrier used with the method of Figure 7;
Figure 9 is a schematic view illustrating a three-dimensional barrier used with the method of Figure 7;
Figure 10 is a schematic view illustrating a co-ordinate system used with the method of Figure 7;
Figure 11 is a top plan schematic view of a two-dimensional world in the form of a one-floor building according to a first example of the reachable space for a specific movable digital entity according to the method from Figure 7;
Figure 12 is a top plan schematic view of the one-floor building from Figure 11 illustrating a navigation mesh created through the method from Figure 7;
Figure 13 is a schematic view of a connectivity graph obtained from the navigation mesh from Figure 12;
Figure 14 is a top plan schematic view of a two-dimensional world according to a second example of the reachable space for a specific movable digital entity according to the method from Figure 7;
Figure 15 is a top plan schematic view of the world from Figure 14 illustrating a navigation mesh created through the method from Figure 7;
Figure 16 is a top plan schematic view similar to Figure 15, illustrating steps from the method from Figure 7;
Figure 17 is a top plan schematic view similar to Figure 15, illustrating a path resulting from the method from Figure 7;
Figure 18 is a top plan schematic view similar to Figure 17, illustrating a first alternative path to the path illustrated in Figure 17, resulting from the blocking of a first passage;
Figure 19 is a top plan schematic view similar to Figure 18, illustrating a second alternative path to the path illustrated in Figures 17 and 18, further resulting from the blocking of a second passage;
Figure 20 is a top plan schematic view similar to Figure 17, illustrating a third alternative path to the path illustrated in Figure 17, resulting from new starting and end points;
Figure 21 is a top plan schematic view similar to Figure 20, illustrating an alternative path to the path illustrated in Figure 20, resulting from doubling the width of the movable digital entity;
Figure 22 is a perspective view of the output of a floor plan generator on a small city part of a simulator;
Figure 23 is a perspective view illustrating the navigation mesh created from the floor plan generator from Figure 22 using the method from Figure 7; Figure 24 is a perspective view of a digital world in the form of a city street;
Figure 25 is a perspective view of the city street from Figure 24, illustrating the navigation mesh creation step according to the method from Figure 7, including the use of blind data to characterize the resulting cells;
Figure 26 is a cut out perspective view of a digital world in the form of a building;
Figure 27 is a perspective view of the navigation mesh resulting from the building from Figure 26 using the method from Figure 7;
Figure 28 is a flowchart of a collision avoidance method for a digital entity moving on-screen from starting to end points in a digital world according to a specific illustrative embodiment of the present invention;
Figure 29 is a perspective view of an entity in a 3D application, in the form of a character, illustrating the character's sensor according to the present invention;
Figure 30 is a top plan view of an entity in a digital world illustrating the entity's vision sensor according to the present invention, and more specifically illustrating the field of view provided by the sensor;
Figure 31 is a perspective view of an entity in a digital world similar to Figure 30, illustrating the selection of a sub-path to avoid obstacles according to a specific embodiment of the method from Figure 7;
Figures 32A-32C are schematic views illustrating avoidance strategies (Figures 32B-32C) that can be used by a movable digital entity according to a specific embodiment of the method from Figure 7 in the case of a stationary obstacle (Figure 32A);
Figures 33A-33C are schematic views illustrating avoidance strategies (Figures 33B-33C) that can be used by a movable digital entity according to a specific embodiment of the method from Figure 7 in the case of an incoming obstacle (Figure 33A);
Figures 34A-34C are schematic views illustrating avoidance strategies (Figures 34B-34C) that can be used by a movable digital entity according to a specific embodiment of the method from Figure 7 in the case of an outgoing obstacle (Figure 34A);
Figures 35A-35C are schematic views illustrating avoidance strategies (Figures 35B-35C) that can be used by a movable digital entity according to a specific embodiment of the method from Figure 7 in the case of a sideswiping obstacle (Figure 35A); and
Figures 36A-36E are schematic views illustrating paths for simultaneously moving five movable digital entities using the method from Figure 7 and applying a group-based movement modifier.
DETAILED DESCRIPTION
A system 10 for on-screen animation of digital image entities (IE) including a navigation module 12 embodying a method for moving a digital entity on-screen from starting to end points in a digital world according to an illustrative embodiment of the present invention will now be described with reference to Figure 6.
Since the system 10 shares similarity with conventional 3D applications, such as the one illustrated in Figure 2, and for concision purposes, only the differences between the system 10 and conventional 3D application systems will be described herein in more detail.
The system 10 comprises a simulator 14, a world database (WDB) 16 coupled to the simulator 14, a plurality of image generators (IG) 18 (three shown) coupled to both the world DB and to the simulator 14, a navigation module 12 according to an illustrative embodiment of the present invention coupled to the simulator 14, a decision-making module, also coupled to the simulator 14, and a plurality (three shown) of animation control modules, each coupled to a respective IG 18.
The number of IG 18 may of course vary depending on the application. For example, in the case wherein the system 10 is embodied in a 3D animation application, the number of IG 18 may affect the rendering time. The simulator 14 and IG 18 may be in the form of a single computer. The world database 16 is stored on any suitable memory means, such as, but not limited to, a hard drive, a DVD or CD-ROM disk to be read on a corresponding drive, or a random-access memory part of the computer 14.
The simulator 14 and IG 18 are in the form of computers or of any processing machines provided with processing units, which are programmed with instructions for animating, simulating or gaming as will be explained hereinbelow in more detail.
The simulator 14, IG 18 and world DB 16 can be remotely coupled via a computer network (not shown) such as the Internet.
Of course, depending on the application of the system 10, the simulator 14 can take another form such as a game engine or a 3D animation system. The modules 12, 20 and 22 are in the form of sub-routines or dedicated instructions programmed in the simulator 14, for example. The characteristics and functions of the modules 20, 22 and more specifically of module 12 will become more apparent upon reading the following non-restrictive description of a method 100 for moving a digital entity on-screen from a starting point to an end point in a digital world according to an illustrative embodiment of the present invention. The method 100, which is illustrated in Figure 7, comprises the following steps:
102 - providing respective positions of obstacles for the movable digital entity in the digital world and defining at least portions of the digital world without obstacles as reachable space for the movable digital entity;
104 - creating a navigation mesh for the movable digital entity by dividing the reachable space into convex cells;
106 - locating a starting cell and an end cell among the convex cells including respectively the starting and end points;
108 - verifying whether the starting cell corresponds to the end cell;
if the starting cell corresponds to the end cell, then:
110 - moving the digital entity from the starting point to the end point and stopping the method;
if the starting cell does not correspond to the end cell, then:
112 - determining a sequence of cells among the convex cells from the starting cell to the end cell;
114 - determining intermediary points located on a respective boundary between consecutive cells in the sequence of cells; and
116 - moving the digital entity from the starting point through each consecutive intermediary point to the end point.
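Purely by way of illustration, the order of these steps can be sketched as follows in Python; every helper is passed in as a callable because the corresponding computations are described (or assumed) separately, and none of the names below come from the method itself.

def navigate(entity, start, end, world,
             build_mesh, locate_cell, find_sequence, edge_midpoint, move_straight):
    """Skeleton of method 100; only the ordering of the steps is fixed here."""
    mesh = build_mesh(world, entity)                      # steps 102-104
    start_cell = locate_cell(mesh, start)                 # step 106
    end_cell = locate_cell(mesh, end)
    if start_cell == end_cell:                            # step 108
        move_straight(entity, end)                        # step 110
        return
    cells = find_sequence(mesh, start_cell, end_cell)     # step 112
    # Step 114: one intermediary point per boundary between consecutive cells.
    points = [edge_midpoint(a, b) for a, b in zip(cells, cells[1:])]
    for p in points + [end]:                              # step 116
        move_straight(entity, p)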
Each of these steps will now be described in more detail. In step 102, the respective positions of obstacles for the movable digital entity in the digital world are defined, yielding the portion of the digital world without obstacles as reachable space for the movable digital entity. The reachable space can be defined as regions of the digital world enclosed by barriers.
It is to be noted that what the simulator 14 will consider an obstacle for a specific movable digital entity may not be an obstacle for another one, as further examples given hereinbelow will illustrate.
Of course, the digital world may have been previously defined including any autonomous or non-autonomous entity with which the movable digital entity may interact. The concept of digital world and of digital entity will now be described according to an illustrative embodiment of the present invention.
The digital world model includes image object elements. The image object elements include two- or three-dimensional (2D or 3D) graphical representations of objects, autonomous and non-autonomous characters, buildings, animals, trees, etc. It also includes barriers, terrains, and surfaces. As will become more apparent upon reading the following description, the movable entity that is to be moved using the method 100 can be either autonomous or non-autonomous. The concepts of autonomous and non-autonomous characters and objects will be described hereinbelow in more detail. As is believed to be commonly known in the art, the graphical representations of objects and characters can be displayed, animated or not, on a computer screen or on another display device, but can also inhabit and interact in the virtual world without being displayed on the display device.
Barriers are triangular planes that can be used to build walls, moving doors, tunnels, etc., or any obstacles for any movable entity in the digital world. Terrains are 2D height-fields to which entities can be automatically bound (e.g. keep soldier characters marching over a hill). Surfaces are triangular planes that may be combined to form fully 3D shapes to which autonomous characters can also be constrained.
In combination, these elements are used to describe the world the characters inhabit. They are stored in the world DB 16.
In addition to the image object elements, the digital world model includes a solver, which allows managing entities, including autonomous characters, and other objects in the digital world.
The solver can have a 3D configuration, to provide the entities with complete freedom of movement, or a 2D configuration, which is more computationally efficient and allows an operator to insert a greater number of movable entities in a scene without affecting the performance of the animation system. A 2D solver is computationally more efficient than a 3D solver since it does not consider the vertical (y) co-ordinate of an image object element or of an entity. The choice between the 2D and 3D configurations depends on the movements that are allowed in the virtual world by the movable entities and other objects. If they do not move in the vertical plane, then there is no requirement to solve in 3D and a 2D solver can be used. However, if any entity requires complete freedom of movement, a 3D solver is used. It is to be noted that the choice of a 2D solver does not limit the dimensions of the virtual world, which may be 2D or 3D.
Non-autonomous characters are objects in the digital world that, even though they may potentially interact with the digital world, are not driven by the solver. These can range from traditionally animated characters (e.g. the leader of a group) to player characters to objects (e.g. flying debris) driven by other components of the simulator.
Barriers are used to represent obstacles for movable entities, and are equivalent to one-way walls, i.e. an object or a digital entity inhabiting the digital world can pass through them in one direction but not in the other. When a barrier is created, spikes (forward orientation vectors) are used to indicate the side of the wall that can be detected by an object or an entity. Therefore, an object or an entity can pass from the non-spiked side to the spiked side, but not vice-versa. It is to be noted that, in some applications, a specific avoidance constraint can be defined and activated for a digital entity to attempt to avoid the barriers in the digital world. The concept of behaviours and constraints will be described hereinbelow in more detail.
As illustrated in Figures 8 and 9 respectively, a barrier is represented by a line in a 2D solver and by a triangle in a 3D solver. The direction of the spike for 2D and 3D barriers is also shown in Figures 8-9 (see arrows 24 and 26 respectively), where P1-P3 refers to the order in which the points of the barrier are drawn. Since barriers are unidirectional, two-sided barriers are made by superimposing two barriers and setting their spikes opposite to each other.
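For a 2D solver, the side of a barrier on which a point lies can be determined with a cross product. The following sketch is an illustration only; in particular, the convention that a positive cross product corresponds to the spiked side is an assumption of this sketch, not of the system described herein.

def side_of_barrier(p1, p2, point):
    """Which side of a 2D barrier (line p1 -> p2) a point lies on."""
    cross = ((p2[0] - p1[0]) * (point[1] - p1[1])
             - (p2[1] - p1[1]) * (point[0] - p1[0]))
    return "spiked" if cross > 0 else "non-spiked"

def crossing_allowed(p1, p2, old_pos, new_pos):
    # Passage is allowed from the non-spiked side to the spiked side only.
    return not (side_of_barrier(p1, p2, old_pos) == "spiked"
                and side_of_barrier(p1, p2, new_pos) == "non-spiked")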
Each barrier can be defined by the following parameters:
[Table of barrier parameters not reproduced in this text version.]
As it is commonly known, a bounding box is a rectilinear box that encapsulates and bounds a 3D object.
The solver of the digital world model may include subsolvers, which are the various engines of the solver that are used to run the simulation. Each subsolver manages a particular aspect of objects and of the simulation in order to optimize computations. As is commonly known in the art, each animated digital entity is associated with animation clips allowing the entity to be represented in movement in the digital world. According to a specific embodiment of the present invention, virtual sensors are assigned to and used by some entities to allow them to gather data information about image object elements or other entities within the digital world. Decision trees can also be used for processing the data information, resulting in selecting and triggering one of the animation cycles or selecting a new behaviour.
As it is believed to be well known in the art, an animation cycle, which will also be referred to herein as an "animation clip", is a unit of animation that typically can be repeated. For example, in order to get a character to walk, the animator creates a "walk cycle". This walk cycle makes the character walk one iteration. In order to have the character walk more, more iterations of the cycle are played. If the character speeds up or slows down over time, the cycle is "scaled" accordingly so that the cycle speed matches the character displacement and there is no slippage (i.e., it does not look like the character is slipping on the ground).
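As a simple numeric illustration of this scaling (the function and its parameter names are illustrative only):

def clip_playback_rate(character_speed, clip_distance, clip_frames):
    """Scale a walk cycle so the feet do not slip.

    If one iteration of the clip covers `clip_distance` units over
    `clip_frames` frames, the clip's natural speed is their ratio; the
    playback rate is the character's speed divided by that natural speed.
    """
    natural_speed = clip_distance / clip_frames   # distance units per frame
    return character_speed / natural_speed        # >1 plays faster, <1 slower

For example, a clip that advances the character 2 distance units over 24 frames has a natural speed of 1/12 unit per frame, so a character moving at 1/6 unit per frame would play the cycle at twice its authored rate.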
The autonomous image entities are tied to transform nodes of the animating engine (or platform). The nodes can be in the form of locators, cubes or models of animals, vehicles, etc. Since animation clips and transform nodes are believed to be well known in the art, they will not be described herein in more detail. Figure 10 shows a co-ordinate system for moving the IE and used by the solver.
Examples of attributes that can be associated to a movable digital image entity are briefly described in the following tables.
[Tables of entity attributes not reproduced in this text version.]
Of course, other attributes can also be used to characterize an IE. The concept of IE behaviour will now be described hereinbelow in more detail.
In addition to attributes, IEs from the present invention can also be characterized by behaviours. Along with the decision trees, the behaviours are the low-level thinking apparatus of an IE. They take raw input from the digital world using virtual sensors, process it, and change the IE's condition accordingly.
Behaviours can be categorized, for example, as locomotive behaviours allowing an IE to move. These locomotive behaviours generate steering forces that can affect any or all of an IE's direction of motion, speed, and orientation (i.e. which way the IE is facing), for example.
The following table includes examples of behaviours:
Simple behaviours:
> Avoid Barriers
> Avoid Obstacles
> Accelerate At
> Maintain Speed At
> Wander Around
> Orient To
Targeted behaviours:
> Seek To
> Flee From
> Look At
> Follow Path
> Seek To Via Network
Group behaviours:
> Align With
> Join With
> Separate From
> Flock With
A locomotive behaviour can be seen as a force that acts on the IE. This force is a behavioural force, and is analogous to a physical force (such as gravity), with the difference that the force seems to come from within the IE itself.
It is to be noted that behavioural forces can be additive. For example, an autonomous character may simultaneously have more than one active behaviour. The solver calculates the resulting motion of the character by combining the component behavioural forces in accordance with each behaviour's priority and intensity. The resultant behavioural force is then applied to the character, which may impose its own limits and constraints (specified by the character's turning radius attributes, etc.) on the final motion.
The behaviours allow creating a wide variety of actions for IEs. Behaviours can be divided into three subgroups: simple behaviours, targeted behaviours, and group behaviours.
Simple behaviours are behaviours that only involve a single IE.
Targeted behaviours apply to an IE and a target object, which can be any other object in the digital world (including groups of objects). Group behaviours allow IEs to act and move as a group, where the individual IEs included in the group will maintain approximately the same speed and orientation as each other.
Examples of behaviours will now be provided in each of the three categories. Of course, it is believed to be within the reach of a person skilled in the art to provide an IE with other behaviours.
Simple Behaviours
Avoid Barriers
The Avoid Barriers behaviour allows a character to avoid colliding with barriers.
Parameters specific to this behaviour may include, for example:
[Table of Avoid Barriers parameters not reproduced in this text version.]
Avoid Obstacles
The Avoid Obstacles behaviour allows an IE to avoid colliding with obstacles, which can be other autonomous and non-autonomous image entities. Parameters similar to those detailed for the Avoid Barriers behaviour can also be used to define this behaviour.
Accelerate At
The Accelerate At behaviour attempts to accelerate the IE by the specified amount. For example, if the amount is a negative value, the
IE will decelerate by the specified amount. The actual acceleration/deceleration may be limited by max acceleration and max deceleration attributes of the IE.
A parameter specific to this behaviour is the Acceleration, which represents the change in speed (distance units/frame2) that the IE will attempt to maintain.
Maintain Speed At
The Maintain Speed At behaviour attempts to set the target IE's speed to a specified value. This can be used to keep a character at rest or moving at a constant speed. If the desired speed is greater than the character's maximum speed attribute, then this behaviour will only attempt to maintain the character's speed equal to its maximum speed. Similarly, if the desired speed is less than the character's minimum speed attribute, this behaviour will attempt to maintain the character's speed equal to its minimum speed. A parameter defining this behaviour is the desired speed (distance units/frame) that the character will attempt to maintain.
Wander Around
The Wander Around behaviour applies random steering forces to the IE to ensure that it moves in a random fashion within the solver area. Parameters defining this behaviour may include, for example:
[Table of Wander Around parameters not reproduced in this text version.]
Orient To
The Orient To behaviour allows an IE to attempt to face a specific direction.
Parameters defining this behaviour include:
[Table of Orient To parameters not reproduced in this text version.]
Targeted Behaviours
The following behaviours apply to an IE (the source) and another object in the world (the target). Target objects can be any object in the world, such as autonomous or non-autonomous image entities, paths, groups and data. If the target is a group, then the behaviour applies only to the nearest member of the group at any one time. If the target is a datum, then it is assumed that this datum is of type ID and points to the true target of the behaviour. An ID is a value used to uniquely identify objects in the world. The concept of datum will be described in more detail hereinbelow. The following parameters are shared by all targeted behaviours:
[Table of shared targeted-behaviour parameters not reproduced in this text version.]
Seek To
The Seek To behaviour allows an IE to move towards another IE or towards a group of IEs. If an IE seeks a group, it will seek the nearest member of the group at any time. Of course, a Seek To behaviour may be programmed according to the navigation method 100.
Parameters defining this behaviour include, for example:
Look Ahead Time: this parameter instructs the IE to move towards a projected future point of the object being sought. Increasing the amount of look-ahead time does not necessarily make the Seek To behaviour any "smarter", since it simply makes a linear interpolation based on the target's current speed and position. [Remainder of the Seek To parameter table not reproduced in this text version.]
Flee From
The Flee From behaviour allows an IE to flee from another IE or from a group of IEs. When an IE flees from a group, it will flee from the nearest member of the group at any time. The Flee From behaviour has the same attributes as the Seek To behaviour; however, it produces the opposite steering force. Since the parameters defining the Flee From behaviour are very similar to those of the Seek To behaviour, they will not be described herein in more detail.
Look At
The Look At behaviour allows an IE to face another IE or a group of IEs. If the target of the behaviour is a group, the IE attempts to look at the nearest member of the group.
Strafe
The Strafe behaviour causes the IE to "orbit" its target, in other words to move in a direction perpendicular to its line of sight to the target. A probability parameter determines how likely it is at each frame that the IE will turn around and start orbiting in the other direction. This can be used, for instance, to make a moth orbit a flame.
For example, the effect of a guard walking sideways while looking or shooting at its target can be achieved by turning off the guard's Forward Motion Only property, and adding a Look At behaviour set towards the guard's target. It is to be noted that, to do this, Strafe is set to Affects direction only, whereas Look At is set to Affects orientation only.
A parameter specific to this behaviour may be, for example, the Probability, which may take a value between 0 and 1 and determines how often the IE changes direction of orbit. For example, at 24 frames per second, a value of 0.04 will trigger a random direction change on average every second, whereas a value of 0.01 will trigger a change on average every four seconds.
Go Between
The Go Between behaviour allows an IE to get in between a first target and a second target. For example, this behaviour can be used to enable a bodyguard character to protect a character from a group of enemies. The following parameter allows specifying this behaviour: a value between 0 and 1 that determines how close to the second target one wishes the entity to go.
Follow Path
The Follow Path behaviour allows an IE to follow a path. For example, this behaviour can be used to enable a racecar to move around a racetrack.
The following parameters define this behaviour:
[Table of Follow Path parameters not reproduced in this text version.]
Group Behaviours
Group behaviours allow grouping individual IEs so that they act as a group while still maintaining individuality. Examples include a school of fish, a flock of birds, etc.
The following parameters may be used to define group behaviours:
[Table of group-behaviour parameters not reproduced in this text version.]
The following includes brief descriptions of examples of group behaviours.
Align With
The Align With behaviour allows an IE to maintain the same orientation and speed as other members of a group. The IE may or may not be a member of the group.
Join With
The Join With behaviour allows an IE to stay close to members of a group. The IE may or may not be a member of the group.
An example of a parameter that can be used to define this behaviour is the Join Distance, which is similar to the "contact radius" in targeted behaviours. Each member of the group within the neighbourhood radius and outside the join distance is taken into account when calculating the steering force of the behaviour. The join distance is the external distance between the characters (i.e. the distance between the outsides of the bounding spheres of the characters). The value of this parameter determines the closeness that members of the group attempt to maintain.
Separate From
The Separate From behaviour allows an IE to keep a certain distance away from members of a group. For example, this can be used to prevent a school of fish from becoming too crowded. The IE to which the behaviour is applied may or may not be a member of the group. The Separation Distance is an example of a parameter that can be used to define this behaviour. Each member of the group within the neighbourhood radius and inside the separation distance will be taken into account when calculating the steering force of the behaviour. The separation distance is the external distance between the IEs (i.e. the distance between the outsides of the bounding spheres of the IEs). The value of this parameter determines the external separation distance that members of the group will attempt to maintain.
Flock With
This behaviour allows IEs to flock with each other. It combines the effects of the Align With, Join With, and Separate From behaviours.
The following table describes parameters that can be used to define this behaviour:
[Table of Flock With parameters not reproduced in this text version.]
Combining Behaviours
An IE can have multiple active behaviours associated thereto at any given time. Therefore, means can be provided to assign importance to a given behaviour.
A first means to achieve this is by assigning intensity and priority to a behaviour. The assigned intensity of a behaviour affects how strong the steering force generated by the behaviour will be. The higher the intensity, the greater the generated behavioural steering forces. The priority of a behaviour defines the precedence the behaviour should have over other behaviours. When a behaviour of a higher priority is activated, those of lower priority are effectively ignored. By assigning intensities and priorities to behaviours, the animator informs the solver which behaviours are more important in which situations, in order to produce a more realistic animation.
In order for the solver to calculate the new speed, position, and orientation of an IE, the solver calculates the desired motion of all behaviours, sums up these motions based on each behaviour's intensity, while ignoring those with lower priority, and enforces the maximum speed, acceleration, deceleration, and turning radii defined in the IE's attributes. Finally, braking due to turning may be taken into account. Indeed, based on the values of the character's Braking Softness and Brake Padding attributes, the character may slow down in order to turn.
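The combination rule just described can be sketched as follows; the tuple representation and the strict highest-priority-wins rule are assumptions of this illustration, and the clamping by the IE's speed and turning attributes is left out.

def combined_steering(behaviours):
    """Combine active behaviours into one steering force.

    `behaviours` is a list of (priority, intensity, force) tuples with 2D
    forces; only the highest-priority behaviours contribute, each force
    weighted by its intensity.
    """
    if not behaviours:
        return (0.0, 0.0)
    top = max(p for p, _, _ in behaviours)
    active = [(i, f) for p, i, f in behaviours if p == top]
    fx = sum(i * f[0] for i, f in active)
    fy = sum(i * f[1] for i, f in active)
    return (fx, fy)   # the IE's own limits would then be enforced on this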
Returning to Figure 7, the detailed description of the method 100 will now continue with reference to a simple digital world in the form of the inner rooms of a one-floor building 30, illustrated in Figure 11, the walls 32, represented by dark lines, being obstacles and the area enclosed within defining the reachable space 34 for the movable entity (not shown).
In step 104, a navigation mesh 35 is created for the movable digital entity (not shown). This is achieved by dividing or converting the reachable space 34 into convex cells 36, as illustrated in Figure 12 for the example of the one-floor building from Figure 11. The navigation mesh 35 can be created either manually or automatically using, for example, the collision layer or the rendering geometry. A collision layer is a geometric mesh that is a simplification of the rendering geometry for the purposes of physics collision detection/resolution. In this case, the navigation mesh is the subset of the collision layer upon which the movable entity could move (typically the floors and not the walls). Deriving the navigation mesh from the rendering geometry requires simplifying the geometry as much as possible and fusing the geometry into as seamless a mesh as possible (e.g., removal of intersecting polygons, etc.). In a manual creation, a 3D operator (typically a 3D artist) inspects the input geometry, fuses the polygons correctly and strips out the non-reachable space. It is to be noted that algorithms exist that can automatically handle this to a high degree. Convex polygons are used as cells in the creation of the navigation mesh 35 since any point within such a cell is directly reachable in a straight line from any other point in the cell.
An edge Exy connecting cells Cx and Cy in the navigation mesh will be considered "passable" if the entity can pass from cell Cx to Cy via Exy. In step 106, the starting and end points (not shown) are located and the corresponding cells that include each of those two points are identified. The expressions "starting point" and "end point" should not be construed herein in a limited way. Indeed, unless the digital movable entity is pixel size, the starting and end points will refer to a location or a zone in the virtual world. In step 108, a first verification is done as to whether the starting and end points are both located in the same cell. If this is the case, the method 100 proceeds with step 110, wherein the digital entity is moved from the starting to the end point before the method stops. As has been mentioned hereinabove, the use of convex cells allows the digital movable entity to move or to be moved in a straight line within a cell. However, in some cases, objects or other movable entities, for example, may force the movable entity to adopt a collision avoidance strategy. As will be described hereinbelow in more detail, a method 100 according to a more specific illustrative embodiment of the present invention may yield a movable digital entity with such an adaptive behaviour.
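Locating which convex cell contains a point (step 106) is straightforward precisely because the cells are convex. The following sketch is an illustration only; in particular, it assumes cell vertices listed in counter-clockwise order.

def point_in_convex_cell(cell, point):
    """True when `point` lies inside the convex cell given as a vertex list."""
    n = len(cell)
    for i in range(n):
        x1, y1 = cell[i]
        x2, y2 = cell[(i + 1) % n]
        # For a counter-clockwise cell, a negative cross product means the
        # point is outside this edge.
        if (x2 - x1) * (point[1] - y1) - (y2 - y1) * (point[0] - x1) < 0:
            return False
    return True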
Returning to the method 100, if the starting and end points are not located in the same cell, a sequence of cells among the convex cells is determined from the starting cell to the end cell so as to yield a path for the digital movable entity therebetween (step 112). Step 112 can be achieved first by constructing a connectivity graph 38, which is obtained by replacing each cell 36 by a node 40 and connecting each pair of passable cells (nodes) by a line 42. An example of a connectivity graph 38 is illustrated in Figure 13 for the example illustrated in Figures 11 and 12. Of course, such a graph 38 is purely virtual and is not actually graphically created.
Then, the resulting graph 38 is searched to find a path between the two nodes 40 representing respectively the starting and end points. Many known techniques can be used to solve such a graph searching problem so as to yield a path between these two corresponding nodes. The path, if it exists, is returned as corresponding cells.
For example, a breadth-first search (BFS) can be used to search the graph 38. The well-known BFS method provides the path of lowest cost (i.e., fewest cells, when all edges cost the same) but can be very expensive in terms of the number of nodes explored. A depth-first search (DFS), which can also be used, would be significantly less expensive in terms of nodes explored but does not guarantee the path of lowest cost. Heuristics can be placed on the DFS to try to improve path quality while maintaining computational efficiency.
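A BFS over the connectivity graph can be sketched as follows; the adjacency representation is an assumption of this illustration.

from collections import deque

def bfs_cell_sequence(passable, start_cell, end_cell):
    """Breadth-first search over the connectivity graph (step 112).

    `passable` maps a cell id to the ids of cells reachable through a
    passable edge. Returns the cell sequence, or None when no path exists.
    """
    parent = {start_cell: None}
    queue = deque([start_cell])
    while queue:
        cell = queue.popleft()
        if cell == end_cell:
            path = []
            while cell is not None:          # walk parents back to the start
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        for neighbour in passable[cell]:
            if neighbour not in parent:
                parent[neighbour] = cell
                queue.append(neighbour)
    return None                              # entity cannot reach the end point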
It is to be noted that there can be situations where there is no such sequence of cells; the method then stops at step 112 with the entity prevented from navigating to the end point. If there exists such a sequence of cells, intermediary points (not shown) are determined on the respective boundaries between consecutive cells in the sequence of cells (step 114). In other words, entry/exit points to and from each convex cell are selected.
For example, the centerpoint of each edge can be selected. Of course, other points can alternatively be chosen. The point on the cell edge (cell interface) can be chosen so as to reduce the distance traveled between cells and thus further smooth the path.
The digital entity is then moved from the starting point through each selected consecutive intermediary point and finally to the end point (step 116). Turning now to Figures 14 to 21, the method 100 will be illustrated with reference to another simple world 44 (see Figure 14) delimited by walls 46.
As illustrated in Figure 15, a navigation mesh 47 is created and the world 44 is divided into convex cells 48 identified from A to Z for reference purposes.
The starting and end points 50 and 52 are shown in
Figure 16. Following step 106 of the method 100, they are found in cells 'Z' and 'J' respectively. Since they are not located in the same cell, the method continues with step 112 with the determination of a sequence of cells between the points 50 and 52, yielding the following sequence as illustrated in Figure 16: 'Z', 'Y', 'O', 'X', 'W', 'V', 'U', 'T', 'H', and 'J'. After determining intermediary points on the respective boundaries between consecutive cells in the sequence of cells (step 114), the method continues with the entity moving along the determined path 54 as illustrated in Figure 17. As can be seen in Figure 17, the path can be smoothed to yield a more realistic trail.
Since a navigation method according to the present invention is used when an entity is to seek a target, the navigation mesh can be dynamically modified at run-time (step 104). For example, as illustrated in Figures 18 and 19, cells can be turned off via blind data to simulate roadblocks or congestion due to excess people or physics-driven debris, or turned on to simulate a door opening, congestion ending or a passage through a destroyed wall. In Figure 18, former cell 'H' has been turned off, resulting in a first alternative path 56, and in Figure 19, both former cells 'B' and 'H' have been turned off, resulting in a second alternative path 58.
The method 100 also allows taking into consideration the dimensions of the movable digital entity, and more specifically its transversal dimension relative to its moving direction, such as its width. The creation of the navigation mesh in step 104 may take into account such a characteristic of the movable entity so that the method 100 outputs a path that the entity can pass through. This is illustrated in Figures 20 and 21.
Figure 20 illustrates a path 60 obtained from the method 100 to move a digital entity from a starting point 62 to an end point 64. By doubling the width of the movable digital entity (illustrated by the sphere 66; see sphere 66' in Figure 21), the former cell 'L' is no longer part of the navigation mesh 68 and a new path 70 is provided by the method 100.
The method 100 is not limited to two-dimensional digital worlds. As illustrated in Figures 22 and 23, the method 100 can be used to determine the path from a starting point to an end point and then move a digital movable entity, such as an animated character, between those two points in a three-dimensional digital world. Figure 22 illustrates the output of the floor plan generator on a small city part of a simulator, a game or an animation. Figure 23 illustrates the navigation mesh resulting from step 104 of the method 100. As will now be described with reference to Figures 24-27, the method 100 can be adapted for outdoor and indoor environments.
An outdoor environment typically consists of buildings and open spaces such as market spaces, parks with trees, separated by sidewalks, roads and rivers. Correspondingly, a floor plan generator uses the exterior building walls to cut out holes in the navigation mesh.
In order to allow the movable entity to differentiate between different kinds of reachable space, blind data are used to characterize different parts of the reachable space. In some applications, blind data can then be associated with the cells of the navigation mesh to specify the differences in navigable surfaces (e.g., roads and sidewalks) and have the entities navigate accordingly (e.g., keep vehicles on the road and humans on the sidewalks).
Figure 24 illustrates a digital world in the form of a city street. Figure 25 illustrates a navigation mesh obtained from step 104 of method 100. Blind data are used to differentiate between roadway (white cells) and sidewalk (grey cells).
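By way of illustration, blind data can be thought of as tags attached to cells that an entity's path search consults when deciding which cells are passable for it; the cell ids, tag names and entity kinds below are invented for this sketch and are not part of the system described herein.

# Cells tagged with blind data (illustrative layout only).
navigation_mesh = {
    "cell_17": {"surface": "road"},
    "cell_18": {"surface": "sidewalk"},
}

def cell_passable_for(entity_kind, cell_id):
    """Keep vehicles on road cells and pedestrians on sidewalk cells."""
    allowed = {"vehicle": {"road"}, "human": {"sidewalk"}}
    return navigation_mesh[cell_id]["surface"] in allowed[entity_kind]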
As illustrated in Figure 26, an indoor environment is typically multi-layer and consists of floors divided into rooms via inner walls and doors, and connected by stairways. Correspondingly, a floor plan generator calculates the navigation mesh for each floor using the walls as barriers and then links the navigation surfaces by the cells that correspond to the stairways. This results in a 3D navigation mesh in which cells may be on top of one another. Path finding is then modified to determine which surface cell the digital movable entity is on rather than which cell the character is in. In brief, when a three-dimensional world is defined by a plurality of levels, a navigation mesh is created for each of the levels and two consecutive navigation meshes are interconnected by connecting cells.
Returning to Figure 7 and more specifically to step 116, it is recalled that within a convex cell, the method 100 allows the movable entity to move from a predetermined intermediary point to the next in a straight line. Therefore, the only things that can prevent the entity from going in a straight line may be dynamic obstacles. To cope with this situation, the movable entity may be provided with sensors. Before describing in more detail a dynamic collision avoidance method according to the present invention, the concept of sensors and other relevant concepts such as data information, commands, decisions and decision trees will first be described briefly. It is to be noted, however, that neither the collision avoidance method according to the present invention nor the navigation method 100 is to be construed as being limited to a specific embodiment of sensors or decision rules for the digital movable entity, etc.
Data Information
An entity's data information can be thought of as its internal memory. Each datum is an element of information stored in the entity's internal memory. For example, a datum could hold information such as whether or not an enemy is seen or who is the weakest ally. A Datum can also be used as a state variable for an IE.
Data are written to by an entity's Sensors, or by Commands within a Decision Tree. The Datum's value is used by the Decision Tree to activate and deactivate behaviours and animations, or to test the entity's state. Sensors and Decision trees will be described hereinbelow in more detail.
Sensors

Entities use sensors to gain information about the world. A sensor will store its sensed information in a datum belonging to the entity. A parameter can be used to trigger the activation of a sensor. If a sensor is turned off, it will be ignored by the solver and will not store information in any datum.
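The interplay between sensors and data described above may be pictured as follows; the Datum, Entity and VisionSensor classes are hypothetical illustrations, not the patent's implementation.

class Datum:
    def __init__(self, value=None):
        self.value = value  # element of the entity's internal memory

class Entity:
    def __init__(self):
        self.memory = {"enemy_seen": Datum(False)}

class VisionSensor:
    def __init__(self, active=True):
        self.active = active  # activation can be driven by a parameter

    def update(self, entity, enemy_visible):
        if not self.active:  # a turned-off sensor writes to no datum
            return
        entity.memory["enemy_seen"].value = enemy_visible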
An example of a sensor will now be described in more detail. Of course, it is believed to be within the reach of a person skilled in the art to provide additional or alternative sensors depending on the application.
Vision Sensor
The vision sensor is the eyes and ears of a character and allows the character to sense other physical objects or movable entities in the virtual world, which can be autonomous or non-autonomous characters, barriers, and waypoints, for example.
The following parameters can be used, for example, to define the vision sensor:
[Tables of vision sensor parameters, reproduced as images imgf000057_0001 to imgf000059_0001 in the original publication]
Commands, Decisions, and Decision Trees
Decision trees are used to process the data information gathered using sensors.
A command is used to activate a behaviour or an animation, or to modify an IE's internal memory. Commands are invoked by decisions. A single decision includes a conditional expression and a list of commands to invoke.
A decision tree includes a root decision node, which can own child decision nodes. Each of those children may in turn own children of their own, each of which may own more children, etc.
A parameter indicative of whether or not the decision tree is to be evaluated can be used in defining the decision tree. Whenever a command corresponds to activating an animation and a transition is defined between the current animation and the new one, that transition is first activated. Similarly, whenever a command corresponds to activating a behaviour, a blend time can be provided between the current animation and the new one. Moreover, whenever a command corresponds to activating a behaviour, the target is changed to the object specified by a datum.
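A compact sketch of such a decision tree follows; the Decision class and its evaluate() routine are assumptions consistent with the description above, a sketch rather than a definitive implementation.

class Decision:
    def __init__(self, condition, commands=None, children=None):
        self.condition = condition      # callable(entity) -> bool
        self.commands = commands or []  # callables invoked when condition holds
        self.children = children or []  # child decision nodes

    def evaluate(self, entity):
        # Run this node's commands when its condition holds, then
        # recurse into the child decisions.
        if self.condition(entity):
            for command in self.commands:
                command(entity)
            for child in self.children:
                child.evaluate(entity)

# Usage: a root decision that activates a flee behaviour when the
# "enemy_seen" datum (written by a sensor) is true.
root = Decision(
    condition=lambda e: e.memory["enemy_seen"].value,
    commands=[lambda e: print("activate 'flee' behaviour")],
)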
Dynamic collision avoidance
Substeps of steps 110 and 116 of the method 100 will now be described, which allow the entity to avoid hitting any moving obstacle (e.g., another character or a physics object) as it travels along its path as determined through steps 102-114.
Just as path finding can be seen as a two-step process of i) determining the path and then ii) following it, obstacle avoidance can also be seen as a two-step method 200, illustrated in Figure 28, of:
202 - assessing threats of potential collisions between the movable digital entity and a moving obstacle; and
204 - if there is such a threat, the movable digital entity responding accordingly by adopting a strategy to avoid the moving obstacle.
Collision threat assessment

In step 202, each entity 72 uses its sensor (see Figure 29) to detect which potential obstacles are in its vicinity and to decide which of those obstacles poses the greatest threat of collision. As illustrated in Figure 30, the sensor is configured as a field of view 74 around the entity 72, characterized by a depth of field defining how far the entity can see. The field of view of each movable entity can be defined as a pie (or a sphere, depending on the application) surrounding the entity.
If there were no obstacles, the entity's field of view 74 around itself would be completely free and the pie would be complete. However, as illustrated in Figure 30, each obstacle removes a piece of this pie. The size of the piece removed depends on the obstacle's size and its distance from the entity.
Unobstructed sections of the pie will be referred to herein as holes 76-78 (see Figure 31). The entity 72 searches for the best hole to continue through.
The best hole can be determined in several ways. The typical way, sketched below, is as follows:
• The holes 76-78 are sorted in order of increasing radial distance from the desired direction of the entity 72;
• The first hole that is large enough for the entity to pass through is chosen;
• If there is no such hole, the entity is completely blocked and will stop moving.
Depending on the chosen hole, the movable entity can move into that hole by either turning or reversing.
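One possible reading of this typical hole-selection rule follows; the Hole fields, the angular-width test and the angle_diff() helper are illustrative assumptions.

import math
from typing import NamedTuple

class Hole(NamedTuple):
    center_angle: float   # radians
    angular_width: float  # radians

def angle_diff(a, b):
    # Smallest absolute difference between two angles.
    return abs((a - b + math.pi) % (2 * math.pi) - math.pi)

def best_hole(holes, desired_angle, required_width):
    # Scan holes in order of increasing radial distance from the
    # desired direction; pick the first one wide enough to pass through.
    for hole in sorted(holes, key=lambda h: angle_diff(h.center_angle, desired_angle)):
        if hole.angular_width >= required_width:
            return hole
    return None  # no hole: the entity is completely blocked and stops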
Depending on the application of the simulator and on the characteristics of the entity, the obstacles can be characterized as having different avoidance importance. For example, a Humvee may consider avoiding other vehicles its highest priority, pedestrians a secondary priority and small animals such as dogs a very low priority, while a civilian pedestrian may consider vehicles its highest priority and other pedestrians a secondary priority. Of course, extreme situations such as a riot may dynamically change these priorities. Thus, different obstacle groups have different avoidance constraints. The most basic constraint is designating an obstacle as a threat. Accordingly, for each obstacle group, there is an associated awareness radius. As the character moves through its world, its sensor sweeps around it; every obstacle detected in the sweep that is within the awareness radius is flagged as a potential collision threat.
For each perceived threat, different kinds of collision threat states can be assigned (see the sketch following this list), including:
• stationary: the obstacle is not moving but is in the movable entity's way;
• incoming: the obstacle is coming towards the movable entity;
• outgoing: the obstacle is going away from the movable entity but the entity will rear-end it; and
• sideswiping: the obstacle is expected to hit the movable entity from the side.
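The following sketch shows one way such threat states could be derived from relative position and velocity; the dot-product test and the thresholds are assumptions for illustration, not the patented criteria.

import math

def classify_threat(rel_pos, obstacle_vel, entity_speed, eps=1e-3):
    # rel_pos: 2D vector from the entity to the obstacle.
    obstacle_speed = math.hypot(obstacle_vel[0], obstacle_vel[1])
    if obstacle_speed < eps:
        return "stationary"
    closing = obstacle_vel[0] * rel_pos[0] + obstacle_vel[1] * rel_pos[1]
    if closing < 0:
        return "incoming"   # the obstacle moves towards the entity
    if entity_speed > obstacle_speed:
        return "outgoing"   # moving away, but the entity would rear-end it
    return "sideswiping"    # crossing the entity's path from the side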
Avoidance strategies
For each collision threat, there are two kinds of avoidance strategies (step 204):
• circumvention: try avoiding the collision by going around the obstacle; and
• queuing: try avoiding the collision by slowing down (and potentially stopping) until the obstacle exits the collision path.
It is to be noted that both above-described avoidance strategies are heuristics and as such neither is guaranteed to work in all circumstances.
Figures 32A-32C, 33A-33C, 34A-34C, and 35A-35C illustrate examples of collision threats (Figures 32A, 33A, 34A, and 35A), each with both corresponding avoidance strategies. Figures 32A-35C show how circumvention is useful for going around stationary obstacles and for getting out of the way of incoming obstacles. However, it can cause a lot of jostling with outgoing obstacles. Nonetheless, circumvention has the advantage of minimizing gridlock and eventually finding a way around.
Queuing always works well on outgoing obstacles and incoming obstacles (provided they are circumventing). However, it has been found that queuing incoming obstacles and stationary obstacles can cause gridlock. In most cases, the most effective way to avoid gridlock is to use decision logic to change strategies on the fly.
Group-based movement modifiers (formations)
It is a well-observed fact that herd-based animals, including birds, fish and humans, move differently when in groups. This movement behaviour is commonly called flocking. With humans, group-based movement is very varied, from the loose groups of Somali militiamen running through a twisty city street, to rigid formations of soldiers marching on parade, to the coordinated cover-and-sweep routines of modern Delta Force operators. In each case, the group's "collective conscience" influences the movement of each navigating individual in the group.
The group movement modifier most readily identified with computer-graphics artificial intelligence is flocking, made famous by Reynolds [Reynolds, 1987], who modelled flocks of birds called boids as super particles. Reynolds identified three basic elements of flocking:
• alignment: the tendency of group members to harmonize their motion by aligning themselves in the same direction with the same speed;
• separation: the tendency of group members to maintain a certain amount of space between them; and
• joining: the tendency of group members to maintain a certain proximity with one another.
Consider now a group of friends walking down the street: slower members of the group will speed up to catch up to the others, while the fastest members (assuming they are polite) will slow down slightly to allow the stragglers to catch up. Depending on the cultural background of the group, more or less space is required or tolerated between the friends (cf. urban dwellers versus rural dwellers).
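A minimal boids-style sketch of the three rules, assuming 2D positions and velocities and arbitrary illustrative weights, is given below; it is not Reynolds' original formulation.

def flocking_steer(member, neighbours, w_align=1.0, w_sep=1.5, w_join=1.0):
    # member and each neighbour: {"pos": (x, y), "vel": (vx, vy)}.
    n = len(neighbours)
    if n == 0:
        return (0.0, 0.0)
    # alignment: steer towards the average neighbour velocity
    avg_vx = sum(m["vel"][0] for m in neighbours) / n
    avg_vy = sum(m["vel"][1] for m in neighbours) / n
    # joining: steer towards the neighbours' centre of mass
    cx = sum(m["pos"][0] for m in neighbours) / n
    cy = sum(m["pos"][1] for m in neighbours) / n
    # separation: steer away from the neighbours (simplified, unweighted by distance)
    sx = sum(member["pos"][0] - m["pos"][0] for m in neighbours)
    sy = sum(member["pos"][1] - m["pos"][1] for m in neighbours)
    return (w_align * (avg_vx - member["vel"][0]) + w_join * (cx - member["pos"][0]) + w_sep * sx,
            w_align * (avg_vy - member["vel"][1]) + w_join * (cy - member["pos"][1]) + w_sep * sy)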
Figures 36A-36F show the effects of different flocking strategies on a group of five characters following a leader character.
Such a group-based modifier can be used to yield a more natural effect when the method 100 is used to simultaneously move a group of entities in a digital world between starting and end points. Even though the method and system for moving a digital entity on-screen from starting to end points in a digital world have been described as being included in a specific illustrative embodiment of a 3D application, they can be included in any 3D application requiring the autonomous displacement on-screen of an image element. For example, a navigation method according to the present invention can be used to move a digital entity not characterized by behaviours such as those described hereinabove.
A navigation method according to the present invention can be used to navigate any number of entities and is not limited to any type or configuration of digital world. The present method and system can also be used to plan the displacement of a movable object or entity in a virtual world without further movement of the object or entity.
Although the present invention has been described hereinabove by way of preferred embodiments thereof, it can be modified without departing from the spirit and nature of the subject invention, as defined in the appended claims.

Claims

WHAT IS CLAIMED IS:
1. A method in a computer system for moving at least one digital entity on-screen from starting to end points in a digital world, comprising:
i) providing respective positions of obstacles for the at least one movable digital entity in the digital world; defining at least portions of said digital world without obstacles as reachable space for the at least one movable digital entity;
ii) creating a navigation mesh for the at least one movable digital entity by dividing said reachable space into at least one convex cell;
iii) locating a start cell and an end cell among said at least one convex cell including respectively the start and end points; and
iv) verifying whether said starting cell corresponds to said end cell;
if said starting cell corresponds to said end cell, then:
iv)a) moving the at least one movable digital entity from the starting point to the end point;
if said starting cell does not correspond to said end cell, then:
iv)b)i) determining a sequence of cells among said at least one convex cell from said starting cell to said end cell,
iv)b)ii) determining at least one intermediary point located on a respective boundary between consecutive cells in said sequence of cells, and
iv)b)iii) moving the at least one movable digital entity from the starting point to each consecutive said at least one intermediary point to said end point.
2. A method as recited in claim 1, wherein said obstacles are dynamic obstacles, yielding changes in said digital world; wherein in ii) said navigation mesh is dynamically created to cope with said changes in said digital world caused by said dynamic obstacles.
3. A method as recited in claim 1, wherein iv)b)i) determining a sequence of cells among said at least one convex cell from said starting cell to said end cell includes constructing a connectivity graph and searching said connectivity graph for a path from said starting cell to said end cell.
4. A method as recited in claim 2, wherein a breadth-first search (BFS) or depth-first search (DFS) is used in searching said connectivity graph.
5. A method as recited in claim 1, wherein in iv)b)ii) said at least one intermediary point located on a respective boundary between consecutive cells in said sequence of cells includes a centerpoint.
6. A method as recited in claim 1, wherein in iv)b)ii) said at least one intermediary point located on a respective boundary between consecutive cells in said sequence of cells is selected so as to improve the quality of motion.
7. A method as recited in claim 6, wherein said sequence of cells is selected so as to reduce a distance travelled within at least one of said consecutive cells.
8. A method as recited in claim 1, wherein said digital world further includes at least one moving obstacle within said at least one of said convex cell; wherein at least one of said iv)a) moving the at least one movable digital entity from the starting point to the end point and iv)b)iii) moving the at least one movable digital entity from the starting point to each consecutive said at least one intermediary point to said end point further includes a) assessing whether there is a collision threat between said at least one moving obstacle and said at least one digital movable entity, and b) said at least one digital movable entity adopting a strategy to avoid said at least one moving obstacle when there is a collision threat.
9. A method as recited in claim 8, wherein a) assessing whether there is a collision threat between said at least one moving obstacle and said at least one digital movable entity includes using a sensor to detect said at least one moving obstacle.
10. A method as recited in claim 9, wherein using said sensor yields a field of view around said at least one movable digital entity characterized by a depth of field.
11. A method as recited in claim 10, wherein using a sensor to detect said at least one moving obstacle yields a hole in said field of view; said hole characterizing said at least one moving obstacle.
12. A method as recited in claim 11, wherein said hole has at least one characteristic; said at least one digital movable entity prioritizes its strategy to avoid said at least one moving obstacle according to said at least one characteristic of said hole.
13. A method as recited in claim 12, wherein said at least one characteristic of said hole includes a radial distance and a width.
14. A method as recited in claim 13, wherein said strategy to avoid said at least one moving obstacle is circumvention or queuing.
15. A method as recited in claim 1, yielding a path between the starting and end points; the method further comprising smoothing said path so as to yield a more realistic path.
16. A method as recited in claim 15, wherein said at least one movable digital entity includes a plurality of movable digital entities all adopting a flocking strategy to follow said path.
17. A method as recited in claim 16, wherein said flocking strategy includes at least one of following, alignment, separation and joining.
18. A method as recited in claim 1, wherein said at least one movable digital entity includes a plurality of movable digital entities.
19. A method as recited in claim 1, wherein at least one of said at least one convex cell is characterized by a blind datum allowing differentiation between types of navigational cells.
20. A method as recited in claim 19, wherein the digital world further comprises a state-changing entity; said state-changing entity being operative on said blind datum.
21. A method as recited in claim 1, wherein said navigation mesh is created manually or automatically.
22. A method as recited in claim 1, wherein said digital world is defined by a rendering or physics geometry; said navigation mesh being created using said rendering or physics geometry.
23. A method as recited in claim 1, wherein said digital world is a two-dimensional world.
24. A method as recited in claim 1, wherein said digital world is a three-dimensional world.
25. A method as recited in claim 24, wherein said three-dimensional world is defined by a plurality of levels; in ii) a level navigation mesh is created for each of said plurality of levels; consecutive level navigation meshes being interconnected by connecting cells.
26. A method as recited in claim 1, wherein said at least one movable digital entity is autonomous or non-autonomous.
27. A system for moving a digital entity on-screen from starting to end points in a digital world, comprising:
a world database for storing information about the digital world and for providing respective positions of obstacles for the movable digital entity in the digital world;
a navigation module i) for defining at least portions of said digital world without obstacles as reachable space for said movable digital entity; ii) for creating a navigation mesh for said movable digital entity by dividing said reachable space into at least one convex cell; iii) for locating a start cell and an end cell among said at least one convex cell including respectively the start and end points; and iv) for verifying whether said starting cell corresponds to said end cell; and if said starting cell does not correspond to said end cell, for further v) determining a sequence of cells among said at least one convex cell from said starting cell to said end cell, and vi) determining at least one intermediary point located on a respective boundary between consecutive cells in said sequence of cells; and
a simulator coupled to said navigation module and to said world database for moving the digital entity on-screen, via an image generator coupled to said simulator, from the starting point to the end point if said starting cell corresponds to said end cell as verified in iv), or for moving the digital entity from the starting point to each consecutive said at least one intermediary point to said end point if said starting cell does not correspond to said end cell.
28. A system as recited in claim 27, wherein said world database is further for storing information about at least one moving obstacle within said at least one of said convex cell; said navigation module being further for a) assessing whether there is a collision threat between said at least one moving obstacle and said at least one digital movable entity, and b) said at least one digital movable entity adopting a strategy to avoid said at least one moving obstacle when there is a collision threat.
29. A system as recited in claim 27, wherein said navigation module is part of said simulator.
30. A system as recited in claim 27, further comprising a decision-making module coupled to the simulator for dynamic adjustment of the navigation unit following said digital world acting on said at least one digital movable entity.
PCT/CA2005/000426 2004-03-19 2005-03-18 Method and system for on-screen navigation of digital characters or the likes WO2005091198A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2007503169A JP2007529796A (en) 2004-03-19 2005-03-18 Method and system for on-screen navigation of digital characters or the like
EP05714659A EP1725966A4 (en) 2004-03-19 2005-03-18 Method and system for on-screen navigation of digital characters or the likes
CA002558971A CA2558971A1 (en) 2004-03-19 2005-03-18 Method and system for on-screen navigation of digital characters or the likes

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US55435704P 2004-03-19 2004-03-19
US60/554,357 2004-03-19

Publications (1)

Publication Number Publication Date
WO2005091198A1 true WO2005091198A1 (en) 2005-09-29

Family

ID=34993915

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2005/000426 WO2005091198A1 (en) 2004-03-19 2005-03-18 Method and system for on-screen navigation of digital characters or the likes

Country Status (4)

Country Link
EP (1) EP1725966A4 (en)
JP (1) JP2007529796A (en)
CA (1) CA2558971A1 (en)
WO (1) WO2005091198A1 (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4862373A (en) * 1987-05-13 1989-08-29 Texas Instruments Incorporated Method for providing a collision free path in a three-dimensional space

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
KUFFNER J.J. ET AL: "Goal-Directed Navigator for Animated Characters Using Real-Time Path Planning and Control.", PROCEEDINGS OF CAPTECH'98., November 1998 (1998-11-01) *
O'NEILL J. ET AL: "Efficient Navigator Mesh Implementation.", JOURNAL OF GAME DEVELOPERS., March 2004 (2004-03-01) *
PINTER M. ET AL: "Toward More Realistic Pathfinding.", GAMASUTRA., 14 April 2004 (2004-04-14), Retrieved from the Internet <URL:http://www.gamasutra.com/features/200103014/pinter_01.htm> *
REYNOLDS C.W. ET AL: "Steering Behaviors for Autonomous Characters.", PROCEEDINGS OF GAME DEVELOPERS CONFERENCE., 1999, pages 763 - 782 *
SALOMON ET AL: "Interactive Navigation in Complex Environments Using Path Planning.", PROCEEDINGS OF THE 2003 SYMPOSIUM OF INTERACTIVE 3D GRAPHICS., 27 April 2003 (2003-04-27) - 30 April 2003 (2003-04-30), pages 41 - 50 *
See also references of EP1725966A4 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8572204B2 (en) 2009-07-09 2013-10-29 Movix (Uk) Limited Data processing system using geographical locations
USRE45750E1 (en) 2009-07-09 2015-10-13 Movix (Uk) Limited Data processing system using geographical locations
US9164653B2 (en) 2013-03-15 2015-10-20 Inspace Technologies Limited Three-dimensional space for navigating objects connected in hierarchy
US11532139B1 (en) * 2020-06-07 2022-12-20 Apple Inc. Method and device for improved pathfinding
US11804012B1 (en) * 2020-06-07 2023-10-31 Apple Inc. Method and device for navigation mesh exploration

Also Published As

Publication number Publication date
CA2558971A1 (en) 2005-09-29
EP1725966A1 (en) 2006-11-29
JP2007529796A (en) 2007-10-25
EP1725966A4 (en) 2008-07-09

Similar Documents

Publication Publication Date Title
US20050071306A1 (en) Method and system for on-screen animation of digital objects or characters
Müller et al. Sim4cv: A photo-realistic simulator for computer vision applications
Reynolds Steering behaviors for autonomous characters
US20070188501A1 (en) Graphical computer simulation system and method
Ren et al. Group modeling: A unified velocity‐based approach
Sorokin et al. Learning to navigate sidewalks in outdoor environments
WO2005091198A1 (en) Method and system for on-screen navigation of digital characters or the likes
CN114470775A (en) Object processing method, device, equipment and storage medium in virtual scene
Stone et al. Robocup-2000: The fourth robotic soccer world championships
Thompson Scale, spectacle and movement: Massive software and digital special effects in the lord of the rings
Thalmann et al. Geometric issues in reconstruction of virtual heritage involving large populations
Patel et al. Agent tools, techniques and methods for macro and microscopic simulation
Tomlinson The long and short of steering in computer games
Rojas et al. Safe navigation of pedestrians in social groups in a virtual urban environment
Boes et al. Intuitive method for pedestrians in virtual environments
Hasegawa et al. How to build a fantasy world based on reality: A case study of final fantasy xv: Part ii
Simola Bergsten et al. Flocking Behaviour as Demonstrated in a Tower-Defense Game
Ho et al. Fame, soft flock formation control for collective behavior studies and rapid games development
Thalmann et al. Behavioral animation of crowds
Schweizer Grand Theft Antecedents
Cozic Automated cinematography for games.
Moudhgalya Language Conditioned Self-Driving Cars Using Environmental Object Descriptions For Controlling Cars
Lee Using Global Objectives to Control Behaviors in Crowds
Estradera Benedicto Design and development of top down 2D action-adventure video game with hack & slash and bullet hell elements
Metoyer Building behaviors with examples

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2558971

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 2007503169

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 2005714659

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

WWP Wipo information: published in national office

Ref document number: 2005714659

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 2005714659

Country of ref document: EP