WO2003054804A2 - Virtual reality files for complex environments - Google Patents

Virtual reality files for complex environments

Info

Publication number
WO2003054804A2
WO2003054804A2 (PCT/IB2002/005579)
Authority
WO
WIPO (PCT)
Prior art keywords
elements
tessellation
dynamic tessellation
dynamic
triangles
Prior art date
Application number
PCT/IB2002/005579
Other languages
French (fr)
Other versions
WO2003054804A3 (en)
Inventor
Alain Yves Nestor Hubrecht
Tom Nuydens
Flavien Brebion
Original Assignee
Vrcontext S.A./N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US10/023,996 (US6809738B2)
Priority claimed from US10/023,999 (US6791549B2)
Priority claimed from US10/023,818 (US20030117398A1)
Priority claimed from US10/023,997 (US20030117397A1)
Application filed by Vrcontext S.A./N.V.
Priority to AU2002353394A (AU2002353394A1)
Publication of WO2003054804A2
Publication of WO2003054804A3

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 - Finite element generation, e.g. wire-frame surface description, tesselation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/10 - Geometric effects
    • G06T15/40 - Hidden part removal
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 - Indexing scheme for image generation or computer graphics
    • G06T2210/12 - Bounding box

Definitions

  • the present invention generally relates to the fields of computer graphics and virtual reality. More particularly, the present invention relates to generating virtual reality (VR) files, simulating frames, performing memory management, and rendering frames of complex virtual environments, including complex virtual environments based on massive models.
  • 2D and 3D modeling techniques permit engineering models of mechanical parts and other physical objects to be computer generated for various purposes, such as research and development.
  • complex environments involving offshore platforms, industrial plants, building complexes and other facilities can be modeled using conventional, computer-aided modeling methods.
  • 3D models of such environments are often used in the architecture, engineering and construction (AEC) sectors and for plant design.
  • Elements or objects in a complex environment may be represented in a 3D model through polygons.
  • Polygons, such as triangles and quadrilaterals, are closed figures made by joining line segments, wherein each line segment intersects exactly two other line segments.
  • polygons may be combined to represent the surface(s) of boxes, stairs, windows, walls, floors, pipes, beams and other objects in the modeled environment.
  • Data defining the attributes of each object, including lighting and texture, may also be included in a 3D model.
  • graphics systems may be employed. Such graphics systems may be provided with graphics cards that are capable of processing 3D model data to generate and display the modeled environment. While triangle elements can be directly handled and rendered by most graphics cards, other polygon elements in a 3D model (such as four- or five-sided polygons) require additional processing before they can be rendered. Such processing may include the triangulation of complex polygon elements into triangles so that they can be directly handled by the graphics hardware and rendered on a display terminal.
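As an illustration of that triangulation step, the following is a minimal sketch; the Vertex and Triangle types and the assumption that the polygon is convex are our own, since the patent does not prescribe a particular triangulation algorithm. A quadrilateral, for example, yields two triangles that share its first vertex.

```cpp
#include <cstddef>
#include <vector>

struct Vertex { float x, y, z; };
struct Triangle { Vertex a, b, c; };

// Splits a convex polygon (n >= 3 vertices) into n - 2 triangles that all share the
// first vertex, which is a form that graphics hardware can consume directly.
std::vector<Triangle> triangulateConvexPolygon(const std::vector<Vertex>& poly) {
    std::vector<Triangle> out;
    for (std::size_t i = 1; i + 1 < poly.size(); ++i) {
        out.push_back({poly[0], poly[i], poly[i + 1]});
    }
    return out;
}
```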
  • virtual reality refers to the use of computer-based simulation of real or imaginary environments that permit a user to perform operations within the simulated environment.
  • virtual reality software has been created to provide real-time animation within simulated environments for users.
  • 3D software games permit users to navigate and/or perform operations (such as lifting, walking, running, jumping, climbing, etc.) in a manner that resembles the real world.
  • the effects of gravity and/or other real-world phenomena may be simulated in virtual reality applications for 3D game users.
  • some applications eliminate real- world phenomena (such as the effect of gravity and/or collisions) to provide simple navigation features in a complex virtual environment.
  • the user is often left in a confused state since the user is permitted to navigate through walls, floors and other objects represented in the modeled environment.
  • the viewpoint of the user may be positioned within walls or between floors, making it difficult for the user to orientate himself/herself with respect to the rendered frame.
  • a method for generating virtual reality (VR) file(s).
  • the method comprises: providing a three-dimensional (3D) model of the complex environment, the 3D model comprising data defining elements represented in the complex environment; and analyzing the data of the 3D model to generate VR file(s) including a collision detection hierarchy.
  • the collision detection hierarchy may be generated by: filtering the data of the 3D model to identify dynamic tessellation elements and non-dynamic tessellation elements; generating a plurality of triangles representing the identified dynamic tessellation elements and non-dynamic tessellation elements; performing a grid subdivision on the plurality of triangles to define a plurality of cubes; creating a bounding box tree for each cube, each bounding box tree comprising a plurality of bounding boxes; and allocating the plurality of triangles to the bounding boxes.
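A hedged sketch of the grid-subdivision step in this hierarchy is shown below. The Tri, Aabb and GridCell types and the choice to bin triangles by centroid are illustrative assumptions rather than the patent's exact data layout, and the per-cube bounding box tree is reduced here to a single bounding box per cube from which a finer tree could then be built.

```cpp
#include <algorithm>
#include <cmath>
#include <map>
#include <tuple>
#include <vector>

struct Vec3 { float x, y, z; };
struct Tri  { Vec3 v[3]; };

struct Aabb {
    Vec3 min{+1e30f, +1e30f, +1e30f};
    Vec3 max{-1e30f, -1e30f, -1e30f};
    void grow(const Vec3& p) {
        min.x = std::min(min.x, p.x); min.y = std::min(min.y, p.y); min.z = std::min(min.z, p.z);
        max.x = std::max(max.x, p.x); max.y = std::max(max.y, p.y); max.z = std::max(max.z, p.z);
    }
};

struct GridKey {
    int x, y, z;
    bool operator<(const GridKey& o) const { return std::tie(x, y, z) < std::tie(o.x, o.y, o.z); }
};

// One cube of the grid: the triangles allocated to it plus their overall bounding box.
struct GridCell { Aabb box; std::vector<const Tri*> tris; };

std::map<GridKey, GridCell> buildGrid(const std::vector<Tri>& tris, float cubeSize) {
    std::map<GridKey, GridCell> grid;
    for (const Tri& t : tris) {
        // Bin each triangle by its centroid into a cube of side `cubeSize`.
        Vec3 c{(t.v[0].x + t.v[1].x + t.v[2].x) / 3.0f,
               (t.v[0].y + t.v[1].y + t.v[2].y) / 3.0f,
               (t.v[0].z + t.v[1].z + t.v[2].z) / 3.0f};
        GridKey key{static_cast<int>(std::floor(c.x / cubeSize)),
                    static_cast<int>(std::floor(c.y / cubeSize)),
                    static_cast<int>(std::floor(c.z / cubeSize))};
        GridCell& cell = grid[key];
        cell.tris.push_back(&t);
        for (const Vec3& p : t.v) cell.box.grow(p);
    }
    return grid;
}
```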
  • a method for generating virtual reality (VR) file(s), wherein the method comprises: providing a three-dimensional (3D) model of the complex environment, the 3D model comprising data defining elements represented in the complex environment; and analyzing the data of the 3D model to generate VR file(s) including an octree file, the octree file comprising data defining an octree of spatially partitioned cells, each cell of the octree containing one or more elements of the complex environment.
  • the octree file may be generated by: filtering the data of the 3D model to identify dynamic tessellation elements and non-dynamic tessellation elements; generating a plurality of triangles representing the identified non-dynamic tessellation elements; and defining cells of the octree by spatially partitioning the identified dynamic tessellation elements and triangles representing the non-dynamic tessellation elements.
  • a computer program product for generating virtual reality (VR) file(s).
  • the computer program product may comprise a computer readable medium embodying a computer program, wherein the computer program comprises instructions that are executable by a system to perform a method for generating VR file(s) based on data from a massive model representing the complex environment.
  • Such a method may comprise: analyzing the data of the massive model to generate VR file(s), the VR file(s) including data defining an octree and data defining a collision detection hierarchy, the octree comprising a plurality of cells containing elements of the complex environment, and the collision detection hierarchy comprising a plurality of bounding boxes containing elements of the complex environment.
  • the octree may be generated by: filtering the data of the massive model to identify dynamic tessellation elements and non-dynamic tessellation elements; generating a plurality of triangles representing the identified non-dynamic tessellation elements; and defining cells of the octree by spatially partitioning the identified dynamic tessellation elements and triangles representing the non-dynamic tessellation elements.
  • a system for generating interactive displays of a complex virtual environment.
  • a converter component may be provided for generating virtual reality (VR) file(s).
  • the converter component may comprise: means for accessing a massive model of the complex environment, the massive model including data defining elements represented in the complex environment; and means for generating VR file(s) based on the massive model data of the complex environment, the VR file(s) including an octree file and a collision detection file.
  • the octree file may include data defining an octree of spatially partitioned cells and/or data defining a plurality of occluder candidates.
  • the collision detection file may include a collision detection hierarchy of bounding boxes that contain one or more elements of the complex virtual environment.
  • the octree file may comprise data defining an octree of spatially partitioned cells, wherein each cell of the octree contains one or more elements of the complex virtual environment.
  • the collision detection file may comprise a collision detection hierarchy of bounding boxes that contain one or more elements of the complex virtual environment.
  • a method for performing simulation to facilitate the rendering of frames of a complex virtual environment.
  • the method comprises: detecting, for a next frame to be rendered, input from a user; determining, based on the user input, desired movements from a current position to a desired position within the complex virtual environment; and performing, using a collision detection hierarchy of bounding boxes that contain one or more elements of the complex environment, a predictive collision detection operation to determine if a collision will occur with an element represented in the complex environment if the desired movements are performed between the current position and the desired position. If a collision is detected, an adjusted position may be calculated based on the desired movements in the complex environment and a viewer position for rendering the next frame may be set to the adjusted position. Further, if a collision is not detected, a viewer position for rendering the next frame may be set to the desired position.
  • the method may comprise: detecting, for a next frame to be rendered, input from a user; determining, based on the user input, desired movements from a current position to a desired position within the complex virtual environment; predicting, using a collision detection hierarchy of bounding boxes that contain one or more elements of the complex environment, whether a collision will occur with an element in the complex environment based on the desired movements between the current position and the desired position; and calculating an adjusted position for rendering the next frame in response to the detection of a collision.
  • methods consistent with embodiments of the invention may calculate a path between the current position and the desired position based on the user input. Moreover, to predict whether a collision will occur, methods of the invention may perform a proximity query based on the calculated path between the current position and the desired position to identify intersecting bounding boxes, and analyze elements contained in each identified bounding box to determine if a collision will occur based on the desired movements.
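To make the proximity query concrete, here is a hedged sketch under our own assumptions: the moving viewer or avatar is treated as a sphere swept along the path from the current to the desired position, and each bounding box is tested conservatively by inflating it by the sphere radius and running a standard segment/box slab test. This is only an approximation of the sweep tests referenced in Appendices A and B, not a reproduction of them.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
struct Aabb { Vec3 min, max; };

// Standard slab test: does the segment p0 -> p1 pass through `box`?
bool segmentIntersectsAabb(Vec3 p0, Vec3 p1, const Aabb& box) {
    float tMin = 0.0f, tMax = 1.0f;
    const float p0v[3] = {p0.x, p0.y, p0.z}, p1v[3] = {p1.x, p1.y, p1.z};
    const float bMin[3] = {box.min.x, box.min.y, box.min.z};
    const float bMax[3] = {box.max.x, box.max.y, box.max.z};
    for (int i = 0; i < 3; ++i) {
        float d = p1v[i] - p0v[i];
        if (d == 0.0f) {
            if (p0v[i] < bMin[i] || p0v[i] > bMax[i]) return false;
        } else {
            float t1 = (bMin[i] - p0v[i]) / d, t2 = (bMax[i] - p0v[i]) / d;
            if (t1 > t2) std::swap(t1, t2);
            tMin = std::max(tMin, t1);
            tMax = std::min(tMax, t2);
            if (tMin > tMax) return false;
        }
    }
    return true;
}

// Returns the bounding boxes whose contents must then be checked triangle by triangle.
std::vector<const Aabb*> proximityQuery(Vec3 current, Vec3 desired, float radius,
                                        const std::vector<Aabb>& boxes) {
    std::vector<const Aabb*> hits;
    for (const Aabb& b : boxes) {
        // Conservative sweep: inflate the box by the sphere radius, then test the path segment.
        Aabb inflated{{b.min.x - radius, b.min.y - radius, b.min.z - radius},
                      {b.max.x + radius, b.max.y + radius, b.max.z + radius}};
        if (segmentIntersectsAabb(current, desired, inflated)) hits.push_back(&b);
    }
    return hits;
}
```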
  • a computer program product for generating interactive displays of a complex virtual environment.
  • the computer program product may comprise a computer readable medium embodying a computer program, wherein the computer program includes instructions that are executable by a system to perform a simulation method to facilitate the rendering of frames of the complex environment.
  • Such a method may include: determining, based on user input for a frame to be rendered, desired movements from a current position to a desired position within the complex virtual environment; performing, using a collision detection hierarchy of bounding boxes that contain one or more elements of the complex environment, a predictive collision detection operation to determine if a collision will occur with an element represented in the complex environment if the desired movements are performed between the current position and the desired position; and in response to performing the collision detection operation, setting a viewer position for the frame to be rendered.
  • systems may be provided for generating interactive displays of a complex virtual environment.
  • a simulation component may be provided for performing simulations based on virtual reality (VR) file(s).
  • the VR file(s) may include data defining a collision detection hierarchy of bounding boxes that contain elements represented in the complex environment.
  • the simulation component may comprise: means for determining, based on input from a user, desired movements from a current position to a desired position within the complex virtual environment; means for detecting, using the collision detection hierarchy of bounding boxes, whether a collision will occur with an element represented in the complex virtual environment based on the desired movements from the current position to the desired position; and means for calculating an adjusted position when it is determined that a collision will occur based on the desired movements indicated by the user input.
  • Other embodiments of the invention may be provided, such as a method for simulating movement of an avatar in a complex virtual environment.
  • Such methods may comprise: determining, based on input from a user, desired movements of the avatar from a current position to a desired position within the complex virtual environment; and detecting, using a collision detection hierarchy of bounding boxes, whether a collision will occur with an element represented in the complex virtual environment if the avatar is moved from the current position to the desired position.
  • methods consistent with embodiments of the invention may perform a proximity query based on the desired movements of the avatar to identify bounding boxes that intersect with the avatar when the avatar is moved to the desired position. Such methods may also analyze elements contained in each identified bounding box to determine if a collision will occur with the avatar.
  • a method may be provided for performing memory management operations, wherein the method comprises: allocating video memory in graphics hardware of the VR system and allocating accelerated graphics port (AGP) memory as part of system memory of the VR system; pre-loading, prior to simulating and rendering frames of the complex environment, all non-dynamic tessellation elements defined in the VR file(s) into memory, wherein pre-loading includes pre-loading non-dynamic tessellation elements into at least one of the video memory and the AGP memory; determining, after pre-loading all non-dynamic tessellation elements, whether memory space is available in at least one of the video memory and the AGP memory; and if memory space is determined to be available, pre-loading dynamic tessellation elements defined in the VR file(s) into the available memory space of the video memory or the AGP memory.
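A simple sketch of this pre-loading policy follows, under assumed types and byte budgets (the patent gives no concrete sizes): non-dynamic tessellation elements are placed first, preferring video memory and spilling into AGP memory, and dynamic tessellation elements are only pre-loaded into whatever fast memory remains.

```cpp
#include <cstddef>
#include <vector>

enum class Pool { Video, Agp, SystemOnly };

struct Element      { std::size_t vertexBytes; bool dynamicTessellation; };
struct MemoryBudget { std::size_t videoFree;   std::size_t agpFree; };

// Places one element, preferring video memory and spilling into AGP memory.
Pool place(const Element& e, MemoryBudget& budget) {
    if (e.vertexBytes <= budget.videoFree) { budget.videoFree -= e.vertexBytes; return Pool::Video; }
    if (e.vertexBytes <= budget.agpFree)   { budget.agpFree   -= e.vertexBytes; return Pool::Agp; }
    return Pool::SystemOnly;               // falls back to ordinary system memory
}

void preload(const std::vector<Element>& elements, MemoryBudget& budget) {
    // Pass 1: every non-dynamic tessellation element is pre-loaded first.
    for (const Element& e : elements)
        if (!e.dynamicTessellation) place(e, budget);
    // Pass 2: dynamic tessellation elements, only while fast memory remains.
    for (const Element& e : elements)
        if (e.dynamicTessellation && (budget.videoFree > 0 || budget.agpFree > 0)) place(e, budget);
}
```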
  • methods may be provided for performing memory management operations in a virtual reality (VR) system, wherein the VR system is adapted to provide displays of a complex virtual environment based on VR file(s).
  • Such methods may comprise: buffering, in memory of the VR system, vertex data for rendered dynamic tessellation elements; determining, for a dynamic tessellation element to be rendered, whether the dynamic tessellation element was rendered in a previous frame at a tessellation level that is the same as a tessellation level for the element in a frame to be rendered; and when it is determined that the dynamic tessellation element was rendered in a previous frame at the same tessellation level, using the buffered vertex data for the previous rendering of the dynamic tessellation element to facilitate the display of the frame to be rendered.
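The buffering scheme described above can be sketched as a small cache keyed by element and tagged with the tessellation level its vertices were built for; retessellate() below is a hypothetical placeholder for the real cone, torus and surface-of-revolution routines, and all names are assumptions.

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct CachedTessellation {
    int level = -1;                 // tessellation level the cached vertices were built at
    std::vector<float> vertices;    // interleaved vertex data from the previous render
};

// Cache keyed by element id; kept global here only to keep the sketch short.
std::unordered_map<std::uint32_t, CachedTessellation> g_cache;

// Hypothetical placeholder for the real retessellation routines: it only sizes a
// buffer proportional to the requested (positive) level.
std::vector<float> retessellate(std::uint32_t /*elementId*/, int level) {
    return std::vector<float>(static_cast<std::size_t>(level) * 9, 0.0f);
}

const std::vector<float>& vertexDataFor(std::uint32_t elementId, int level) {
    CachedTessellation& entry = g_cache[elementId];
    if (entry.level != level) {          // first use, or the level changed: rebuild
        entry.vertices = retessellate(elementId, level);
        entry.level = level;
    }
    return entry.vertices;               // same level as the previous frame: reuse as-is
}
```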
  • a computer program product may be provided for generating interactive displays of a complex virtual environment.
  • the computer program product may include a computer readable medium embodying a computer program, wherein the computer program comprises instructions that are executable by a system to perform a method for performing memory management operations.
  • such methods may comprise: allocating video memory in graphics hardware of the VR system and allocating accelerated graphics port (AGP) memory as part of system memory of the VR system; estimating, for a dynamic tessellation element to be rendered, memory space requirements for storing rendering data for the dynamic tessellation element, the rendering data comprising vertex data defining one or more triangles representing the dynamic tessellation element; determining whether there is sufficient memory space available in at least one of the video memory and the AGP memory for storing the rendering data based on the estimated memory space requirements; and using the available memory space in at least one of the video memory and the AGP memory for dynamically tessellating the element when it is determined that there is sufficient memory space for the rendering data.
  • systems may be provided for performing memory management operations.
  • Such systems may comprise: means for buffering, in memory of a virtual reality (VR) system, vertex data for rendered dynamic tessellation elements; means for determining, for a dynamic tessellation element to be rendered, whether the dynamic tessellation element was rendered in a previous frame at a tessellation level that is the same as a tessellation level for the element in a frame to be rendered; and means for using the buffered vertex data for the previous rendering of the dynamic tessellation element to facilitate the display of the frame to be rendered when it is determined that the dynamic tessellation element was rendered in a previous frame at the same tessellation level.
  • a method for rendering frames of a complex virtual environment based on an octree file, the octree file including data defining an octree for the complex environment that includes spatially partitioned cells.
  • Such methods may comprise: determining, using the octree for the complex virtual environment, cells that are visible based on a viewer position for a frame to be rendered; identifying, in each of the visible cells, dynamic tessellation elements; and dynamically tessellating the identified dynamic tessellation elements in the visible cells.
  • such methods may determine, for each identified dynamic tessellation element, a tessellation level, and then render each identified dynamic tessellation element according to the determined tessellation level.
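One plausible way to pick such a tessellation level is from the distance between the viewer and the element, as sketched below; the distance thresholds and level range are invented for illustration and are not values from the patent.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float distanceBetween(const Vec3& a, const Vec3& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Maps viewer distance to a subdivision count in [minLevel, maxLevel]: nearby curved
// elements are tessellated finely, distant ones coarsely.
int tessellationLevel(const Vec3& viewer, const Vec3& elementCenter,
                      float nearDist = 5.0f, float farDist = 200.0f,
                      int minLevel = 4, int maxLevel = 32) {
    float d = std::clamp(distanceBetween(viewer, elementCenter), nearDist, farDist);
    float t = 1.0f - (d - nearDist) / (farDist - nearDist);   // 1 when near, 0 when far
    return minLevel + static_cast<int>(t * static_cast<float>(maxLevel - minLevel));
}
```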
  • a method for performing rendering operations to facilitate interactive displays of a complex virtual environment.
  • the rendering operations may be performed as part of a simulation loop during run-time operations.
  • the method may include: determining, using an octree of cells for the complex virtual environment, cells that are visible based on a viewer position for a frame to be rendered, the cells of the octree containing elements of the complex environment; identifying, in each of the visible cells, dynamic tessellation elements; and dynamically tessellating the identified dynamic tessellation elements in the visible cells.
  • the method may further include: determining a view frustum for the frame to be rendered based on the viewer position; traversing the octree to identify non-visible cells that are outside of the view frustum; and culling the identified non-visible cells of the octree.
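A hedged sketch of that traversal, with assumed OctreeNode and Plane structures: a cell whose bounding box lies entirely outside any of the six frustum planes is culled together with its children; otherwise the traversal descends to its children.

```cpp
#include <vector>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };              // a point p is inside when n·p + d >= 0
struct Aabb  { Vec3 min, max; };

struct OctreeNode {
    Aabb box;
    std::vector<OctreeNode> children;           // empty for leaf cells
};

// True when the box lies entirely on the outside of the plane; uses the box corner
// most aligned with the plane normal (the "positive vertex").
bool outsidePlane(const Aabb& b, const Plane& p) {
    Vec3 v{p.n.x >= 0 ? b.max.x : b.min.x,
           p.n.y >= 0 ? b.max.y : b.min.y,
           p.n.z >= 0 ? b.max.z : b.min.z};
    return p.n.x * v.x + p.n.y * v.y + p.n.z * v.z + p.d < 0.0f;
}

void collectVisible(const OctreeNode& node, const Plane frustum[6],
                    std::vector<const OctreeNode*>& visible) {
    for (int i = 0; i < 6; ++i)
        if (outsidePlane(node.box, frustum[i])) return;     // whole subtree is culled
    if (node.children.empty()) { visible.push_back(&node); return; }
    for (const OctreeNode& child : node.children) collectVisible(child, frustum, visible);
}
```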
  • a computer program product for generating interactive displays of a complex virtual environment.
  • the computer program product comprises a computer readable medium embodying a computer program, wherein the computer program comprises instructions that are executable by a system to perform a rendering method to facilitate the display of frames of the complex environment.
  • the method comprises: determining, using an octree of cells that spatially partition the complex virtual environment, elements of the complex environment that are visible based on a viewer position for a frame to be rendered, the elements of the complex environment including dynamic tessellation elements and non-dynamic tessellation elements; and for each visible element that is a dynamic tessellation element, dynamically tessellating the element.
  • Elements may be dynamically tessellated by: determining a tessellation level for the visible dynamic tessellation element based on the viewer position for the frame to be rendered; and generating vertex data for rendering the visible dynamic tessellation element based on the determined tessellation level.
  • a graphics system may be provided for rendering frames based on virtual reality (VR) file(s).
  • the VR file(s) may include data defining a list of occluder candidates and/or data defining an octree of cells that contain one or more elements of the complex environment.
  • the graphics system may comprise: means for determining, using the octree for the complex virtual environment, cells that are not visible based on a viewer position for a frame to be rendered; means for culling all cells of the octree that are determined not to be visible; means for analyzing the list of occluder candidates to determine cells of the octree that are occluded based on the viewer position; means for culling all cells of the octree that are determined to be occluded; and means for rendering elements contained in the visible, non-culled cells of the octree to facilitate the display of the frame to a user.
  • Other embodiments of the invention may be provided, such as methods for rendering frames of a complex virtual environment based on an octree of cells that spatially partition the complex virtual environment.
  • Such methods may comprise: determining, using the octree for the complex virtual environment, cells that are not visible based on a viewer position for the frame to be rendered; culling all cells of the octree that are determined not to be visible; analyzing a list of occluder candidates to determine cells of the octree that are occluded based on the viewer position; culling all cells of the octree that are determined to be occluded; and rendering elements contained in visible cells of the octree to display the frame to a user, wherein the elements contained in the visible cells include dynamic tessellation elements and non-dynamic tessellation elements.
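The patent's own occluder test (see FIGS. 20 through 22) works with perimeter points and angles; purely as a generic stand-in, the sketch below implements a conventional shadow-frustum test under our own simplifying assumption that an occluder candidate can be approximated by a planar convex quadrilateral. A cell is culled when its bounding box lies entirely in the volume behind the occluder as seen from the viewer.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Aabb  { Vec3 min, max; };
struct Plane { Vec3 n; float d; };              // a point p is "inside" when dot(n, p) + d >= 0

// An occluder candidate approximated here as a planar convex quadrilateral.
struct Occluder { Vec3 corner[4]; };

static bool boxFullyInside(const Aabb& b, const std::vector<Plane>& planes) {
    for (const Plane& p : planes) {
        // The box corner least aligned with the normal must still be on the inside.
        Vec3 v{p.n.x >= 0 ? b.min.x : b.max.x,
               p.n.y >= 0 ? b.min.y : b.max.y,
               p.n.z >= 0 ? b.min.z : b.max.z};
        if (dot(p.n, v) + p.d < 0.0f) return false;
    }
    return true;
}

// True when the cell's bounding box lies entirely in the shadow volume cast by the
// occluder as seen from `viewer`, so the cell can be culled for this frame.
bool cellOccluded(const Aabb& cell, const Occluder& occ, Vec3 viewer) {
    Vec3 centroid{(occ.corner[0].x + occ.corner[1].x + occ.corner[2].x + occ.corner[3].x) / 4,
                  (occ.corner[0].y + occ.corner[1].y + occ.corner[2].y + occ.corner[3].y) / 4,
                  (occ.corner[0].z + occ.corner[1].z + occ.corner[2].z + occ.corner[3].z) / 4};
    std::vector<Plane> shadow;
    for (int i = 0; i < 4; ++i) {               // side planes through the viewer and each edge
        Vec3 a = occ.corner[i], b = occ.corner[(i + 1) % 4];
        Vec3 n = cross(sub(a, viewer), sub(b, viewer));
        float d = -dot(n, viewer);
        if (dot(n, centroid) + d < 0.0f) { n = {-n.x, -n.y, -n.z}; d = -d; }   // orient inward
        shadow.push_back({n, d});
    }
    // The occluder's own plane, oriented so that the viewer is on the outside.
    Vec3 n = cross(sub(occ.corner[1], occ.corner[0]), sub(occ.corner[2], occ.corner[0]));
    float d = -dot(n, occ.corner[0]);
    if (dot(n, viewer) + d > 0.0f) { n = {-n.x, -n.y, -n.z}; d = -d; }
    shadow.push_back({n, d});
    return boxFullyInside(cell, shadow);
}
```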
  • FIG. 1 illustrates an exemplary system environment for providing interactive displays of complex virtual environments, consistent with embodiments of the invention
  • FIG. 2 is an exemplary network environment for providing interactive displays of complex virtual environments, consistent with embodiments of the invention
  • FIG. 3 is an exemplary virtual reality (VR) system, consistent with embodiments of the invention.
  • FIG. 4 is an exemplary VR system and graphics system, consistent with embodiments of the invention.
  • FIG. 5 illustrates the contents of exemplary VR file(s), consistent with embodiments of the invention
  • FIG. 6 is an exemplary pre-processing pipeline, consistent with embodiments of the invention.
  • FIG. 7 is an exemplary run-time pipeline, consistent with embodiments of the invention.
  • FIG. 8 is an exemplary flow diagram for providing interactive displays of complex virtual environments, consistent with embodiments of the invention.
  • FIG. 9 is an exemplary flow diagram for providing VR file(s) based on a 3D model, consistent with embodiments of the invention.
  • FIGS. 10A, 10B and 10C illustrate examples of a cone element type, a torus element type and a surface-of-revolution element type, respectively;
  • FIG. 11 illustrates an example of stretching an octree cell, consistent with embodiments of the invention
  • FIGS. 12A and 12B illustrate examples of triangles
  • FIG. 13 illustrates an example of a triangle strip, consistent with embodiments of the invention
  • FIG. 14 is an exemplary flow diagram for identifying and building a list of occluder candidates, consistent with embodiments of the invention
  • FIG. 15 is an exemplary flow diagram for creating a collision detection hierarchy, consistent with embodiments of the invention.
  • FIGS. 16A and 16B are exemplary flow diagrams for performing collision detection, consistent with embodiments of the invention.
  • FIG. 17 is an exemplary flow diagram for performing a sliding motion operation to determine an adjusted position, consistent with embodiments of the invention.
  • FIGS. 18A, 18B and 18C illustrate examples of determining an adjusted position in response to collision detection, consistent with embodiments of the invention
  • FIG. 19 is an exemplary flow diagram for rendering complex virtual environments, consistent with embodiments of the invention.
  • FIG. 20 is an exemplary flow diagram for analyzing occluder candidates and blocked cells during rendering, consistent with embodiments of the invention
  • FIGS. 21A and 21B are exemplary illustrations of determining the closest perimeter point and measuring an angle for an occluder candidate, consistent with embodiments of the invention
  • FIGS. 22A, 22B, 22C, 22D, and 22E illustrate examples of cells that are blocked, non-blocked or partially-blocked by occluder candidates
  • FIG. 23 is an exemplary flow diagram for performing dynamic tessellation during rendering, consistent with embodiments of the invention.
  • FIGS. 24A, 24B and 24C are exemplary diagrams related to performing memory management operations, consistent with embodiments of the invention;
  • FIGS. 25A and 25B illustrate examples of rendered frames of a complex virtual environment that is representative of a building complex
  • FIGS. 26A and 26B illustrate examples of rendered frames of a complex virtual environment that is representative of an offshore platform
  • FIGS. 27A and 27B illustrate examples of rendered frames of a complex virtual environment that is representative of an energy plant
  • FIGS. 28A and 28B illustrate examples of rendered frames of a complex virtual environment that is representative of a city landscape.
  • Appendix A includes exemplary programming code for performing a sweep test for intersection between two bounding spheres
  • Appendix B includes exemplary programming code for performing a sweep test for intersection between a bounding sphere and a triangle element
  • Appendix C includes exemplary programming code for dynamically tessellating a cone element type
  • Appendix D includes exemplary programming code for dynamically tessellating a torus element type
  • Appendix E includes exemplary programming code for dynamically tessellating a surface-of-revolution element type.
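The appendix code itself is not reproduced in this document. Purely to illustrate what dynamic tessellation of a cone element type involves, the hedged sketch below emits the lateral surface of a cone as one triangle per radial segment, with the segment count playing the role of the tessellation level; the function name and parameters are assumptions. Doubling `segments` roughly doubles the triangle count, which is why the level is chosen per frame from the viewer position.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Emits one triangle (three vertices) per radial segment for the lateral surface of a
// cone whose base is centred at `base`, with its axis along +Z, radius `r`, height `h`.
std::vector<Vec3> tessellateCone(Vec3 base, float r, float h, int segments) {
    const float twoPi = 6.28318530718f;
    Vec3 apex{base.x, base.y, base.z + h};
    std::vector<Vec3> tris;
    for (int i = 0; i < segments; ++i) {
        float a0 = twoPi * static_cast<float>(i) / static_cast<float>(segments);
        float a1 = twoPi * static_cast<float>(i + 1) / static_cast<float>(segments);
        tris.push_back({base.x + r * std::cos(a0), base.y + r * std::sin(a0), base.z});
        tris.push_back({base.x + r * std::cos(a1), base.y + r * std::sin(a1), base.z});
        tris.push_back(apex);
    }
    return tris;   // a higher `segments` value yields a smoother cone
}
```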
  • Systems and methods consistent with embodiments of the present invention provide interactive graphical displays of complex virtual environments. Such systems and methods may be implemented, according to embodiments of the invention, to render complex virtual environments representing, for example, building complexes, industrial plants, offshore platforms, and other facilities or environments.
  • the rendering of such complex environments may be performed at interactive frame rates to provide real-time simulation to the user.
  • real-world phenomena including the effect of gravity and collisions with objects, may be simulated to provide a higher degree of realism for users.
  • the complex virtual environments may be based on 3D models. Such models may be defined by a large set of polygons, such as hundreds of thousands of polygons. Further, in the case of massive models, the complex environment may incorporate approximately one million or more polygons. Massive or large-scale models may be used to model various complex environments, including those related to the AEC sector, where many objects of different sizes and type are represented in the environment.
  • Embodiments of the invention may be implemented to facilitate various virtual reality applications for complex models.
  • interactive displays of complex virtual environments may be used for design and research, project review, public relations, familiarization training, maintenance planning and training, security review and training, and a wide array of other applications requiring real-time visualization and/or simulation of complex environments.
  • Such applications may be employed in various areas, including areas in which walk-through and/or other full immersion capabilities are required for effective training or use.
  • VR file(s) may be created from a 3D model and used in place of the 3D model during run-time operations for performing simulation and rendering operations.
  • Such VR file(s) may be constructed with an octree and a list of occluder candidates to permit visibility and culling operations to be performed more efficiently.
  • VR file(s) may include a collision detection hierarchy to facilitate collision detection operations during run-time operations.
  • systems and methods consistent with embodiments of the invention collect and analyze input from a user to determine the user's desired movements or operations.
  • predictive collision detection operations may be performed during a simulation phase to determine if the desired movements or operations will cause collision(s) with objects or elements represented in the complex environment.
  • an adjusted position or movement may be computed before rendering the frame to the user.
  • culling operations may be performed based on the camera or viewer position for the frame, and visible elements may be dynamically tessellated according to data from the VR file(s).
  • FIG. 1 illustrates an exemplary diagram of a system environment 10 for providing interactive displays of complex virtual environments.
  • system 10 includes a display 12, graphics hardware 18, a computing platform 20, a user interface 40 and a storage device or memory 30.
  • computing platform 20 may include a number of components for processing data and rendering complex virtual environments, such as a graphics system 24 and a virtual reality (VR) system 28. Exemplary embodiments of VR system 28 and graphics system 24 are provided below with reference to, for example, FIGS. 3 and 4.
  • display 12 is adapted to graphically display rendered frames of a modeled environment according to operations performed by computing platform 20 and graphics hardware 18.
  • display 12 may be implemented with a color display terminal or device with predetermined quality and resolution.
  • display 12 may be implemented with a color display screen having 16 or 32 bits color depth.
  • display 12 may be provided as a separate display component connected to computing platform 20 (as represented in FIG. 1) or can be provided as a display device that is integrated with computing platform 20 (such as a display screen of a laptop computer).
  • Computing platform 20 of system environment 10 may be configured to process data and render complex virtual environments.
  • computing platform 20 may be implemented with a workstation, a server, a personal desktop computer, a laptop or any other computing-based platform.
  • platform 20 may be implemented with a personal desktop computer or laptop with an Intel Pentium III, 350 MHz or higher-class central processing unit (CPU), such as a 500 MHz or higher CPU.
  • Other types of CPUs may also be used, such as the AMD Athlon XP Processor manufactured by Advanced Micro Devices, Inc.
  • Computing platform 20 may also be configured with Microsoft Windows 98SE, ME, NT4 SP6, 2000 or XP.
  • computing platform 20 may be implemented to provide accelerated graphics port (AGP) capabilities for facilitating higher 3D graphics throughput.
  • computing platform 20 may be implemented with an AGP chipset and motherboard equipped with an AGP bus slot.
  • an operating system such as the OSR 2.1 version of Windows 98 or Windows NT 4.0 may be used.
  • computing platform 20 may be adapted to perform a number of operations, including the preprocessing of a 3D model file to generate VR file(s) used for rendering a complex virtual environment. Further, during run-time operations, computing platform 20 may be adapted to receive input from user interface 40 and/or other devices, analyze data from the VR file(s) stored in memory 30, and generate instructions to display frames of the complex environment with display 12. Optionally, computing platform 20 may also provide instructions to user interface 40 to control one or more haptic devices that provide feedback to the user. For example, haptic devices such as gloves, helmet devices with automated tracking, force feedback joysticks, and/or other types of skeleton or human interface devices may be provided with user interface 40.
  • computing platform 20 may include graphics system 24 and VR system 28.
  • components 24 and 28 may facilitate the processing of data and rendering of complex virtual environments.
  • VR system 28 may be adapted to perform various simulation operations, including collision detection operations based on user input and data from the VR file(s). As a result of such operations, VR system 28 may provide output indicating, for example, the camera or viewer position for each frame to be rendered.
  • Rendering operations may be performed by graphics system 24 based on the results of the simulation operations performed by VR system 28.
  • the rendering operations performed by graphics system 24 may include visibility and culling operations, as well as dynamic tessellation operations for specific types of elements defined in the VR file(s).
  • the results of these rendering operations may include a list of vertex arrays for the elements determined to be visible, as well as vertex arrays for any dynamically tessellated elements.
  • commands (such as OpenGL commands) may be issued by system 24 to graphics hardware 18 to cause the display of the rendered frame on display 12.
  • graphics hardware 18, graphics system 24 and VR system 28 can be implemented through any suitable combination of hardware, firmware and/or software.
  • graphics system 24 may be implemented as a software-based application using C++ and include an OpenGL software interface.
  • graphics hardware 18 may be implemented with commercially available graphics hardware, such as graphics cards that include transformation and lighting functions and suitable graphics memory (such as 32Mb or more of graphics memory). Examples of commercially available graphics cards include GeForce II or III graphics cards available from Nvidia of Santa Clara, California. For GeForce graphics cards, the latest available drivers may also be installed (such as those available for downloading at Nvidia's website <http://www.nvidia.com>).
  • VR system 28 may be implemented as a software-based application and programmed using C++ and/or graphics development languages or libraries, such as Microstation Development Language (MDL) available from Bentley Systems, Inc. of Exton, Pennsylvania.
  • Memory 30 may be implemented with a database or storage device that provides sufficient memory capacity for all of the operations performed with system 10.
  • memory 30 may be implemented with approximately 256 Mb or greater memory capacity depending on, for example, the complexity of the 3D model and/or data handled by system 10.
  • memory 30 can be provided as a separate storage device connected to computing platform 20 (as illustrated in FIG. 1) or can be an integrated storage device that is provided as part of computing platform 20 (such as the internal memory or hard disk of a computer or laptop).
  • memory 30 may store data accessed by computing platform 20.
  • memory 30 may store data for complex virtual environments represented by 3D model files.
  • Such files may be developed using commercially available software, such as Microstation (available from Bentley Systems, Inc.), AutoCAD (available from Autodesk, Inc.) or PDMS (available from Cadcentre Ltd.).
  • the 3D model files may be stored in conventional or proprietary file formats, such as .dgn, .dwg, etc.
  • memory 30 may also store VR file(s) that are used by computing platform 20 to process and render complex environments.
  • VR file(s) may be generated from 3D model files.
  • VR file(s) may include optimized data representing the entire complex environment, as well as other data for facilitating the interactive display of the environment. Accordingly, consistent with embodiments of the invention, VR file(s) may be used in place of any 3D model files during run-time operations to render the complex environment. Exemplary embodiments of VR file(s) are provided below with reference to, for example, FIG. 5.
  • pre-processing operations and exemplary flow diagrams for generating VR file(s), consistent with embodiments of the invention, are provided below with reference to, for example, FIGS. 6 and 9.
  • User interface 40 collects and receives input from a user and provides all user input to computing platform 20.
  • User input may indicate instructions from the user concerning movements (forward, back, left, right, etc.) or operations (select, lift, rotate, etc.) to be virtually performed in the complex environment. Such input may permit a user to control walk-through operations in the complex virtual environment, as well as perform operations on elements (such as boxes, chairs, etc.) represented in the environment. Additionally, user input may include instructions to control the operations of system 10, such as the loading of files, the execution of components and applications, etc.
  • user interface 40 can be implemented with any number of input devices, such as a keyboard, a mouse device, a joystick and/or a track ball.
  • the input devices may be provided as a separate device or as an integrated device with computing platform 20.
  • User interface 40 may also include haptic devices to provide feedback to the user during simulation and rendering. As noted above, such haptic devices include gloves, helmet devices with automated tracking, force feedback joysticks, speakers and/or other types of skeleton or human interface devices.
  • VR file(s) are first prepared (if not previously generated and stored by the user).
  • VR file(s) may be created from a 3D model for the complex environment.
  • computing platform 20 may generate the VR file(s) used during simulation and rendering operations.
  • VR system 28 may be implemented with a converter component (see, for example, FIG. 3) to perform pre-processing operations that convert or generate VR file(s) from a 3D model file.
  • VR file(s) consistent with embodiments of the present invention, may be optimized to facilitate real-time, interactive display of the complex environment.
  • the VR file(s) may be constructed with an octree and/or a list of occluder candidates to permit visibility and culling operations to be performed more efficiently during run-time operations.
  • the VR file(s) may include a collision detection hierarchy to facilitate collision detection operations during runtime operations (see, for example, FIG. 5).
  • Once the VR file(s) are prepared, simulation and rendering operations may be performed by computing platform 20 to display successive frames of the complex virtual environment to the user.
  • input from the user may be collected and analyzed by VR system 28 to determine desired movements or operations for the next frame to be rendered.
  • VR system 28 may perform collision detection operations to determine if the desired movements or operations cause any collision(s) with objects or elements represented in the complex environment (see, for example, FIGS. 16A and 16B).
  • an adjusted position or movement may be computed by VR system 28 and provided as output for the next frame. If no collisions are detected, the desired position or operation of the user may be provided as output by VR system 28 to permit rendering operations to be performed.
  • rendering operations are performed by graphics system 24. For instance, during a rendering phase, visibility and culling operations may be performed by graphics system 24 to determine what cells or elements are visible for the next frame. As further described below, an octree and/or occluder candidates defined in the VR file(s) may be used by graphics system 24 to perform visibility and occlusion culling operations. In addition, curved or rounded elements that are visible in the frame may be dynamically tessellated by graphics system 24 to provide optimum rendering (see, for example, FIG. 23). In response to such operations, OpenGL commands may be issued by graphics system 24 to graphics hardware 18 to cause the next frame to be displayed to the user with display 12.
  • Successive frames of the complex virtual environment can be generated with system environment 10 of FIG. 1 by providing simulation and rendering operations as part of a simulation loop (see, for example, FIG. 7).
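For orientation, that simulation loop can be reduced to the following self-contained skeleton; pollInput, simulate and render are trivial stand-ins, invented for this sketch, for the I/O engine, physics engine and graphics engine described elsewhere in this document.

```cpp
struct Input { float forward = 0, turn = 0; };
struct Vec3  { float x = 0, y = 0, z = 0; };

// Placeholder hooks standing in for the I/O engine, physics engine and graphics engine.
Input pollInput() { return {1.0f, 0.0f}; }                 // pretend: keep walking forward
Vec3  simulate(Vec3 pos, const Input& in) {                // no collision in this stub
    return {pos.x, pos.y, pos.z + in.forward * 0.1f};
}
void  render(const Vec3& /*viewerPos*/) {}                 // would cull, tessellate, draw

int main() {
    Vec3 viewer;
    for (int frame = 0; frame < 100; ++frame) {            // the simulation loop
        Input in = pollInput();
        viewer = simulate(viewer, in);                      // viewer position for this frame
        render(viewer);                                     // display the rendered frame
    }
}
```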
  • interactive displays may be provided to a user for various complex virtual environments, including those based on large-scale or massive models that contain hundreds of thousands of polygons or even one million or more polygons.
  • Embodiments consistent with the invention, therefore, may be implemented to provide interactive displays of building complexes, offshore platforms, industrial plants, city dwellings, and other facilities and environments that are highly detailed or contain many different types of elements.
  • FIGS. 25-28 illustrate exemplary complex environments that may be interactively rendered by system environment 10 of FIG. 1.
  • FIGS. 25A and 25B illustrate exemplary rendered frames of the Reichstag building complex (designed by the architectural firm of Foster and Partners of London, England) that includes many exterior as well as interior details and elements.
  • FIGS. 26A and 26B are exemplary rendered frames of an offshore platform (representative of a platform designed by Shell) that contains numerous structural details including curved features and rounded piping.
  • Complex environments of industrial plants such as the exemplary electrical energy plant of FIGS. 27A and 27B, may also be interactively displayed to a user with system 10 while providing various simulation capabilities (such as interactive walk-through operations with collision detection).
  • Such features may also be provided for modeled environments of complex dwelling environments or landscapes.
  • FIGS. 28A and 28B are exemplary rendered frames of a city landscape that can be interactively displayed to a user with system environment 10.
  • FIG. 2 illustrates an exemplary network environment 50 for providing interactive displays to a set of users located at client stations 70A-70N.
  • Client stations 70A-70N may function in accordance with a peer-to-peer relationship or according to a client-server relationship with a server 54.
  • Such arrangements may be used for various applications, including training sessions where one user acts as an instructor to train or instruct one or more other users.
  • Network environments such as that illustrated in FIG. 2 may also be used for VR applications, where role-playing or involvement by two or more users is required.
  • client stations 70A-70N and server 54 are connected to a network 62. While FIG. 2 illustrates multiple client stations 70A-70N, it will be appreciated that system 50 may be implemented for any number of users. Depending on the location and/or relationship between the users, network 62 may be implemented with a local area network, a wide area network, a virtual private network or any other type of private and/or secure communications network. Network 62 may also be implemented using public communications networks, such as the Internet. To facilitate communication over network 62, a network protocol (such as TCP/IP) may be used by and between client stations 70A-70N and/or server 54.
  • simulation and rendering operations may be performed entirely at each of the client stations 70A-70N.
  • desired movements or operations by a user at one client station may be communicated via network 62 to users at other client stations in accordance with a peer-to-peer relationship.
  • each client station 70A-70N may perform simulation and rendering operations similar to that performed by computing platform 20 in FIG. 1.
  • identical VR file(s) may be stored in a memory 74A-74N associated with each of the client stations 70A-70N.
  • server 54 may be omitted from network environment 50. However, to provide interactive displays in accordance with a client-server relationship, server 54 may be provided to function with client stations 70A-70N through network 62. For example, simulation operations may be performed centrally at server 54 based on user input from each of the client stations. Using data from VR file(s) stored in memory 58, server 54 may simulate the desired movements or operations by one or more users in the complex environment and provide output through network 62 to indicate the camera or viewer position for each of the client stations 70A-70N. Based on the output of server 54, each of the client stations may then perform rendering operations and graphically display each frame.
  • server 54 may also perform synchronization and user management functions for client stations 70A-70N. For instance, server 54 may coordinate and synchronize the transfer of data between the client stations. Further, server 54 may inform users when other users are connected or disconnected from the network.
  • VR file(s) may be stored in memory 58 associated with server 54, as well as in memory 74A-74N associated with each of the client stations 70A-70N.
  • the VR file(s) may include optimized data representing a complex environment, as well as other data to facilitate simulation and rendering of the complex environment on an interactive basis with each user.
  • the VR file(s) may be generated from 3D model files by pre-processing operations performed by, for example, server 54.
  • memory 58 and memory devices 74A-74N may be implemented in a similar fashion to memory 30 of FIG. 1, and may also store data (such as 3D model files) in addition to the VR file(s).
  • server 54 may be implemented in a similar manner as VR system 28 of FIG. 1.
  • server 54 may be implemented with a workstation, a server, a personal desktop computer, a laptop or any other computing-based platform.
  • server 54 may include software-based components for performing simulation operations (see simulation component 25 of FIG. 3).
  • server 54 may include communication hardware for network 62 and other hardware or software-based components for performing synchronization and user management functions.
  • client stations 70A-70N may be configured to receive simulation results from server 54 over network 62.
  • client stations 70A-70N may be implemented with a laptop, a workstation, a personal desktop computer or computing-based platform installed with communication hardware.
  • Client stations 70A-70N may also be implemented with components (similar to graphics hardware 18, graphics system 24 and display 12 of FIG. 1) for performing rendering operations and displaying each rendered frame.
  • client stations 70A-70N may also include a user interface, such as user interface 40 of FIG. 1.
  • each of the client stations 70A-70N may be additionally configured with components or systems for performing simulation operations (such as VR system 28 of FIG. 1). During simulation, client stations 70A-70N may also be adapted to track and synchronize movements with other client stations to detect and resolve collisions based on user input.
  • server 54 may periodically poll network 62 and detect user input from one or more of the client stations 70A-70N. User input from each client station is used by server 54 to perform simulation operations for each frame based on the VR file(s) stored in memory 58. If user input is received from more than one client station 70A-70N over a predetermined polling period, then server 54 may: analyze all of the user input; perform simulation operations (such as collision detection); and update the viewer or camera position of each user for the next frame to be rendered.
  • Server 54 may be adapted to provide simulation such that one or more users are represented within a complex virtual environment by an avatar and/or users can observe other users represented within the environment. With such simulation capabilities, various applications can be provided for multiple users with system 50, such as training exercises where one user acts as an instructor for other users.
  • each client station may perform rendering operations for the frame to be rendered.
  • rendering operations may include visibility and occlusion culling operations, as well as dynamic tessellation of curved or rounded elements that are determined to be visible in the frame.
  • each client station may also extrapolate and determine the best fit for displaying the rendered frame to the user at the client station.
  • a set of commands (such as OpenGL commands) may be issued to the graphics hardware at each client station.
  • FIG. 3 illustrates exemplary VR system 28, consistent with embodiments of the present invention.
  • VR system 28 may be implemented as part of computing platform 20 (see FIG. 1) or server 54 (see FIG. 2) to perform various operations.
  • VR system 28 includes a simulation component 25 and a converter component 21. These components may be implemented as software-based applications or components that are developed using a programming language and/or a graphics development language or libraries.
  • Converter component 21 may be configured to perform preprocessing operations including the conversion of 3D model files into VR file(s).
  • Converter component 21 may be implemented using, for example, C++ and Microstation development language for generating VR file(s).
  • An exemplary embodiment of VR file(s) is provided in FIG. 5 and exemplary embodiments of the pre-processing and conversion operations performed by converter component 21 to generate VR file(s) are described below with reference to, for example, FIGS. 6 and 9.
  • simulation component 25 is configured to perform run-time operations including the processing of user input and simulation to permit the rendering of frames of a complex virtual environment.
  • Simulation component 25 may be implemented using a programming language such as C++ or other comparable programming languages.
  • An exemplary embodiment of simulation component 25 is explained below with reference to FIG. 4 and exemplary embodiments of the run-time and simulation operations performed by simulation component 25 are further described below with reference to, for example, FIGS. 7 and 8.
  • FIG. 4 illustrates an exemplary embodiment of VR system 28 and graphics system 24, consistent with embodiments of the invention. As illustrated in FIG. 4, simulation component 25 of VR system 28 may be implemented with a number of software-based engines or sub-components.
  • simulation component 25 may include a physics engine 254 and an I/O engine 256.
  • I/O engine 256 may perform any necessary processing operations on the user input (such as the translation of signals received from user interface or input devices) and provide the user input to physics engine 254. If feedback is provided to the user, I/O engine 256 may also output feedback instructions generated by simulation component 25 to haptic device(s) controlled or worn by the user. This may cause various sensations (such as pressure, resistance, vibration, sound, etc.) to be provided to the user in order to simulate activity or occurrences in the complex virtual environment.
  • physics engine 254 determines a user's desired movements or operations for each frame to be rendered. As part of this determination, physics engine 254 may consider the direction and velocity of a moving object or avatar based on the user input. The effect of gravity may also be analyzed to calculate the desired position for the next frame. Further, using the VR file(s), physics engine 254 may detect if any collisions exist with objects in the simulated environment due to the desired movement or operations of the user. If a collision is detected, physics engine 254 may resolve any such collisions to determine the user's final adjusted position or viewer position for the next frame. To perform such operations, physics engine 254 may be adapted to perform collision detection operations, such as those further described below with reference to, for example, FIGS. 8 and 16A-16B.
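A deliberately reduced, hedged sketch of one such physics step follows: it integrates the desired velocity and gravity and resolves a "collision" by clamping against a ground plane, which merely stands in for the patent's bounding-box hierarchy queries and sliding-motion adjustment. The AvatarState type and parameters are assumptions.

```cpp
struct Vec3 { float x, y, z; };
struct AvatarState { Vec3 position; Vec3 velocity; };

AvatarState stepPhysics(AvatarState s, Vec3 desiredVelocity, float dt,
                        float gravity = -9.81f, float floorHeight = 0.0f) {
    // Keep the user's horizontal intent, integrate gravity vertically.
    s.velocity = {desiredVelocity.x, s.velocity.y + gravity * dt, desiredVelocity.z};
    s.position = {s.position.x + s.velocity.x * dt,
                  s.position.y + s.velocity.y * dt,
                  s.position.z + s.velocity.z * dt};
    if (s.position.y < floorHeight) {    // a "collision" with the ground: adjust the position
        s.position.y = floorHeight;
        s.velocity.y = 0.0f;
    }
    return s;
}
```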
  • graphics system 24 may include a graphics engine 242.
  • Graphics engine 242 may be a software-based engine or component that performs various rendering operations for each frame to be rendered to a user. For example, based on the viewer position and data contained in the VR file(s), graphics engine 242 may determine which objects are visible and which objects are occluded or hidden from view for each frame. In addition, for specific visible elements or objects defined in the VR file(s) (such as rounded or curved objects), graphics engine 242 may perform dynamic tessellation operations to control the level of detail by which such elements are displayed to the user. To perform such rendering operations, graphics engine 242 may be implemented with occlusion culling and dynamic tessellation functions, such as those further described below with reference to, for example, FIGS. 8, 19 and 23.
  • the results of the rendering operations performed by graphics engine 242 may include a list of vertex arrays for the elements determined to be visible, as well as vertex arrays for each dynamically tessellated element. Based on these results, commands (such as Open GL commands) may be generated by graphics engine 242 to the graphics hardware 18 (not illustrated in FIG. 4) to cause the display of each rendered frame to the user.
  • FIG. 5 illustrates exemplary VR file(s) 50, consistent with embodiments of the invention.
  • VR file(s) contain data and information relating to a complex environment for run-time operations.
  • the VR file(s) may be optimized for facilitating realtime rendering and interactive displays of a complex environment for users.
  • VR file(s) 50 may include an octree file 52 to facilitate, for example, visibility determinations.
  • a collision detection file 56 may be generated to optimize the VR file(s) for collision detection operations.
  • files 52 and 56 may be stored as separate VR files or they may be stored as part of a single VR file. For ease of reference, whether stored separately or jointly, these files are herein referred to as "VR file(s)."
  • Octree file 52 and collision detection file 56 may be generated from data representing a complex environment.
  • octree file 52 and collision detection file 56 may be generated by VR system 28 based on data from conventional 3D model files, such as Microstation or AutoCAD files.
  • models of complex environments typically include data for describing primitives (such as points, lines, polygons, polyhedra and/or free-form surfaces) that define the shape of components of an object.
  • a model for a complex environment may also include data for describing object attributes, such as line style, color, or surface texture, and connectivity relationships and positioning that define how the components fit together.
  • curved surfaces can be approximated with polygons and solid objects can be represented by polyhedra, each of which represents a set of polygons connected at a common edge and enclosing a volume.
  • octrees can be used to subdivide a complex environment represented by a 3D model.
  • Octrees provide a hierarchical approach to spatial partitioning, whereby an environment is divided into eight equally sized, three-dimensional cells (e.g., cubes) and each cell is further divided into eight cells until a cell is empty or contains fewer than a predetermined number of elements or primitives (such as polygons or triangles).
  • each level of the octree may be represented by nodes that are hierarchically related to one another in accordance with each level of division.
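  • By way of illustration only, the following C++ sketch outlines this kind of recursive octree subdivision; the types, the names and the 10,000-element leaf threshold are assumptions chosen to mirror the exemplary values discussed below, not a definitive implementation.

    // Minimal sketch of recursive octree subdivision (illustrative types only).
    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Triangle { Vec3 a, b, c; };

    struct OctreeNode {
        Vec3 minPt, maxPt;                    // cell bounds (min and max x, y, z points)
        std::vector<Triangle> elements;       // elements allocated to this cell
        std::vector<OctreeNode> children;     // eight children, or empty for a leaf
    };

    const std::size_t kMaxElementsPerCell = 10000;   // exemplary threshold (see below)

    void subdivide(OctreeNode& node) {
        if (node.elements.size() < kMaxElementsPerCell) return;   // leaf cell
        Vec3 mid = { 0.5f * (node.minPt.x + node.maxPt.x),
                     0.5f * (node.minPt.y + node.maxPt.y),
                     0.5f * (node.minPt.z + node.maxPt.z) };
        node.children.resize(8);
        for (int i = 0; i < 8; ++i) {                             // one child per octant
            OctreeNode& c = node.children[i];
            c.minPt.x = (i & 1) ? mid.x : node.minPt.x;
            c.maxPt.x = (i & 1) ? node.maxPt.x : mid.x;
            c.minPt.y = (i & 2) ? mid.y : node.minPt.y;
            c.maxPt.y = (i & 2) ? node.maxPt.y : mid.y;
            c.minPt.z = (i & 4) ? mid.z : node.minPt.z;
            c.maxPt.z = (i & 4) ? node.maxPt.z : mid.z;
        }
        // Allocation of elements to the eight child cells (including any cell
        // stretching for elements split by the cut) is omitted from this sketch.
        for (OctreeNode& c : node.children) subdivide(c);
    }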
  • an octree may be constructed for a complex environment represented by a 3D model.
  • data representing the octree can be stored as part of octree file 52.
  • Octree file 52 includes data representing the octree of elements.
  • the octree can be defined by any number of cells, whereby each cell includes a vertex array for every material used by the elements or primitives of the cell.
  • Octree file 52 may also include one or more occluder candidates 54. Occluder candidates may be generated during pre-processing of a 3D model based on elements or primitives that have characteristics that classify them as global candidates for consideration when performing occlusion culling (such as a large surface area).
  • occluder candidates 54 may be stored together as a separate list in octree file 52. Alternatively, each occluder candidate may be merged with the octree and stored relative to nodes of the octree for cells occupied by the occluder candidate.
  • collisions with objects may be detected and simulated to replicate real-world phenomena.
  • the task of collision detection can be complex, especially for environments based on massive models.
  • spatial partitioning of the environment may also be performed during pre-processing to provide a collision detection hierarchy.
  • a collision detection file 56 may be generated from a 3D model by creating grid subdivisions.
  • the modeled environment may be divided into a grid of equally sized cubes.
  • For each cube of the grid, a bounding box tree (such as an axis-aligned bounding box (AABB) tree) may be created.
  • Each bounding box can be created by dividing a cube into two equally sized cells or cubes and then further dividing the cubes until a cube is empty or contains fewer than a predetermined number of elements or primitives.
  • each level of the bounding box may be represented by nodes that are hierarchically related to one another in accordance with each level of division. This hierarchy of bounding boxes may be stored to provide the collision detection hierarchy of collision detection file 56.
  • FIG. 6 illustrates an exemplary pre-processing pipeline.
  • the exemplary pre-processing pipeline of FIG. 6 may be implemented to generate VR file(s) based on a 3D model of the complex virtual environment. Such operations may be performed by VR system 28 of system environment 10 (FIG. 1) or server 54 of network environment 50 (FIG. 2). For purposes of illustration, however, reference will be made to the embodiment of FIG. 1 to describe the operations and features of FIG. 6.
  • the pre-processing pipeline begins with the loading of a 3D model file (operation 102).
  • VR system 28 may load and/or access a 3D model file from memory 30.
  • the 3D model comprises data for defining all objects in the modeled complex environment, including object attributes (such as color, surface texture, etc.). Since most conventional 3D model files are not optimized for real-time simulations or rendering, the data from the 3D model is analyzed during pre-processing to generate VR file(s), consistent with embodiments of the invention.
  • the elements or primitives (such as polygons or triangles) of the complex environment are analyzed and filtered (operation 104).
  • VR system 28 may analyze the 3D model data to identify and filter elements that can be dynamically tessellated during run-time operations.
  • elements that have curved or rounded features may be filtered from other elements in the 3D model and identified as dynamic tessellation elements. For instance, elements including an arc, an ellipse, a curve and/or a b-spline can be filtered from other elements.
  • Such dynamic tessellation elements can be identified based on the name, structure and/or format of the data.
  • 3D models developed with Microstation may designate element types by specific name or header.
  • the structure or format of the data can be analyzed to confirm the identity of a particular element type. For instance, based on the type of files used (Microstation, AutoCAD, etc.), element types in a 3D model may be identified in accordance with their predetermined data structure or format.
  • the name, structure and/or format of elements defined in 3D model files may vary according to the software version or edition with which the model was developed. To overcome such drawbacks, routines may be developed to identify element types regardless of the version or edition with which the 3D model was created. For instance, a software-based component may be created using Microstation Development Language (MDL) to analyze data sets defining elements in a 3D model regardless of the version of the Microstation 3D model file.
  • VR system 28 may reformat and store any identified dynamic tessellation elements according to element type. As further described below with reference to FIGS. 10A-10C, specific element types can be stored in predetermined data formats that permit the VR file(s) to be processed and handled more easily during run-time operations. For all remaining elements that are not filtered (i.e., all non-dynamic tessellation elements), triangulation operations may be performed for polygons having more than three line segments (operation 106). In particular, each non-dynamic tessellation element is analyzed and stored either as a single triangle or a set of triangles. Thus, if an element is represented as a single triangle in the 3D model, the element will be stored as a single triangle in the VR file(s).
  • each polygon may be triangulated and stored as a group or set of triangles. While storing all non-dynamic tessellation elements as triangles may increase the overall volume of data related to these elements, this allows the system to take advantage of state-of-the-art graphics hardware (such as graphics cards) that can directly handle and process triangle elements.
  • the final operations of the pre-processing pipeline are performed in order to create the VR file(s). As illustrated in FIG. 6, such operations may include: building a list of occluder candidates (operation 108); constructing an octree (operation 110); and building a collision detection hierarchy (operation 112). These operations may be performed in parallel or in any sequence by VR system 28. Further, the results of the octree may be stored as part of an octree file (such as octree file 52) and the results of the collision detection hierarchy may be stored as part of a collision detection file (such as collision detection file 56).
  • As described with reference to FIG. 5, the list of occluders can be stored as a separate list in the VR file(s).
  • Alternatively, the list of occluders can be merged into the related nodes of the octree.
  • any identified occluder candidates can be inserted and merged into the octree (operation 114) after the octree is constructed.
  • FIG. 7 illustrates an exemplary run-time pipeline for providing interactive displays of complex virtual environments.
  • the run-time pipeline of FIG. 7 includes simulation and rendering operations that can be performed with VR file(s) that have been generated for a complex environment.
  • the operations of FIG. 7 can be performed by computing platform 20 of system environment 10 (FIG. 1) or server 54 and/or client stations 70A-70N of network environment 50 (FIG. 2).
  • FIG. 1 For purposes of illustration, however, reference again will be made to the embodiment of FIG. 1 to facilitate the description of FIG. 7.
  • Run-time operations are initiated by loading VR file(s) (operation 132).
  • VR file(s) may be stored in memory 30.
  • a dialog box may be displayed to the user so that the appropriate VR file(s) can be identified or selected.
  • various system configurations may be performed. For example, based on input from a user, simulation features (such as collision detection or gravity effects) may be activated or deactivated by computing platform 20. Also, other system configurations (such as memory management operations) may be initiated by computing platform 20. Exemplary embodiments for performing memory management operations are described below with reference, for example, to FIGS. 24A, 24B and 24C.
  • a simulation loop is performed (operation 140) based on user input (138).
  • the simulation loop for a complex virtual environment may be reiterated to render successive frames of the environment until the run-time pipeline is terminated (based on, for example, an escape or exit command entered by the user).
  • the simulation loop includes frame simulation (operation 134) and frame rendering (operation 136).
  • user input is analyzed to determine desired movements or operations of a moving object or avatar representing the user in the complex virtual environment.
  • VR system 28 can determine the position of the moving object or avatar and the viewer position for the next frame to be rendered.
  • Information determined during simulation is then used to perform rendering operations.
  • occlusion culling and dynamic tessellation operations can be performed, in accordance with embodiments of the invention.
  • the results of the rendering operations can be used by graphics system 24 to cause the rendered frame to be displayed to the user.
  • various system configuration operations may be performed, such as memory management operations.
  • FIG. 8 is an exemplary flow diagram for providing interactive displays of complex virtual environments, consistent with embodiments of the invention.
  • the exemplary flow diagram of FIG. 8 will be described with reference to the system environment 10 of FIG. 1.
  • the exemplary flow diagram of FIG. 8, however, may be implemented with other environments, including the network environment 50 of FIG. 2.
  • the VR file(s) are provided (step S.10).
  • VR file(s) may be generated from a 3D model of the complex environment.
  • the 3D model may be generated from a 2D drawing by using 3D modeling software or a 3D reconstructor program based on a laser scanning of the environment.
  • the 3D model may also be created independently by a designer or programmer using 3D modeling software.
  • 3D features may be selectively defined depending on the complex environment.
  • a 3D model may be created by projecting one or more elements in the drawing (such as walls and/or walkways) to permit better visualization of the floor plan.
  • the elements to be projected in 3D may be uniquely identified in a 2D file (such as by color or texture) to assist the designer or programmer in creating the 3D model.
  • the VR file(s) may be created.
  • pre-processing operations may be performed by VR system 28.
  • An exemplary pre-processing pipeline for generating VR file(s) is disclosed herein with reference to FIG. 6. Further, an exemplary flow diagram for generating VR file(s) from a 3D model is described below with reference to FIG. 9.
  • any necessary system configurations can be performed (step S.12).
  • various system configuration options may be set by the user. For instance, simulation features (such as collision detection or gravity effects) may be activated or deactivated by a user. Also, specific system configurations (such as memory management operations) may be initiated by computing platform 20. An initial viewer position may also be set according to a predetermined default setting for the first frame to be displayed to the user.
  • Simulation and rendering is then performed to display successive frames of the complex virtual environment based on input from the user.
  • system 10 can poll for user input over a predetermined time period for each frame (step S.14).
  • VR system 28 may detect desired movements or operations entered by the user during the polling period. Using a keyboard, a trackball or a joystick, the user may enter desired movements (forward, back, left, right, etc.) and/or operations (lift, rotate, select, etc.). Based on input detected from the user, VR system 28 then performs simulation operations in order to render the next frame.
  • predictive collision detection may be performed based on the detected user input (step S.16).
  • VR system 28 analyzes all moving objects or avatars to determine if any collisions occur with other objects when moving the object or avatar between its current position and desired position for the next frame.
  • the collision detection file 56 of the VR file(s) may be used to detect collisions (see, for example, FIGS. 16A and 16B). If a collision is found (step S.18; Yes), then VR system 28 responds to the collision (step S.20). For example, VR system 28 may project the movement of the object or avatar and determine an adjusted position from the first point of contact.
  • If another collision is detected between the first point of contact and the adjusted position, then another adjusted position is determined until no further collisions are detected (step S.18; No). The final adjusted position is then used in place of the desired destination for purposes of setting the viewer position for the next frame. If no collisions are detected, then the viewer position is set to the desired position.
  • An exemplary flow diagram for performing predictive collision detection operations is described below with reference to FIGS. 16A-16B.
  • graphics system 24 performs culling operations (including occlusion culling) based on the determined viewer or camera position for the next frame (step S.22).
  • the octree file 52 of the VR file(s) can be used. Exemplary flow diagrams for performing culling operations are described below with reference to FIGS. 19 and 20.
  • graphics system 24 may perform dynamic tessellation operations (step S.24).
  • graphics system 24 determines a tessellation level for rendering each object.
  • the tessellation level can control the number of triangles used to approximate the object in the rendered frame. For instance, objects or elements may be rendered with an increasing number of triangles for higher tessellation levels. Further, objects or elements that appear closer to the camera position or screen may be assigned higher tessellation levels to cause the object or element to be rendered with a greater number of triangles and, therefore, greater detail.
  • the frame is rendered and displayed to the user (step S.26).
  • the results of the rendering operations may be used by graphics system 24 to issue or generate commands (such as OpenGL commands) to graphics hardware 18.
  • the results of the rendering operations may indicate which elements or objects are visible and need to be rendered, as well as how to render dynamic tessellation elements that are within the view frustum.
  • Each visible element or object may be represented by a vertex array of x, y, z coordinate points that define the vertices of the element.
  • An index array may be generated that reads into the vertex array and controls the number of triangles used to render the element or object.
  • a set of OpenGL commands may be generated based on the vertex array data for the elements determined to be visible, as well as vertex array data for each visible dynamically tessellated element.
  • graphics hardware 18 may cause the display of the next frame on display 12. Thereafter, successive frames may be simulated and rendered (steps S.14-S.26) until runtime operations are terminated by the user.
  • FIG. 9 is an exemplary flow diagram for providing VR file(s), consistent with embodiments of the invention.
  • VR file(s) can be generated from conventional 3D models that include data representing a complex environment.
  • the VR file(s) can be generated during pre-processing in order to prepare and store the files before run-time operations are performed. Once created, the VR file(s) can be accessed from memory and used for simulation and rendering operations.
  • the exemplary flow diagram of FIG. 9 may be implemented through various systems or environments, such as the system environment 10 of FIG. 1 or the network environment 50 of FIG. 2. However, for purposes of illustration, reference will be made to the embodiment of FIG. 1 to describe the flow diagram of FIG. 9.
  • a 3D model file may be designated by the user (using, for example, a displayed dialog box) and accessed from memory (such as memory 30).
  • the 3D model file may be a CAD or other type of 3D model file that represents a complex environment (such as a plant, a building facility, an off-shore platform, etc.). Further, the 3D model file may be generated using Microstation or other types of commercially available software.
  • modeling software may be used to generate the 3D model.
  • the 3D model may also be generated from a 2D drawing or file, such as an engineering or architectural drawing.
  • the 3D model may be based on a 2D file in which one or more elements have been projected into 3D.
  • the 3D model may be created by laser scanning the environment and using a 3D reconstructor program.
  • the file may be prepared with various features or functions to facilitate updates to the objects represented in the 3D model.
  • a virtual reality (VR) library may be provided with a number of special VR functions in order to incorporate such functions into the 3D model file.
  • the VR functions may permit the name, texture, lighting and/or rotational aspects of an object to be specified and, if needed, updated in the future.
  • An initial viewer or camera position for an avatar may also be defined in the modeled environment.
  • functional links may be defined for an object to permit functions such as a database link to retrieve information for the object (such as material and/or cost information, etc.).
  • the elements or primitives of the model are filtered (step S.42).
  • VR system 28 may analyze the 3D model data to identify and filter dynamic tessellation elements from other elements that are static or non-tessellated during rendering.
  • elements that have curved or rounded features may be filtered and distinguished from other elements in the 3D model. For instance, elements including an arc, an ellipse, a curve and/or a b-spline can be identified as dynamic tessellation elements and filtered from other elements identified as non-dynamic tessellation elements.
  • VR system 28 may identify one or more major types of dynamic tessellation elements, such as cone element types, torus element types, and/or surface-of-revolution element types.
  • a cone element type is defined by two circles or ellipses and one or more lines (see, for example, FIG. 10A). If the dimensions of the two circles or ellipses are identical, the cone element type may represent a cylinder.
  • a torus element type is generally defined by a center point, two ellipses or circles and one or more arcs.
  • Torus element types can represent elbows or donuts (see, for example, FIG. 10B).
  • VR system 28 may analyze the name, structure and/or format of the data representing each element in the 3D model. For example, as indicated above, 3D model files developed with Microstation may designate certain element types by specific element names. In addition, the structure or format of the data can be analyzed to confirm the identity of a particular element type. For instance, a cone element type in a Microstation file is defined by a data structure comprising a header followed by two ellipses and four lines.
  • Similarly, a torus element type is defined by a data structure comprising a header followed by two ellipses and a set of arcs (such as four arcs), and a surface-of-revolution element type is defined by a header and a data set indicating a surface of revolution.
  • a software-based component may be created using a software-manufacturer-specific, development language (such as Microstation Development Language (MDL)) that permits data sets defining elements in the 3D model to be identified regardless of the version of 3D model file.
  • VR system 28 may store the element in an optimized data format in the VR file(s) to better facilitate run-time operations.
  • a cone represented by a cone element type may be stored as a data structure including: a center point of the top (T); a center point of the bottom (B); a center point (C) on the axis between T and B; a radius of the top (Rt); and a radius of the bottom (Rb).
  • Further, as indicated in FIG. 10B, a torus represented by a torus element type may be defined by a data structure including: an axis of revolution (A); a center point on the axis of revolution (C); an angle of revolution (θ); an internal radius (R1); and an outer radius (R2).
  • An element such as that illustrated in FIG. 10C may be represented by a surface-of-revolution element type stored as a data structure comprising: an axis of revolution (As); a center point of the revolution (Cs); an angle of the revolution (θs); a radius of the revolution (Rs); a center point of the ellipse (CA); an angle of the ellipse (θA); a shorter radius of the ellipse (RA1); and a longer radius of the ellipse (RA2).
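  • As a rough illustration (not the actual file layout of the VR file(s)), the optimized records described above might be expressed in C++ as follows; all type and field names are assumptions based on the descriptions of FIGS. 10A-10C.

    // Hypothetical C++ records mirroring the optimized data formats described above.
    struct Vec3 { float x, y, z; };

    struct ConeElement {                  // FIG. 10A
        Vec3  top;                        // center point of the top (T)
        Vec3  bottom;                     // center point of the bottom (B)
        Vec3  center;                     // center point (C) on the axis between T and B
        float radiusTop;                  // Rt
        float radiusBottom;               // Rb (equal radii describe a cylinder)
    };

    struct TorusElement {                 // FIG. 10B
        Vec3  axis;                       // axis of revolution (A)
        Vec3  center;                     // center point on the axis of revolution (C)
        float angle;                      // angle of revolution (theta)
        float innerRadius;                // R1
        float outerRadius;                // R2
    };

    struct SurfaceOfRevolutionElement {   // FIG. 10C
        Vec3  axis;                       // axis of revolution (As)
        Vec3  center;                     // center point of the revolution (Cs)
        float angle;                      // angle of the revolution (theta_s)
        float radius;                     // radius of the revolution (Rs)
        Vec3  ellipseCenter;              // center point of the ellipse (CA)
        float ellipseAngle;               // angle of the ellipse (theta_A)
        float ellipseRadiusShort;         // RA1
        float ellipseRadiusLong;          // RA2
    };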
  • triangulation operations may be performed (step S.44).
  • VR system 28 may analyze each non-dynamic tessellation element in the 3D model and store it either as a single triangle or set of triangles. Accordingly, if an element is represented as a single triangle in the 3D model, the element will be stored as a single triangle in the VR file(s). However, if the element is represented by a polygon with more than three sides or a set of polygons, then each polygon is triangulated and stored as a group or set of triangles.
  • VR system 28 may be implemented to generate sets of triangles by analyzing the vertices of a polygon.
  • Alternatively, a conventional graphics utility (such as the OpenGL utility library (GLU)) may be used to perform triangulation operations.
  • a specific utility may be called by VR system 28 by providing the vertices of the polygon to be triangulated, and in response the utility may perform triangulation operations and output a set of triangles for the polygon.
  • an octree is constructed (step S.46).
  • an octree can be constructed by spatially partitioning or dividing the complex environment into cells or cubes. Octree as well as other types of partitioning techniques can be used to simplify the task of determining which objects and surfaces are visible versus those that are hidden or occluded when rendering the complex environment.
  • the octree may be constructed from the filtered dynamic tessellation elements (reformatted and stored in an optimized data format) and non-dynamic tessellation elements (including polygonal elements that have been triangulated).
  • a single cell is first defined that contains the entire modeled environment, including all dynamic and non-dynamic tessellation elements. This cell is then divided into eight cells by cutting or dividing along each of three axes (x, y, z). For a 3D environment, each cell of the octree represents a three-dimensional volume defined by two x, y, z points (i.e., a minimum point and a maximum point for the cell).
  • each divided cell is then further divided into eight cells, with such cell division continuing until a cell is empty or contains fewer than a predetermined number of elements or primitives (such as polygons or triangles).
  • each level of the octree may be represented by nodes that are hierarchically related to one another in accordance with each level of division.
  • VR system 28 may cut or divide the cell at the center point of the cell to thereby create eight, equally-sized cells.
  • VR system 28 may determine the median location of all triangles within the cell and divide the cell at the median location.
  • Such a technique may be used to provide a more balanced octree, especially for environments in which triangles are not evenly distributed in space.
  • the resulting cells may be unequal in size. However, this will not affect the usage of the octree, since all cells are simply defined by a set of coordinates that define the vertices of the cell.
  • VR system 28 may analyze and count non-dynamic tessellation elements (i.e., triangles) for the purposes of determining whether to further divide a cell.
  • cell partitioning for the octree may continue until a cell contains fewer than approximately 10,000 triangles.
  • dynamic tessellation elements may be analyzed to determine whether to further divide a cell.
  • cell partitioning for an octree may continue if the cell contains more than 1,000 dynamic tessellation elements.
  • the number of non-dynamic tessellation elements and/or the number of dynamic tessellation elements may be considered when performing cell division.
  • FIG. 11 illustrates an example of stretching an octree cell to accommodate non-dynamic tessellation elements (i.e., triangles).
  • As shown in FIG. 11, a cell division may result in the creation of Original Cell #1 and Original Cell #2.
  • Triangle T2 may be located within Original Cell #1
  • triangle T1 may be located within Original Cell #1 and Original Cell #2.
  • Original Cell #1 may be stretched along one direction to accommodate triangle T1 and define a new cell, Stretched Cell #1.
  • When stretching the dimension(s) of a cell, the cell may be stretched only to the extent that it is necessary to fully contain the divided element (such as triangle T1).
  • the vertices of divided or split elements may be used to determine the extent and the axis (x, y or z in 3D environments) along which the stretch should be performed.
  • Stretched Cell #1 may be generated as a result of stretching the Original Cell #1 in the direction of and only to the extent of the outer vertex of triangle T1 that is not contained within Original Cell #1.
  • the cell determined to have the majority of the volume or area of the divided element(s) may be stretched.
  • VR system 28 would only perform stretching on the cell determined to require the least amount of stretching.
  • the cell may be stretched in one or more directions until all elements are within the stretched cell.
  • stretching can be performed along one or more axes (x, y and/or z). Further, since all triangle elements should be contained within a cell before it is divided, the vertices of the divided elements may be compared against the center point or median location where the cell division or cut occurred to determine the direction and amount of stretching required for divided cells. Such an analysis may be performed with respect to each divided element by comparing the x, y and z coordinates of the triangle to the center point or median location. This process may be repeated for divided cells requiring stretching based on more than one divided element or triangle.
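  • A minimal C++ sketch of this stretching step, assuming simple axis-aligned cell and triangle types (the names are illustrative only), might look as follows; it grows the cell bounds only as far as the outlying vertices of a divided triangle require.

    // Stretch a cell's bounds just enough to fully contain a triangle that was
    // split by the dividing plane (illustrative types and names).
    #include <algorithm>

    struct Vec3 { float x, y, z; };
    struct Triangle { Vec3 v[3]; };
    struct Cell { Vec3 minPt, maxPt; };

    void stretchToContain(Cell& cell, const Triangle& tri) {
        for (const Vec3& p : tri.v) {
            cell.minPt.x = std::min(cell.minPt.x, p.x);
            cell.minPt.y = std::min(cell.minPt.y, p.y);
            cell.minPt.z = std::min(cell.minPt.z, p.z);
            cell.maxPt.x = std::max(cell.maxPt.x, p.x);
            cell.maxPt.y = std::max(cell.maxPt.y, p.y);
            cell.maxPt.z = std::max(cell.maxPt.z, p.z);
        }
    }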
  • various techniques may be performed for allocating dynamic tessellation elements to the cells of the octree.
  • dynamic tessellation elements may be allocated only to cells within which they are fully contained.
  • a bounding box may be generated about the dynamic tessellation element and the bounding box may be compared to the cells of the octree to determine the lowest level cell of the octree that fully contains the bounding box.
  • the allocation of dynamic tessellation elements may occur during the process of creating the octree or as a separate process that is performed after all of the cells of the octree are constructed.
  • each cell of the octree may be analyzed to perform a number of operations on the elements contained in the cell (step S.48). As illustrated in FIG. 9, such operations may include sorting elements by material, optimizing vertex arrays, and/or building triangle strips.
  • the elements of each cell may be sorted according to their material type.
  • the material type for each element, which may be extracted from the 3D model, may define various rendering properties for the element such as color, opacity, brightness and/or texture.
  • the material type for each element may be determined and then one or more linked lists may be created to list the elements according to material type. Sorting elements by material type in the final VR file(s) can provide faster rendering rates, since most conventional graphics cards can only process and render elements with one material that is active at a time. As a result, organizing elements by material type can reduce the amount of processing time required by the graphics hardware to switch between material types during run-time operations.
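  • For illustration, one simple way to perform such sorting is to bucket a cell's triangles by a material identifier, as in the following C++ sketch; the types and names are assumptions, and the described embodiments may instead organize the sorted elements as linked lists.

    #include <map>
    #include <utility>
    #include <vector>

    using MaterialId = int;                        // illustrative material key
    struct Triangle { /* vertex indices or coordinates */ };

    // Group a cell's triangle elements by material type so that each material
    // only needs to be activated once while rendering the cell.
    std::map<MaterialId, std::vector<Triangle>>
    sortByMaterial(const std::vector<std::pair<MaterialId, Triangle>>& cellElements) {
        std::map<MaterialId, std::vector<Triangle>> buckets;
        for (const auto& e : cellElements)
            buckets[e.first].push_back(e.second);  // one list per material type
        return buckets;
    }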
  • the elements of each cell may also be processed to optimize vertex arrays.
  • Such an operation may be performed to optimize the manner in which triangle elements are stored in the VR file(s).
  • triangle elements in a cell can be analyzed to determine shared vertices between triangles and, for triangles identified with shared vertices, the data defining such triangles may be reformatted to reduce the overall amount of data required to define the triangles.
  • Triangle elements that do not share a common vertex with one or more other triangles can also be reformatted to define such triangles with vertex data to provide similar formatting with the triangles identified to have common vertices.
  • To illustrate the concept of providing optimized vertex array data, reference will now be made to FIGS. 12A and 12B, which illustrate an embodiment consistent with the principles of the invention.
  • a triangle (T1) is defined by a set of x, y, z points in the 3D model.
  • three such points (P1, P2, P3) are required to define the triangle (T1).
  • each of these points (P1, P2, P3) may be defined by different x, y, z coordinates.
  • T1 can be defined by a combined data set of x, y, z coordinates (x1, y1, z1; x2, y2, z2; x3, y3, z3) in the 3D model.
  • triangle (T1) does not share a common vertex with any other triangle element in the cell.
  • As shown in FIG. 12B, assume that two other triangles in the cell (T2 and T3) have one common vertex (P5).
  • these triangles may be represented by combined data sets (i.e., (P4, P5, P6) and (P5, P7, P8), respectively), where each point (P) in the combined data sets is uniquely defined by x, y, z coordinates.
  • T2 may be defined by a combined data set of x, y, z coordinates (x4, y4, z4; x5, y5, z5; x6, y6, z6) and T3 can be defined by another combined data set of x, y, z coordinates (x5, y5, z5; x7, y7, z7; x8, y8, z8).
  • repetitive data is stored in the 3D model for at least the coordinates defining the triangles (T2 and T3) that have a shared vertex.
  • VR system 28 may generate one or more vertex arrays (A v ) that define the x, y, z coordinates for the vertex points (P) of all triangle elements contained in the cell.
  • For example, the vertex array may be generated as Av = (P1, P2, P3, P4, P5, P6, P7, P8), where each vertex point in the array is defined by a unique set of x, y, z coordinates.
  • In addition, an index array may be generated with pointers that identify positions in the vertex array. For example, assume that vertices P1-P8 are defined in positions 1-8, respectively, of the vertex array (Av).
  • In such a case, the index array may be generated as At = (1, 2, 3, 4, 5, 6, 7, 8).
  • the pointers from every three positions of the index array may be used to read into the vertex array and extract the data.
  • the pointers from the first three positions of the index array At (i.e., 1, 2, 3) may be used to read the corresponding positions in the vertex array Av and extract the coordinates for the triangle T1 (i.e., x1, y1, z1; x2, y2, z2; and x3, y3, z3).
  • optimized vertex array data can be provided and stored with respect to the cells of the octree for the VR file(s). Optimizing vertex array data for triangle elements may reduce the overall amount of data needed to define the complex virtual environment, as well as improve graphics processing and cache performance since previously transformed vertices can be pulled from cache (rather than recalculated) during the rendering phase.
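  • The following C++ sketch illustrates one possible way to build such a shared vertex array (Av) and index array (At) by removing duplicate vertices, such as the shared vertex P5 of FIG. 12B; the exact-match comparison, the zero-based indices and the container choices are assumptions made for brevity.

    #include <map>
    #include <vector>

    struct Vec3 { float x, y, z; };
    inline bool operator<(const Vec3& a, const Vec3& b) {
        if (a.x != b.x) return a.x < b.x;
        if (a.y != b.y) return a.y < b.y;
        return a.z < b.z;
    }

    // Build a shared vertex array (Av) and an index array (At) from a flat list
    // of triangle vertices (three entries per triangle). Indices are zero-based
    // here, unlike the 1-based positions used in the example above.
    void buildArrays(const std::vector<Vec3>& triangleVertices,
                     std::vector<Vec3>& vertexArray,        // Av
                     std::vector<unsigned>& indexArray) {   // At
        std::map<Vec3, unsigned> seen;                      // vertex -> position in Av
        for (const Vec3& v : triangleVertices) {
            auto it = seen.find(v);
            if (it == seen.end()) {
                it = seen.insert({v, static_cast<unsigned>(vertexArray.size())}).first;
                vertexArray.push_back(v);                   // store each unique vertex once
            }
            indexArray.push_back(it->second);               // every 3 indices form a triangle
        }
    }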
  • Another operation that may be performed on the triangle elements of each cell (FIG. 9; step S.48) is the building of triangle strips.
  • an index array for a triangle strip (Ats) may be built to replace triangle array data representing triangle elements that have common edges.
  • FIG. 13 illustrates a triangle strip comprising three triangles (T1, T2, T3).
  • the triangles of a triangle strip may result from triangulated polygons (see FIG. 9, step S.44) or may be original triangle elements in the 3D model that have common edges.
  • the index array for the triangle strip may be generated with pointers that identify positions in the vertex array. For instance, with respect to the example of FIG. 13, assume that vertices P1-P5 are defined in positions 1-5, respectively, of a vertex array (Av).
  • In such a case, the index array for the triangle strip may be generated as Ats = (1, 2, 3, 4, 5).
  • a sliding window that is three positions wide may be used to read the pointers from the index array and extract the data from the vertex array.
  • For example, returning to the example of FIG. 13, the pointers from the first three positions of the index array Ats (i.e., 1, 2, 3) may be used to read the corresponding positions in the vertex array Av and extract the coordinates for the triangle T1 (i.e., x1, y1, z1; x2, y2, z2; and x3, y3, z3).
  • the sliding window may then be moved forward one position and the next three positions within the window (i.e., 2, 3, 4) could be read from the index array Ats to extract the vertex data from the vertex array Av for the triangle T2, and a similar process may be repeated to extract the vertex data for the triangle T3 of the triangle strip.
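  • For illustration, the sliding-window interpretation of a triangle strip index array can be sketched in C++ as follows; the function and type names are assumptions, indices are zero-based, and graphics hardware or a graphics API would normally perform this expansion.

    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Expand a triangle strip index array (Ats) into individual triangles by
    // sliding a window that is three positions wide over the indices; N indices
    // yield N - 2 triangles (winding order is ignored in this sketch).
    std::vector<Vec3> expandStrip(const std::vector<Vec3>& vertexArray,        // Av
                                  const std::vector<unsigned>& stripIndices) { // Ats
        std::vector<Vec3> out;
        for (std::size_t i = 0; i + 2 < stripIndices.size(); ++i) {
            out.push_back(vertexArray[stripIndices[i]]);
            out.push_back(vertexArray[stripIndices[i + 1]]);
            out.push_back(vertexArray[stripIndices[i + 2]]);
        }
        return out;
    }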
  • VR system 28 may be implemented to analyze and identify triangles that have common edges and generate triangle strips for such triangle elements.
  • a conventional graphics utility, such as NVTriStrip (an NVIDIA utility library), may be provided to perform triangle strip operations.
  • the utility may be called by VR system 28 by providing an index array for the triangles, and in response the utility may perform triangle strip operations and output an index array (such as Ats) for the corresponding triangle strip.
  • Commands for graphics hardware 18 may be generated using an application program interface (API), such as OpenGL (an API maintained by a consortium of companies) or Direct 3D (an API maintained by Microsoft Corporation).
  • a list of occluder candidates can be built from the 3D model data (step S.50). Additionally, a collision detection hierarchy can be generated based on the 3D model (step S.52).
  • Although FIG. 9 illustrates the octree as being constructed before the occluders list and collision detection hierarchy are built, steps S.46 and S.48 (related to the octree) can be performed in any order relative to step S.50 (related to the occluders list) and step S.52 (related to the collision detection hierarchy). For example, as indicated above, these steps may be performed simultaneously or in parallel by VR system 28. Therefore, the exemplary flow diagram of FIG. 9 should not be construed as limiting the order by which certain steps are performed and, additionally, modifications to the order and nature of the steps of FIG. 9 may be implemented, consistent with the principles of the invention.
  • An occluder candidate list may be provided in the VR file(s). As further described below, occluder candidates can be analyzed during rendering to more quickly identify those elements or objects that are occluded for each frame.
  • occluder candidates may be generated during pre-processing of a 3D model.
  • occluder candidates represent elements or primitives that have characteristics (such as a large surface area) that classify them as global candidates for consideration when performing occlusion culling.
  • VR system 28 may analyze elements represented in the 3D model to identify large objects (such as walls, floors, etc.) that have characteristics indicative of good occluder properties (i.e., they are likely to block the view of other elements).
  • Various techniques, such as area and/or shape analysis, may be performed to determine whether an object qualifies as a good occluder candidate. An exemplary flow diagram for identifying and building a list of occluder candidates, consistent with embodiments of the invention, is described below with reference to FIG. 14.
  • each occluder candidate may be merged with the octree and stored relative to nodes of the octree for cells occupied by the occluder candidate.
  • collisions with objects may be detected and simulated to replicate real-world phenomena.
  • the task of collision detection can be complex, especially for environments based on massive models.
  • spatial partitioning of the environment may also be performed during pre-processing to provide a collision detection hierarchy.
  • a collision detection hierarchy may be developed from the 3D model.
  • Various techniques, such as grid subdivision processes, may be used to generate the collision detection hierarchy. With grid subdivision, the modeled environment (including all dynamic and non-dynamic tessellation elements) may be divided into a grid of equally sized cubes.
  • a bounding box tree (such as an axis-aligned bounding box (AABB) tree) may be created for each cube.
  • Each bounding box can be created by dividing a cube into two equally sized cells or cubes and then further dividing the cubes until a cube is empty or contains fewer than a predetermined number of elements or primitives.
  • each level of the bounding box tree may be represented by nodes that are hierarchically related to one another in accordance with each level of division.
  • This hierarchy of bounding boxes may be stored to provide the collision detection hierarchy (such as collision detection file 56) for the VR file(s).
  • FIG. 15 illustrates an exemplary flow diagram for creating a collision detection hierarchy.
  • the VR file(s) are stored (step S.54).
  • the VR file(s) may include an octree file 52 and a collision detection file 56, which may be stored separately or together as part of one general VR file.
  • run-time operations may be performed to provide interactive displays of the complex virtual environment on a real-time basis.
  • FIG. 14 is an exemplary flow diagram for identifying and building a list of occluder candidates, consistent with embodiments of the invention.
  • the exemplary flow diagram of FIG. 14 may be implemented through various systems or environments during pre-processing, such as the system environment 10 of FIG. 1 or the network environment 50 of FIG. 2. For purposes of illustration, however, reference will be made to the embodiment of FIG. 1 to describe the flow diagram of FIG. 14.
  • neighboring triangle elements are identified (step S.70).
  • VR system 28 may analyze the data for all triangle elements represented in the complex environment (i.e., all non-dynamic tessellation elements, including those that were triangulated).
  • the defined edges of each triangle element may be analyzed to determine if it has a common edge with other triangle elements.
  • Triangle elements that are found to have common edges and that are within the same plane may be identified as neighboring triangles.
  • As for dynamic tessellation elements, such elements may also be analyzed in a similar manner (in which case the dynamic tessellation elements may be triangulated), or they may be ignored entirely for the purposes of identifying occluder candidates.
  • the neighboring triangles are merged into polygons (step S.72).
  • VR system 28 may merge each set of neighboring triangles that are determined to be co-planar and redefine them through data representative of a polygon.
  • Each merged set of neighboring triangles are then analyzed to determine if the resulting polygon is convex (step S.74).
  • a merged polygon may be considered convex if it is not possible to pass a line through the polygon and intersect more than two sides of the polygon. If a line can be passed through the polygon and intersect more than two sides, then the polygon is not convex but instead concave. Consistent with embodiments of the invention, only merged triangles resulting in convex polygons may be considered for purposes of identifying candidate occluders.
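  • One common way to test a planar polygon for convexity, consistent with the line-intersection definition above, is to verify that consecutive edges always turn in the same direction; the following C++ sketch (with assumed types and no handling of degenerate edges) illustrates the idea and is not presented as the described embodiment's exact test.

    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };
    static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3 cross(const Vec3& a, const Vec3& b) {
        return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
    }
    static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Returns true if the co-planar, ordered polygon is convex: the cross
    // products of consecutive edges must all point in the same direction.
    bool isConvex(const std::vector<Vec3>& poly) {
        const std::size_t n = poly.size();
        if (n < 4) return true;                      // a triangle is always convex
        Vec3 ref{0.0f, 0.0f, 0.0f};
        bool haveRef = false;
        for (std::size_t i = 0; i < n; ++i) {
            Vec3 e1 = sub(poly[(i + 1) % n], poly[i]);
            Vec3 e2 = sub(poly[(i + 2) % n], poly[(i + 1) % n]);
            Vec3 c = cross(e1, e2);
            if (!haveRef) { ref = c; haveRef = true; continue; }
            if (dot(ref, c) < 0.0f) return false;    // turn direction flipped -> concave
        }
        return true;
    }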
  • step S.74 If a merged polygon is convex (step S.74; Yes), then processing continues to determine the area of the polygon (step S.78). The polygon (or candidate occluder) and its determined area may then be stored for further processing and sorting. If a set of neighboring triangles when merged do not form a convex polygon (step S.74; No), then the polygon is rejected from further consideration (step S.76). Additional neighboring triangles are merged and analyzed in a similar fashion until all neighboring triangles have been processed (step S.80; No).
  • Next, the candidate occluders (i.e., all merged polygons that have been determined to be convex) are sorted, and a set of global occluder candidates is selected.
  • the global occluder candidates are then stored (step S.84).
  • the global occluder candidates may either be stored as a separate list in the VR file(s) or they may be merged into the octree according to the cell level or node of the octree within which they are fully contained.
  • candidate occluders may be sorted and selected according to predetermined thresholds relative to characteristics such as area or shape.
  • a predetermined number of candidate occluders may be selected which exhibit the largest number of triangles or the largest area.
  • all candidate occluders may be selected and sorted relative to area and/or size.
  • the sorting of candidate occluders may also be performed dynamically. For instance, to select global occluder candidates, a predetermined number of candidate occluders that exhibit the largest area may be selected based on the total number of candidates that were identified. By way of example, a predetermined percentage of candidates (such as 0.1%) may be selected according to those that exhibit the largest area. Thus, if one million candidate occluders were identified, only a predetermined number of those candidates (such as 0.1% or 1,000) that had the largest area would be selected and stored as global occluder candidates.
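  • A simple C++ sketch of this dynamic selection, assuming each candidate already carries its computed area, is shown below; the 0.1% fraction mirrors the example above and all names are illustrative.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Candidate { float area; /* merged polygon data */ };

    // Keep only the largest-area fraction of candidates as global occluders.
    std::vector<Candidate> selectGlobalOccluders(std::vector<Candidate> candidates,
                                                 double fraction = 0.001) {   // 0.1%
        std::sort(candidates.begin(), candidates.end(),
                  [](const Candidate& a, const Candidate& b) { return a.area > b.area; });
        std::size_t keep = static_cast<std::size_t>(candidates.size() * fraction);
        if (keep < 1 && !candidates.empty()) keep = 1;   // always keep at least one
        candidates.resize(keep);
        return candidates;
    }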
  • FIG. 15 illustrates an exemplary flow diagram for creating a collision detection hierarchy, consistent with embodiments of the invention.
  • the exemplary flow diagram of FIG. 15 may be implemented through various systems or environments during pre-processing, such as the system environment 10 of FIG. 1 or the network environment 50 of FIG. 2. For purposes of illustration, however, reference will again be made to the embodiment of FIG. 1 to describe the exemplary flow diagram.
  • a grid subdivision is first performed on the complex environment (step S.100).
  • VR system 28 may divide the complex environment into a plurality of equal sized cubes.
  • the size of each cube may be predetermined or selected according to the size of the complex environment.
  • data representative of massive models may be divided into cubes that are 10 x 10 x 10 meters.
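  • For illustration, mapping a point of the environment to its grid cube in such a subdivision can be sketched in C++ as follows; the grid origin, the 10-meter cell size and the names are assumptions used only to show the arithmetic.

    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct CellIndex { int i, j, k; };

    // Map a world-space point to the index of its grid cube, assuming the grid
    // starts at gridMin and uses equally sized cubes (e.g., 10 x 10 x 10 m).
    CellIndex cellFor(const Vec3& p, const Vec3& gridMin, float cellSize = 10.0f) {
        return { static_cast<int>(std::floor((p.x - gridMin.x) / cellSize)),
                 static_cast<int>(std::floor((p.y - gridMin.y) / cellSize)),
                 static_cast<int>(std::floor((p.z - gridMin.z) / cellSize)) };
    }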
  • all elements or primitives of the complex environment represented by the 3D model should be pre-processed, in accordance with embodiments of the invention.
  • all non-dynamic tessellation elements should be pre-processed so that they are represented by triangle elements.
  • non-dynamic tessellation elements represented by polygons containing more than three sides should be triangulated into a set of triangles.
  • all dynamic tessellation elements should be pre-processed and triangulated so that they are represented by triangle elements.
  • the results of such pre-processing of the dynamic and non-dynamic tessellation elements may provide data representing list(s) of triangles for performing the grid subdivision.
  • dynamic tessellation elements may be represented by a number of triangles based on computed tessellation level(s).
  • the tessellation level may be set according to a number of factors to control the number of triangles and, thus, the degree of detail by which the dynamic tessellation element is rendered for the user.
  • objects that appear closer to the user may be rendered with higher detail than objects that appear more distant from the user.
  • the tessellation level may be computed to approximate the dynamic tessellation element with sufficient detail to permit collision detection operations to be performed accurately.
  • the tessellation level computed for the purposes of generating triangles for the collision detection hierarchy may be determined according to the dynamic tessellation element type (e.g., cone element type, torus element type, and/or surface-of-revolution element type).
  • If the dynamic tessellation element corresponds to a cone element type (see, for example, FIG. 10A), the tessellation level may be computed based on R, where R represents the larger of the two radii (Rt or Rb) of the cone.
  • For a torus element type (see, for example, FIG. 10B), the tessellation level may be computed based on R1, where R1 represents the radius of the torus measured from the center point (C).
  • For a surface-of-revolution element type, a revolution tessellation level (LR) and an arc tessellation level (LA) may be computed.
  • For example, the arc tessellation level may be computed as LA = (θA × (RA1 + RA2)) / 50, where θA represents the angle of the profile or ellipse, and RA1 and RA2 represent the radii of the profile or ellipse.
  • the above-noted examples for the different dynamic tessellation element types assume a scale size that is measured in centimeters. If a different scale size is used, then the exemplary algorithms for computing the tessellation levels may be adjusted accordingly.
  • the tessellation level for generating triangles for the collision detection hierarchy may be restricted to an upper value and/or a lower value.
  • tessellation levels may be restricted by a lower clamp level to guarantee that the tessellation level always has at least a predetermined minimum value.
  • a lower clamp level of 8 may be predefined. With such a lower clamp level, the tessellation level for a cone element may be restricted and reset to 8 in cases where the computed tessellation level is lower than 8.
  • both the horizontal tessellation level LH and the vertical tessellation level Lv may be restricted by a lower clamp level, such as a lower clamp level of 8.
  • clamp levels may also be predefined for the revolution tessellation level LR and the arc tessellation level LA.
  • a lower clamp level of 8 may be predefined for the revolution tessellation level LR.
  • a lower clamp level of 4 may be predefined for the arc tessellation level LA.
  • the above-noted clamp levels are exemplary only and other clamp levels may be predefined, consistent with embodiments of the present invention.
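  • The following C++ sketch shows how a computed tessellation level might be clamped to such a lower value, using the exemplary surface-of-revolution arc level and the exemplary clamp values of 8 and 4 discussed above; the formula and names are reconstructions and assumptions, not a definitive implementation.

    #include <algorithm>
    #include <cmath>

    // Clamp a computed tessellation level to a predefined lower clamp level.
    int clampTessellationLevel(double level, int lowerClamp) {
        return std::max(static_cast<int>(std::ceil(level)), lowerClamp);
    }

    // Exemplary arc tessellation level for a surface-of-revolution element,
    // assuming a centimeter scale (see the scale note above):
    //   LA = (thetaA * (RA1 + RA2)) / 50, clamped to a lower level of 4.
    int arcTessellationLevel(double thetaA, double ra1, double ra2) {
        double la = (thetaA * (ra1 + ra2)) / 50.0;
        return clampTessellationLevel(la, 4);
    }

    // A revolution level, or a cone/torus level, would be clamped in the same
    // way, e.g., to a lower clamp level of 8.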
  • the number of triangles for representing the dynamic tessellation element may be determined. As further described below with reference to Table 1 , predetermined relationships or algorithms may be used for determining the number of triangle segments based on the computed tessellation level(s). In addition, various techniques may be used for generating vertex data for the triangle segments representing each dynamic tessellation element, including the techniques further described below for dynamically tessellating elements with triangles during run-time operations.
  • After dividing the environment into equally-sized cubes (step S.100), all triangle elements are allocated to cubes of the grid (step S.102).
  • VR system 28 may allocate or associate triangle elements according to the cubes within which they are fully contained. A triangle element that is only partially within one or more cubes may be allocated or associated with each cube in which it is partially present. Alternatively, cubes of the grid may be stretched in order to allocate triangle elements for the collision detection hierarchy.
  • each cube of the grid subdivision is further processed by creating a bounding box tree for each cube of the grid (step S.104).
  • VR system 28 may create a bounding box tree such as an AABB tree for each cube of the grid.
  • each cube is first divided into two equally sized cubes along one of the three major x, y, z planes. This division of cubes is then repeated along alternate directions of the three major x, y, z planes until a cube is empty or contains fewer than a predetermined number of triangle elements (such as 10 or less triangle elements).
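  • For illustration only, the following C++ sketch outlines this kind of AABB-tree construction with alternating split axes and a 10-triangle leaf threshold; the containment test, the memory handling and the names are assumptions, and a production implementation would also bound the recursion depth.

    #include <cstddef>
    #include <vector>

    struct Vec3 { float v[3]; };                       // x, y, z components
    struct Triangle { Vec3 p[3]; };

    struct AabbNode {
        Vec3 minPt, maxPt;
        std::vector<Triangle> tris;
        AabbNode* left;
        AabbNode* right;
    };

    const std::size_t kMaxTrisPerBox = 10;             // e.g., 10 or fewer triangles

    static bool contains(const AabbNode& n, const Triangle& t) {
        for (int i = 0; i < 3; ++i)
            for (int a = 0; a < 3; ++a)
                if (t.p[i].v[a] < n.minPt.v[a] || t.p[i].v[a] > n.maxPt.v[a])
                    return false;
        return true;
    }

    void build(AabbNode& node, int axis) {             // axis: 0 = x, 1 = y, 2 = z
        if (node.tris.size() <= kMaxTrisPerBox) return;          // leaf box
        float mid = 0.5f * (node.minPt.v[axis] + node.maxPt.v[axis]);
        node.left  = new AabbNode{node.minPt, node.maxPt, {}, nullptr, nullptr};
        node.right = new AabbNode{node.minPt, node.maxPt, {}, nullptr, nullptr};
        node.left->maxPt.v[axis]  = mid;               // lower half box
        node.right->minPt.v[axis] = mid;               // upper half box
        std::vector<Triangle> keep;                    // dissected triangles stay here
        for (const Triangle& t : node.tris) {
            if (contains(*node.left, t))       node.left->tris.push_back(t);
            else if (contains(*node.right, t)) node.right->tris.push_back(t);
            else                               keep.push_back(t);
        }
        node.tris.swap(keep);
        build(*node.left,  (axis + 1) % 3);            // alternate the split axis
        build(*node.right, (axis + 1) % 3);
    }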
  • In addition to AABB trees, other known types of bounding box trees can be utilized as part of step S.104, consistent with embodiments of the invention.
  • All triangle elements are allocated to the cubes or bounding boxes of the tree (step S.106). For instance, triangle elements may be allocated or associated with bounding boxes that they are fully contained within. If a cube division dissects a triangle element, the dissected triangle element may be allocated or associated with only the previous cube or bounding box within which it was contained. Other techniques and methods may be utilized for allocating dissected triangle elements. For example, bounding boxes may be stretched to allocate triangle elements.
  • the collision detection hierarchy is stored (step S.108).
  • the collision detection hierarchy may be filed as part of a collision detection file (such as file 56) of the VR file(s).
  • Such a file may include data indicating the hierarchy of bounding boxes and, for each bounding box, the triangle elements that are allocated or associated with the bounding box.
  • each bounding box may be stored with data representing a bounding sphere.
  • the bounding sphere may be a sphere that bounds the bounding box and may be used during run-time operations to determine whether a collision or intersection occurs between an object and the bounding area (represented by the bounding sphere).
  • collision detection operations for objects (such as an avatar also represented by a bounding sphere) may be performed faster and with less processing resources being required.
  • user input and data from the VR file(s) are used to simulate and render successive frames of a complex virtual environment.
  • the VR file(s) for the complex environment can be accessed or loaded from memory (such as memory 30 in FIG. 1).
  • a dialog box or window may be displayed to the user so that the appropriate VR file(s) can be identified or selected.
  • a user may also be given the option to set various system configurations, such as screen resolution, color depth and/or key or input device selection for navigation.
  • an initial frame of the complex environment may be displayed to the user based on a predetermined or default camera or viewer position. Thereafter, a simulation loop can be executed to simulate and render frames according to input from the user.
  • various operations may be performed to simulate real-world phenomena. Examples of such operations include gravity simulation and predictive collision detection.
  • the position and viewpoint for the next frame can be determined.
  • information determined during simulation can be used to generate rendering instructions.
  • the rendering instructions are then used by the graphics system to cause the frame to be displayed to the user.
  • FIGS. 16A and 16B illustrate exemplary flow diagrams for performing predictive collision detection, consistent with embodiments of the invention.
  • predictive collision detection may be performed during run-time operations to simulate frames of a complex virtual environment based on user input.
  • the features of FIGS. 16A and 16B can be performed on a frame-by-frame basis during a simulation loop or the consecutive display of frames of the virtual environment.
  • the position of the user may be represented in a third-person mode by, for example, an avatar.
  • the avatar may be represented by various images, such as an image of a human.
  • user input is analyzed for the next frame to be simulated and rendered (step S.120).
  • user input collected over a predetermined polling period may be analyzed to determine position, direction and velocity of an avatar or a moving object.
  • the predetermined polling period may be set to the rate at which frames are rendered to the user (i.e., the frame rate).
  • the frame rate may be actively measured (e.g., by measuring and taking the average rate for the last 10-20 frames).
  • the current position of the avatar (or moving object) and the desired position for the next frame are then determined (step S.122). For example, a virtual projection or path may be calculated from the current position of the avatar (at the beginning of the polling period) to the desired position (at the end of the polling period) based on the position, direction and velocity data derived from the user input. Similar calculations could also be performed with respect to a moving object controlled by the user input. Further, when determining the desired position, the effect of gravity may be considered to determine the distance traveled by the avatar or moving object.
  • a rough proximity query is performed (step S.124).
  • a rough proximity determination can be performed by identifying which cubes of the grid subdivision intersect with the avatar or moving object when it is moved along the calculated path between the current position and desired position.
  • the avatar or moving object may be approximated with a bounding volume, such as an ellipsoid or sphere, that is centered relative to the calculated path.
  • cubes of the grid subdivision may be approximated with a sphere or other types of bounding volumes.
  • a refined proximity query is performed based on the bounding box tree associated with each cube (step S.126). For instance, a refined proximity determination can be performed for each identified cube by identifying which bounding boxes of the associated bounding box tree intersect with the avatar or moving object when it is moved along the calculated path between the current position and the desired position.
  • the avatar or moving object may be approximated with a bounding volume, such as an ellipsoid or sphere, that is centered relative to the calculated path as it is moved between the current position and desired position.
  • each bounding box can be approximated with a bounding volume, such as a sphere.
  • each of the identified bounding boxes is analyzed to identify triangle elements that intersect with the moving avatar or object (step S.128).
  • the avatar or moving object can be approximated with a bounding volume (such as an ellipsoid) and moved along the calculated path between the current position and the desired position.
  • a determination can then be made whether there are any intersecting triangles within the identified bounding box based on the projection of the avatar or object.
  • various parameterization and/or other types of techniques may be employed.
  • a sweep test may be employed for detecting for intersections between the avatar (approximated with a sphere) and grid cells or bounding boxes (also approximated with a sphere).
  • all points on the surface of the bounding sphere for the avatar may be parameterized as a function of location (x, y, z) and time as the bounding sphere is moved from its current position to the desired position.
  • a test may be performed for any intersections between the bounding sphere for the avatar and the bounding sphere for the cell or bounding box.
  • the bounding sphere for the cell or bounding box may be treated as a fixed or stationary entity, and all points on the bounding sphere may be parameterized as a function of location (x, y, z).
  • Appendix A contains exemplary programming code prepared in C++ for performing a sweep test with respect to two bounding spheres.
  • the exemplary programming code of Appendix A may be used as part of steps S.124 and S.126 to determine any intersections that occur when one bounding sphere (representing the avatar or moving object) is moved relative to another bounding sphere (representing the grid cell or bounding box).
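  • As an illustration only (and not the actual code of Appendix A), the following minimal C++ sketch performs such a sweep test between a moving bounding sphere and a stationary bounding sphere; the type and function names are hypothetical.

        #include <cmath>

        struct Vec3 { float x, y, z; };

        static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
        static Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }

        // Sweep a moving sphere (center c0, radius r0) along displacement d
        // (covering one polling period, t in [0,1]) against a stationary sphere
        // (center c1, radius r1). Returns true and the earliest contact time.
        bool sweepSphereSphere(const Vec3& c0, float r0, const Vec3& d,
                               const Vec3& c1, float r1, float& tContact)
        {
            const Vec3 m = sub(c0, c1);           // vector between centers at t = 0
            const float R = r0 + r1;              // combined radius
            const float c = dot(m, m) - R * R;
            if (c <= 0.0f) { tContact = 0.0f; return true; }  // already touching

            const float a = dot(d, d);
            if (a == 0.0f) return false;          // not moving and not touching

            const float b = dot(m, d);
            if (b >= 0.0f) return false;          // moving away from the other sphere

            const float disc = b * b - a * c;
            if (disc < 0.0f) return false;        // paths never come close enough

            const float t = (-b - std::sqrt(disc)) / a;   // earliest root of the quadratic
            if (t < 0.0f || t > 1.0f) return false;       // contact outside this frame
            tContact = t;
            return true;
        }

  • The earliest contact time returned by such a test can then be used, as described below, to position the avatar or moving object at the point of contact.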
  • Similar techniques may also be used to perform the operations of step S.128.
  • a sweep test may be employed for detecting intersections between the avatar or moving object (approximated with an ellipsoid or sphere) and stationary triangle elements.
  • the avatar or moving object may be approximated with a bounding sphere that is parameterized as a function of location (x, y, z) and time as it is moved relative to a triangle element. Any intersections between the bounding sphere and the triangle element may be detected and recorded during the sweep test.
  • Appendix B contains exemplary programming code prepared in C++ for performing a sweep test with respect to a bounding sphere and a triangle element. The exemplary programming code of Appendix B may be used as part of step S.128 to determine any intersections that occur when the bounding sphere (representing the avatar or moving object) is moved toward a desired position relative to a stationary triangle.
  • steps S.124, S.126 and S.128 are not limited to the above-noted examples.
  • Other parameterization techniques and 3D intersection tests may be used, consistent with embodiments of the invention.
  • 3D intersection algorithms and approaches may be used for moving objects, such as those described in Eberly, David H., "3D Game Engine Design: A Practical Approach to Real-Time Computer Graphics," Morgan Kaufmann Publishers, ISBN 1558605932, (September 2000), the disclosure of which is expressly incorporated herein by reference in its entirety.
  • various intersection algorithms and programming code for testing for intersections of moving objects may be used, such as that provided at <http://www.magic-software.com/Intersection3D.html>.
  • a determination is then made as to whether a collision was detected (step S.130). For instance, if at least one intersecting triangle is identified, then it is determined that a collision was found. When a collision is detected (step S.130; Yes), each intersecting triangle may be analyzed to determine the time of contact and the exact point where the intersection or contact occurred with the avatar or moving object (step S.132). Thereafter, processing continues to step S.144 (see FIG. 16B). If no collisions are found (step S.130; No), then the avatar or moving object can be moved or adjusted without interference to the desired position and, as a result, the desired position is provided in order to generate the rendering instructions for the next frame (step S.134).
  • the intersecting triangle with the earliest contact time is identified (step S.144).
  • all intersecting triangles and their associated contact times may be analyzed to determine the intersecting triangle that has the earliest time of contact.
  • the position of the avatar or moving object is determined at the earliest time of contact (step S.146). For example, based on the determined velocity from the user input, the avatar can be moved along the calculated path from the current position until the earliest time of contact (which can be measured in time relative to the beginning of the polling period). If the avatar or moving object is represented by a bounding volume (such as an ellipsoid for the avatar), then the center point or position of the bounding volume may be moved along the calculated path.
  • the simulation of a collision may be handled in a number of different ways.
  • the avatar or moving object may simply be stopped at the point of contact for the first collision detected.
  • the collision can be rendered in the next frame, and the user would then be responsible for providing appropriate input to reposition the avatar or move the object to avoid future collisions.
  • an adjusted position may be determined before rendering the next frame to the user.
  • an adjusted position in the complex virtual environment may be determined using various techniques.
  • the adjusted position may be determined by projecting the desired position onto a plane that is tangent to the point of contact (see, for example, FIG. 18A).
  • the adjusted position may be defined at a point where the projection intersects the tangent plane at a ninety degree angle (i.e., orthogonal to the tangent plane).
  • FIG. 17 illustrates an exemplary flow diagram for performing a sliding motion operation, consistent with an embodiment of the invention.
  • a sliding motion may be performed to determine an adjusted position for the avatar or moving object (step S.148). If another collision is detected when performing the sliding motion operation (step S.150, Yes), then a new point of contact may be set based on the earliest point of contact with the avatar or moving object during the sliding motion (step S.154). Thereafter, a sliding motion operation is repeated with the reset point of contact until no collisions are detected (step S.150; No). When no collisions are detected, the avatar or moving object can be moved to the adjusted position without interference and, as a result, the adjusted position may be provided in order to generate the rendering instructions for the next frame (step S.152).
  • the embodiment of FIG. 17 is described with reference to performing a sliding motion operation on an avatar.
  • the exemplary flow diagram of FIG. 17 may also be performed with other moving objects, including those approximated with a bounding ellipsoid or any other type of bounding volume.
  • the point of contact with the avatar is provided (step S.170).
  • the point of contact may be determined with the avatar being approximated with a bounding volume such as an ellipsoid.
  • the center of the bounding ellipsoid may be positioned at the current position and the bounding ellipsoid may be moved along the calculated path toward the desired position until the first point of contact is detected (see, for example, FIGS. 18B and 18C).
  • the avatar or moving object may be represented simply by a point and moved from the current position toward the desired position until the first point of contact with another element or object is detected (see, for example, FIG. 18A).
  • a tangent plane is then calculated at the point of contact (step S.172). For example, a plane tangent to the bounding ellipsoid at the point of contact may be calculated.
  • FIG. 18B illustrates an example of a tangent plane calculated from the point of contact between a bounding ellipsoid for an avatar and a flight of steps.
  • FIG. 18C illustrates another example of a tangent plane calculated from the point of contact between a bounding ellipsoid for an avatar and a wall. In the example of FIG. 18C, the tangent plane is co-planar with the wall.
  • the tangent plane may represent a sliding plane along which a sliding motion is performed to determine the adjusted position of the avatar.
  • the adjusted position of the avatar may be calculated by, for example, projecting the desired position onto the tangent plane (step S.174).
  • the adjusted position may be defined at a point where the projection intersects the tangent plane at a ninety degree angle (i.e., orthogonal to the tangent plane).
  • the projection of the desired position onto the tangent plane results in an adjusted position that is located up the flight of steps away from the point of contact.
  • the projection of the desired position onto the tangent plane results in an adjusted position that is identical to the point of contact. This is because the user is attempting to move the avatar directly into the wall and, as a result, no sliding motion is possible.
  • the point at which the projection intersects the tangent plane at a ninety-degree angle is set as the adjusted position (step S.176).
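  • As a minimal sketch of this projection (not the patent's implementation), the adjusted position can be obtained by removing from the desired position its component along the tangent-plane normal; the names below are hypothetical.

        #include <cmath>

        struct Vec3 { float x, y, z; };

        static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
        static Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
        static Vec3 scale(const Vec3& a, float s) { return { a.x*s, a.y*s, a.z*s }; }

        // Project the desired position orthogonally onto the sliding (tangent) plane
        // defined by the point of contact and the plane normal; the result is the
        // adjusted position used to render the next frame.
        Vec3 slideOnTangentPlane(const Vec3& desired, const Vec3& contact, Vec3 normal)
        {
            const float len = std::sqrt(dot(normal, normal));
            if (len > 0.0f) normal = scale(normal, 1.0f / len);   // make the normal unit length
            const float height = dot(sub(desired, contact), normal);
            return sub(desired, scale(normal, height));           // drop the component along the normal
        }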
  • the effects of gravity may be incorporated.
  • a new adjusted position may be set by incorporating a gravity function to cause the calculated position to float down.
  • the adjusted position could be recalculated to compensate for gravity and cause the avatar to be rendered at a position on the steps above the first point of contact.
  • sliding motion operations may be incorporated to respond to collisions detected during simulation.
  • Sliding motion operations may permit the avatar or moving object to effectively slide to an adjusted position from the first point of contact. Further, only the adjusted position may be rendered to the user. This would permit the user to glide up or down objects such as a flight of steps or stairs.
  • no sliding motion may be simulated and the next frame may be rendered with the avatar stopped at the point of contact (such as in the case of FIG. 18C).
  • the desired or adjusted position calculated during simulation can be used to reset the viewer or camera position for rendering the next frame.
  • view frustum and occlusion culling operations may be performed based on the viewer position.
  • VR file(s) consistent with embodiments of the invention may be used to provide more efficient processing and rendering of the frame.
  • FIG. 19 illustrates an exemplary flow diagram for rendering complex virtual environments using VR file(s), consistent with embodiments of the invention.
  • the exemplary flow diagram of FIG. 19 may be performed during run-time operations using various systems or environments, such as the system environment 10 of FIG. 1 or the network environment 50 of FIG. 2. For purposes of illustration, however, reference will be made to the embodiment of FIG. 1 to describe the flow diagram of FIG. 19.
  • the viewer or camera position is set for the next frame (step S.180). As indicated above, the viewer position for rendering the next frame may be set according to the desired or adjusted position calculated during simulation by VR system 28.
  • the view frustum can be determined and the octree stored as part of the VR file(s) can be traversed and analyzed to cull nodes that are outside of the view frustum (step S.182).
  • view frustum culling can be performed efficiently by graphics system 24 to eliminate cells (i.e., portions of the complex virtual environment) that are outside of the view frustum for the next frame.
  • graphics system 24 may eliminate all nodes or cells of the octree that are completely outside the view frustum for the next frame. If a cell is determined to be partially outside of the view frustum, then graphics system 24 may traverse down the octree to analyze more finite cells and determine which portions are in or outside of the view frustum. Graphics system 24 may traverse the octree until all nodes or cells within the view frustum are identified and all other nodes or cells outside of the view frustum are culled.
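  • The following C++ sketch illustrates one way such an octree traversal with view frustum culling might look; the node layout and plane convention are assumptions, not the actual implementation of graphics system 24.

        #include <vector>

        struct Vec3 { float x, y, z; };
        struct Plane { Vec3 n; float d; };    // points p with n.p + d >= 0 lie inside the frustum

        struct OctreeNode {
            Vec3 boxMin, boxMax;              // axis-aligned bounds of the cell
            OctreeNode* children[8];          // null entries for leaf cells
            // ... element lists omitted for brevity ...
        };

        // Returns true if the box lies completely on the negative side of the plane.
        static bool boxOutsidePlane(const Plane& pl, const Vec3& mn, const Vec3& mx)
        {
            // pick the box corner furthest along the plane normal
            Vec3 p = { pl.n.x >= 0 ? mx.x : mn.x,
                       pl.n.y >= 0 ? mx.y : mn.y,
                       pl.n.z >= 0 ? mx.z : mn.z };
            return pl.n.x * p.x + pl.n.y * p.y + pl.n.z * p.z + pl.d < 0.0f;
        }

        // Recursively collect octree cells at least partially inside the view frustum;
        // cells completely outside any frustum plane are culled with their subtrees.
        void cullOctree(OctreeNode* node, const Plane frustum[6],
                        std::vector<OctreeNode*>& visible)
        {
            if (!node) return;
            for (int i = 0; i < 6; ++i)
                if (boxOutsidePlane(frustum[i], node->boxMin, node->boxMax))
                    return;                                   // fully outside: cull whole subtree
            bool leaf = true;
            for (int i = 0; i < 8; ++i)
                if (node->children[i]) { leaf = false; cullOctree(node->children[i], frustum, visible); }
            if (leaf) visible.push_back(node);                // leaf cell survives culling
        }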
  • the stored global list of occluder candidates may be analyzed in order to cull nodes or cells of the octree that are occluded (step S.184).
  • the VR file(s) may include a global list of occluder candidates which may be stored as a separate list or merged into the octree.
  • these global occluder candidates may be analyzed to identify nodes or cells of the octree that are occluded by other objects or elements within the view frustum.
  • other occlusion culling operations may be performed to cull occluded nodes or cells for the next frame.
  • an exemplary flow diagram for analyzing occluder candidates and performing occlusion culling is described below with reference to FIG. 20.
  • elements to be dynamically tessellated may be identified and dynamically tessellated (step S.186).
  • graphics system 24 may analyze each visible cell to locate and identify dynamic tessellation elements. The identified elements may then be dynamically tessellated according to various factors, such as the size of the element or object on the screen.
  • FIG. 23, described in detail below, illustrates an exemplary flow diagram for performing dynamic tessellation operations, consistent with embodiments of the invention.
  • the next frame is then rendered to the user (step S.188).
  • graphics system 24 may generate a set of commands (such as OpenGL commands) based on the results of the rendering operations for the next frame to be rendered.
  • the OpenGL commands may be issued by the graphics engine (such as graphics engine 242) to the graphics card or hardware (such as graphics hardware 18) to cause display 12 to render the next frame to the user.
  • FIG. 20 is an exemplary flow diagram for analyzing global occluder candidates and performing occlusion culling, consistent with embodiments of the invention.
  • the exemplary flow diagram of FIG. 20 may be performed during runtime operations to perform occlusion culling using, for example, the system environment 10 of FIG. 1 or the network environment 50 of FIG. 2. For purposes of illustration, however, reference will be made to the embodiment of FIG. 1 to describe the flow diagram of FIG. 20.
  • an associated quality metric may be determined for each global occluder candidate that is within the view frustum (step S.190). For instance, based on the view frustum or octree nodes determined to be visible for the next frame, global occluder candidates in the VR file(s) may be analyzed by graphics system 24 to determine a quality metric. The quality metric may be determined based on a number of factors that indicate whether the occluder candidate may function as a good occluder for the next frame.
  • factors for determining a quality metric may include: the visible surface area of the occluder candidate; the distance of the occluder candidate from the viewer or camera position; the shortest distance between the center of the occluder candidate and the perimeter of the occluder candidate; the angle between the viewer or camera position for the next frame, and the point on the perimeter of the occluder that is closest to the center of the occluder (the "closest perimeter point"); and/or the number of elements occluded by the occluder candidate in previous frame(s).
  • the center of the occluder candidate may be a middle point or center of gravity for the occluder candidate from which the closest perimeter point is determined.
  • the angle factor of the quality metric may be determined as the angle between the viewer position and the closest perimeter point, measured relative to the center of the occluder candidate.
  • the number n may be determined from the number of occluded elements in the previous frame or a weighted average of occluded elements (giving more weight to the number of occluded elements from more recent frames) of a plurality of previous frames (such as the last ten frames).
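  • The factors above are listed without a formula for combining them; the following C++ sketch shows one purely illustrative weighting (the structure, function name and constants are assumptions, not the patent's metric).

        #include <cmath>

        // Factors listed above for rating a global occluder candidate.
        struct OccluderFactors {
            float visibleArea;        // visible surface area of the candidate
            float distanceToViewer;   // distance from the viewer or camera position
            float centerToPerimeter;  // shortest distance from center to perimeter
            float angleToViewer;      // angle between viewer and closest perimeter point (radians)
            int   occludedLastFrames; // elements occluded by the candidate in recent frame(s)
        };

        // One possible way to fold the factors into a single score: large, nearby,
        // "fat" occluders that recently occluded many elements score highest.
        float occluderQuality(const OccluderFactors& f)
        {
            float q = f.visibleArea * f.centerToPerimeter;
            q /= (1.0f + f.distanceToViewer);
            q *= std::cos(f.angleToViewer);
            q += 0.1f * static_cast<float>(f.occludedLastFrames);   // arbitrary weighting constant
            return q;
        }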
  • the occluder candidates that are within the view frustum are sorted in accordance with their quality metric (step S.192). For instance, graphics system 24 may sort the occluder candidates in descending order based on their associated quality metric, such that occluder candidates with higher quality metrics come before other occluder candidates with lower quality metrics. By sorting occluder candidates, occluder candidates with higher quality metrics can be analyzed first and occluded cells can be identified more quickly and efficiently.
  • the analysis of occluder candidates may be performed over a predetermined time period.
  • such an analysis may also be limited in time (e.g., to a 2 ms time period) to prevent processing time of graphics system 24 from being wasted on excessive occluder candidates that have low quality metrics or only marginal chances of acting as occluders.
  • the time period may be set according to the speed of graphics hardware 18, so that graphics system 24 does not spend more time culling than the graphics hardware would need to simply render the additional geometry.
  • each of the fully occluded cells may be culled (step S.200).
  • a cell may be determined to be fully occluded if the cell is within the shadow volume of the occluder candidate (i.e., the points defining the cell are contained within all edges of the occluder candidate (OC-1)).
  • a cell may be determined not to be fully occluded if the points defining the cell are outside at least one edge of the occluder candidate (OC-1) and if the cell does not intersect any one of the edges of the occluder candidate.
  • If any cells are determined to be partially occluded (step S.202; Yes), then an analysis may be performed to determine whether any such cells are fully occluded by a combination of occluders. If so, those cells may be culled (step S.204). After inspecting each occluder candidate, a determination is made as to whether the predetermined time period has expired (step S.206). If the timer has not expired (step S.206; No), then additional occluder candidates are examined. Otherwise, if the timer has expired (step S.206; Yes), the process of inspecting occluder candidates is terminated.
  • partially occluded cells may be further inspected to determine if they are fully occluded by a combination of occluders (see step S.204). For each partially blocked cell, an inspection can be made for neighboring occluders that, in combination with the occluder candidate, fully occlude the cell. As illustrated in FIG. 22C, a neighboring occluder (OC-2) may be identified by searching along the intersecting edge of the occluder candidate (OC-1) that intersects the cell. For each neighboring occluder that has a common edge with the occluder candidate along the intersecting edge, the other edges of the neighboring occluder may be analyzed to determine if the cell is occluded.
  • if the points defining the cell are within all of the other edges of the neighboring occluder, the cell is fully occluded by the combination of the occluders and, as a result, the cell is culled (step S.204).
  • if an intersecting edge is detected with the neighboring occluder, then additional searching may be performed for other neighboring occluders along that intersecting edge. This process may continue until the cell is determined to be fully occluded or until no additional neighboring occluders are located. In cases where an intersecting edge is identified along an edge of a neighboring occluder and no other neighboring occluders are located with respect to that edge, the cell may be determined to be only partially occluded and the process may terminate.
  • a cell may first be determined to be partially occluded by an occluder candidate (OC-1).
  • An inspection along the intersecting edge of the occluder candidate (OC-1) would reveal a neighboring occluder candidate (OC-2).
  • the other edges of the neighboring occluder (OC-2) may be analyzed. With the example of FIG. 22D, such an analysis would reveal that the points defining the cell are within two of the other edges of the occluder (OC-2). However, an intersecting edge also exists with respect to the neighboring occluder (OC-2).
  • the cell is not fully occluded by the combination of the identified occluders (OC-1 and OC-2).
  • a search along the intersecting edge of the neighboring occluder (OC-2) would reveal that there are no additional neighboring occluders. Therefore, in the example of FIG. 22D, the process would terminate with the cell being identified as being only partially occluded.
  • Another example is illustrated in FIG. 22E.
  • the cell would first be determined to be partially occluded by an occluder candidate (OC-1). An inspection along the intersecting edge of the occluder candidate (OC-1) would then reveal a neighboring occluder candidate (OC-2). To determine if the cell is fully occluded by the neighboring occluder (OC-2), the other edges of the neighboring occluder may be analyzed. In the case of FIG. 22E, such an analysis would reveal that the points defining the cell are within all of the other edges of the occluder (OC-2) and that no other intersecting edges exist.
  • the cell is fully occluded by the identified occluders (OC-1 and OC-2) and the cell may be culled. While another neighboring occluder (OC-3) exists, processing may terminate once it is determined that the cell is fully occluded and, thus, there is no need to search or analyze additional neighboring occluders.
  • FIG. 23 illustrates an exemplary flow diagram for performing dynamic tessellation, consistent with embodiments of the invention.
  • the exemplary flow diagram of FIG. 23 may be performed during run-time operations to perform dynamic tessellation using, for example, the system environment 10 of FIG. 1 or the network environment 50 of FIG. 2. For purposes of illustration, however, reference will be made to the embodiment of FIG. 1 to describe the flow diagram of FIG. 23.
  • the dynamic tessellation elements for each visible cell within the view frustum are first identified (step S.220).
  • curved and rounded elements may be filtered and stored as dynamic tessellation elements in the VR file(s).
  • graphics system 24 may readily identify and single out those elements stored in the VR file(s) as dynamic tessellation elements.
  • one or more types of dynamic tessellation elements may be stored in the VR file(s), such as cone element types, torus element types, and surface-of-revolution types.
  • a filtering process may be performed based on the screen size of the element or object (step S.224).
  • the filtering process may be performed so that only identified elements of a significant or predetermined size in the next frame will be dynamically tessellated.
  • one or more factors may be considered, including the actual size of the element and/or the distance of the element from the viewer or camera position.
  • graphics system 24 can determine whether to filter the element so that it will be dynamically tessellated for the next frame or whether to reject an element so that it will not be dynamically tessellated and/or displayed.
  • elements that are too small or insignificant for dynamic tessellation (such as elements that have a screen size of only a few pixels) may be identified and rejected by graphics system 24.
  • the screen size for an element may be estimated and compared with a predetermined threshold. If the estimated screen size is below the predetermined threshold, then the element may be rejected for filtering. Otherwise, if the estimated screen size is equal to or greater than the predetermined threshold, the element may be filtered so that it will be dynamically tessellated and rendered in the next frame.
  • elements determined to have a screen size that is below a predetermined threshold may have a tessellation level that is set to zero so that they are not displayed.
  • Embodiments consistent with the invention may estimate or calculate a screen size for each type of dynamic tessellation element.
  • the visible radius H of a cone may be computed by taking the distance between the center point (C) and either the center point of the top (T) or the center point of the bottom (B) of the cone.
  • if the screen size Sc of the cone element type is determined to be less than 0.3, then the element may be rejected. Otherwise, if the screen size Sc of the cone is equal to or greater than 0.3, then the element may be further processed for dynamic tessellation.
  • the above-noted example assumes a scale size that is measured in centimeters. If a different scale size is used, then the exemplary algorithm for computing the screen size (Sc) and the exemplary thresholds may be adjusted accordingly.
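  • As an illustrative sketch only: the cone screen-size formula is not reproduced here, so the code below assumes a form analogous to the surface-of-revolution formula given later (Sc = (50 x H)/D, with D the distance from the viewer to the cone's center) and applies the 0.3 threshold; all names are hypothetical and the scale is assumed to be centimeters.

        #include <cmath>

        struct Vec3 { float x, y, z; };

        static float distance(const Vec3& a, const Vec3& b)
        {
            const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
            return std::sqrt(dx*dx + dy*dy + dz*dz);
        }

        // Screen-size filter for a cone element: the visible radius H is the distance
        // from the cone's center point C to the center of its top (or bottom) face,
        // and Sc = (50 * H) / D is an assumed form with D the viewer-to-center distance.
        bool coneLargeEnoughToTessellate(const Vec3& viewer, const Vec3& center,
                                         const Vec3& topCenter)
        {
            const float H  = distance(center, topCenter);       // visible radius
            const float D  = distance(viewer, center);
            const float Sc = (50.0f * H) / (D > 0.0f ? D : 1.0f);
            return Sc >= 0.3f;                                   // reject below the 0.3 threshold
        }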
  • Similar processing may be performed for identified torus element types.
  • the identified element is a torus element type (see, for example, FIG. 10B)
  • the visible radius V of a torus may be calculated by summing the inner and outer radii (R1 and R2) of the torus.
  • the element may be rejected if the screen size ST of the element is determined to be less than 0.3; otherwise, the element may be filtered for dynamic tessellation.
  • the above-noted example assumes a scale size that is measured in centimeters. If a different scale size is used, then the exemplary algorithm for computing the screen size (ST) and the exemplary thresholds may be adjusted.
  • the screen size of a surface-of-revolution element may be computed as Ss = (50 x V)/(DA + DC), where V represents the visible radius of the element, DA represents the distance between the viewer or camera position (E) and the center point (CA) of the arc or ellipse of the element, and DC represents the distance between the viewer or camera position (E) and the center point (CS) of the surface-of-revolution element.
  • the visible radius V may be calculated by summing the radii of the arc or ellipse (RA1 and RA2) with the radius of the revolution (RS).
  • the element may be rejected if the screen size Ss of the element is determined to be less than 0.03. Otherwise, if the screen size Ss of the element is equal to or greater than 0.03, then the surface-of-revolution element may be filtered for dynamic tessellation.
  • the above-noted example assumes a scale size that is measured in centimeters. If a different scale size is used, then the exemplary algorithm for computing the screen size (Ss) and the exemplary thresholds may be adjusted accordingly.
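  • A minimal C++ sketch of the surface-of-revolution screen-size test described above (V = RA1 + RA2 + RS, Ss = (50 x V)/(DA + DC), threshold 0.03, scale in centimeters); the function and parameter names are hypothetical.

        #include <cmath>

        struct Vec3 { float x, y, z; };

        static float distance(const Vec3& a, const Vec3& b)
        {
            const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
            return std::sqrt(dx*dx + dy*dy + dz*dz);
        }

        // Screen-size test for a surface-of-revolution element:
        //   V  = Ra1 + Ra2 + Rs            (visible radius)
        //   Ss = (50 * V) / (Da + Dc)
        // where Da is the viewer-to-arc-center distance and Dc the viewer-to-element-center distance.
        bool surfaceOfRevolutionLargeEnough(const Vec3& viewer, const Vec3& arcCenter,
                                            const Vec3& surfCenter,
                                            float Ra1, float Ra2, float Rs)
        {
            const float V     = Ra1 + Ra2 + Rs;
            const float denom = distance(viewer, arcCenter) + distance(viewer, surfCenter);
            const float Ss    = denom > 0.0f ? (50.0f * V) / denom : 1e9f;  // viewer at the element: keep it
            return Ss >= 0.03f;                                             // reject below the 0.03 threshold
        }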
  • a tessellation level may be computed (step S.228).
  • the tessellation level may depend on or relate to the screen size of the element or object.
  • the tessellation level determined during runtime operations may control the manner or level of detail by which an element is rendered.
  • the tessellation level may be based on a predetermined scale or related to a value which indicates the number of triangles used for rendering the element or object. Further, as indicated above, if the screen size of an element is determined to be below a predetermined threshold, then the tessellation level for the element may be set to zero so that the element will not be displayed.
  • a tessellation level may be computed for each type of dynamic tessellation element.
  • the quality level Q may be user defined to permit control or adjustment to the level of detail in the rendered frame.
  • a default value (such as 1) may be assigned to the quality level Q to represent normal processing.
  • a user can adjust the level of detail in the rendered frame so that a dynamically tessellated cone appears with lesser or greater detail.
  • the above-noted example assumes a scale size that is measured in centimeters. If a different scale size is used, then the exemplary algorithm for computing the tessellation level (LC) may be adjusted accordingly.
  • the tessellation level for torus element types can also be determined, consistent with embodiments of the invention. For instance, to dynamically tessellate a torus element type (see, for example, FIG. 10B), a horizontal tessellation level (LH) and a vertical tessellation level (Lv) may be computed.
  • the quality level Q may be defined by a user to permit control or adjustment to the level of detail in the rendered frame.
  • a user can adjust the level of detail in the rendered frame so that a dynamically tessellated torus appears with lesser or greater detail.
  • the above-noted example assumes a scale size that is measured in centimeters. If a different scale size is used, then the exemplary algorithms for computing the tessellation levels (LH and Lv ) may be adjusted accordingly.
  • Surface-of-revolution elements may also be dynamically tessellated. For instance, in order to dynamically tessellate a surface-of-revolution element type (see, for example, FIG. 10C), a revolution tessellation level (LR) and an arc tessellation level (LA) may be computed.
  • the quality level Q may be defined by a user to permit control or adjustment to the level of detail in the rendered frame.
  • a user can adjust the level of detail in the rendered frame so that a dynamically tessellated element appears with lesser or greater detail.
  • the tessellation level for a dynamically tessellated element may be restricted to an upper value and/or a lower value.
  • upper and/or lower clamp levels may be predefined.
  • for example, an upper clamp level (such as 36) may be predefined for cone elements; the tessellation level for a cone element may be restricted and reset to 36 in cases where the computed tessellation level is greater than the upper clamp level.
  • Clamp levels may also be predefined for torus type elements.
  • upper and lower clamp levels may be predefined for the horizontal tessellation level LH and the vertical tessellation level Lv.
  • an upper clamp level of 36 and a lower clamp level of 3 may be predefined for the horizontal tessellation level LH.
  • an upper clamp level of 64 and a lower clamp level of 1 may be predefined for the vertical tessellation level Lv.
  • the tessellation level may be restricted and reset when the computed value for the tessellation level exceeds the predefined upper or lower clamp levels.
  • the tessellation levels for a surface-of-revolution element may also be restricted by predefined clamp levels.
  • upper and lower clamp levels may be predefined for the revolution tessellation level LR and the arc tessellation level LA.
  • an upper clamp level of 64 and a lower clamp level of 8 may be predefined for the revolution tessellation level LR.
  • an upper clamp level of 32 and a lower clamp level of 4 may be predefined for the arc tessellation level LA.
  • the tessellation level may be restricted and reset when the computed value for the tessellation level exceeds the predefined upper or lower clamp levels.
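  • The clamp levels above can be applied with a small helper such as the following sketch; only the upper clamp (36) is stated for cone elements, and the helper and type names are hypothetical.

        // Clamp computed tessellation levels to the predefined limits described above.
        static int clampLevel(int level, int lo, int hi)
        {
            if (level < lo) return lo;
            if (level > hi) return hi;
            return level;
        }

        struct TorusLevels   { int horizontal, vertical; };
        struct SurfRevLevels { int revolution, arc; };

        // Cone: only an upper clamp of 36 is stated.
        int clampConeLevel(int Lc) { return Lc > 36 ? 36 : Lc; }

        // Torus: horizontal level clamped to 3..36, vertical level to 1..64.
        TorusLevels clampTorusLevels(int Lh, int Lv)
        {
            return { clampLevel(Lh, 3, 36), clampLevel(Lv, 1, 64) };
        }

        // Surface-of-revolution: revolution level clamped to 8..64, arc level to 4..32.
        SurfRevLevels clampSurfRevLevels(int Lr, int La)
        {
            return { clampLevel(Lr, 8, 64), clampLevel(La, 4, 32) };
        }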
  • the vertex data for dynamically tessellated elements may be buffered in memory to facilitate such an analysis.
  • the tessellation levels for previously rendered dynamic tessellation elements may be stored in a table or file. If the element was not rendered in the previous frame at the same tessellation level (step S.232; No), then a vertex array may be computed for rendering the dynamically tessellated element according to the tessellation level for the next frame (step S.238).
  • the stored vertex data for rendering the element in the previous frame may be used (step S.236).
  • the element can be rendered faster in the next frame without the need to use any processing resources for computing the array.
  • the stored vertex data may be stored in a memory buffer such as video, AGP or backend memory (see FIG. 24 A).
  • various techniques may be employed. For example, the number of triangles for rendering an element may be computed for each type of dynamic tessellation element as a function of the tessellation level. Consistent with embodiments of the invention, Table 1 includes exemplary relationships between the computed tessellation level and the number of triangle segments to render various types of the dynamic tessellation elements.
  • an open cone refers to a cone element in which both ends of the cone are rendered as an open surface.
  • the side surface of the cone element may be rendered with 2Lc number of triangles.
  • a closed cone refers to a cone element in which both ends of the cone are rendered with a solid or closed surface (such as a cap). Closed cones may be rendered using 2Lc triangles for the side surface and Lc triangles for each end of the cone. Thus, a total of 4Lc triangles (2Lc + Lc + Lc) may be used to render a closed cone. Of course, if only one end of a cone element is closed, then a total of 3Lc triangles (2Lc + Lc) may be used to render the cone.
  • Torus elements may also be rendered in a similar fashion.
  • an open torus refers to a torus element in which both ends of the torus are rendered as an open surface.
  • the side surface of the torus element may be rendered with (2Lv x LH) number of triangles.
  • a closed torus refers to a torus element in which both ends of the torus are rendered with a solid or closed surface (such as a cap).
  • a closed torus may be rendered using (2Lv x LH) triangles for the side surface and Lv triangles for each end of the torus.
  • Thus, a total of (2Lv x LH) + 2Lv triangles may be used to render a closed torus.
  • If only one end of the torus is closed, then a total of (2Lv x LH) + Lv triangles may be used to render the torus.
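  • The triangle-count relationships described above can be summarized in code as follows (an illustrative sketch; the function names are hypothetical):

        // Number of triangles used to render each dynamic tessellation element type
        // as a function of its computed tessellation level(s).
        int openConeTriangles(int Lc)                    { return 2 * Lc; }
        int closedConeTriangles(int Lc)                  { return 4 * Lc; }            // 2Lc side + Lc per capped end
        int halfClosedConeTriangles(int Lc)              { return 3 * Lc; }            // only one end capped
        int openTorusTriangles(int Lh, int Lv)           { return 2 * Lv * Lh; }
        int closedTorusTriangles(int Lh, int Lv)         { return 2 * Lv * Lh + 2 * Lv; }
        int halfClosedTorusTriangles(int Lh, int Lv)     { return 2 * Lv * Lh + Lv; }
        int surfaceOfRevolutionTriangles(int Lr, int La) { return 2 * Lr * La; }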
  • Various techniques may be used to render dynamic tessellation elements according to the computed tessellation level(s).
  • the top and bottom ellipses of the cone may be divided by Lc points placed about the perimeter of each ellipse.
  • a set of triangles may be defined for rendering the cone by connecting the points of the top and bottom ellipses.
  • the triangle segments may be defined as a triangle list or triangle strip. For an open cone, this will result in the cone being rendered with 2Lc triangles.
  • For a closed cone, the above-noted process will result in the cone being rendered with 4Lc triangles, defined as a single triangle list or with up to three triangle strips.
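  • A hedged C++ sketch of this construction for the side surface of an open cone is shown below; it assumes circular cross-sections and a z-aligned axis, and it is not the code of Appendix C.

        #include <cmath>
        #include <vector>

        struct Vec3 { float x, y, z; };

        // Build a triangle strip for the side surface of an open cone at tessellation
        // level Lc: Lc points are placed around the bottom and top circles and
        // interleaved, yielding 2*Lc triangles when drawn as a strip.
        std::vector<Vec3> tessellateOpenCone(const Vec3& bottomCenter, float bottomRadius,
                                             const Vec3& topCenter, float topRadius, int Lc)
        {
            std::vector<Vec3> strip;
            if (Lc < 3) return strip;                      // need at least 3 segments
            strip.reserve(2 * (Lc + 1));
            const float twoPi = 6.28318530718f;
            for (int i = 0; i <= Lc; ++i) {                // i == Lc repeats point 0 to close the loop
                const float a = twoPi * (i % Lc) / Lc;
                const float c = std::cos(a), s = std::sin(a);
                strip.push_back({ bottomCenter.x + bottomRadius * c,
                                  bottomCenter.y + bottomRadius * s,
                                  bottomCenter.z });
                strip.push_back({ topCenter.x + topRadius * c,
                                  topCenter.y + topRadius * s,
                                  topCenter.z });
            }
            return strip;    // 2*(Lc+1) vertices -> 2*Lc triangles as a strip
        }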
  • Similar techniques may be used for rendering other types of dynamic tessellation elements.
  • the inner ellipse or circle of the torus (having a radius R2; see FIG. 10B) may be divided by LH points placed about the perimeter of the ellipse.
  • Lv inner ellipses may be defined at equal intervals about the center (C) of the torus, with the center of each ellipse being positioned at a distance equal to the larger radius (R1) from the center (C) of the torus.
  • a set of triangles may be defined for rendering the torus by connecting the points of the ellipses.
  • the triangle segments may be defined as a triangle list or triangle strip.
  • Surface-of-revolution element types may also be dynamically tessellated and rendered according to the computed tessellation levels.
  • the arc of the profile or ellipse of the element may be divided by LA points.
  • LR arc profile segments may be defined at equal intervals about the center (CS; see FIG. 10C) of the surface-of-revolution.
  • a set of triangles may be defined for rendering the surface-of-revolution by connecting the points of the arc profile segments.
  • the surface-of-revolution will be rendered with 2LR x LA triangles, defined as a single triangle list or as a single triangle strip.
  • Appendix C includes exemplary programming code prepared in C++ for rendering a cone element.
  • the exemplary programming code of Appendix C generates triangle segments for a cone element based on a determined tessellation level for the element.
  • Appendix D includes exemplary programming code prepared in C++ for tessellating torus or elbow elements. In the exemplary code of Appendix D, triangle segments for an elbow or torus element are generated based on determined tessellation levels.
  • In Appendix E, exemplary C++ programming code is provided for generating triangle segments for rendering a surface-of-revolution element based on determined tessellation levels for the element.
  • memory management operations may be performed to improve graphics rendering and system performance. For instance, memory management operations may be performed for allocating and handling data stored in video memory and/or accelerated graphics port (AGP) memory.
  • one or more rendering modules may be provided with the graphics system (such as graphics system 24 of FIG. 1) so that rendering operations are optimized for the particular type of graphics hardware employed by the system.
  • a software-based rendering module may be provided for each type of graphics hardware (GeForce type graphics cards, etc.) supported by the system. The specifics of the graphics hardware may be automatically detected when the VR file(s) are loaded or may be specified by the user during system set-up.
  • an optimum rendering module may be provided that includes a custom memory manager.
  • the custom memory manager may load and store vertex array data from the VR file(s) into the video memory of the graphics card. If video memory is not available, then the data may be stored in the system or AGP memory. In either case, storing as much vertex array data as possible into the video memory of the graphics card can reduce system access and improve the efficiency of run-time operations.
  • With reference to FIGS. 24A-24C, exemplary systems and methods for performing memory management operations will now be described, consistent with embodiments of the invention.
  • the memory management operations described below may be performed to provide efficient rendering of complex virtual environments based on data from VR file(s).
  • the exemplary features of the memory management operations, such as those described below with reference to FIGS. 24B and 24C, may be performed by one or more software-based modules.
  • Video memory 254 may be implemented with the video memory of a graphics card (such as a graphics card provided as part of graphics hardware 18 in FIG. 1).
  • AGP memory 256 may be implemented with memory reserved in the system environment or platform (such as reserved memory in computing platform 20 or memory 30 in FIG. 1). Further, consistent with embodiments of the invention, direct access to AGP memory 256 may be provided to the graphics card through an AGP bus or system bus. Such access by the graphics card may be performed without involvement or use of the system CPU 260 (such as a CPU of computing platform 20 in FIG. 1).
  • a backend memory 252 may also be reserved or defined.
  • Backend memory 252 may be allocated using video memory 254 and/or AGP memory 256.
  • video memory 254 and AGP memory 256 may be approximately 16M in total memory size.
  • backend memory 252 may be used as a first-in, first-out (FIFO) queue to overwrite vertex arrays or data representing previously rendered elements with data representing other elements to be rendered to the user.
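  • A minimal sketch of such a FIFO-style backend buffer is shown below; the class and its interface are hypothetical and simply overwrite the oldest vertex data when the buffer wraps around.

        #include <cstddef>
        #include <cstring>
        #include <vector>

        // Backend vertex buffer used as a FIFO/ring: when the buffer is full, the
        // oldest vertex data (from previously rendered elements) is overwritten.
        class BackendBuffer {
        public:
            explicit BackendBuffer(std::size_t bytes) : storage_(bytes), head_(0) {}

            // Copy 'size' bytes of vertex data into the ring and return its byte offset.
            std::size_t push(const void* data, std::size_t size)
            {
                if (size > storage_.size()) size = storage_.size();   // clamp oversized requests
                if (head_ + size > storage_.size()) head_ = 0;        // wrap: overwrite oldest data
                std::memcpy(&storage_[head_], data, size);
                const std::size_t offset = head_;
                head_ += size;
                return offset;
            }

        private:
            std::vector<unsigned char> storage_;
            std::size_t head_;
        };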
  • FIG. 24B is an exemplary flow diagram of memory management operations that may be performed during system set-up (i.e., during loading of the VR file(s)).
  • video memory 254, AGP memory 256 and backend memory 252 may be allocated or reserved (step S.242).
  • all non-dynamic tessellation elements defined in the VR file(s) may be loaded and stored into video memory 254 and/or AGP memory 256 (step S.244).
  • all non-dynamic tessellation elements defined in the octree file (such as octree file 52) may be loaded and stored in video memory 254 and/or AGP memory 256.
  • any additional elements may be stored in system memory (such as memory 30 in FIG. 1) and later transferred by system CPU 260 into backend memory 252 when it is needed to render a particular frame.
  • dynamic tessellation elements defined in the VR file(s) may be stored in the remaining storage areas of the video memory and/or AGP memory as needed or permitted (step S.248).
  • dynamic tessellation elements may be pre-loaded into memory. Dynamic tessellation elements may be loaded according to a default or initial position of the camera for the first frame to be rendered. As run-time operations are initiated and performed, additional memory management operations may be performed, as further described below.
  • FIG. 24C is an exemplary flow diagram of memory management operations that may be performed during run-time operations. Consistent with embodiments of the invention, the features of FIG. 24C may be performed for rendering frames of a complex virtual environment. For example, the exemplary memory management operations of FIG. 24C may be performed to facilitate the rendering of dynamic tessellation elements in each frame.
  • a simulation loop may be executed to simulate and render each frame based on input from a user (see, for example, FIG. 7).
  • collision detection and other operations may be performed to determine the viewer or camera position for the frame to be rendered.
  • visibility and culling operations may be performed as part of a rendering phase to identify the visible elements for the frame.
  • visible elements may include non-dynamic tessellation elements and dynamic tessellation elements.
  • all of the visible non-dynamic tessellation elements that are defined in the video or AGP memory may first be displayed by the graphics hardware on a display terminal.
  • if any visible non-dynamic tessellation elements are stored only in system memory, such elements may be copied to the backend memory to permit the graphics hardware to display them to the user. Thereafter, each of the dynamic tessellation elements to be displayed in the frame may be handled in accordance with the features of FIG. 24C.
  • the vertex data for dynamically tessellated elements may be buffered in memory (such as the video, AGP or backend memory) to facilitate such an analysis.
  • the tessellation levels for previously rendered dynamic tessellation elements may be stored in a table or file. If the element was previously rendered (step S.250; Yes), then an analysis can be made to determine if the tessellation level has changed (step S.252).
  • the tessellation level may be compared to the tessellation level for the same element in the previous frame. If there is no change in the tessellation level (step S.252; No), then the previously stored or buffered vertex data for rendering the dynamic tessellation element may be identified and used for the next frame (step S.254). However, if there is a change in the tessellation level (step S.252; Yes) or if the element was not previously rendered (step S.250; No), then an analysis may be performed to estimate the amount of memory space required for storing the triangle segments for the dynamic tessellation element (step S.256).
  • step S.256 To estimate the memory requirements (step S.256), various approaches may be used. For example, based on the tessellation level, the number of triangles for rendering the dynamic tessellation element may be computed. As indicated above, the number of triangles can be computed directly from the tessellation level depending on the element type (open cone, closed cone, open torus, closed torus, surface-of-revolution, etc.). Relationships, such as the exemplary relationships provided in Table 1 , may be used to compute the number of triangles based on the determined tessellation level(s) for the element. With the computed number of triangles, an estimate of memory requirements can be determined based on the number of bytes required per triangle or triangle strip.
  • triangle lists or triangle strips may be defined using vertex arrays and index arrays.
  • Each vertex defined in the vertex array may require a predetermined amount of memory (such as 32 bytes), and each index or pointer in the index array may require another predetermined amount of memory (such as 4 bytes).
  • each triangle is built using two vertices from a previous triangle plus one new vertex.
  • n - 2 triangles can be constructed from n vertices with a triangle strip.
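  • Using the figures above (such as 32 bytes per vertex, 4 bytes per index, and n - 2 triangles per n-vertex strip), a memory estimate for one triangle strip can be sketched as follows; the function name and default values are illustrative only.

        #include <cstddef>

        // Estimate the memory needed to store one triangle strip: a strip of T
        // triangles needs T + 2 vertices and T + 2 indices.
        std::size_t stripMemoryBytes(std::size_t triangles,
                                     std::size_t bytesPerVertex = 32,
                                     std::size_t bytesPerIndex  = 4)
        {
            const std::size_t vertices = triangles + 2;
            return vertices * bytesPerVertex + vertices * bytesPerIndex;
        }

  • For example, under these assumptions a strip of 100 triangles would need 102 vertices and 102 indices, or approximately 3,672 bytes.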
  • once the vertex data for a dynamic tessellation element has been identified or generated, commands are issued to cause the display of the element in the rendered frame (step S.266).
  • OpenGL commands may be used by the graphics system to cause the graphics hardware to display visible elements and objects in the rendered frame.
  • the dynamic tessellation element may be displayed by the graphics hardware based on the vertex data stored in the video, AGP and/or backend memory.
  • each dynamic tessellation element may be displayed immediately after its vertex data is identified or generated, which facilitates re-use of the backend memory for other elements to be rendered and displayed. This is because any memory space used in the backend memory for the dynamic tessellation element may be freed for use as soon as the element is displayed.
  • alternatively, all visible dynamic tessellation elements may be prepared in video, AGP and/or backend memory before commands are issued to the graphics hardware to permit their display. In such a case, the graphics hardware would not display the visible dynamic tessellation elements until the vertex data for all of the elements has been identified or prepared in the video, AGP, and/or backend memory.
  • if additional dynamic tessellation elements remain to be processed for the frame (step S.268; Yes), the above-described memory management operations may be repeated for each such element.
  • once all dynamic tessellation elements of the frame have been processed (step S.268; No), the process may terminate until the visible dynamic tessellation elements for the next frame are identified and prepared for display.
  • the entire process of FIG. 24C may be re-initiated for each element (starting at step S.250).
  • various training applications may be implemented to provide interactive displays to multiple users.
  • Such training applications may be implemented with a pointer to permit a trainer or any user to highlight visible objects or guide users through the complex virtual environment.
  • the pointer may be a predetermined 2D or 3D object (such as an arrow) that is displayed in the rendered frame according to input from the trainer or user.
  • the pointer may always be displayed at the center of the rendered frame. This technique may provide better visualization and immersion effects for the user, especially for navigating through the complex virtual environment.
  • the size of the pointer may be scaled according to the distance from the viewer position to the collision point with an object along the line-of-sight.
  • for other users, the pointer may be displayed in their rendered frames according to its distance and/or angle from the position of the pointer or trainer in the complex environment.
  • predetermined pointers such as a question mark symbol ("?") may be displayed to permit a user to get information or specifications (such as weight, dimensions, cost, etc.) for objects represented in the complex environment.
  • Other predetermined pointers may also be provided to permit a user to perform specific functions when selected.
  • predetermined pointers may be displayed in rendered frames to permit a user to rotate or move objects, open or close doors, and/or perform other functions.
  • Returning to the memory estimation of step S.256, an estimate of the total memory requirements for the element can be computed from the number of triangles and the number of bytes required per vertex and per index.
  • for example, a dynamic tessellation element may relate to a closed cone that is rendered with three triangle strips: one strip of 100 triangles for the side surface and one strip of 50 triangles for each of the closed end surfaces.
  • After determining the memory requirements for the dynamic tessellation element (step S.256), a determination is made whether there is sufficient available memory in the video or AGP memory (step S.258). If there is sufficient memory space (step S.258; Yes), then the dynamic tessellation element is tessellated by the system CPU using the available memory space in the video or AGP memory (step S.260). If the tessellation level of the element is less than the level used in the previous frame, the same memory location in the video or AGP memory may be used for tessellating the element for the next frame and any free or additional space in memory may be flagged for other elements to be rendered in the frame.
  • If there is not sufficient memory space in the video or AGP memory (step S.258; No), then the dynamic tessellation element may be tessellated by the system CPU using available system memory (step S.262). Thereafter, the vertex array data for the tessellated element may be copied and stored in the backend memory to facilitate the subsequent display of the element by the graphics hardware (step S.264).
  • commands are then issued to the graphics hardware by the graphics engine to cause the display of the element in the rendered frame (step S.266).
  • Appendices A through E (exemplary C++ programming code) include vector helper routines such as vrVecScalarMult, vrVecSubtract, vrVecCross, vrVecDot, vrVecLength and vrVecNormalize, together with tessellation routines for elbow/torus elements (VRElbow::endTessellation) and swept-arc elements (VRSweptArc::endTessellation) that fill vertex and index arrays and compute per-segment angles and radii (e.g., pArcStartAngle, pArcSweepAngle, pArcRadius1, pArcRadius2).


Abstract

Providing interactive displays of complex virtual environments includes generation of virtual reality (VR) files from a 3D model of the complex environment. The VR file(s) include octree and collision detection information that is used to simulate and render frames of the complex environment. During simulation, moving objects are evaluated to detect for collisions with other objects. Further, during rendering, objects or elements are dynamically tessellated during run-time operations to actively control their appearance when displayed to a user. Memory management operations facilitate the display of complex virtual environments.

Description

VIRTUAL REALITY FOR COMPLEX ENVIRONMENTS
BACKGROUND OF THE INVENTION
I. Field of the Invention
[001] The present invention generally relates to the fields of computer graphics and virtual reality. More particularly, the present invention relates to generating virtual reality (VR) files, simulating frames, performing memory management, and rendering frames of complex virtual environments, including complex virtual environments based on massive models.
II. Background Information
[002] As a result of the advances in computer graphics and computer- aided design (CAD), a wide variety of objects and environments can be modeled and rendered with technical detail and precision. For example, 2D and 3D modeling techniques permit engineering models of mechanical parts and other physical objects to be computer generated for various purposes, such as research and development. In addition, complex environments involving offshore platforms, industrial plants, building complexes and other facilities can be modeled using conventional, computer-aided modeling methods. 3D models of such environments are often used in the architecture, engineering and construction (AEC) sectors and for plant design.
[003] Elements or objects in a complex environment may be represented in a 3D model through polygons. Polygons (such as triangles and quadrilaterals) are closed figures made by joining line segments, wherein each line segment intersects exactly two other line segments. In a 3D model, polygons may be combined to represent the surface(s) of boxes, stairs, windows, walls, floors, pipes, beams and other objects in the modeled environment. Data defining the attributes of each object, including lighting and texture, may also be included in a 3D model.
[004] To render 3D models, conventional graphics systems may be employed. Such graphics systems may be provided with graphics cards that are capable of processing 3D model data to generate and display the modeled environment. While triangle elements can be directly handled and rendered by most graphics cards, other polygon elements in a 3D model (such as four or five-sided polygons) require additional processing before they can be rendered. Such processing may include the triangulation of complex polygon elements into triangles so that they can be directly handled by the graphics hardware and rendered on a display terminal.
[005] With the developments in computer-aided graphics and 3D modeling, there has been an increasing demand for interactive displays and simulation to provide virtual reality. The term virtual reality refers to the use of computer-based simulation of real or imaginary environments that permit a user to perform operations within the simulated environment. In the area of 3D game technology, virtual reality software has been created to provide real-time animation within simulated environments for users. To achieve higher levels of realism, many 3D software games permit users to navigate and/or perform operations (such as lifting, walking, running, jumping, climbing, etc.) in a manner that resembles the real world. For this purpose, the effects of gravity and/or other real-world phenomena (such as collisions) may be simulated in virtual reality applications for 3D game users. [006] Presently, the ability to provide interactive displays and real-time simulation for complex environments presents many challenges. For example, complex environments can be based on 3D models that are comprised of data representing hundreds of thousands of polygons. Although the throughput of graphics systems has improved over the years, rendering complex environments at interactive frame rates (such as 5 frames/second or higher) is impossible with current applications due to the volume of data that must be processed. For instance, attempts to render complex environments represented by approximately 700,000 or more polygons with current systems will typically cause the screen to freeze or lock when a user tries to navigate through the rendered model.
[007] To overcome such obstacles, some applications eliminate real-world phenomena (such as the effect of gravity and/or collisions) to provide simple navigation features in a complex virtual environment. In such applications, however, the user is often left in a confused state since the user is permitted to navigate through walls, floors and other objects represented in the modeled environment. Further, in some cases, the viewpoint of the user may be positioned within walls or between floors, making it difficult for the user to orientate himself/herself with respect to the rendered frame.
[008] Other applications attempt to provide real-time simulation by reducing the level of detail in the rendered frame and/or eliminating the range of interactive options available to a user. In such applications, however, curved surfaces may appear jagged as opposed to smooth. Further, the user may be limited to basic movements (such as forward, left, right, etc.) that are not practical for most applications. Therefore, while such approaches may improve the frame rate, they fail to provide an acceptable level of visualization and simulation to the user. As a result, past attempts do not provide an effective virtual reality solution for complex environments.
SUMMARY OF THE INVENTION
[009] In accordance with embodiments of the invention, a method is provided for generating virtual reality (VR) file(s). The method comprises: providing a three-dimensional (3D) model of the complex environment, the 3D model comprising data defining elements represented in the complex environment; and analyzing the data of the 3D model to generate VR file(s) including a collision detection hierarchy. The collision detection hierarchy may be generated by: filtering the data of the 3D model to identify dynamic tessellation elements and non-dynamic tessellation elements; generating a plurality of triangles representing the identified dynamic tessellation elements and non-dynamic tessellation elements; performing a grid subdivision on the plurality of triangles to define a plurality of cubes; creating a bounding box tree for each cube, each bounding box tree comprising a plurality of bounding boxes; and allocating the plurality of triangles to the bounding boxes.
[010] According to additional embodiments of the invention, a method is provided for generating virtual reality (VR) file(s), wherein the method comprises: providing a three-dimensional (3D) model of the complex environment, the 3D model comprising data defining elements represented in the complex environment; and analyzing the data of the 3D model to generate VR file(s) including an octree file, the octree file comprising data defining an octree of spatially partitioned cells, each cell of the octree containing one or more elements of the complex environment. Optionally, the octree file may be generated by: filtering the data of the 3D model to identify dynamic tessellation elements and non-dynamic tessellation elements; generating a plurality of triangles representing the identified non-dynamic tessellation elements; and defining cells of the octree by spatially partitioning the identified dynamic tessellation elements and triangles representing the non-dynamic tessellation elements.
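By way of a non-limiting illustration, the following C++ sketch outlines the filtering step described above, treating curved primitives such as cones, tori and surfaces of revolution (compare FIGS. 10A-10C) as dynamic tessellation elements and triangulating everything else. The type and function names are assumptions for illustration only and are not the identifiers used by any claimed embodiment.

#include <vector>

// Illustrative element record; the real converter would read these from the
// 3D model file.
enum class ElementKind { Cone, Torus, SurfaceOfRevolution, Polygon, Polyhedron };

struct ModelElement {
    ElementKind kind;
    // geometry parameters omitted for brevity
};

struct Triangle { float vertices[3][3]; };

// Curved primitives are tessellated at run time; everything else is
// triangulated once during pre-processing.
bool isDynamicTessellationElement(const ModelElement& e) {
    return e.kind == ElementKind::Cone ||
           e.kind == ElementKind::Torus ||
           e.kind == ElementKind::SurfaceOfRevolution;
}

// Placeholder for a polygon/polyhedron triangulator assumed to exist elsewhere.
std::vector<Triangle> triangulate(const ModelElement& e);

void filterModel(const std::vector<ModelElement>& model,
                 std::vector<ModelElement>& dynamicElements,
                 std::vector<Triangle>& staticTriangles) {
    for (const ModelElement& e : model) {
        if (isDynamicTessellationElement(e)) {
            dynamicElements.push_back(e);     // kept analytic for run-time tessellation
        } else {
            std::vector<Triangle> tris = triangulate(e);
            staticTriangles.insert(staticTriangles.end(), tris.begin(), tris.end());
        }
    }
}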
[011] In accordance with other embodiments of the invention, a computer program product is provided for generating virtual reality (VR) file(s). As disclosed herein, the computer program product may comprise a computer readable medium embodying a computer program, wherein the computer program comprises instructions that are executable by a system to perform a method for generating VR file(s) based on data from a massive model representing the complex environment. Such a method may comprise: analyzing the data of the massive model to generate VR file(s), the VR file(s) including data defining an octree and data defining a collision detection hierarchy, the octree comprising a plurality of cells containing elements of the complex environment, and the collision detection hierarchy comprising a plurality of bounding boxes containing elements of the complex environment. The octree may be generated by: filtering the data of the massive model to identify dynamic tessellation elements and non-dynamic tessellation elements; generating a plurality of triangles representing the identified non-dynamic tessellation elements; and defining cells of the octree by spatially partitioning the identified dynamic tessellation elements and triangles representing the non-dynamic tessellation elements.
[012] According to still additional embodiments of the invention, a system is provided for generating interactive displays of a complex virtual environment. In the system, a converter component may be provided for generating virtual reality (VR) file(s). The converter component may comprise: means for accessing a massive model of the complex environment, the massive model including data defining elements represented in the complex environment; and means for generating VR file(s) based on the massive model data of the complex environment, the VR file(s) including an octree file and a collision detection file. The octree file may include data defining an octree of spatially partitioned cells and/or data defining a plurality of occluder candidates. Further, the collision detection file may include a collision detection hierarchy of bounding boxes that contain one or more elements of the complex virtual environment.
[013] Moreover, other embodiments of the invention are directed to a method for generating virtual reality (VR) file(s) for facilitating interactive displays of a complex virtual environment. The method comprises: providing a three-dimensional (3D) model of the complex virtual environment, the 3D model comprising data defining elements represented in the complex virtual environment; and analyzing the data of the 3D model to generate VR file(s) including an octree file and a collision detection file. The octree file may comprise data defining an octree of spatially partitioned cells, wherein each cell of the octree contains one or more elements of the complex virtual environment. Further, the collision detection file may comprise a collision detection hierarchy of bounding boxes that contain one or more elements of the complex virtual environment.
[014] In accordance with embodiments of the invention, a method is provided for performing simulation to facilitate the rendering of frames of a complex virtual environment. The method comprises: detecting, for a next frame to be rendered, input from a user; determining, based on the user input, desired movements from a current position to a desired position within the complex virtual environment; and performing, using a collision detection hierarchy of bounding boxes that contain one or more elements of the complex environment, a predictive collision detection operation to determine if a collision will occur with an element represented in the complex environment if the desired movements are performed between the current position and the desired position. If a collision is detected, an adjusted position may be calculated based on the desired movements in the complex environment and a viewer position for rendering the next frame may be set to the adjusted position. Further, if a collision is not detected, a viewer position for rendering the next frame may be set to the desired position.
[015] According to other embodiments of the invention, a method is provided for performing simulation operations. The simulation operations are performed as part of a simulation loop to provide interactive displays of the complex environment to a user. Further, the method may comprise: detecting, for a next frame to be rendered, input from a user; determining, based on the user input, desired movements from a current position to a desired position within the complex virtual environment; predicting, using a collision detection hierarchy of bounding boxes that contain one or more elements of the complex environment, whether a collision will occur with an element in the complex environment based on the desired movements between the current position and the desired position; and calculating an adjusted position for rendering the next frame in response to the detection of a collision.
[016] In order to determine desired movements based on the user input, methods consistent with embodiments of the invention may calculate a path between the current position and the desired position based on the user input. Moreover, to predict whether a collision will occur, methods of the invention may perform a proximity query based on the calculated path between the current position and the desired position to identify intersecting bounding boxes, and analyze elements contained in each identified bounding box to determine if a collision will occur based on the desired movements.
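As a non-limiting sketch of the proximity query described above, the C++ fragment below encloses the swept movement (inflated by the avatar's bounding sphere radius) in an axis-aligned box, walks the collision detection hierarchy to identify intersecting bounding boxes, and collects the triangles they contain for a subsequent exact test such as the swept-sphere tests of Appendices A and B. All names are illustrative assumptions.

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Aabb { Vec3 min, max; };
struct Triangle { Vec3 a, b, c; };

struct AabbNode {
    Aabb box;
    std::vector<Triangle> tris;   // populated only at leaf nodes
    AabbNode* left = nullptr;
    AabbNode* right = nullptr;
};

// Conservative test: enclose the swept sphere (path from 'from' to 'to',
// inflated by 'radius') in an axis-aligned box and test box overlap.
bool sweptSphereOverlapsBox(const Vec3& from, const Vec3& to, float radius, const Aabb& box) {
    Aabb path{ { std::fmin(from.x, to.x) - radius, std::fmin(from.y, to.y) - radius, std::fmin(from.z, to.z) - radius },
               { std::fmax(from.x, to.x) + radius, std::fmax(from.y, to.y) + radius, std::fmax(from.z, to.z) + radius } };
    return path.min.x <= box.max.x && path.max.x >= box.min.x &&
           path.min.y <= box.max.y && path.max.y >= box.min.y &&
           path.min.z <= box.max.z && path.max.z >= box.min.z;
}

// Walk the collision detection hierarchy and gather the triangles of every
// bounding box intersected by the desired movement; the candidates are then
// passed to an exact swept-sphere/triangle test.
void proximityQuery(const AabbNode* node, const Vec3& from, const Vec3& to, float radius,
                    std::vector<const Triangle*>& candidates) {
    if (!node || !sweptSphereOverlapsBox(from, to, radius, node->box)) return;
    if (!node->left && !node->right) {
        for (const Triangle& t : node->tris) candidates.push_back(&t);
        return;
    }
    proximityQuery(node->left, from, to, radius, candidates);
    proximityQuery(node->right, from, to, radius, candidates);
}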
[017] Consistent with still additional embodiments of the invention, a computer program product is provided for generating interactive displays of a complex virtual environment. The computer program product may comprise a computer readable medium embodying a computer program, wherein the computer program includes instructions that are executable by a system to perform a simulation method to facilitate the rendering of frames of the complex environment. Such a method may include: determining, based on user input for a frame to be rendered, desired movements from a current position to a desired position within the complex virtual environment; performing, using a collision detection hierarchy of bounding boxes that contain one or more elements of the complex environment, a predictive collision detection operation to determine if a collision will occur with an element represented in the complex environment if the desired movements are performed between the current position and the desired position; and in response to performing the collision detection operation, setting a viewer position for the frame to be rendered.
[018] In accordance with other embodiments of the invention, systems may be provided for generating interactive displays of a complex virtual environment. In such systems, a simulation component may be provided for performing simulations based on virtual reality (VR) file(s). The VR file(s) may include data defining a collision detection hierarchy of bounding boxes that contain elements represented in the complex environment. Further, consistent with embodiments of the invention, the simulation component may comprise: means for determining, based on input from a user, desired movements from a current position to a desired position within the complex virtual environment; means for detecting, using the collision detection hierarchy of bounding boxes, whether a collision will occur with an element represented in the complex virtual environment based on the desired movements from the current position to the desired position; and means for calculating an adjusted position when it is determined that a collision will occur based on the desired movements indicated by the user input.
[019] Other embodiments of the invention may be provided, such as a method for simulating movement of an avatar in a complex virtual environment. Such methods may comprise: determining, based on input from a user, desired movements of the avatar from a current position to a desired position within the complex virtual environment; and detecting, using a collision detection hierarchy of bounding boxes, whether a collision will occur with an element represented in the complex virtual environment if the avatar is moved from the current position to the desired position.
[020] To detect collisions, methods consistent with embodiments of the invention may perform a proximity query based on the desired movements of the avatar to identify bounding boxes that intersect with the avatar when the avatar is moved to the desired position. Such methods may also analyze elements contained in each identified bounding box to determine if a collision will occur with the avatar.
[021] In accordance with embodiments of the present invention, virtual reality (VR) systems are provided for generating displays of a complex virtual environment based on VR file(s). In such VR systems, a method may be provided for performing memory management operations, wherein the method comprises: allocating video memory in graphics hardware of the VR system and allocating accelerated graphics port (AGP) memory as part of system memory of the VR system; pre-loading, prior to simulating and rendering frames of the complex environment, all non-dynamic tessellation elements defined in the VR file(s) into memory, wherein pre-loading includes pre-loading non-dynamic tessellation elements into at least one of the video memory and the AGP memory; determining, after pre-loading all non-dynamic tessellation elements, whether memory space is available in at least one of the video memory and the AGP memory; and if memory space is determined to be available, pre-loading dynamic tessellation elements defined in the VR file(s) into the available memory space of the video memory or the AGP memory.
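By way of non-limiting illustration, the following C++ sketch shows the two-pass pre-loading policy described above. The allocator interface is an assumption: in practice the video and AGP pools would be obtained through the graphics driver (for example, via vendor-specific OpenGL extensions), which is not shown here.

#include <cstddef>
#include <vector>

// Invented allocator interface for illustration; real allocations would go
// through the graphics driver.
enum class Pool { Video, Agp, System };

struct MemoryBudget {
    std::size_t videoFree;   // bytes of free video memory
    std::size_t agpFree;     // bytes of free AGP memory
};

struct Element {
    std::size_t vertexBytes; // size of the element's vertex data
    bool isDynamic;          // true for dynamically tessellated elements
    Pool placedIn = Pool::System;
};

// Try video memory first, then AGP memory, otherwise fall back to system memory.
Pool place(MemoryBudget& budget, std::size_t bytes) {
    if (budget.videoFree >= bytes) { budget.videoFree -= bytes; return Pool::Video; }
    if (budget.agpFree >= bytes) { budget.agpFree -= bytes; return Pool::Agp; }
    return Pool::System;
}

void preloadElements(std::vector<Element>& elements, MemoryBudget budget) {
    // Pass 1: all non-dynamic tessellation elements are pre-loaded before run time.
    for (Element& e : elements)
        if (!e.isDynamic) e.placedIn = place(budget, e.vertexBytes);

    // Pass 2: dynamic tessellation elements use whatever fast memory remains.
    for (Element& e : elements)
        if (e.isDynamic) e.placedIn = place(budget, e.vertexBytes);
}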
[022] According to other embodiments of the invention, methods may be provided for performing memory management operations in a virtual reality (VR) system, wherein the VR system is adapted to provide displays of a complex virtual environment based on VR file(s). Such methods may comprise: buffering, in memory of the VR system, vertex data for rendered dynamic tessellation elements; determining, for a dynamic tessellation element to be rendered, whether the dynamic tessellation element was rendered in a previous frame at a tessellation level that is the same as a tessellation level for the element in a frame to be rendered; and when it is determined that the dynamic tessellation element was rendered in a previous frame at the same tessellation level, using the buffered vertex data for the previous rendering of the dynamic tessellation element to facilitate the display of the frame to be rendered.
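A minimal sketch of such buffering is given below, assuming a cache keyed by an element identifier; the class and field names are illustrative only. A dynamic tessellation element rendered at an unchanged tessellation level can then reuse its previous vertex data instead of being re-tessellated.

#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

// Illustrative cache only; type and field names are not taken from the patent.
struct CachedTessellation {
    int level;                     // tessellation level used when the data was built
    std::vector<float> vertices;   // interleaved vertex data from the previous frame
};

class TessellationCache {
public:
    // Returns the buffered vertex data if the element was already tessellated
    // at the requested level, otherwise nullptr so the caller re-tessellates.
    const std::vector<float>* lookup(std::uint32_t elementId, int level) const {
        auto it = cache_.find(elementId);
        if (it != cache_.end() && it->second.level == level)
            return &it->second.vertices;
        return nullptr;
    }

    void store(std::uint32_t elementId, int level, std::vector<float> vertices) {
        cache_[elementId] = CachedTessellation{ level, std::move(vertices) };
    }

private:
    std::unordered_map<std::uint32_t, CachedTessellation> cache_;
};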
[023] Consistent with still other embodiments of the invention, a computer program product may be provided for generating interactive displays of a complex virtual environment. The computer program product may include a computer readable medium embodying a computer program, wherein the computer program comprises instructions that are executable by a system to perform a method for performing memory management operations. Consistent with embodiments of the invention, such methods may comprise: allocating video memory in graphics hardware of the VR system and allocating accelerated graphics port (AGP) memory as part of system memory of the VR system; estimating, for a dynamic tessellation element to be rendered, memory space requirements for storing rendering data for the dynamic tessellation element, the rendering data comprising vertex data defining one or more triangles representing the dynamic tessellation element; determining whether there is sufficient memory space available in at least one of the video memory and the AGP memory for storing the rendering data based on the estimated memory space requirements; and using the available memory space in at least one of the video memory and the AGP memory for dynamically tessellating the element when it is determined that there is sufficient memory space for the rendering data.
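The following fragment illustrates, under assumed and simplified conditions, how the memory space estimate for a single cone element might be computed and compared against the free video and AGP memory; the exact vertex counts depend on the tessellation scheme actually used (compare Appendices C through E).

#include <cstddef>

// Rough estimate only; exact vertex counts depend on the tessellation scheme
// used for each element type.
struct Vertex { float position[3]; float normal[3]; };

// A cone tessellated into 'slices' segments uses roughly two rings of vertices
// plus the apex and base centre.
std::size_t estimateConeBytes(int slices) {
    std::size_t vertexCount = static_cast<std::size_t>(2 * (slices + 1) + 2);
    return vertexCount * sizeof(Vertex);
}

// Use fast memory (video or AGP) only when the estimate fits; otherwise the
// element is tessellated into ordinary system memory.
bool fitsInFastMemory(std::size_t freeVideoBytes, std::size_t freeAgpBytes, int slices) {
    std::size_t needed = estimateConeBytes(slices);
    return needed <= freeVideoBytes || needed <= freeAgpBytes;
}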
[024] In accordance with yet additional embodiments of the invention, systems may be provided for performing memory management operations. Such systems may comprise: means for buffering, in memory of a virtual reality (VR) system, vertex data for rendered dynamic tessellation elements; means for determining, for a dynamic tessellation element to be rendered, whether the dynamic tessellation element was rendered in a previous frame at a tessellation level that is the same as a tessellation level for the element in a frame to be rendered; and means for using the buffered vertex data for the previous rendering of the dynamic tessellation element to facilitate the display of the frame to be rendered when it is determined that the dynamic tessellation element was rendered in a previous frame at the same tessellation level.
[025] In accordance with embodiments of the invention, a method is provided for rendering frames of a complex virtual environment based on an octree file, the octree file including data defining an octree for the complex environment that includes spatially partitioned cells. Such methods may comprise: determining, using the octree for the complex virtual environment, cells that are visible based on a viewer position for a frame to be rendered; identifying, in each of the visible cells, dynamic tessellation elements; and dynamically tessellating the identified dynamic tessellation elements in the visible cells. In order to dynamically tessellate elements, such methods may determine, for each identified dynamic tessellation element, a tessellation level, and then render each identified dynamic tessellation element according to the determined tessellation level.
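One possible heuristic for choosing the tessellation level is sketched below; the specific formula and constants are assumptions for illustration only, the general idea being that elements that are close to the viewer, or large on screen, receive more subdivisions.

#include <algorithm>

// Heuristic only: more subdivisions for elements that are close or large.
int chooseTessellationLevel(float distanceToViewer, float elementRadius,
                            int minLevel = 4, int maxLevel = 48) {
    // Approximate projected size: radius over distance, scaled to a level count.
    float projected = elementRadius / std::max(distanceToViewer, 0.001f);
    int level = static_cast<int>(projected * 64.0f);
    return std::clamp(level, minLevel, maxLevel);
}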
[026] According to other embodiments of the invention, a method is provided for performing rendering operations to facilitate interactive displays of a complex virtual environment. The rendering operations may be performed as part of a simulation loop during run-time operations. Further, the method may include: determining, using an octree of cells for the complex virtual environment, cells that are visible based on a viewer position for a frame to be rendered, the cells of the octree containing elements of the complex environment; identifying, in each of the visible cells, dynamic tessellation elements; and dynamically tessellating the identified dynamic tessellation elements in the visible cells. Consistent with embodiments of the invention, in order to determine visible cells based on the viewer position, the method may further include: determining a view frustum for the frame to be rendered based on the viewer position; traversing the octree to identify non-visible cells that are outside of the view frustum; and culling the identified non-visible cells of the octree.
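The octree traversal with view frustum culling may, for example, be organized as in the following non-limiting C++ sketch, in which a cell and its entire subtree are culled as soon as the cell's bounding box falls outside any frustum plane; the types shown are illustrative.

#include <vector>

struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; };          // a point p is "inside" when dot(n, p) + d >= 0
struct Aabb  { Vec3 min, max; };

struct OctreeCell {
    Aabb box;
    OctreeCell* children[8] = { nullptr };  // all null for leaf cells
    std::vector<int> elementIds;            // elements stored in this cell
};

// The box is entirely outside the plane if even its corner furthest along the
// plane normal lies on the negative side.
bool boxOutsidePlane(const Aabb& b, const Plane& p) {
    Vec3 corner{ p.n.x >= 0 ? b.max.x : b.min.x,
                 p.n.y >= 0 ? b.max.y : b.min.y,
                 p.n.z >= 0 ? b.max.z : b.min.z };
    return p.n.x * corner.x + p.n.y * corner.y + p.n.z * corner.z + p.d < 0.0f;
}

bool boxInFrustum(const Aabb& b, const Plane frustum[6]) {
    for (int i = 0; i < 6; ++i)
        if (boxOutsidePlane(b, frustum[i])) return false;
    return true;
}

// Collect the elements of every cell at least partially inside the view
// frustum; a cell outside any frustum plane is culled together with its subtree.
void collectVisible(const OctreeCell* cell, const Plane frustum[6], std::vector<int>& visible) {
    if (!cell || !boxInFrustum(cell->box, frustum)) return;
    visible.insert(visible.end(), cell->elementIds.begin(), cell->elementIds.end());
    for (const OctreeCell* child : cell->children)
        collectVisible(child, frustum, visible);
}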
[027] Consistent with still other embodiments of the invention, a computer program product is provided for generating interactive displays of a complex virtual environment. The computer program product comprises a computer readable medium embodying a computer program, wherein the computer program comprises instructions that are executable by a system to perform a rendering method to facilitate the display of frames of the complex environment. Optionally, the method comprises: determining, using an octree of cells that spatially partition the complex virtual environment, elements of the complex environment that are visible based on a viewer position for a frame to be rendered, the elements of the complex environment including dynamic tessellation elements and non-dynamic tessellation elements; and, for each visible element that is a dynamic tessellation element, dynamically tessellating the element. Elements may be dynamically tessellated by: determining a tessellation level for the visible dynamic tessellation element based on the viewer position for the frame to be rendered; and generating vertex data for rendering the visible dynamic tessellation element based on the determined tessellation level.
[028] In accordance with other embodiments of the invention, systems are provided for generating interactive displays of a complex virtual environment. In such systems, a graphics system may be provided for rendering frames based on virtual reality (VR) file(s). The VR file(s) may include data defining a list of occluder candidates and/or data defining an octree of cells that contain one or more elements of the complex environment. Further, consistent with embodiments of the invention, the graphics system may comprise: means for determining, using the octree for the complex virtual environment, cells that are not visible based on a viewer position for a frame to be rendered; means for culling all cells of the octree that are determined not to be visible; means for analyzing the list of occluder candidates to determine cells of the octree that are occluded based on the viewer position; means for culling all cells of the octree that are determined to be occluded; and means for rendering elements contained in the visible, non-culled cells of the octree to facilitate the display of the frame to a user.
[029] Other embodiments of the invention may be provided, such as methods for rendering frames of a complex virtual environment based on an octree of cells that spatially partition the complex virtual environment. Such methods may comprise: determining, using the octree for the complex virtual environment, cells that are not visible based on a viewer position for the frame to be rendered; culling all cells of the octree that are determined not to be visible; analyzing a list of occluder candidates to determine cells of the octree that are occluded based on the viewer position; culling all cells of the octree that are determined to be occluded; and rendering elements contained in visible cells of the octree to display the frame to a user, wherein the elements contained in the visible cells include dynamic tessellation elements and non-dynamic tessellation elements.
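The occlusion test itself can be performed in different ways; the procedure based on perimeter points and an angle test is described with reference to FIGS. 20 through 22 below. Purely as a simplified, non-limiting illustration of the general idea, the following C++ sketch culls a cell when its bounding box lies entirely inside the shadow frustum formed by the viewer position and a convex occluder candidate; all names and the convexity assumption are illustrative.

#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct Aabb { Vec3 min, max; };

static Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x }; }
static float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }

static void corners(const Aabb& b, Vec3 out[8]) {
    for (int i = 0; i < 8; ++i)
        out[i] = { (i & 1) ? b.max.x : b.min.x,
                   (i & 2) ? b.max.y : b.min.y,
                   (i & 4) ? b.max.z : b.min.z };
}

// 'occluder' is a convex polygon with at least three vertices. The cell is
// considered blocked only if every corner of its bounding box lies behind the
// occluder's supporting plane and inside every edge plane through the viewer.
bool cellBlockedByOccluder(const Aabb& cell, const std::vector<Vec3>& occluder, Vec3 viewer) {
    Vec3 c[8];
    corners(cell, c);

    // Supporting plane of the occluder, oriented so the viewer is on its positive side.
    Vec3 n = cross(sub(occluder[1], occluder[0]), sub(occluder[2], occluder[0]));
    if (dot(n, sub(viewer, occluder[0])) < 0.0f) n = { -n.x, -n.y, -n.z };
    for (const Vec3& p : c)
        if (dot(n, sub(p, occluder[0])) > 0.0f) return false;   // corner on the viewer's side

    // Edge planes through the viewer: every corner must be on the occluder's interior side.
    for (std::size_t i = 0; i < occluder.size(); ++i) {
        const Vec3& a = occluder[i];
        const Vec3& b = occluder[(i + 1) % occluder.size()];
        Vec3 en = cross(sub(a, viewer), sub(b, viewer));
        const Vec3& interior = occluder[(i + 2) % occluder.size()];
        if (dot(en, sub(interior, viewer)) < 0.0f) en = { -en.x, -en.y, -en.z };
        for (const Vec3& p : c)
            if (dot(en, sub(p, viewer)) < 0.0f) return false;
    }
    return true;
}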
[030] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only, and should not be deemed restrictive of the full scope of the embodiments of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[031] The accompanying drawings, which are incorporated herein and constitute a part of this specification, illustrate various features and aspects of embodiments of the invention. In the drawings:
[032] FIG. 1 illustrates an exemplary system environment for providing interactive displays of complex virtual environments, consistent with embodiments of the invention;
[033] FIG. 2 is an exemplary network environment for providing interactive displays of complex virtual environments, consistent with embodiments of the invention;
[034] FIG. 3 is an exemplary virtual reality (VR) system, consistent with embodiments of the invention;
[035] FIG. 4 is an exemplary VR system and graphics system, consistent with embodiments of the invention;
[036] FIG. 5 illustrates the contents of exemplary VR file(s), consistent with embodiments of the invention;
[037] FIG. 6 is an exemplary pre-processing pipeline, consistent with embodiments of the invention;
[038] FIG. 7 is an exemplary run-time pipeline, consistent with embodiments of the invention;
[039] FIG. 8 is an exemplary flow diagram for providing interactive displays of complex virtual environments, consistent with embodiments of the invention;
[040] FIG. 9 is an exemplary flow diagram for providing VR file(s) based on a 3D model, consistent with embodiments of the invention;
[041] FIGS. 10A, 10B and 10C illustrate examples of a cone element type, a torus element type and a surface-of-revolution element type, respectively;
[042] FIG. 11 illustrates an example of stretching an octree cell, consistent with embodiments of the invention;
[043] FIGS. 12A and 12B illustrate examples of triangles;
[044] FIG. 13 illustrates an example of a triangle strip, consistent with embodiments of the invention;
[045] FIG. 14 is an exemplary flow diagram for identifying and building a list of occluder candidates, consistent with embodiments of the invention;
[046] FIG. 15 is an exemplary flow diagram for creating a collision detection hierarchy, consistent with embodiments of the invention;
[047] FIGS. 16A and 16B are exemplary flow diagrams for performing collision detection, consistent with embodiments of the invention;
[048] FIG. 17 is an exemplary flow diagram for performing a sliding motion operation to determine an adjusted position, consistent with embodiments of the invention;
[049] FIGS. 18A, 18B and 18C illustrate examples of determining an adjusted position in response to collision detection, consistent with embodiments of the invention;
[050] FIG. 19 is an exemplary flow diagram for rendering complex virtual environments, consistent with embodiments of the invention;
[051] FIG. 20 is an exemplary flow diagram for analyzing occluder candidates and blocked cells during rendering, consistent with embodiments of the invention;
[052] FIGS. 21A and 21B are exemplary illustrations of determining the closest perimeter point and measuring an angle α for an occluder candidate, consistent with embodiments of the invention;
[053] FIGS. 22A, 22B, 22C, 22D, and 22E illustrate examples of cells that are blocked, non-blocked or partially-blocked by occluder candidates;
[054] FIG. 23 is an exemplary flow diagram for performing dynamic tessellation during rendering, consistent with embodiments of the invention;
[055] FIGS. 24A, 24B and 24C are exemplary diagrams related to performing memory management operations, consistent with embodiments of the invention;
[056] FIGS. 25A and 25B illustrate examples of rendered frames of a complex virtual environment that is representative of a building complex;
[057] FIGS. 26A and 26B illustrate examples of rendered frames of a complex virtual environment that is representative of an offshore platform;
[058] FIGS. 27A and 27B illustrate examples of rendered frames of a complex virtual environment that is representative of an energy plant; and
[059] FIGS. 28A and 28B illustrate examples of rendered frames of a complex virtual environment that is representative of a city landscape.
BRIEF DESCRIPTION OF THE APPENDICES
[060] In order to facilitate the detailed description of the embodiments of the present invention, reference is made to the noted appendices by way of non-limiting examples of features of the invention, wherein:
[061] Appendix A includes exemplary programming code for performing a sweep test for intersection between two bounding spheres;
[062] Appendix B includes exemplary programming code for performing a sweep test for intersection between a bounding sphere and a triangle element;
[063] Appendix C includes exemplary programming code for dynamically tessellating a cone element type;
[064] Appendix D includes exemplary programming code for dynamically tessellating a torus element type; and
[065] Appendix E includes exemplary programming code for dynamically tessellating a surface-of-revolution element type.
DETAILED DESCRIPTION
[066] Systems and methods consistent with embodiments of the present invention provide interactive graphical displays of complex virtual environments. Such systems and methods may be implemented, according to embodiments of the invention, to render complex virtual environments representing, for example, building complexes, industrial plants, offshore platforms, and other facilities or environments. The rendering of such complex environments may be performed at interactive frame rates to provide real-time simulation to the user. Further, real-world phenomena, including the effect of gravity and collisions with objects, may be simulated to provide a higher degree of realism for users.
[067] The complex virtual environments may be based on 3D models. Such models may be defined by a large set of polygons, such as hundreds of thousands of polygons. Further, in the case of massive models, the complex environment may incorporate approximately one million or more polygons. Massive or large-scale models may be used to model various complex environments, including those related to the AEC sector, where many objects of different sizes and type are represented in the environment.
[068] Embodiments of the invention may be implemented to facilitate various virtual reality applications for complex models. For example, consistent with embodiments of the invention, interactive displays of complex virtual environments may be used for design and research, project review, public relations, familiarization training, maintenance planning and training, security review and training, and a wide array of other applications requiring real-time visualization and/or simulation of complex environments. Such applications may be employed in various areas, including areas in which walk-through and/or other full immersion capabilities are required for effective training or use.
[069] As further disclosed herein, interactive displays of complex environments may be generated using one or more virtual reality (VR) file(s). Consistent with embodiments of the invention, VR file(s) may be created from a 3D model and used in place of the 3D model during run-time operations for performing simulation and rendering operations. Such VR file(s) may be constructed with an octree and a list of occluder candidates to permit visibility and culling operations to be performed more efficiently. Moreover, VR file(s) may include a collision detection hierarchy to facilitate collision detection operations during run-time operations.
[070] To display frames of a complex virtual environment, systems and methods consistent with embodiments of the invention collect and analyze input from a user to determine the user's desired movements or operations. Using the VR file(s), predictive collision detection operations may be performed during a simulation phase to determine if the desired movements or operations will cause collision(s) with objects or elements represented in the complex environment. In response to the detection of a collision, an adjusted position or movement may be computed before rendering the frame to the user. Further, during a rendering phase, culling operations may be performed based on the camera or viewer position for the frame, and visible elements may be dynamically tessellated according to data from the VR file(s).
[071] Consistent with embodiments of the invention, FIG. 1 illustrates an exemplary diagram of a system environment 10 for providing interactive displays of complex virtual environments. As shown in FIG. 1, system 10 includes a display 12, graphics hardware 18, a computing platform 20, a user interface 40 and a storage device or memory 30. As illustrated in FIG. 1, computing platform 20 may include a number of components for processing data and rendering complex virtual environments, such as a graphics system 24 and a virtual reality (VR) system 28. Exemplary embodiments of VR system 28 and graphics system 24 are provided below with reference to, for example, FIGS. 3 and 4.
[072] In system environment 10, display 12 is adapted to graphically display rendered frames of a modeled environment according to operations performed by computing platform 20 and graphics hardware 18. By way of example, display 12 may be implemented with a color display terminal or device with predetermined quality and resolution. For instance, display 12 may be implemented with a color display screen having 16 or 32 bits color depth. Further, consistent with embodiments of the invention, display 12 may be provided as a separate display component connected to computing platform 20 (as represented in FIG. 1) or can be provided as a display device that is integrated with computing platform 20 (such as a display screen of a laptop computer).
[073] Computing platform 20 of system environment 10 may be configured to process data and render complex virtual environments. For this purpose, computing platform 20 may be implemented with a workstation, a server, a personal desktop computer, a laptop or any other computing-based platform. By way of non-limiting examples, platform 20 may be implemented with a personal desktop computer or laptop with an Intel Pentium III, 350 MHz or higher-class central processing unit (CPU), such as 500 MHz or higher CPU. Other types of CPUs may also be used, such as the AMD Athlon XP Processor manufactured by Advanced Micro Devices, Inc. Computing platform 20 may also be configured with Microsoft Windows 98SE, ME, NT4, SP6, 2000 or XP.
[074] In accordance with an embodiment of the invention, computing platform 20 may be implemented to provide accelerated graphics port (AGP) capabilities for facilitating higher 3D graphics throughput. In such a case, computing platform 20 may be implemented with an AGP chipset and motherboard equipped with an AGP bus slot. Further, an operating system such as the OSR 2.1 version of Windows 98 or Windows NT 4.0 may be used.
[075] Consistent with embodiments of the invention, computing platform 20 may be adapted to perform a number of operations, including the preprocessing of a 3D model file to generate VR file(s) used for rendering a complex virtual environment. Further, during run-time operations, computing platform 20 may be adapted to receive input from user interface 40 and/or other devices, analyze data from the VR file(s) stored in memory 30, and generate instructions to display frames of the complex environment with display 12. Optionally, computing platform 20 may also provide instructions to user interface 40 to control one or more haptic devices that provide feedback to the user. For example, haptic devices such as gloves, helmet devices with automated tracking, force feedback joysticks, and/or other types of skeleton or human interface devices may be provided with user interface 40.
[076] As stated above, computing platform 20 may include graphics system 24 and VR system 28. Collectively, components 24 and 28 may facilitate the processing of data and rendering of complex virtual environments. For instance, VR system 28 may be adapted to perform various simulation operations, including collision detection operations based on user input and data from the VR file(s). As a result of such operations, VR system 28 may provide output indicating, for example, the camera or viewer position for each frame to be rendered.
[077] Rendering operations may be performed by graphics system 24 based on the results of the simulation operations performed by VR system 28. The rendering operations performed by graphics system 24 may include visibility and culling operations, as well as dynamic tessellation operations for specific types of elements defined in the VR file(s). The results of these rendering operations may include a list of vertex arrays for the elements determined to be visible, as well as vertex arrays for any dynamically tessellated elements. Based on these results, commands (such as OpenGL commands) may be issued by system 24 to graphics hardware 18 to cause the display of the rendered frame on display 12.
[078] Consistent with embodiments of the invention, graphics hardware 18, graphics system 24 and VR system 28 can be implemented through any suitable combination of hardware, firmware and/or software. By way of non-limiting example, graphics system 24 may be implemented as a software-based application using C++ and include an OpenGL software interface. Further, graphics hardware 18 may be implemented with commercially available graphics hardware, such as graphics cards that include transformation and lighting functions and suitable graphics memory (such as 32Mb or more of graphics memory). Examples of commercially available graphics cards include GeForce II or III graphics cards available from Nvidia of Santa Clara, California. For GeForce graphics cards, the latest available drivers may also be installed (such as those available for downloading at Nvidia's website <http://www.nvidia.com>). By way of further example, VR system 28 may be implemented as a software-based application and programmed using C++ and/or graphics development languages or libraries, such as Microstation Development Language (MDL) available from Bentley Systems, Inc. of Exton, Pennsylvania.
[079] Memory 30 may be implemented with a database or storage device that provides sufficient memory capacity for all of the operations performed with system 10. By way of example, memory 30 may be implemented with approximately 256 Mb or greater memory capacity depending on, for example, the complexity of the 3D model and/or data handled by system 10. As with display 12, memory 30 can be provided as a separate storage device connected to computing platform 20 (as illustrated in FIG. 1) or can be an integrated storage device that is provided as part of computing platform 20 (such as the internal memory or hard disk of a computer or laptop).
[080] Consistent with embodiments of the invention, memory 30 may store data accessed by computing platform 20. For example, memory 30 may store data for complex virtual environments represented by 3D model files. Such files may be developed using commercially available software, such as Microstation (available from Bentley Systems, Inc.), AutoCAD (available from Autodesk, Inc.) or PDMS (available from Cadcentre Ltd.). Further, the 3D model files may be stored in conventional or proprietary file formats, such as .dgn, .dwg, etc.
[081] Referring again to FIG. 1, memory 30 may also store VR file(s) that are used by computing platform 20 to process and render complex environments. In accordance with embodiments of the invention, VR file(s) may be generated from 3D model files. Further, as described below, VR file(s) may include optimized data representing the entire complex environment, as well as other data for facilitating the interactive display of the environment. Accordingly, consistent with embodiments of the invention, VR file(s) may be used in place of any 3D model files during run-time operations to render the complex environment. Exemplary embodiments of VR file(s) are provided below with reference to, for example, FIG. 5. In addition, pre-processing operations and exemplary flow diagrams for generating VR file(s), consistent with embodiments of the invention, are provided below with reference to, for example, FIGS. 6 and 9.
[082] User interface 40 collects and receives input from a user and provides all user input to computing platform 20. User input may indicate instructions from the user concerning movements (forward, back, left, right, etc.) or operations (select, lift, rotate, etc.) to be virtually performed in the complex environment. Such input may permit a user to control walk-through operations in the complex virtual environment, as well as perform operations on elements (such as boxes, chairs, etc.) represented in the environment. Additionally, user input may include instructions to control the operations of system 10, such as the loading of files, the execution of components and applications, etc.
[083] By way of non-limiting examples, user interface 40 can be implemented with any number of input devices, such as a keyboard, a mouse device, a joystick and/or a track ball. The input devices may be provided as a separate device or as an integrated device with computing platform 20. User interface 40 may also include haptic devices to provide feedback to the user during simulation and rendering. As noted above, such haptic devices include gloves, helmet devices with automated tracking, force feedback joysticks, speakers and/or other types of skeleton or human interface devices.
[084] To provide interactive displays of a complex environment, VR file(s) are first prepared (if not previously generated and stored by the user). As stated above, VR file(s) may be created from a 3D model of the complex environment. By loading the 3D model from memory 30, computing platform 20 may generate the VR file(s) used during simulation and rendering operations. For this purpose, VR system 28 may be implemented with a converter component (see, for example, FIG. 3) to perform pre-processing operations that convert or generate VR file(s) from a 3D model file. In contrast to conventional 3D model files, VR file(s) consistent with embodiments of the present invention may be optimized to facilitate real-time, interactive display of the complex environment. For example, the VR file(s) may be constructed with an octree and/or a list of occluder candidates to permit visibility and culling operations to be performed more efficiently during run-time operations. Further, the VR file(s) may include a collision detection hierarchy to facilitate collision detection operations during run-time operations (see, for example, FIG. 5).
[085] Once the VR file(s) are prepared, simulation and rendering operations may be performed by computing platform 20 to display successive frames of the complex virtual environment to the user. During simulation, input from the user may be collected and analyzed by VR system 28 to determine desired movements or operations for the next frame to be rendered. Based on the collision detection hierarchy of the VR file(s), VR system 28 may perform collision detection operations to determine if the desired movements or operations cause any collision(s) with objects or elements represented in the complex environment (see, for example, FIGS. 16A and 16B). In response to the detection of a collision, an adjusted position or movement may be computed by VR system 28 and provided as output for the next frame. If no collisions are detected, the desired position or operation of the user may be provided as output by VR system 28 to permit rendering operations to be performed.
[086] Based on the output of VR system 28, rendering operations are performed by graphics system 24. For instance, during a rendering phase, visibility and culling operations may be performed by graphics system 24 to determine what cells or elements are visible for the next frame. As further described below, an octree and/or occluder candidates defined in the VR file(s) may be used by graphics system 24 to perform visibility and occlusion culling operations. In addition, curved or rounded elements that are visible in the frame may be dynamically tessellated by graphics system 24 to provide optimum rendering (see, for example, FIG. 23). In response to such operations, OpenGL commands may be issued by graphics system 24 to graphics hardware 18 to cause the next frame to be displayed to the user with display 12.
[087] Successive frames of the complex virtual environment can be generated with system environment 10 of FIG. 1 by providing simulation and rendering operations as part of a simulation loop (see, for example, FIG. 7). Further, as indicated above, interactive displays may be provided to a user for various complex virtual environments, including those based on large-scale or massive models that contain hundreds of thousands of polygons or even one million or more polygons. Embodiments consistent with the invention, therefore, may be implemented to provide interactive displays of building complexes, offshore platforms, industrial plants, city dwellings, and other facilities and environments that are highly detailed or contain many different types of elements.
[088] By way of non-limiting examples, FIGS. 25-28 illustrate exemplary complex environments that may be interactively rendered by system environment 10 of FIG. 1. For instance, FIGS. 25A and 25B illustrate exemplary rendered frames of the Reichstag building complex (designed by the architectural firm of Foster and Partners of London, England) that includes many exterior as well as interior details and elements. In contrast, FIGS. 26A and 26B are exemplary rendered frames of an offshore platform (representative of a platform designed by Shell) that contains numerous structural details including curved features and rounded piping. Complex environments of industrial plants, such as the exemplary electrical energy plant of FIGS. 27A and 27B, may also be interactively displayed to a user with system 10 while providing various simulation capabilities (such as interactive walk-through operations with collision detection). Such features may also be provided for modeled environments of complex dwelling environments or landscapes. For instance, FIGS. 28A and 28B are exemplary rendered frames of a city landscape that can be interactively displayed to a user with system environment 10.
[089] Consistent with embodiments of the invention, interactive displays of complex virtual environments may be provided to one or more users through a network environment. For instance, FIG. 2 illustrates an exemplary network environment 50 for providing interactive displays to a set of users located at client stations 70A-70N. Client stations 70A-70N may function in accordance with a peer-to-peer relationship or according to a client-server relationship with a server 54. Such arrangements may be used for various applications, including training sessions where one user acts as an instructor to train or instruct one or more other users. Network environments such as that illustrated in FIG. 2 may also be used for VR applications, where role-playing or involvement by two or more users is required.
[090] As illustrated in FIG. 2, client stations 70A-70N and server 54 are connected to a network 62. While FIG. 2 illustrates multiple client stations 70A-70N, it will be appreciated that system 50 may be implemented for any number of users. Depending on the location and/or relationship between the users, network 62 may be implemented with a local area network, a wide area network, a virtual private network or any other type of private and/or secure communications network. Network 62 may also be implemented using public communications networks, such as the Internet. To facilitate communication over network 62, a network protocol (such as TCP/IP) may be used by and between client stations 70A-70N and/or server 54.
[091] In the network environment 50 of FIG. 2, simulation and rendering operations may be performed entirely at each of the client stations 70A-70N. In such a case, desired movements or operations by a user at one client station may be communicated via network 62 to users at other client stations in accordance with a peer-to-peer relationship. Based on the communication of such information, each client station 70A-70N may perform simulation and rendering operations similar to that performed by computing platform 20 in FIG. 1. In order to perform these operations, identical VR file(s) may be stored in a memory 74A-74N associated with each of the client stations 70A-70N.
[092] In a peer-to-peer relationship, such as that described above, server 54 may be omitted from network environment 50. However, to provide interactive displays in accordance with a client-server relationship, server 54 may be provided to function with client stations 70A-70N through network 62. For example, simulation operations may be performed centrally at server 54 based on user input from each of the client stations. Using data from VR file(s) stored in memory 58, server 54 may simulate the desired movements or operations by one or more users in the complex environment and provide output through network 62 to indicate the camera or viewer position for each of the client stations 70A-70N. Based on the output of server 54, each of the client stations may then perform rendering operations and graphically display each frame. In such a client-server arrangement, server 54 may also perform synchronization and user management functions for client stations 70A-70N. For instance, server 54 may coordinate and synchronize the transfer of data between the client stations. Further, server 54 may inform users when other users are connected or disconnected from the network.
[093] As indicated above, VR file(s) may be stored in memory 58 associated with server 54, as well as in memory 74A-74N associated with each of the client stations 70A-70N. As with the embodiment of FIG. 1, the VR file(s) may include optimized data representing a complex environment, as well as other data to facilitate simulation and rendering of the complex environment on an interactive basis with each user. In the embodiment of FIG. 2, the VR file(s) may be generated from 3D model files by pre-processing operations performed by, for example, server 54. Further, memory 58 and memory devices 74A-74N may be implemented in a similar fashion to memory 30 of FIG. 1, and may also store data (such as 3D model files) in addition to the VR file(s).
[094] For client-server arrangements, server 54 may be implemented in a similar manner as VR system 28 of FIG. 1. Thus, server 54 may be implemented with a workstation, a server, a personal desktop computer, a laptop or any other computing-based platform. Further, as with VR system 28, server 54 may include software-based components for performing simulation operations (see simulation component 25 of FIG. 3). Also, server 54 may include communication hardware for network 62 and other hardware or software-based components for performing synchronization and user management functions.
[095] As indicated above, client stations 70A-70N may be configured to receive simulation results from server 54 over network 62. To handle such information, client stations 70A-70N may be implemented with a laptop, a workstation, a personal desktop computer or computing-based platform installed with communication hardware. Client stations 70A-70N may also be implemented with components (similar to graphics hardware 18, graphics system 24 and display 12 of FIG. 1) for performing rendering operations and displaying each rendered frame. Further, to receive user input and/or provide feedback, client stations 70A-70N may also include a user interface, such as user interface 40 of FIG. 1.
[096] In peer-to-peer arrangements, each of the client stations 70A-70N may be additionally configured with components or systems for performing simulation operations (such as VR system 28 of FIG. 1). During simulation, client stations 70A-70N may also be adapted to track and synchronize movements with other client stations to detect and resolve collisions based on user input.
[097] When operating in accordance with a client-server relationship, server 54 may periodically poll network 62 and detect user input from one or more of the client stations 70A-70N. User input from each client station is used by server 54 to perform simulation operations for each frame based on the VR file(s) stored in memory 58. If user input is received by more than one client station 70A-70N over a predetermined polling period, then server 54 may: analyze all of the user input; perform simulation operations (such as collision detection); and update the viewer or camera position of each user for the next frame to be rendered. Server 54 may be adapted to provide simulation such that one or more users are represented within a complex virtual environment by an avatar and/or users can observe other users represented within the environment. With such simulation capabilities, various applications can be provided for multiple users with system 50, such as training exercises where one user acts as an instructor for other users.
[098] After simulating each frame, the output of server 54 is provided to client stations 70A-70N via network 62. Based on the output of server 54 and the data in the VR file(s), each client station may perform rendering operations for the frame to be rendered. Such rendering operations may include visibility and occlusion culling operations, as well as dynamic tessellation of curved or rounded elements that are determined to be visible in the frame. During the rendering phase, each client station may also extrapolate and determine the best fit for displaying the rendered frame to the user at the client station. To display the rendered frame, a set of commands (such as OpenGL commands) may be issued to the graphics hardware at each client station. Further, if feedback is simulated for users, then each client station may provide feedback (where applicable) to a user through the haptic devices located at the client station.
[099] FIG. 3 illustrates exemplary VR system 28, consistent with embodiments of the present invention. VR system 28 may be implemented as part of computing platform 20 (see FIG. 1) or server 54 (see FIG. 2) to perform various operations. As illustrated in FIG. 3, VR system 28 includes a simulation component 25 and a converter component 21. These components may be implemented as software-based applications or components that are developed using a programming language and/or a graphics development language or libraries.
[0100] Converter component 21 may be configured to perform preprocessing operations including the conversion of 3D model files into VR file(s). Converter component 21 may be implemented using, for example, C++ and Microstation development language for generating VR file(s). An exemplary embodiment of VR file(s) is provided in FIG. 5 and exemplary embodiments of the pre-processing and conversion operations performed by converter component 21 to generate VR file(s) are described below with reference to, for example, FIGS. 6 and 9.
[0101] In the embodiment of FIG. 3, simulation component 25 is configured to perform run-time operations including the processing of user input and simulation to permit the rendering of frames of a complex virtual environment. Simulation component 25 may be implemented using a programming language such as C++ or other comparable programming languages. An exemplary embodiment of simulation component 25 is explained below with reference to FIG. 4 and exemplary embodiments of the run-time and simulation operations performed by simulation component 25 are further described below with reference to, for example, FIGS. 7 and 8.
[0102] FIG. 4 illustrates an exemplary embodiment of VR system 28 and graphics system 24, consistent with embodiments of the invention. As illustrated in FIG. 4, simulation component 25 of VR system 28 may be implemented with a number of software-based engines or sub-components. For example, simulation component 25 may include a physics engine 254 and an I/O engine 256. During run-time operations, user input is received and pre-processed by I/O engine 256. I/O engine 256 may perform any necessary processing operations on the user input (such as the translation of signals received from user interface or input devices) and provide the user input to physics engine 254. If feedback is provided to the user, I/O engine 256 may also output feedback instructions generated by simulation component 25 to haptic device(s) controlled or worn by the user. This may cause various sensations (such as pressure, resistance, vibration, sound, etc.) to be provided to the user in order to simulate activity or occurrences in the complex virtual environment.
[0103] Based on user input, physics engine 254 determines a user's desired movements or operations for each frame to be rendered. As part of this determination, physics engine 254 may consider the direction and velocity of a moving object or avatar based on the user input. The effect of gravity may also be analyzed to calculate the desired position for the next frame. Further, using the VR file(s), physics engine 254 may detect if any collisions exist with objects in the simulated environment due to the desired movement or operations of the user. If a collision is detected, physics engine 254 may resolve any such collisions to determine the user's final adjusted position or viewer position for the next frame. To perform such operations, physics engine 254 may be adapted to perform collision detection operations, such as those further described below with reference to, for example, FIGS. 8 and 16A-16B.
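By way of a non-limiting sketch, a kinematic step of the kind physics engine 254 might take to turn user input into a desired position is shown below; the constants and names are illustrative, and the resulting position is only a candidate that the collision detection operations may still adjust.

// Names and constants are illustrative only.
struct Vec3 { float x, y, z; };

struct AvatarState {
    Vec3 position;
    Vec3 velocity;
};

// One kinematic step: horizontal motion follows the input direction, vertical
// motion applies gravity. The result is only a desired position; collision
// detection decides whether it can actually be reached.
Vec3 desiredPosition(const AvatarState& s, Vec3 inputDirection, float speed,
                     float dt, float gravity = -9.81f) {
    Vec3 p = s.position;
    p.x += inputDirection.x * speed * dt;
    p.z += inputDirection.z * speed * dt;

    float verticalVelocity = s.velocity.y + gravity * dt;
    p.y += verticalVelocity * dt;
    return p;
}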
[0104] The viewer or camera position determined by physics engine 254 is provided to graphics system 24. As illustrated in FIG. 4, graphics system 24 may include a graphics engine 242. Graphics engine 242 may be a software-based engine or component that performs various rendering operations for each frame to be rendered to a user. For example, based on the viewer position and data contained in the VR file(s), graphics engine 242 may determine which objects are visible and which objects are occluded or hidden from view for each frame. In addition, for specific visible elements or objects defined in the VR file(s) (such as rounded or curved objects), graphics engine 242 may perform dynamic tessellation operations to control the level of detail by which such elements are displayed to the user. To perform such rendering operations, graphics engine 242 may be implemented with occlusion culling and dynamic tessellation functions, such as those further described below with reference to, for example, FIGS. 8, 19 and 23.
[0105] The results of the rendering operations performed by graphics engine 242 may include a list of vertex arrays for the elements determined to be visible, as well as vertex arrays for each dynamically tessellated element. Based on these results, commands (such as OpenGL commands) may be issued by graphics engine 242 to graphics hardware 18 (not illustrated in FIG. 4) to cause the display of each rendered frame to the user.
[0106] FIG. 5 illustrates exemplary VR file(s) 50, consistent with embodiments of the invention. Generally, VR file(s) contain data and information relating to a complex environment for run-time operations. In contrast to conventional 3D model files, the VR file(s) may be optimized for facilitating real-time rendering and interactive displays of a complex environment for users. Thus, as illustrated in FIG. 5, VR file(s) 50 may include an octree file 52 to facilitate, for example, visibility determinations. Further, a collision detection file 56 may be generated to optimize the VR file(s) for collision detection operations. In accordance with embodiments of the invention, files 52 and 56 may be stored as separate VR files or they may be stored as part of a single VR file. For ease of reference, whether stored separately or jointly, these files are herein referred to as "VR file(s)."
[0107] Octree file 52 and collision detection file 56 may be generated from data representing a complex environment. For example, octree file 52 and collision detection file 56 may be generated by VR system 28 based on data from conventional 3D model files, such as Microstation or AutoCAD files. Such models of complex environments typically include data for describing primitives (such as points, lines, polygons, polyhedra and/or free-form surfaces) that define the shape of components of an object. A model for a complex environment may also include data for describing object attributes, such as line style, color, or surface texture, and connectivity relationships and positioning that define how the components fit together. In a 3D model, curved surfaces can be approximated with polygons and solid objects can be represented by polyhedra, each of which represents a set of polygons connected at a common edge and enclosing a volume.
[0108] When rendering a complex environment, objects and surfaces that are hidden or occluded by other objects or surfaces closer to the viewpoint may be eliminated. For complex environments, the complexity of visibility determination can be simplified by spatially partitioning or dividing the environment into cells or cubes. For example, according to an embodiment of the invention, octrees can be used to subdivide a complex environment represented by a 3D model. Octrees provide a hierarchical approach to spatial partitioning, whereby an environment is divided into eight equally sized, three-dimensional cells (e.g., cubes) and each cell is further divided into eight cells until a cell is empty or contains fewer than a predetermined number of elements or primitives (such as polygons or triangles). Consistent with embodiments of the invention, each level of the octree may be represented by nodes that are hierarchically related to one another in accordance with each level of division.
[0109] During pre-processing, an octree may be constructed for a complex environment represented by a 3D model. As illustrated in FIG. 5, data representing the octree can be stored as part of octree file 52. Octree file 52 includes data representing the octree of elements. The octree can be defined by any number of cells, whereby each cell includes a vertex array for every material used by the elements or primitives of the cell. Octree file 52 may also include one or more occluder candidates 54. Occluder candidates may be generated during pre-processing of a 3D model based on elements or primitives that have characteristics (such as a large surface area) that classify them as global candidates for consideration when performing occlusion culling. According to embodiments of the invention, occluder candidates 54 may be stored together as a separate list in octree file 52. Alternatively, each occluder candidate may be merged with the octree and stored relative to nodes of the octree for cells occupied by the occluder candidate. [0110] When performing simulation operations, collisions with objects may be detected and simulated to replicate real-world phenomena. The task of collision detection can be complex, especially for environments based on massive models. To improve such performance, spatial partitioning of the environment may also be performed during pre-processing to provide a collision detection hierarchy. In particular, a collision detection file 56 may be generated from a 3D model by creating grid subdivisions. As further described below, the modeled environment may be divided into a grid of equally sized cubes. After allocating elements or primitives (such as triangles) to the cubes, a bounding box tree (such as an axis-aligned bounding box (AABB) tree) may be created for each cube. Each bounding box can be created by dividing a cube into two equally sized cells or cubes and then further dividing the cubes until a cube is empty or contains fewer than a predetermined number of elements or primitives. As with octrees, each level of the bounding box may be represented by nodes that are hierarchically related to one another in accordance with each level of division. This hierarchy of bounding boxes may be stored to provide the collision detection hierarchy of collision detection file 56.
[0111] In accordance with an embodiment of the invention, FIG. 6 illustrates an exemplary pre-processing pipeline. The exemplary pre-processing pipeline of FIG. 6 may be implemented to generate VR file(s) based on a 3D model of the complex virtual environment. Such operations may be performed by VR system 28 of system environment 10 (FIG. 1) or server 54 of network environment 50 (FIG. 2). For purposes of illustration, however, reference will be made to the embodiment of FIG. 1 to describe the operations and features of FIG. 6. [0112] As illustrated in FIG. 6, the pre-processing pipeline begins with the loading of a 3D model file (operation 102). For example, VR system 28 may load and/or access a 3D model file from memory 30. The 3D model comprises data for defining all objects in the modeled complex environment, including object attributes (such as color, surface texture, etc.). Since most conventional 3D model files are not optimized for real-time simulations or rendering, the data from the 3D model is analyzed during pre-processing to generate VR file(s), consistent with embodiments of the invention.
[0113] After loading the 3D model file, the elements or primitives (such as polygons or triangles) of the complex environment are analyzed and filtered (operation 104). In particular, VR system 28 may analyze the 3D model data to identify and filter elements that can be dynamically tessellated during run-time operations. In accordance with an embodiment of the invention, elements that have curved or rounded features may be filtered from other elements in the 3D model and identified as dynamic tessellation elements. For instance, elements including an arc, an ellipse, a curve and/or a b-spline can be filtered from other elements. Such dynamic tessellation elements can be identified based on the name, structure and/or format of the data. For example, 3D models developed with Microstation may designate element types by specific name or header. In addition, the structure or format of the data can be analyzed to confirm the identity of a particular element type. For instance, based on the type of files used (Microstation, AutoCAD, etc.), element types in a 3D model may be identified in accordance with their predetermined data structure or format.
[0114] The name, structure and/or format of elements defined in 3D model files may vary according to the software version or edition with which they were developed. To overcome such drawbacks, routines may be developed to identify element types regardless of the version or edition with which the 3D model was created. For instance, a software-based component may be created using Microstation Development Language (MDL) to analyze data sets defining elements in a 3D model regardless of the version of the Microstation 3D model file.
[0115] When filtering elements (operation 104), VR system 28 may reformat and store any identified dynamic tessellation elements according to element type. As further described below with reference to FIGS. 10A-10C, specific element types can be stored in predetermined data formats that permit the VR file(s) to be processed and handled more easily during run-time operations. For all remaining elements that are not filtered (i.e., all non-dynamic tessellation elements), triangulation operations may be performed for polygons having more than three line segments (operation 106). In particular, each non-dynamic tessellation element is analyzed and stored either as a single triangle or set of triangles. Thus, if an element is represented as a single triangle in the 3D model, the element will be stored as a single triangle in the VR file(s). However, if the element is represented by a polygon or a set of polygons having more than three line segments, then each polygon may be triangulated and stored as a group or set of triangles. While storing all non-dynamic tessellation elements as triangles may increase the overall volume of data related to these elements, this allows the system to take advantage of state-of-the-art graphics hardware (such as graphics cards) that can directly handle and process triangle elements.
[0116] After triangulating polygons (operation 106), the final operations of the pre-processing pipeline are performed in order to create the VR file(s). As illustrated in FIG. 6, such operations may include: building a list of occluder candidates (operation 108); constructing an octree (operation 110); and building a collision detection hierarchy (operation 112). These operations may be performed in parallel or in any sequence by VR system 28. Further, the results of the octree may be stored as part of an octree file (such as octree file 52) and the results of the collision detection hierarchy may be stored as part of a collision detection file (such as collision detection file 56). As described with reference to FIG. 5, the list of occluders can be stored as a separate list in the VR file(s). Alternatively, the list of occluders can be merged into the related nodes of the octree. In such a case, any identified occluder candidates can be inserted and merged into the octree (operation 114) after the octree is constructed.
[0117] Consistent with embodiments of the invention, FIG. 7 illustrates an exemplary run-time pipeline for providing interactive displays of complex virtual environments. The run-time pipeline of FIG. 7 includes simulation and rendering operations that can be performed with VR file(s) that have been generated for a complex environment. The operations of FIG. 7 can be performed by computing platform 20 of system environment 10 (FIG. 1) or server 54 and/or client stations 70A-70N of network environment 50 (FIG. 2). For purposes of illustration, however, reference again will be made to the embodiment of FIG. 1 to facilitate the description of FIG. 7.
[0118] Run-time operations are initiated by loading VR file(s) (operation 132). As discussed above, VR file(s) may be stored in memory 30. To select particular VR file(s), a dialog box may be displayed to the user so that the appropriate VR file(s) can be identified or selected. When loading the VR file(s), various system configurations may be performed. For example, based on input from a user, simulation features (such as collision detection or gravity effects) may be activated or deactivated by computing platform 20. Also, other system configurations (such as memory management operations) may be initiated by computing system 20. Exemplary embodiments for performing memory management operations are described below with reference, for example, to FIGS. 24A, 24B and 24C.
[0119] Consistent with embodiments of the invention, once the VR file(s) are loaded (operation 132), a simulation loop is performed (operation 140) based on user input (138). The simulation loop for a complex virtual environment may be reiterated to render successive frames of the environment until the run-time pipeline is terminated (based on, for example, an escape or exit command entered by the user).
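The structure of such a simulation loop can be sketched as follows. This is a minimal illustration only; the type and function names (UserInput, ViewerPosition, pollInput, simulateFrame, renderFrame) are assumptions made for the example and are not identifiers from the described system.

```cpp
// Minimal sketch of the run-time simulation loop described above. The stubs
// stand in for the I/O engine, physics engine and graphics engine.
#include <iostream>

struct UserInput {
    bool exitRequested = false;        // e.g., an escape or exit command
    float moveForward = 0.0f;          // desired movement gathered during polling
    float moveRight = 0.0f;
};

struct ViewerPosition { float x = 0.0f, y = 0.0f, z = 0.0f; };

UserInput pollInput() {
    // Stub: the I/O engine would translate signals from the input devices here.
    return UserInput{};
}

ViewerPosition simulateFrame(const UserInput& input, const ViewerPosition& current) {
    // Stub: the physics engine would apply velocity, gravity and collision
    // detection to produce the adjusted viewer position for the next frame.
    ViewerPosition next = current;
    next.x += input.moveRight;
    next.z += input.moveForward;
    return next;
}

void renderFrame(const ViewerPosition& viewer) {
    // Stub: the graphics engine would perform culling and dynamic tessellation
    // and issue graphics commands for the frame here.
    std::cout << "rendered frame at (" << viewer.x << ", " << viewer.y << ", " << viewer.z << ")\n";
}

int main() {
    ViewerPosition viewer;                        // initial viewer position (default setting)
    for (int frame = 0; frame < 3; ++frame) {     // in practice: loop until the user exits
        UserInput input = pollInput();            // poll for user input
        if (input.exitRequested) break;
        viewer = simulateFrame(input, viewer);    // frame simulation (operation 134)
        renderFrame(viewer);                      // frame rendering (operation 136)
    }
    return 0;
}
```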
[0120] As further illustrated in FIG. 7, the simulation loop includes frame simulation (operation 134) and frame rendering (operation 136). During simulation of each frame, user input is analyzed to determine desired movements or operations of a moving object or avatar representing the user in the complex virtual environment. By performing operations such as collision detection, VR system 28 can determine the position of the moving object or avatar and the viewer position for the next frame to be rendered. Information determined during simulation is then used to perform rendering operations. During the rendering phase, occlusion culling and dynamic tessellation operations can be performed, in accordance with embodiments of the invention. The results of the rendering operations can be used by graphics system 24 to cause the rendered frame to be displayed to the user. During the simulation loop, various system configuration operations may be performed, such as memory management operations. [0121] FIG. 8 is an exemplary flow diagram for providing interactive displays of complex virtual environments, consistent with embodiments of the invention. For purposes of illustration, the exemplary flow diagram of FIG. 8 will be described with reference to the system environment 10 of FIG. 1. The exemplary flow diagram of FIG. 8, however, may be implemented with other environments, including the network environment 50 of FIG. 2.
[0122] As illustrated in FIG. 8, in order to provide interactive displays of complex virtual environments, the VR file(s) are provided (step S.10). As described above, VR file(s) may be generated from a 3D model of the complex environment. The 3D model may be generated from a 2D drawing by using 3D modeling software or a 3D reconstructor program based on a laser scanning of the environment. The 3D model may also be created independently by a designer or programmer using 3D modeling software. In cases where the 3D model is based on a 2D drawing or file, 3D features may be selectively defined depending on the complex environment. For instance, if the 2D drawing represents the floor plan of a building complex or residence, a 3D model may be created by projecting one or more elements in the drawing (such as walls and/or walkways) to permit better visualization of the floor plan. The elements to be projected in 3D may be uniquely identified in a 2D file (such as by color or texture) to assist the designer or programmer in creating the 3D model. Once the 3D model is generated, the VR file(s) may be created. To generate the VR file(s), pre-processing operations may be performed by VR system 28. An exemplary pre-processing pipeline for generating VR file(s) is disclosed herein with reference to FIG. 6. Further, an exemplary flow diagram for generating VR file(s) from a 3D model is described below with reference to FIG. 9. [0123] Referring back to FIG. 8, after the VR file(s) are generated, the VR file(s) are loaded or accessed from memory 30. During or after the loading of the VR file(s), any necessary system configurations can be performed (step S.12). As indicated above, various system configuration options may be set by the user. For instance, simulation features (such as collision detection or gravity effects) may be activated or deactivated by a user. Also, specific system configurations (such as memory management operations) may be initiated by computing system 20. An initial viewer position may also be set according to a predetermined default setting for the first frame to be displayed to the user.
[0124] Simulation and rendering is then performed to display successive frames of the complex virtual environment based on input from the user. In particular, system 10 can poll for user input over a predetermined time period for each frame (step S.14). For instance, through user interface 40, VR system 28 may detect desired movements or operations entered by the user during the polling period. Using a keyboard, a trackball or a joystick, the user may enter desired movements (forward, back, left, right, etc.) and/or operations (lift, rotate, select, etc.). Based on input detected from the user, VR system 28 then performs simulation operations in order to render the next frame.
[0125] For example, predictive collision detection may be performed based on the detected user input (step S.16). Consistent with embodiments of the invention, VR system 28 analyzes all moving objects or avatars to determine if any collisions occur with other objects when moving the object or avatar between its current position and desired position for the next frame. As disclosed herein, the collision detection file 56 of the VR file(s) may be used to detect collisions (see, for example, FIGS. 16A and 16B). If a collision is found (step S.18; Yes), then VR system 28 responds to the collision (step S.20). For example, VR system 28 may project the movement of the object or avatar and determine an adjusted position from the first point of contact. If another collision is detected between the first point of contact and the adjusted position, then another adjusted position is determined, and the process repeats until no further collisions are detected (step S.18; No). The final adjusted position is then used in place of the desired destination for purposes of setting the viewer position for the next frame. If no collisions are detected, then the viewer position is set to the desired position. An exemplary flow diagram for performing predictive collision detection operations is described below with reference to FIGS. 16A-16B.
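The iteration just described can be sketched as follows. This is a hedged illustration only: the collision query is a stub, and the adjustment simply stops short of the first point of contact, whereas the described system resolves collisions against the collision detection hierarchy and may slide along the contacted surface. The names (firstContact, resolveMovement) are assumptions.

```cpp
// Sketch of the predictive collision-response iteration (steps S.16-S.20).
#include <algorithm>
#include <optional>

struct Vec3 { float x = 0.0f, y = 0.0f, z = 0.0f; };

static Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y), a.z + t * (b.z - a.z) };
}

// Stub collision query: returns the parameter t in [0,1] of the first point of
// contact along the segment from 'from' to 'to', or nothing if the path is clear.
std::optional<float> firstContact(const Vec3& from, const Vec3& to) {
    (void)from; (void)to;
    return std::nullopt;
}

Vec3 resolveMovement(const Vec3& current, Vec3 desired) {
    const int maxIterations = 8;                  // assumption: cap the adjustment passes
    for (int i = 0; i < maxIterations; ++i) {
        std::optional<float> t = firstContact(current, desired);
        if (!t) {
            return desired;                       // no (further) collision: position is final
        }
        // Determine an adjusted position from the first point of contact; the loop
        // then re-tests the new path in case the adjustment causes another collision.
        desired = lerp(current, desired, std::max(0.0f, *t - 0.01f));
    }
    return desired;                               // final adjusted position for the viewer
}
```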
[0126] To render a frame, it is necessary to remove or cull all elements or primitives that are outside the view frustum or occluded by other elements or objects. For this purpose, graphics system 24 performs culling operations (including occlusion culling) based on the determined viewer or camera position for the next frame (step S.22). To perform culling operations, the octree file 52 of the VR file(s) can be used. Exemplary flow diagrams for performing culling operations are described below with reference to FIGS. 19 and 20.
[0127] In addition to performing culling operations, graphics system 24 may perform dynamic tessellation operations (step S.24). In particular, for dynamic tessellation elements that are in the view frustum and not occluded, graphics system 24 determines a tessellation level for rendering each object. As further described in connection with the exemplary embodiment of FIG. 22, the tessellation level can control the number of triangles used to approximate the object in the rendered frame. For instance, objects or elements may be rendered with an increasing number of triangles for higher tessellation levels. Further, objects or elements that appear closer to the camera position or screen may be assigned higher tessellation levels to cause the object or element to be rendered with a greater number of triangles and, therefore, greater detail.
[0128] After performing culling operations and dynamic tessellation, the frame is rendered and displayed to the user (step S.26). As part of this process, the results of the rendering operations may be used by graphics system 24 to issue or generate commands (such as OpenGL commands) to graphics hardware 18. The results of the rendering operations may indicate what elements or objects are visible and need to be rendered, as well as how to render dynamic tessellation elements that are within the view frustum. Each visible element or object may be represented by a vertex array of x, y, z coordinate points that define the vertices of the element. An index array may be generated that reads into the vertex array and controls the number of triangles used to render the element or object. Thus, a set of OpenGL commands may be generated based on the vertex array data for the elements determined to be visible, as well as vertex array data for each visible dynamically tessellated element. In response to the OpenGL commands, graphics hardware 18 (such as a graphics card) may cause the display of the next frame on display 12. Thereafter, successive frames may be simulated and rendered (steps S.14-S.26) until run-time operations are terminated by the user.
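A minimal sketch of issuing such commands for one visible element from its vertex array and index array is shown below, using legacy OpenGL 1.1 client-side arrays. Context and window creation are omitted, and the arrays are assumed to have been produced elsewhere; the function name is an assumption made for the example.

```cpp
// Draw one visible element from a vertex array and an index array that reads
// into it, using OpenGL client-side vertex arrays.
#include <GL/gl.h>

void drawVisibleElement(const GLfloat* vertexArray,   // packed x, y, z per vertex
                        const GLuint* indexArray,     // indices reading into the vertex array
                        GLsizei indexCount) {
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertexArray);      // hand the vertex array to the driver
    glDrawElements(GL_TRIANGLES, indexCount,           // render the indexed triangles
                   GL_UNSIGNED_INT, indexArray);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```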
[0129] FIG. 9 is an exemplary flow diagram for providing VR file(s), consistent with embodiments of the invention. As described above, VR file(s) can be generated from conventional 3D models that include data representing a complex environment. The VR file(s) can be generated during pre-processing in order to prepare and store the files before run-time operations are performed. Once created, the VR file(s) can be accessed from memory and used for simulation and rendering operations. The exemplary flow diagram of FIG. 9 may be implemented through various systems or environments, such as the system environment 10 of FIG. 1 or the network environment 50 of FIG. 2. However, for purposes of illustration, reference will be made to the embodiment of FIG. 1 to describe the flow diagram of FIG. 9.
[0130] As illustrated in FIG. 9, the exemplary process begins with the loading of a 3D model (step S.40). In accordance with embodiments of the invention, a 3D model file may be designated by the user (using, for example, a displayed dialog box) and accessed from memory (such as memory 30). The 3D model file may be a CAD or other type of 3D model file that represents a complex environment (such as a plant, a building facility, an off-shore platform, etc.). Further, the 3D model file may be generated using Microstation or other types of commercially available software.
[0131] The 3D model may be created using 3D modeling software. The 3D model may also be generated from a 2D drawing or file, such as an engineering or architectural drawing. Thus, the 3D model may be based on a 2D file in which one or more elements have been projected into 3D. Alternatively, the 3D model may be created by laser scanning the environment and using a 3D reconstructor program. When creating the 3D model file, the file may be prepared with various features or functions to facilitate updates to the objects represented in the 3D model. For instance, a virtual reality (VR) library may be provided with a number of special VR functions in order to incorporate such functions into the 3D model file. The VR functions may permit the name, texture, lighting and/or rotational aspects of an object to be specified and, if needed, updated in the future. An initial viewer or camera position for an avatar may also be defined in the modeled environment. In addition, functional links may be defined for an object to permit functions such as a database link to retrieve information for the object (such as material and/or cost information, etc.).
[0132] After accessing or loading the 3D model file, the elements or primitives of the model are filtered (step S.42). For example, VR system 28 may analyze the 3D model data to identify and filter dynamic tessellation elements from other elements that are static or non-tessellated during rendering. In accordance with embodiments of the invention, elements that have curved or rounded features may be filtered and distinguished from other elements in the 3D model. For instance, elements including an arc, an ellipse, a curve and/or a b-spline can be identified as dynamic tessellation elements and filtered from other elements identified as non-dynamic tessellation elements.
[0133] Consistent with embodiments of the invention, VR system 28 may identify one or more major types of dynamic tessellation elements, such as cone element types, torus element types, and/or surface-of-revolution element types. Generally, a cone element type is defined by two circles or ellipses and one or more lines (see, for example, FIG. 10A). If the dimensions of the two circles or ellipses are identical, the cone element type may represent a cylinder. A torus element type is generally defined by a center point, two ellipses or circles and one or more arcs. Torus element types can represent elbows or donuts (see, for example, FIG. 10B). Surface-of-revolution element types are defined by a number of parameters including a center point, an ellipse and an angle of rotation. Surface-of-revolution element types may represent different types of curved surfaces or objects (see, for example, FIG. 10C). [0134] To identify dynamic tessellation elements, VR system 28 may analyze the name, structure and/or format of the data representing each element in the 3D model. For example, as indicated above, 3D model files developed with Microstation may designate certain element types by specific element names. In addition, the structure or format of the data can be analyzed to confirm the identity of a particular element type. For instance, a cone element type in a Microstation file is defined by a data structure comprising a header followed by two ellipses and four lines. Further, in a 3D model developed with Microstation, a torus element type is defined by a data structure comprising a header followed by two ellipses and a set of arcs (such as four arcs), and a surface-of-revolution element type is defined by a header and a data set indicating a surface-of-revolution. Once again, a software-based component may be created using a software-manufacturer-specific development language (such as Microstation Development Language (MDL)) that permits data sets defining elements in the 3D model to be identified regardless of the version of the 3D model file.
[0135] For each identified dynamic tessellation element, VR system 28 may store the element in an optimized data format in the VR file(s) to better facilitate run-time operations. For example, as illustrated in FIG. 10A, a cone represented by a cone element type may be stored as a data structure including: a center point of the top (T); a center point of the bottom (B); a center point (C) on the axis between T and B; a radius of the top (Rt); and a radius of the bottom (Rb). Further, as indicated in FIG. 10B, a torus represented by a torus element type may be defined by a data structure including: an axis of revolution (A); a center point on the axis of revolution (C); an angle of revolution (α); an internal radius (R1); and an outer radius (R2). A surface-of-revolution, such as that depicted in FIG. 10C, may be represented by a surface-of-revolution element type stored as a data structure comprising: an axis of revolution (As); a center point of the revolution (Cs); an angle of the revolution (αs); a radius of the revolution (Rs); a center point of the ellipse (CA); an angle of the ellipse (αA); a shorter radius of the ellipse (RA1); and a longer radius of the ellipse (RA2).
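These optimized records can be sketched as plain data structures. The field comments mirror the symbols used in the text (FIGS. 10A-10C); the struct names and the use of float are assumptions made for illustration, not the actual storage format of the VR file(s).

```cpp
// Sketch of per-type records for dynamic tessellation elements.
struct Point3 { float x, y, z; };

struct ConeElement {                   // represents a cylinder when both radii are equal
    Point3 top;                        // center point of the top (T)
    Point3 bottom;                     // center point of the bottom (B)
    Point3 center;                     // center point (C) on the axis between T and B
    float  topRadius;                  // Rt
    float  bottomRadius;               // Rb
};

struct TorusElement {                  // elbows or donuts
    Point3 axis;                       // axis of revolution (A)
    Point3 center;                     // center point on the axis of revolution (C)
    float  angle;                      // angle of revolution (alpha)
    float  innerRadius;                // internal radius (R1)
    float  outerRadius;                // outer radius (R2)
};

struct SurfaceOfRevolutionElement {
    Point3 revolutionAxis;             // As
    Point3 revolutionCenter;           // Cs
    float  revolutionAngle;            // alpha_s
    float  revolutionRadius;           // Rs
    Point3 ellipseCenter;              // CA
    float  ellipseAngle;               // alpha_A
    float  ellipseShortRadius;         // RA1
    float  ellipseLongRadius;          // RA2
};
```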
[0136] For the remaining elements that are not filtered (i.e., all non-dynamic tessellation elements), triangulation operations may be performed (step S.44). In particular, VR system 28 may analyze each non-dynamic tessellation element in the 3D model and store it either as a single triangle or set of triangles. Accordingly, if an element is represented as a single triangle in the 3D model, the element will be stored as a single triangle in the VR file(s). However, if the element is represented by a polygon with more than three sides or a set of polygons, then each polygon is triangulated and stored as a group or set of triangles. While storing all non-dynamic tessellation elements as triangles may increase the overall volume of data needed to define these elements, all elements should be represented by triangles for rendering purposes to take advantage of state-of-the-art graphics cards that can directly handle triangles without any additional processing (CPU) resources required for rendering.
[0137] Consistent with embodiments of the invention, various techniques may be employed for triangulating polygons. For example, VR system 28 may be implemented to generate sets of triangles by analyzing the vertices of a polygon. Alternatively, a conventional graphics utility, such as OpenGL utility library (GLU), may be provided to perform triangulation operations. In such a case, a specific utility may be called by VR system 28 by providing the vertices of the polygon to be triangulated, and in response the utility may perform triangulation operations and output a set of triangles for the polygon.
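One simple triangulation technique is a fan triangulation, sketched below for the restricted case of a convex polygon. This is an illustration only: polygons that may be concave require a full triangulation routine or a utility such as the GLU tessellator mentioned above, and the names used here are assumptions.

```cpp
// Fan triangulation of a convex polygon into a set of triangles.
#include <cstddef>
#include <vector>

struct Vertex { float x, y, z; };
struct Triangle { Vertex a, b, c; };

std::vector<Triangle> triangulateConvexPolygon(const std::vector<Vertex>& polygon) {
    std::vector<Triangle> triangles;
    // Fan out from the first vertex: (v0, v1, v2), (v0, v2, v3), ...
    for (std::size_t i = 1; i + 1 < polygon.size(); ++i) {
        triangles.push_back({ polygon[0], polygon[i], polygon[i + 1] });
    }
    return triangles;   // a polygon with n sides yields n - 2 triangles
}
```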
[0138] After filtering dynamic tessellation elements (step S.42) and triangulating polygons (step S.44), an octree is constructed (step S.46). As indicated above, an octree can be constructed by spatially partitioning or dividing the complex environment into cells or cubes. Octrees, as well as other partitioning techniques, can be used to simplify the task of determining which objects and surfaces are visible versus those that are hidden or occluded when rendering the complex environment.
[0139] The octree may be constructed from the filtered dynamic tessellation elements (reformatted and stored in an optimized data format) and non-dynamic tessellation elements (including polygonal elements that have been triangulated). To construct an octree, a single cell is first defined that contains the entire modeled environment, including all dynamic and non-dynamic tessellation elements. This cell is then divided into eight cells by cutting or dividing along each of three axes (x, y, z). For a 3D environment, each cell of the octree represents a three-dimensional volume defined by two x, y, z points (i.e., a minimum point and a maximum point for the cell). Each divided cell is then further divided into eight cells, with such cell division continuing until a cell is empty or contains fewer than a predetermined number of elements or primitives (such as polygons or triangles). Consistent with embodiments of the invention, each level of the octree may be represented by nodes that are hierarchically related to one another in accordance with each level of division.
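The recursive construction just described can be sketched as follows, assuming a midpoint split and the approximately 10,000-triangle threshold mentioned below. The depth limit and the handling of split triangles (kept in the divided cell; the stretching technique described later is not shown) are simplifying assumptions made for the example.

```cpp
// Recursive octree construction over a set of triangles.
#include <array>
#include <cstddef>
#include <memory>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
struct Tri  { Vec3 v0, v1, v2; };

struct OctreeNode {
    Vec3 minCorner, maxCorner;                        // two x, y, z points define the cell
    std::vector<Tri> triangles;                       // elements allocated to this cell
    std::array<std::unique_ptr<OctreeNode>, 8> children;
};

static bool fullyInside(const Tri& t, const Vec3& lo, const Vec3& hi) {
    auto in = [&](const Vec3& p) {
        return p.x >= lo.x && p.x <= hi.x && p.y >= lo.y && p.y <= hi.y &&
               p.z >= lo.z && p.z <= hi.z;
    };
    return in(t.v0) && in(t.v1) && in(t.v2);
}

std::unique_ptr<OctreeNode> buildOctree(const Vec3& lo, const Vec3& hi, std::vector<Tri> tris,
                                        std::size_t maxTris = 10000, int depth = 0) {
    auto node = std::make_unique<OctreeNode>();
    node->minCorner = lo;
    node->maxCorner = hi;
    if (tris.size() < maxTris || depth >= 16) {       // stop: cell is small enough (or too deep)
        node->triangles = std::move(tris);
        return node;
    }
    const Vec3 mid { (lo.x + hi.x) / 2, (lo.y + hi.y) / 2, (lo.z + hi.z) / 2 };  // cut at the center
    std::array<std::vector<Tri>, 8> childTris;
    auto childBounds = [&](int i, Vec3& clo, Vec3& chi) {
        clo = { (i & 1) ? mid.x : lo.x, (i & 2) ? mid.y : lo.y, (i & 4) ? mid.z : lo.z };
        chi = { (i & 1) ? hi.x : mid.x, (i & 2) ? hi.y : mid.y, (i & 4) ? hi.z : mid.z };
    };
    for (const Tri& t : tris) {
        bool placed = false;
        for (int i = 0; i < 8 && !placed; ++i) {
            Vec3 clo, chi;
            childBounds(i, clo, chi);
            if (fullyInside(t, clo, chi)) { childTris[i].push_back(t); placed = true; }
        }
        if (!placed) node->triangles.push_back(t);    // split element: keep it in the divided cell
    }
    for (int i = 0; i < 8; ++i) {
        Vec3 clo, chi;
        childBounds(i, clo, chi);
        node->children[i] = buildOctree(clo, chi, std::move(childTris[i]), maxTris, depth + 1);
    }
    return node;
}
```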
[0140] When dividing a cell (including the first bounding cell), VR system 28 may cut or divide the cell at the center point of the cell to thereby create eight equally-sized cells. Alternatively, VR system 28 may determine the median location of all triangles within the cell and divide the cell at the median location. Such a technique may be used to provide a more balanced octree, especially for environments in which triangles are not evenly distributed in space. When using the median location of triangles to divide cells, the resulting cells may be unequal in size. However, this will not affect the usage of the octree, since all cells are simply defined by a set of coordinates that define the vertices of the cell.
[0141] When performing cell division to construct the octree, an analysis may be performed to determine the number of elements that are contained within each cell. Therefore, as indicated above, VR system 28 may analyze and count non-dynamic tessellation elements (i.e., triangles) for the purposes of determining whether to further divide a cell. By way of example, cell partitioning for the octree may continue until a cell contains fewer than approximately 10,000 triangles. Further, dynamic tessellation elements may be analyzed to determine whether to further divide a cell. By way of example, cell partitioning for an octree may continue if the cell contains more than 1,000 dynamic tessellation elements. As such, the number of non-dynamic tessellation elements and/or the number of dynamic tessellation elements may be considered when performing cell division. Moreover, when assembling and storing the final VR file(s), all elements (including dynamic and non-dynamic tessellation elements) may be assigned or associated with their corresponding cells or nodes of the octree.
[0142] When creating cells of the octree, elements in the environment may not be positioned completely within a cell. This situation may arise with respect to both non-dynamic tessellation elements and dynamic tessellation elements. Various solutions may be used to address this problem. For instance, if division of a cell causes certain elements to be distributed into two or more cells, then each such element may be maintained only in the previous cell (i.e., the cell that was divided). This approach may be used for handling dynamic tessellation elements that are split during cell division. For non-dynamic tessellation elements, cells may be adaptively stretched to control the assignment of elements to cells of the octree.
[0143] FIG. 11 illustrates an example of stretching an octree cell to accommodate non-dynamic tessellation elements (i.e., triangles). For ease of illustration, the example of FIG. 11 is presented in 2D, but the concept of stretching is applicable to 3D cell partitioning. As illustrated in FIG. 11, a cell division may result in the creation of Original Cell #1 and Original Cell #2. Triangle T2 may be located within Original Cell #1, whereas triangle T1 may be located within Original Cell #1 and Original Cell #2. To address this problem, Original Cell #1 may be stretched along one direction to accommodate triangle T1 and define a new cell, Stretched Cell #1. While an overlap may exist between the stretched cell and other neighboring cells (such as between Stretched Cell #1 and Original Cell #2), elements are maintained or reassigned only to the cells that fully contain the element. For example, after performing stretching on the cells illustrated in FIG. 11, triangles T1 and T2 would be assigned to Stretched Cell #1 and triangle T3 would maintain its assignment to Original Cell #2.
[0144] When stretching the dimension(s) of a cell, the cell may be stretched only to the extent that it is necessary to fully contain the divided element (such as triangle T1). Thus, the vertices of divided or split elements may be used to determine the extent and the axis (x, y or z in 3D environments) along which the stretch should be performed. For example, as illustrated in FIG. 11, Stretched Cell #1 may be generated as a result of stretching the Original Cell #1 in the direction of and only to the extent of the outer vertex of triangle T1 that is not contained within Original Cell #1.
[0145] Further, consistent with embodiments of the invention, if more than one cell can be stretched, then the cell determined to have the majority of the volume or area of the divided element(s) may be stretched. As a result, VR system 28 performs stretching only on the cell that requires the least amount of stretching. Moreover, if more than one element extends outside of a cell, then the cell may be stretched in one or more directions until all elements are within the stretched cell.
[0146] When stretching cells for a 3D environment, stretching can be performed along one or more axes (x, y and/or z). Further, since all triangle elements should be contained within a cell before it is divided, the vertices of the divided elements may be compared against the center point or median location where the cell division or cut occurred to determine the direction and amount of stretching required for divided cells. Such an analysis may be performed with respect to each divided element by comparing the x, y and z coordinates of the triangle to the center point or median location. This process may be repeated for divided cells requiring stretching based on more than one divided element or triangle.
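A minimal sketch of this stretching operation is shown below: each coordinate of the cell is extended only as far as the outermost vertex of the divided triangle. The type and function names are illustrative assumptions.

```cpp
// Stretch a cell so that it fully contains a divided triangle (cf. FIG. 11).
#include <algorithm>

struct Vec3 { float x, y, z; };
struct Cell { Vec3 minCorner, maxCorner; };
struct Tri  { Vec3 v[3]; };

void stretchCellToContain(Cell& cell, const Tri& tri) {
    for (const Vec3& p : tri.v) {
        // Stretch along each axis only to the extent needed to contain the vertex.
        cell.minCorner.x = std::min(cell.minCorner.x, p.x);
        cell.minCorner.y = std::min(cell.minCorner.y, p.y);
        cell.minCorner.z = std::min(cell.minCorner.z, p.z);
        cell.maxCorner.x = std::max(cell.maxCorner.x, p.x);
        cell.maxCorner.y = std::max(cell.maxCorner.y, p.y);
        cell.maxCorner.z = std::max(cell.maxCorner.z, p.z);
    }
}
```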
[0147] Consistent with an embodiment of the invention, various techniques may be performed for allocating dynamic tessellation elements to the cells of the octree. For example, dynamic tessellation elements may be allocated only to cells within which they are fully contained. To determine if a cell fully contains a dynamic tessellation element, a bounding box may be generated about the dynamic tessellation element and the bounding box may be compared to the cells of the octree to determine the lowest level cell of the octree that fully contains the bounding box. The allocation of dynamic tessellation elements may occur during the process of creating the octree or as a separate process that is performed after all of the cells of the octree are constructed.
[0148] Referring again to FIG. 9, after identifying and constructing the cells of the octree (step S.46), each cell of the octree may be analyzed to perform a number of operations on the elements contained in the cell (step S.48). As illustrated in FIG. 9, such operations may include sorting elements by material, optimizing vertex arrays, and/or building triangle strips.
[0149] For example, the elements of each cell (including non-dynamic tessellation elements and dynamic tessellation elements) may be sorted according to their material type. The material type for each element, which may be extracted from the 3D model, may define various rendering properties for the element such as color, opacity, brightness and/or texture. For each cell, the material type for each element may be determined and then one or more linked lists may be created to list the elements according to material type. Sorting elements by material type in the final VR file(s) can provide faster rendering rates, since most conventional graphics cards can only process and render elements with one material that is active at a time. As a result, organizing elements by material type can reduce the amount of processing time required by the graphics hardware to switch between material types during run-time operations.
[0150] As illustrated in FIG. 9 (step S.48), the elements of each cell may also be processed to optimize vertex arrays. Such an operation may be performed to optimize the manner in which triangle elements are stored in the VR file(s). For example, triangle elements in a cell can be analyzed to determine shared vertices between triangles and, for triangles identified with shared vertices, the data defining such triangles may be reformatted to reduce the overall amount of data required to define the triangles. Triangle elements that do not share a common vertex with one or more other triangles can also be reformatted to define such triangles with vertex data to provide similar formatting with the triangles identified to have common vertices.
[0151] To illustrate the concept of providing optimized vertex array data, reference will now be made to FIGS. 12A and 12B to illustrate an embodiment consistent with the principles of the invention. Assume, for example, that a triangle (T1) is defined by a set of x, y, z points in the 3D model. As illustrated in FIG. 12A, three such points (P1, P2, P3) are required to define the triangle (T1). For 3D modeling, each of these points (P1, P2, P3) may be defined by different x, y, z coordinates. Thus, P1 = (x1, y1, z1), P2 = (x2, y2, z2), and P3 = (x3, y3, z3), and T1 can be defined by a combined data set of x, y, z coordinates (x1, y1, z1; x2, y2, z2; x3, y3, z3) in the 3D model. In the example of FIG. 12A, triangle (T1) does not share a common vertex with any other triangle element in the cell. However, as illustrated in FIG. 12B, assume that two other triangles in the cell (T2 and T3) have one common vertex (P5). In the 3D model, these triangles (T2 and T3) may be represented by combined data sets (i.e., (P4, P5, P6) and (P5, P7, P8), respectively) where each point (P) in the combined data sets is uniquely defined by x, y, z coordinates. Thus, in the 3D model, T2 may be defined by a combined data set of x, y, z coordinates (x4, y4, z4; x5, y5, z5; x6, y6, z6) and T3 can be defined by another combined data set of x, y, z coordinates (x5, y5, z5; x7, y7, z7; x8, y8, z8). As can be seen from the above-noted example, repetitive data is stored in the 3D model for at least the coordinates defining the triangles (T2 and T3) that have a shared vertex.
[0152] To provide optimized vertex array data for the exemplary triangles of FIGS. 12A and 12B, VR system 28 may generate one or more vertex arrays (Av) that define the x, y, z coordinates for the vertex points (P) of all triangle elements contained in the cell. Thus, for a cell containing the triangles (T1, T2, T3) illustrated in FIGS. 12A and 12B, a vertex array (Av) may be generated as follows: Av = (P1, P2, P3, P4, P5, P6, P7, P8), where each vertex point in the array is defined by a unique set of x, y, z coordinates. To define each triangle in the cell, VR system 28 may also generate an index array of triangles (At) as follows: At = (T1, T2, T3), where each triangle in the array is defined by a unique set of vertex points (e.g., T1 = P1, P2, P3; T2 = P4, P5, P6; and T3 = P5, P7, P8) defined by the vertex array (Av). Alternatively, the index array may be generated with pointers that identify positions in the vertex array. For example, assume that vertices P1-P8 are defined in positions 1-8, respectively, of a vertex array (Av). Further, assume that an index array (At) is generated for the triangles T1, T2, T3 as follows: At = (1, 2, 3, 4, 5, 6, 7, 8). To determine the coordinates of each of the triangles, the pointers from every three positions of the index array may be used to read into the vertex array and extract the data. Thus, returning to the previous example, the pointers from the first three positions of the index array At (i.e., 1, 2, 3) may be used to read the corresponding positions in the vertex array Av and extract the coordinates for the triangle T1 (i.e., x1, y1, z1; x2, y2, z2; and x3, y3, z3). The next three positions in the index array At (i.e., 4, 5, 6) could then be read to extract the vertex data from the vertex array Av for the triangle T2, and a similar process may be repeated to extract the vertex data for the triangle T3. [0153] With such an approach, optimized vertex array data can be provided and stored with respect to the cells of the octree for the VR file(s). Optimizing vertex array data for triangle elements may reduce the overall amount of data needed to define the complex virtual environment, as well as improve graphics processing and cache performance since previously transformed vertices can be pulled from cache (rather than recalculated) during the rendering phase.
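The construction of such arrays can be sketched as follows, merging shared vertices as in the T2/T3 example above. Exact coordinate comparison is assumed here; a production build step might instead merge vertices within a small tolerance. The names are assumptions made for illustration.

```cpp
// Build the optimized vertex array (Av) and index array (At) for a cell's triangles.
#include <map>
#include <tuple>
#include <vector>

struct Vertex { float x, y, z; };
struct InputTriangle { Vertex v[3]; };

struct CellArrays {
    std::vector<Vertex>   vertexArray;   // Av: unique x, y, z points
    std::vector<unsigned> indexArray;    // At: three entries per triangle, pointing into Av
};

CellArrays buildCellArrays(const std::vector<InputTriangle>& triangles) {
    CellArrays out;
    std::map<std::tuple<float, float, float>, unsigned> seen;   // vertex -> position in Av
    for (const InputTriangle& t : triangles) {
        for (const Vertex& v : t.v) {
            auto key = std::make_tuple(v.x, v.y, v.z);
            auto it = seen.find(key);
            if (it == seen.end()) {                             // first occurrence of this vertex
                unsigned index = static_cast<unsigned>(out.vertexArray.size());
                out.vertexArray.push_back(v);
                it = seen.emplace(key, index).first;
            }
            out.indexArray.push_back(it->second);               // shared vertices reuse one entry
        }
    }
    return out;
}
```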
[0154] Another operation that may be performed on the triangle elements of each cell (FIG. 9; step S.48) is the building of triangle strips. In particular, after optimizing the vertex data, an index array for a triangle strip (Ats) may be built to replace triangle array data representing triangle elements that have common edges. FIG. 13, for example, illustrates a triangle strip comprising three triangles (T1, T2, T3). The triangles of a triangle strip may result from triangulated polygons (see FIG. 9, step S.44) or may be original triangle elements in the 3D model that have common edges. Consistent with embodiments of the invention, VR system 28 may define an index array for a triangle strip (Ats) where the array defines triangles belonging to a triangle strip according to their vertex points. These vertex points may be further defined according to a stored vertex array (Av). Therefore, for the exemplary triangle strip of FIG. 13, an index array for a triangle strip (Ats) can be defined as follows: Ats = (P1, P2, P3, P4, P5), where each vertex point (P) is further defined by a vertex array (Av). Alternatively, the index array for the triangle strip may be generated with pointers that identify positions in the vertex array. For instance, with respect to the example of FIG. 13, assume that vertices P1-P5 are defined in positions 1-5, respectively, of a vertex array (Av). Further, assume that an index array (Ats) is generated for the triangle strip as follows: Ats = (1, 2, 3, 4, 5). To determine the coordinates of each of the triangles (T1, T2, T3) in the triangle strip, a sliding window (that is three positions wide) may be used to read the pointers from the index array and extract the data from the vertex array. For example, returning to the example for FIG. 13, with the sliding window placed at the first position in the array, the pointers from the first three positions of the index array Ats (i.e., 1, 2, 3) may be used to read the corresponding positions in the vertex array Av and extract the coordinates for the triangle T1 (i.e., x1, y1, z1; x2, y2, z2; and x3, y3, z3). The sliding window may then be moved forward one position and the next three positions within the window could be read from the index array Ats (i.e., 2, 3, 4) to extract the vertex data from the vertex array Av for the triangle T2, and a similar process may be repeated to extract the vertex data for the triangle T3 of the triangle strip.
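The sliding-window decoding just described can be sketched as follows. Renderers normally flip the winding of every other triangle in a strip; that detail is omitted here, and the names are assumptions made for illustration.

```cpp
// Read triangles back out of a triangle strip index array (Ats) with a
// three-wide sliding window, as described for the strip of FIG. 13.
#include <cstddef>
#include <vector>

struct StripTriangle { unsigned i0, i1, i2; };    // positions in the vertex array Av

std::vector<StripTriangle> decodeTriangleStrip(const std::vector<unsigned>& stripIndices) {
    std::vector<StripTriangle> triangles;
    // A strip of n indices encodes n - 2 triangles: (0,1,2), (1,2,3), (2,3,4), ...
    for (std::size_t i = 0; i + 2 < stripIndices.size(); ++i) {
        triangles.push_back({ stripIndices[i], stripIndices[i + 1], stripIndices[i + 2] });
    }
    return triangles;
}
```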
[0155] Various techniques may be employed to identify and define triangle strips. For example, VR system 28 may be implemented to analyze and identify triangles that have common edges and generate triangle strips for such triangle elements. Alternatively, a conventional graphics utility, such as NVtristrip (an Nvidia utility library), may be provided to perform triangle strip operations. In such a case, the utility may be called by VR system 28 by providing an index array for the triangles, and in response the utility may perform triangle strip operations and output an index array (such as Ats) for the corresponding triangle strip.
[0156] By storing triangle strips in the VR file(s), less memory capacity is required. Further, triangle strips can be handled more efficiently by most state-of-the-art graphics hardware systems. For instance, graphics hardware 18 can be adapted to render triangle strips through a graphics library or application program interface (API), such as OpenGL (an API maintained by a consortium of companies) or Direct 3D (an API maintained by Microsoft Corporation). Accordingly, by providing triangle strip array data in the VR file(s), cache memory requirements for the graphics hardware can be reduced and the data for rendering the complex virtual environment can be provided in a more optimized format.
[0157] Referring again to FIG. 9, a list of occluder candidates can be built from the 3D model data (step S.50). Additionally, a collision detection hierarchy can be generated based on the 3D model (step S.52). Although FIG. 9 illustrates the octree as being constructed before the occluders list and collision detection hierarchy are built, steps S.46 and S.48 (related to the octree) can be performed in any order relative to steps S.50 (related to the occluders list) and S.52 (related to the collision detection hierarchy). For example, as indicated above, these steps may be performed simultaneously or in parallel by VR system 28. Therefore, the exemplary flow diagram of FIG. 9 should not be construed as limiting the order by which certain steps are performed and, additionally, modifications to the order and nature of the steps of FIG. 9 may be implemented, consistent with the principles of the invention.
[0158] When rendering frames based on the VR file(s), there are generally two types of visibility detection and culling that must be performed. One type is view frustum culling, which is performed to remove elements or objects in the complex virtual environment that are outside the field of view. Another type is occlusion culling, which is performed to identify and remove elements or objects that are blocked from view by other elements or objects that are closer to the viewer or camera position. Consistent with embodiments of the invention, both types of visibility detection and culling may be performed on a cell-by-cell basis using the octree defined in the VR file(s). Additionally, to improve the performance of occlusion culling and real-time interaction with the user, an occluders candidate list may be provided in the VR file(s). As further described below, occluder candidates can be analyzed during rendering to more quickly identify those elements or objects that are occluded for each frame.
[0159] As illustrated in FIG. 9, occluder candidates may be generated during pre-processing of a 3D model. Generally, occluder candidates represent elements or primitives that have characteristics (such as a large surface area) that classify them as global candidates for consideration when performing occlusion culling. Thus, to identify occluder candidates, VR system 28 may analyze elements represented in the 3D model to identify large objects (such as walls, floors, etc.) that have characteristics indicative of good occluder properties (i.e., they are likely to block the view of other elements). Various techniques, such as area and/or shape analysis, may be performed to determine whether an object qualifies as a good occluder candidate. Consistent with embodiments of the invention, FIG. 14 illustrates an exemplary flow diagram for generating and building a list of occluders. As indicated above, all identified occluder candidates may be stored together as a separate list in octree file 52 of the VR file(s). Alternatively, each occluder candidate may be merged with the octree and stored relative to nodes of the octree for cells occupied by the occluder candidate.
[0160] When performing simulation, collisions with objects may be detected and simulated to replicate real-world phenomena. The task of collision detection can be complex, especially for environments based on massive models. To improve such performance, spatial partitioning of the environment may also be performed during pre-processing to provide a collision detection hierarchy. In particular, as illustrated in FIG. 9, a collision detection hierarchy may be developed from the 3D model. Various techniques, such as grid subdivision processes, may be used to generate the collision detection hierarchy. With grid subdivision, the modeled environment (including all dynamic and non-dynamic tessellation elements) may be divided into a grid of equally sized cubes. After allocating elements or primitives (such as polygons or triangles) to the cubes, a bounding box tree (such as an axis-aligned bounding box (AABB) tree) may be created for each cube. Each bounding box can be created by dividing a cube into two equally sized cells or cubes and then further dividing the cubes until a cube is empty or contains fewer than a predetermined number of elements or primitives. As with octrees, each level of the bounding box may be represented by nodes that are hierarchically related to one another in accordance with each level of division. This hierarchy of bounding boxes may be stored to provide the collision detection hierarchy (such as collision detection file 56) for the VR file(s). Consistent with embodiments of the invention, FIG. 15 illustrates an exemplary flow diagram for creating a collision detection hierarchy.
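A hedged sketch of building such an AABB tree for the triangles of one grid cube is shown below. Splitting across the longest axis at its midpoint and the leaf threshold are illustrative assumptions; the text itself only specifies dividing each box in two until it is empty or holds fewer than a predetermined number of primitives.

```cpp
// Build an axis-aligned bounding box (AABB) tree over a cube's triangles.
#include <algorithm>
#include <cstddef>
#include <memory>
#include <utility>
#include <vector>

struct P3 { float x, y, z; };
struct CTri { P3 v0, v1, v2; };

struct AabbNode {
    P3 minCorner, maxCorner;
    std::vector<CTri> triangles;                  // filled only at the leaves
    std::unique_ptr<AabbNode> left, right;
};

static void expand(P3& lo, P3& hi, const P3& p) {
    lo.x = std::min(lo.x, p.x); lo.y = std::min(lo.y, p.y); lo.z = std::min(lo.z, p.z);
    hi.x = std::max(hi.x, p.x); hi.y = std::max(hi.y, p.y); hi.z = std::max(hi.z, p.z);
}

std::unique_ptr<AabbNode> buildAabbTree(std::vector<CTri> tris, std::size_t maxTris = 64) {
    auto node = std::make_unique<AabbNode>();
    node->minCorner = {  1e30f,  1e30f,  1e30f };
    node->maxCorner = { -1e30f, -1e30f, -1e30f };
    for (const CTri& t : tris) {                  // bounding box of all triangles in this node
        expand(node->minCorner, node->maxCorner, t.v0);
        expand(node->minCorner, node->maxCorner, t.v1);
        expand(node->minCorner, node->maxCorner, t.v2);
    }
    if (tris.size() <= maxTris) {                 // leaf: few enough triangles
        node->triangles = std::move(tris);
        return node;
    }
    // Split across the longest axis of the box, partitioning triangles by centroid.
    const float dx = node->maxCorner.x - node->minCorner.x;
    const float dy = node->maxCorner.y - node->minCorner.y;
    const float dz = node->maxCorner.z - node->minCorner.z;
    const int axis = (dx >= dy && dx >= dz) ? 0 : (dy >= dz ? 1 : 2);
    auto centroid = [axis](const CTri& t) {
        const float c[3] = { (t.v0.x + t.v1.x + t.v2.x) / 3.0f,
                             (t.v0.y + t.v1.y + t.v2.y) / 3.0f,
                             (t.v0.z + t.v1.z + t.v2.z) / 3.0f };
        return c[axis];
    };
    const float mid = (axis == 0 ? node->minCorner.x + node->maxCorner.x
                     : axis == 1 ? node->minCorner.y + node->maxCorner.y
                                 : node->minCorner.z + node->maxCorner.z) / 2.0f;
    std::vector<CTri> leftTris, rightTris;
    for (const CTri& t : tris) (centroid(t) < mid ? leftTris : rightTris).push_back(t);
    if (leftTris.empty() || rightTris.empty()) {  // degenerate split: keep this node as a leaf
        node->triangles = std::move(tris);
        return node;
    }
    node->left  = buildAabbTree(std::move(leftTris), maxTris);
    node->right = buildAabbTree(std::move(rightTris), maxTris);
    return node;
}
```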
[0161] Referring again to FIG. 9, after analyzing and processing the 3D model data, the VR file(s) are stored (step S.54). As illustrated in the exemplary embodiment of FIG. 5, the VR file(s) may include an octree file 52 and a collision detection file 56, which may be stored separately or together as part of one general VR file. Once the VR file(s) are created, run-time operations may be performed to provide interactive displays of the complex virtual environment on a real-time basis. [0162] FIG. 14 is an exemplary flow diagram for identifying and building a list of occluder candidates, consistent with embodiments of the invention. The exemplary flow diagram of FIG. 14 may be implemented through various systems or environments during pre-processing, such as the system environment 10 of FIG. 1 or the network environment 50 of FIG. 2. For purposes of illustration, however, reference will be made to the embodiment of FIG. 1 to describe the flow diagram of FIG. 14.
[0163] As illustrated in FIG. 14, neighboring triangle elements are identified (step S.70). To identify neighboring triangles, VR system 28 may analyze the data for all triangle elements represented in the complex environment (i.e., all non-dynamic tessellation elements, including those that were triangulated). When analyzing the data, the defined edges of each triangle element may be analyzed to determine if it has a common edge with other triangle elements. Triangle elements that are found to have common edges and that are within the same plane may be identified as neighboring triangles. With respect to dynamic tessellation elements, such elements may also be analyzed in a similar manner (in which case the dynamic tessellation elements may be triangulated) or they may be entirely ignored for the purposes of identifying occluder candidates.
[0164] After identifying neighboring triangles, the neighboring triangles are merged into polygons (step S.72). For example, VR system 28 may merge each set of neighboring triangles that are determined to be co-planar and redefine them through data representative of a polygon. Each merged set of neighboring triangles are then analyzed to determine if the resulting polygon is convex (step S.74). A merged polygon may be considered convex if it is not possible to pass a line through the polygon and intersect more than two sides of the polygon. If a line can be passed through the polygon and intersect more than two sides, then the polygon is not convex but instead concave. Consistent with embodiments of the invention, only merged triangles resulting in convex polygons may be considered for purposes of identifying candidate occluders.
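The convexity test described above can be sketched as follows for a merged, planar polygon: the polygon is convex when the cross products of consecutive edges all point to the same side of its plane. Collinear edge pairs are skipped, and the vertex order is assumed to follow the polygon's boundary; the function name is an assumption.

```cpp
// Convexity test for a planar polygon given as an ordered list of vertices.
#include <cstddef>
#include <vector>

struct V3 { float x, y, z; };

static V3 sub(const V3& a, const V3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static V3 cross(const V3& a, const V3& b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static float dot(const V3& a, const V3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

bool isConvexPolygon(const std::vector<V3>& poly) {
    const std::size_t n = poly.size();
    if (n < 4) return true;                            // a triangle is always convex
    V3 reference { 0.0f, 0.0f, 0.0f };
    bool haveReference = false;
    for (std::size_t i = 0; i < n; ++i) {
        const V3 e1 = sub(poly[(i + 1) % n], poly[i]);
        const V3 e2 = sub(poly[(i + 2) % n], poly[(i + 1) % n]);
        const V3 turn = cross(e1, e2);
        if (dot(turn, turn) < 1e-12f) continue;        // skip collinear (degenerate) edge pairs
        if (!haveReference) { reference = turn; haveReference = true; continue; }
        if (dot(reference, turn) < 0.0f) return false; // opposite turn direction: concave
    }
    return true;
}
```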
[0165] If a merged polygon is convex (step S.74; Yes), then processing continues to determine the area of the polygon (step S.78). The polygon (or candidate occluder) and its determined area may then be stored for further processing and sorting. If a set of neighboring triangles when merged do not form a convex polygon (step S.74; No), then the polygon is rejected from further consideration (step S.76). Additional neighboring triangles are merged and analyzed in a similar fashion until all neighboring triangles have been processed (step S.80; No).
[0166] As illustrated in FIG. 14, candidate occluders (i.e., all merged polygons that have been determined to be convex) are then sorted (step S.82) in order to identify global occluder candidates for the complex environment. The global occluder candidates are then stored (step S.84). As indicated above, the global occluder candidates may either be stored as a separate list in the VR file(s) or they may be merged into the octree according to the cell level or node of the octree within which they are fully contained.
[0167] The sorting of candidate occluders (step S.82) may be performed using various approaches or techniques. For example, candidate occluders may be sorted and selected according to predetermined thresholds relative to characteristics such as area or shape. Alternatively, a predetermined number of candidate occluders may be selected which exhibit the largest number of triangles or the largest area. Alternatively, all candidate occluders may be selected and sorted relative to area and/or size.
[0168] In accordance with an embodiment of the invention, the sorting of candidate occluders (step S.82) may also be performed dynamically. For instance, to select global occluder candidates, a predetermined number of candidate occluders that exhibit the largest area may be selected based on the total number of candidates that were identified. By way of example, a predetermined percentage of candidates (such as 0.1%) may be selected according to those that exhibit the largest area. Thus, if one million candidate occluders were identified, only a predetermined number of those candidates (such as 0.1% or 1,000) that had the largest area would be selected and stored as global occluder candidates.
[0169] FIG. 15 illustrates an exemplary flow diagram for creating a collision detection hierarchy, consistent with embodiments of the invention. As with FIG. 14, the exemplary flow diagram of FIG. 15 may be implemented through various systems or environments during pre-processing, such as the system environment 10 of FIG. 1 or the network environment 50 of FIG. 2. For purposes of illustration, however, reference will again be made to the embodiment of FIG. 1 to describe the exemplary flow diagram.
[0170] As illustrated in FIG. 15, a grid subdivision is first performed on the complex environment (step S.100). To perform a grid subdivision, VR system 28 may divide the complex environment into a plurality of equally sized cubes. The size of each cube may be predetermined or selected according to the size of the complex environment. By way of example, data representative of massive models may be divided into cubes that are 10 x 10 x 10 meters. To perform grid subdivision, all elements or primitives of the complex environment represented by the 3D model should be pre-processed, in accordance with embodiments of the invention. Thus, all non-dynamic tessellation elements should be pre-processed so that they are represented by triangle elements. For example, non-dynamic tessellation elements represented by polygons containing more than three sides should be triangulated into a set of triangles. Further, all dynamic tessellation elements should be pre-processed and triangulated so that they are represented by triangle elements. The results of such pre-processing of the dynamic and non-dynamic tessellation elements may provide data representing list(s) of triangles for performing the grid subdivision.
[0171] Various techniques may be implemented for triangulating dynamic tessellation elements, consistent with embodiments of the invention. As further described below, dynamic tessellation elements may be represented by a number of triangles based on computed tessellation level(s). During run-time operations, the tessellation level may be set according to a number of factors to control the number of triangles and, thus, the degree of detail by which the dynamic tessellation element is rendered for the user. Thus, objects that appear closer to the user may be rendered with higher detail than objects that appear more distant from the user. For the purposes of creating the collision detection hierarchy for the VR file(s), however, the tessellation level may be computed to approximate the dynamic tessellation element with sufficient detail to permit collision detection operations to be performed accurately.
[0172] Consistent with embodiments of the invention, the tessellation level computed for the purposes of generating triangles for the collision detection hierarchy may be determined according to the dynamic tessellation element type (e.g., cone element type, torus element type, and/or surface-of-revolution element type). For example, if the dynamic tessellation element corresponds to a cone element type (see, for example, FIG. 10A), the tessellation level (Lc) may be computed as follows: Lc = (R)^(1/2), where R represents the larger of the two radii (Rt or Rb) of the cone. For a torus element type (see, for example, FIG. 10B), a horizontal tessellation level (LH) and a vertical tessellation level (Lv) may be computed as follows: LH = Lv = (R1)^(1/2), where R1 represents the radius of the torus measured from the center point (C). By way of further example, for a surface-of-revolution element (see, for example, FIG. 10C), a revolution tessellation level (LR) and an arc tessellation level (LA) may be computed. The revolution tessellation level (LR) may be computed as follows: LR = (αs x Rs)/50, where αs represents the angle of the revolution, and Rs represents the radius of the revolution. Further, the arc tessellation level (LA) may be computed as follows: LA = (αA x (RA1 + RA2))/50, where αA represents the angle of the profile or ellipse, and RA1 and RA2 represent the radii of the profile or ellipse. The above-noted examples for the different dynamic tessellation element types assume a scale size that is measured in centimeters. If a different scale size is used, then the exemplary algorithms for computing the tessellation levels may be adjusted accordingly.
[0173] In accordance with an embodiment of the invention, the tessellation level for generating triangles for the collision detection hierarchy may be restricted to an upper value and/or a lower value. For example, tessellation levels may be restricted by a lower clamp level to guarantee that the tessellation level always has at least a predetermined minimum value. For example, for cone type elements, a lower clamp level of 8 may be predefined. With such a lower clamp level, the tessellation level for a cone element may be restricted and reset to 8 in cases where the computed tessellation level is lower than 8. Similarly, for torus type elements, both the horizontal tessellation level LH and the vertical tessellation level Lv may be restricted by a lower clamp level, such as a lower clamp level of 8. For surface-of-revolution elements, clamp levels may also be predefined for the revolution tessellation level LR and the arc tessellation level LA. By way of example, a lower clamp level of 8 may be predefined for the revolution tessellation level LR. Further, by way of example, a lower clamp level of 4 may be predefined for the arc tessellation level LA. The above-noted clamp levels are exemplary only and other clamp levels may be predefined, consistent with embodiments of the present invention.
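A minimal sketch of the cone case of the two preceding paragraphs is shown below, assuming a centimeter scale and reading the tessellation level as the square root of the larger cone radius, restricted by the exemplary lower clamp level of 8; the function name is illustrative.

```cpp
#include <algorithm>
#include <cmath>

// Sketch of the pre-processing tessellation level for a cone element as in
// paragraphs [0172]-[0173]: level = sqrt(R) with R the larger cone radius
// (in centimeters), restricted by a lower clamp level of 8.
int coneCollisionTessellationLevel(double radiusTop, double radiusBottom)
{
    const int lowerClamp = 8;                       // exemplary lower clamp level
    double r = std::max(radiusTop, radiusBottom);   // larger of the two radii
    int level = static_cast<int>(std::sqrt(r));
    return std::max(level, lowerClamp);
}
```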
[0174] Based on the computed tessellation level(s), the number of triangles for representing the dynamic tessellation element may be determined. As further described below with reference to Table 1, predetermined relationships or algorithms may be used for determining the number of triangle segments based on the computed tessellation level(s). In addition, various techniques may be used for generating vertex data for the triangle segments representing each dynamic tessellation element, including the techniques further described below for dynamically tessellating elements with triangles during run-time operations.
[0175] Referring again to FIG. 15, after dividing the environment into equally-sized cubes (step S.100), all triangle elements are allocated to cubes of the grid (step S.102). For example, VR system 28 may allocate or associate triangle elements according to the cubes within which they are fully contained. Triangle elements that are only partially within one or more cubes may be allocated or associated with each and every cube in which they are partially present. Alternatively, cubes of the grid may be stretched in order to allocate triangle elements for the collision detection hierarchy.
[0176] As further illustrated in FIG. 15, each cube of the grid subdivision is further processed by creating a bounding box tree for each cube of the grid (step S.104). By way of example, VR system 28 may create a bounding box tree such as an AABB tree for each cube of the grid. To create an AABB tree, each cube is first divided into two equally sized cubes along one of the three major x, y, z planes. This division of cubes is then repeated along alternate directions of the three major x, y, z planes until a cube is empty or contains fewer than a predetermined number of triangle elements (such as 10 or fewer triangle elements). In addition to AABB trees, other known types of bounding box trees can be utilized as part of step S.104, consistent with embodiments of the invention.
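The following sketch illustrates one possible recursive construction of such an AABB tree, under simplifying assumptions: each triangle is represented by its centroid for allocation, a depth limit guards against degenerate input, and the allocation rules for dissected triangles described in the next paragraph are omitted. All names are illustrative.

```cpp
#include <memory>
#include <vector>

// Illustrative AABB-tree node for one grid cube, built as described in
// paragraph [0176]: halve the box along alternating axes until a node is
// empty or holds no more than a predetermined number of triangles.
struct Point3 { double x, y, z; };
struct Aabb   { double min[3], max[3]; };

struct AabbNode {
    Aabb box;
    std::vector<int> triangles;            // triangle indices kept at a leaf
    std::unique_ptr<AabbNode> left, right;
};

std::unique_ptr<AabbNode> buildAabbTree(const Aabb& box,
                                        const std::vector<Point3>& centroids,
                                        const std::vector<int>& tris,
                                        int axis = 0, int depth = 0,
                                        int maxLeafSize = 10, int maxDepth = 16)
{
    auto node = std::make_unique<AabbNode>();
    node->box = box;
    if (static_cast<int>(tris.size()) <= maxLeafSize || depth >= maxDepth) {
        node->triangles = tris;            // leaf: few enough triangles to test directly
        return node;
    }
    // Divide the box into two equal halves along the current axis.
    Aabb leftBox = box, rightBox = box;
    const double mid = 0.5 * (box.min[axis] + box.max[axis]);
    leftBox.max[axis] = mid;
    rightBox.min[axis] = mid;

    // Allocate each triangle to the half that contains its centroid (simplification).
    std::vector<int> leftTris, rightTris;
    for (int t : tris) {
        const double c[3] = { centroids[t].x, centroids[t].y, centroids[t].z };
        (c[axis] < mid ? leftTris : rightTris).push_back(t);
    }
    const int nextAxis = (axis + 1) % 3;   // alternate the x, y and z planes
    node->left  = buildAabbTree(leftBox,  centroids, leftTris,  nextAxis, depth + 1, maxLeafSize, maxDepth);
    node->right = buildAabbTree(rightBox, centroids, rightTris, nextAxis, depth + 1, maxLeafSize, maxDepth);
    return node;
}
```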
[0177] All triangle elements are allocated to the cubes or bounding boxes of the tree (step S.106). For instance, triangle elements may be allocated or associated with bounding boxes that they are fully contained within. If a cube division dissects a triangle element, the dissected triangle element may be allocated or associated with only the previous cube or bounding box within which it was contained. Other techniques and methods may be utilized for allocating dissected triangle elements. For example, bounding boxes may be stretched to allocate triangle elements.
[0178] Finally, as illustrated in FIG. 15, the collision detection hierarchy is stored (step S.108). As indicated above, the collision detection hierarchy may be filed as part of a collision detection file (such as file 56) of the VR file(s). Such a file may include data indicating the hierarchy of bounding boxes and, for each bounding box, the triangle elements that are allocated or associated with the bounding box. Further, in accordance with an embodiment of the invention, each bounding box may be stored with data representing a bounding sphere. The bounding sphere may be a sphere that bounds the bounding box and may be used during run-time operations to determine whether a collision or intersection occurs between an object and the bounding area (represented by the bounding sphere). As a result, collision detection operations for objects (such as an avatar also represented by a bounding sphere) may be performed faster and with less processing resources being required.
[0179] Consistent with embodiments of the invention, user input and data from the VR file(s) are used to simulate and render successive frames of a complex virtual environment. To perform run-time operations, the VR file(s) for the complex environment can be accessed or loaded from memory (such as memory 30 in FIG. 1). For this purpose, a dialog box or window may be displayed to the user so that the appropriate VR file(s) can be identified or selected. A user may also be given the option to set various system configurations, such as screen resolution, color depth and/or key or input device selection for navigation. Once the VR file(s) are loaded, an initial frame of the complex environment may be displayed to the user based on a predetermined or default camera or viewer position. Thereafter, a simulation loop can be executed to simulate and render frames according to input from the user.
[0180] During simulation, various operations may be performed to simulate real-world phenomena. Examples of such operations include gravity simulation and predictive collision detection. By simulating a user's intended movements or operations for the next frame, the position and viewpoint for the next frame can be determined. Further, information determined during simulation can be used to generate rendering instructions. The rendering instructions are then used by the graphics system to cause the frame to be displayed to the user.
[0181] FIGS. 16A and 16B illustrate exemplary flow diagrams for performing predictive collision detection, consistent with embodiments of the invention. As indicated above, predictive collision detection may be performed during run-time operations to simulate frames of a complex virtual environment based on user input. The features of FIGS. 16A and 16B can be performed on a frame-by-frame basis during a simulation loop or the consecutive display of frames of the virtual environment. During rendering, the position of the user may be represented in a third-person mode by, for example, an avatar. In each frame, the avatar may be represented by various images, such as an image of a human.
[0182] As illustrated in FIG. 16A, user input is analyzed for the next frame to be simulated and rendered (step S.120). For example, user input collected over a predetermined polling period may be analyzed to determine position, direction and velocity of an avatar or a moving object. The predetermined polling period may be set to the rate at which frames are rendered to the user (i.e., the frame rate). The frame rate may be actively measured (e.g., by measuring and taking the average rate for the last 10-20 frames).
[0183] The current position of the avatar (or moving object) and the desired position for the next frame are then determined (step S.122). For example, a virtual projection or path may be calculated from the current position of the avatar (at the beginning of the polling period) to the desired position (at the end of the polling period) based on the position, direction and velocity data derived from the user input. Similar calculations could also be performed with respect to a moving object controlled by the user input. Further, when determining the desired position, the effect of gravity may be considered to determine the distance traveled by the avatar or moving object.
[0184] Using the collision detection hierarchy of the VR file(s), a rough proximity query is performed (step S.124). In particular, a rough proximity determination can be performed by identifying which cubes of the grid subdivision intersect with the avatar or moving object when it is moved along the calculated path between the current position and desired position. As part of this operation, the avatar or moving object may be approximated with a bounding volume, such as an ellipsoid or sphere, that is centered relative to the calculated path. Further, cubes of the grid subdivision may be approximated with a sphere or other types of bounding volumes.
[0185] For each identified cube, a refined proximity query is performed based on the bounding box tree associated with each cube (step S.126). For instance, a refined proximity determination can be performed for each identified cube by identifying which bounding boxes of the associated bounding box tree intersect with the avatar or moving object when it is moved along the calculated path between the current position and the desired position. Once again, the avatar or moving object may be approximated with a bounding volume, such as an ellipsoid or sphere, that is centered relative to the calculated path as it is moved between the current position and desired position. Further, as indicated above, each bounding box can be approximated with a bounding volume, such as a sphere.
[0186] Referring again to FIG. 16A, each of the identified bounding boxes is analyzed to identify triangle elements that intersect with the moving avatar or object (step S.128). As part of this process, the avatar or moving object can be approximated with a bounding volume (such as an ellipsoid) and moved along the calculated path between the current position and the desired position. A determination can then be made whether there are any intersecting triangles within the identified bounding box based on the projection of the avatar or object.

[0187] To perform the operations of steps S.124, S.126 and S.128, various parameterization and/or other types of techniques may be employed. For instance, with respect to steps S.124 and S.126, a sweep test may be employed for detecting intersections between the avatar (approximated with a sphere) and grid cells or bounding boxes (also approximated with a sphere). As part of a sweep test, all points on the surface of the bounding sphere for the avatar may be parameterized as a function of location (x, y, z) and time as the bounding sphere is moved from its current position to the desired position. While moving the bounding sphere for the avatar, a test may be performed for any intersections between the bounding sphere for the avatar and the bounding sphere for the cell or bounding box. To determine any such intersections, the bounding sphere for the cell or bounding box may be treated as a fixed or stationary entity, and all points on that bounding sphere may be parameterized as a function of location (x, y, z). By way of example, Appendix A contains exemplary programming code prepared in C++ for performing a sweep test with respect to two bounding spheres. The exemplary programming code of Appendix A may be used as part of steps S.124 and S.126 to determine any intersections that occur when one bounding sphere (representing the avatar or moving object) is moved relative to another bounding sphere (representing the grid cell or bounding box).

[0188] Similar techniques may also be used to perform the operations of step S.128. For example, a sweep test may be employed for detecting intersections between the avatar or moving object (approximated with an ellipsoid or sphere) and stationary triangle elements. In particular, the avatar or moving object may be approximated with a bounding sphere that is parameterized as a function of location (x, y, z) and time as it is moved relative to a triangle element. Any intersections between the bounding sphere and the triangle element may be detected and recorded during the sweep test. For purposes of illustration, Appendix B contains exemplary programming code prepared in C++ for performing a sweep test with respect to a bounding sphere and a triangle element. The exemplary programming code of Appendix B may be used as part of step S.128 to determine any intersections that occur when the bounding sphere (representing the avatar or moving object) is moved toward a desired position relative to a stationary triangle.
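For illustration only, the following is a minimal sketch of a sphere-versus-sphere sweep test of the kind described above; it is not the code of Appendix A, and all names and conventions (motion parameterized over t in [0, 1]) are assumptions.

```cpp
#include <cmath>
#include <optional>

// Minimal sphere-vs-sphere sweep test in the spirit of paragraph [0187].
// The moving bounding sphere travels from 'start' towards 'start + velocity'
// over the frame (t in [0, 1]); the cell or bounding-box sphere is stationary.
// Returns the earliest time of contact, or no value if the spheres never touch.
struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3   sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }

std::optional<double> sweepSphereSphere(const Vec3& start, const Vec3& velocity, double movingRadius,
                                        const Vec3& fixedCenter, double fixedRadius)
{
    const Vec3 d0 = sub(start, fixedCenter);
    const double r = movingRadius + fixedRadius;

    // Solve |d0 + velocity * t|^2 = r^2 for t.
    const double a = dot(velocity, velocity);
    const double b = 2.0 * dot(d0, velocity);
    const double c = dot(d0, d0) - r * r;

    if (c <= 0.0) return 0.0;                    // already overlapping at t = 0
    if (a == 0.0) return std::nullopt;           // not moving and not overlapping

    const double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return std::nullopt;         // paths never come close enough

    const double t = (-b - std::sqrt(disc)) / (2.0 * a);
    if (t < 0.0 || t > 1.0) return std::nullopt; // contact outside this frame
    return t;
}
```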
[0189] The techniques used for implementing steps S.124, S.126 and S.128 are not limited to the above-noted examples. Other parameterization techniques and 3D intersection tests may be used, consistent with embodiments of the invention. For example, 3D intersection algorithms and approaches may be used for moving objects, such as those described in Eberly, David H., "3D Game Engine Design: A Practical Approach to Real-Time Computer Graphics," Morgan Kaufmann Publishers, ISBN 1558605932, (September 2000), the disclosure of which is expressly incorporated herein by reference in its entirety. Further, various intersection algorithms and programming code for testing for intersections of moving objects may be used, such as that provided at <http://www.magic-software.com/lntersection3D.html>.

[0190] As illustrated in FIG. 16A, a determination is made as to whether a collision was detected (step S.130). For instance, if at least one intersecting triangle is identified, then it is determined that a collision was found. When a collision is detected (step S.130; Yes), each intersecting triangle may be analyzed to determine the time of contact and the exact point where the intersection or contact occurred with the avatar or moving object (step S.132). Thereafter, processing continues to step S.144 (see FIG. 16B). If no collisions are found (step S.130; No), then the avatar or moving object can be moved or adjusted without interference to the desired position and, as a result, the desired position is provided in order to generate the rendering instructions for the next frame (step S.134).
[0191] Referring to FIG. 16B, the intersecting triangle with the earliest contact time is identified (step S.144). As part of this process, all intersecting triangles and their associated contact times may be analyzed to determine the intersecting triangle that has the earliest time of contact. Then, the position of the avatar or moving object is determined at the earliest time of contact (step S.146). For example, based on the determined velocity from the user input, the avatar can be moved along the calculated path from the current position until the earliest time of contact (which can be measured in time relative to the beginning of the polling period). If the avatar or moving object is represented by a bounding volume (such as an ellipsoid for the avatar), then the center point or position of the bounding volume may be moved along the calculated path. The location where the avatar is positioned at the earliest time of contact is the point of contact (see, e.g., FIG. 18A).

[0192] Consistent with embodiments of the invention, the simulation of a collision may be handled in a number of different ways. For instance, the avatar or moving object may simply be stopped at the point of contact for the first collision detected. In such a case, the collision can be rendered in the next frame and the user would be responsible for providing appropriate input to reposition the avatar or move the object to avoid future collisions. Alternatively, when a collision is detected during simulation, an adjusted position may be determined before rendering the next frame to the user. Thus, in response to the detected collision, an adjusted position in the complex virtual environment may be determined using various techniques. One such technique is performing a sliding motion from the point of contact. Under a sliding motion approach, the adjusted position may be determined by projecting the desired position onto a plane that is tangent to the point of contact (see, for example, FIG. 18A). In such a case, the adjusted position may be defined at a point where the projection intersects the tangent plane at a ninety degree angle (i.e., orthogonal to the tangent plane). FIG. 17 illustrates an exemplary flow diagram for performing a sliding motion operation, consistent with an embodiment of the invention.
[0193] As illustrated in FIG. 16B, after determining the point of contact, a sliding motion may be performed to determine an adjusted position for the avatar or moving object (step S.148). If another collision is detected when performing the sliding motion operation (step S.150; Yes), then a new point of contact may be set based on the earliest point of contact with the avatar or moving object during the sliding motion (step S.154). Thereafter, the sliding motion operation is repeated with the reset point of contact until no collisions are detected (step S.150; No). When no collisions are detected, the avatar or moving object can be moved to the adjusted position without interference and, as a result, the adjusted position may be provided in order to generate the rendering instructions for the next frame (step S.152).
[0194] FIG. 17 is an exemplary flow diagram for performing a sliding motion operation, consistent with embodiments of the invention. For purposes of illustration, the embodiment of FIG. 17 is described with reference to performing a sliding motion operation on an avatar. However, the exemplary flow diagram of FIG. 17 may also be performed with other moving objects, including those approximated with a bounding ellipsoid or any other type of bounding volume.
[0195] As illustrated in FIG. 17, the point of contact with the avatar is provided (step S.170). As indicated above, the point of contact may be determined with the avatar being approximated with a bounding volume such as an ellipsoid. In such a case, the center of the bounding ellipsoid may be positioned at the current position and the bounding ellipsoid may be moved along the calculated path toward the desired position until the first point of contact is detected (see, for example, FIGS. 18B and 18C). Alternatively, the avatar or moving object may be represented simply by a point and moved from the current position toward the desired position until the first point of contact with another element or object is detected (see, for example, FIG. 18A).
[0196] A tangent plane is then calculated at the point of contact (step S.172). For example, a plane tangent to the bounding ellipsoid at the point of contact may be calculated. By way of example, FIG. 18B illustrates an example of a tangent plane calculated from the point of contact between a bounding ellipsoid for an avatar and a flight of steps. FIG. 18C illustrates another example of a tangent plane calculated from the point of contact between a bounding ellipsoid for an avatar and a wall. In the example of FIG. 18C, the tangent plane is co-planar with the wall.
[0197] Consistent with embodiments of the invention, the tangent plane may represent a sliding plane along which a sliding motion is performed to determine the adjusted position of the avatar. Once the tangent plane is determined, the adjusted position of the avatar may be calculated by, for example, projecting the desired position onto the tangent plane (step S.174). In such a case, the adjusted position may be defined at a point where the projection intersects the tangent plane at a ninety degree angle (i.e., orthogonal to the tangent plane). In the example of FIG. 18B, the projection of the desired position onto the tangent plane results in an adjusted position that is located up the flight of steps away from the point of contact. However, in the example of FIG. 18C, the projection of the desired position onto the tangent plane results in an adjusted position that is identical to the point of contact. This is because the user is attempting to move the avatar directly into the wall and, as a result, no sliding motion is possible.
[0198] After projecting the desired position, the point at which the projection intersects the tangent plane at a ninety-degree angle is set as the adjusted position (step S.176). As part of this process, the effects of gravity may be incorporated. Thus, if the adjusted position would result in the rendering of a floating position above other objects in the environment (i.e., the avatar would be rendered as floating in the air), then a new adjusted position may be set by incorporating a gravity function to cause the calculated position to float down. Thus, in the example of FIG. 18B, the adjusted position could be recalculated to compensate for gravity and cause the avatar to be rendered at a position on the steps above the first point of contact.
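As an illustration of the projection of steps S.174 and S.176, the following sketch projects the desired position orthogonally onto the tangent (sliding) plane; the availability of a plane normal at the point of contact is an assumption, gravity compensation is omitted, and the names are illustrative.

```cpp
#include <cmath>

// Sketch of the sliding-motion projection of paragraphs [0196]-[0198]: the
// desired position is projected orthogonally onto the tangent (sliding) plane
// defined by the point of contact and the surface normal at that point.
struct Vec3 { double x, y, z; };

static Vec3   sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

Vec3 slideToAdjustedPosition(const Vec3& desired, const Vec3& contactPoint, Vec3 normal)
{
    // Normalize the plane normal.
    const double len = std::sqrt(dot(normal, normal));
    normal = { normal.x / len, normal.y / len, normal.z / len };

    // Orthogonal projection of the desired position onto the tangent plane.
    const double distance = dot(sub(desired, contactPoint), normal);
    return { desired.x - distance * normal.x,
             desired.y - distance * normal.y,
             desired.z - distance * normal.z };
}
```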
[0199] Therefore, consistent with embodiments of the invention, sliding motion operations may be incorporated to respond to collisions detected during simulation. Sliding motion operations may permit the avatar or moving object to effectively slide to an adjusted position from the first point of contact. Further, only the adjusted position may be rendered to the user. This would permit the user to glide up or down objects such as a flight of steps or stairs. Additionally, where the user attempts to directly move the avatar into an object (such as a wall) so that the tangent plane is co-planar with the intersecting object, then no sliding motion may be simulated and the next frame may be rendered with the avatar stopped at the point of contact (such as in the case of FIG. 18C).
[0200] As indicated above, the desired or adjusted position calculated during simulation can be used to reset the viewer or camera position for rendering the next frame. During rendering, view frustum and occlusion culling operations may be performed based on the viewer position. To facilitate these operations, VR file(s) consistent with embodiments of the invention may be used to provide more efficient processing and rendering of the frame.
[0201] By way of example, FIG. 19 illustrates an exemplary flow diagram for rendering complex virtual environments using VR file(s), consistent with embodiments of the invention. The exemplary flow diagram of FIG. 19 may be performed during run-time operations using various systems or environments, such as the system environment 10 of FIG. 1 or the network environment 50 of FIG. 2. For purposes of illustration, however, reference will be made to the embodiment of FIG. 1 to describe the flow diagram of FIG. 19.

[0202] As illustrated in FIG. 19, the viewer or camera position is set for the next frame (step S.180). As indicated above, the viewer position for rendering the next frame may be set according to the desired or adjusted position calculated during simulation by VR system 28. With the viewer position, the view frustum can be determined and the octree stored as part of the VR file(s) can be traversed and analyzed to cull nodes that are outside of the view frustum (step S.182). By using the stored octree for the complex virtual environment, view frustum culling can be performed efficiently by graphics system 24 to eliminate cells (i.e., portions of the complex virtual environment) that are outside of the view frustum for the next frame.
[0203] When performing view frustum culling (step S.182), graphics system 24 may eliminate all nodes or cells of the octree that are completely outside the view frustum for the next frame. If a cell is determined to be partially outside of the view frustum, then graphics system 24 may traverse down the octree to analyze more finite cells and determine which portions are in or outside of the view frustum. Graphics system 24 may traverse the octree until all nodes or cells within the view frustum are identified and all other nodes or cells outside of the view frustum are culled.
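A minimal sketch of such an octree traversal with view frustum culling is shown below, assuming the frustum is available as six inward-facing planes; the node layout and names are illustrative and do not reflect the actual structure of the VR file(s).

```cpp
#include <vector>

// Minimal sketch of view frustum culling over the stored octree, in the spirit
// of steps S.180-S.182. A frustum plane satisfies n.x*x + n.y*y + n.z*z + d >= 0
// for points on its visible side.
struct Plane { double n[3]; double d; };
struct Box   { double min[3], max[3]; };

struct OctreeNode {
    Box box;                                   // axis-aligned bounds of the cell
    std::vector<int> elements;                 // elements stored at this node
    std::vector<OctreeNode> children;          // empty for leaf nodes
};

// Returns true if the box is completely outside at least one frustum plane.
bool outsideFrustum(const Box& box, const std::vector<Plane>& frustum)
{
    for (const Plane& p : frustum) {
        // Choose the box corner farthest along the plane normal ("positive vertex").
        double dist = p.d;
        for (int i = 0; i < 3; ++i)
            dist += p.n[i] * (p.n[i] >= 0.0 ? box.max[i] : box.min[i]);
        if (dist < 0.0) return true;           // even the farthest corner is behind the plane
    }
    return false;
}

// Traverse the octree, culling cells outside the frustum and collecting the
// elements of all remaining (fully or partially visible) cells.
void collectVisible(const OctreeNode& node, const std::vector<Plane>& frustum,
                    std::vector<int>& visible)
{
    if (outsideFrustum(node.box, frustum)) return;   // cull this cell and its subtree
    visible.insert(visible.end(), node.elements.begin(), node.elements.end());
    for (const OctreeNode& child : node.children)
        collectVisible(child, frustum, visible);
}
```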
[0204] After performing view frustum culling (step S.182), the stored global list of occluder candidates may be analyzed in order to cull nodes or cells of the octree that are occluded (step S.184). As indicated above, the VR file(s) may include a global list of occluder candidates which may be stored as a separate list or merged into the octree. During run-time operations, these global occluder candidates may be analyzed to identify nodes or cells of the octree that are occluded by other objects or elements within the view frustum. In addition to analyzing occluder candidates, other occlusion culling operations may be performed to cull occluded nodes or cells for the next frame. By way of example, an exemplary flow diagram for analyzing occluder candidates and performing occlusion culling is described below with reference to FIG. 20.
[0205] For all visible cells, elements to be dynamically tessellated may be identified and tessellated (step S.186). For example, graphics system 24 may analyze each visible cell to locate and identify dynamic tessellation elements. The identified elements may then be dynamically tessellated according to various factors, such as the size of the element or object on the screen. FIG. 23, described in detail below, illustrates an exemplary flow diagram for performing dynamic tessellation operations, consistent with embodiments of the invention.
[0206] Referring again to FIG. 19, after performing occlusion culling (step S.184) and dynamically tessellating elements in visible cells (step S.186), the next frame is rendered to the user (step S.188). As part of this process, graphics system 24 may generate a set of commands (such as OpenGL commands) based on the results of the rendering operations for the next frame to be rendered. The OpenGL commands may be issued by the graphics engine (such as graphics engine 242) to the graphics card or hardware (such as graphics hardware 18) to cause display 12 to render the next frame to the user.
[0207] FIG. 20 is an exemplary flow diagram for analyzing global occluder candidates and performing occlusion culling, consistent with embodiments of the invention. The exemplary flow diagram of FIG. 20 may be performed during runtime operations to perform occlusion culling using, for example, the system environment 10 of FIG. 1 or the network environment 50 of FIG. 2. For purposes of illustration, however, reference will be made to the embodiment of FIG. 1 to describe the flow diagram of FIG. 20.
[0208] To perform occlusion culling, an associated quality metric may be determined for each global occluder candidate that is within the view frustum (step S.190). For instance, based on the view frustum or octree nodes determined to be visible for the next frame, global occluder candidates in the VR file(s) may be analyzed by graphics system 24 to determine a quality metric. The quality metric may be determined based on a number of factors that indicate whether the occluder candidate may function as a good occluder for the next frame. By way of example, factors for determining a quality metric may include: the visible surface area of the occluder candidate; the distance of the occluder candidate from the viewer or camera position; the shortest distance between the center of the occluder candidate and the perimeter of the occluder candidate; the angle between the viewer or camera position for the next frame, and the point on the perimeter of the occluder that is closest to the center of the occluder (the "closest perimeter point"); and/or the number of elements occluded by the occluder candidate in previous frame(s).
[0209] For example, in accordance with an embodiment of the invention, the quality metric (Q) for an occluder candidate may be defined as follows: Q = α + n, where α is the value of the angle between the viewer position and the closest perimeter point and n is the number of elements occluded by the occluder candidate in previous frame(s). As illustrated in FIG. 21A, the center of the occluder candidate may be a middle point or center of gravity for the occluder candidate from which the closest perimeter point is determined. Further, as illustrated in FIG. 21B, the angle α may be determined from the angle between the viewer point and the closest perimeter point relative to the center of the occluder candidate. Moreover, consistent with embodiments of the invention, the number n may be determined from the number of occluded elements in the previous frame or a weighted average of occluded elements (giving more weight to the number of occluded elements from more recent frames) over a plurality of previous frames (such as the last ten frames).
[0210] Referring again to FIG. 20, the occluder candidates that are within the view frustum are sorted in accordance with their quality metric (step S.192). For instance, graphics system 24 may sort the occluder candidates in descending order based on their associated quality metric, such that occluder candidates with higher quality metrics come before other occluder candidates with lower quality metrics. By sorting occluder candidates, occluder candidates with higher quality metrics can be analyzed first and occluded cells can be identified more quickly and efficiently.
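The computation of the quality metric and the descending sort might look like the following sketch, which assumes that the center, the closest perimeter point and the previous-frame occlusion count are precomputed for each candidate; the angle is taken in radians here, and all names are illustrative.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Sketch of the quality metric Q = alpha + n of paragraph [0209] and the
// descending sort of paragraph [0210].
struct Vec3 { double x, y, z; };

static Vec3   sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static double length(const Vec3& v) { return std::sqrt(dot(v, v)); }

struct OccluderCandidate {
    Vec3 center;                  // middle point / center of gravity (FIG. 21A)
    Vec3 closestPerimeterPoint;   // perimeter point nearest the center
    double n = 0.0;               // occluded-element count from previous frame(s)
    double quality = 0.0;
};

void rankOccluders(std::vector<OccluderCandidate>& candidates, const Vec3& viewer)
{
    for (OccluderCandidate& oc : candidates) {
        // Angle at the occluder center between the viewer and the closest
        // perimeter point (FIG. 21B), in radians.
        const Vec3 toViewer    = sub(viewer, oc.center);
        const Vec3 toPerimeter = sub(oc.closestPerimeterPoint, oc.center);
        const double cosA = dot(toViewer, toPerimeter) /
                            (length(toViewer) * length(toPerimeter));
        const double alpha = std::acos(std::max(-1.0, std::min(1.0, cosA)));
        oc.quality = alpha + oc.n;                 // Q = alpha + n
    }
    std::sort(candidates.begin(), candidates.end(),
              [](const OccluderCandidate& a, const OccluderCandidate& b) {
                  return a.quality > b.quality;    // best occluders first
              });
}
```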
[0211] The analysis of occluder candidates may be performed over a predetermined time period. In accordance with an embodiment of the invention, such an analysis may also be limited in time (e.g., to a 2 ms time period) to prevent processing time of the graphics system 24 from being wasted on excessive occluder candidates that have low quality metrics or marginal chances of acting as occluders. Thus, the time period may be set according to the speed of the graphics hardware 18, so that the graphics system 24 does not spend more time identifying occluded geometry than the graphics hardware 18 would need to handle it more efficiently during rendering. This is because most conventional graphics cards include depth or z-buffering, which enables the graphics card to determine, on a pixel-by-pixel basis, which elements or objects are closer to the viewer position so that occluded pixels of elements or objects are not displayed. Thus, occluded elements or objects that are not identified within the predetermined time period by the graphics system 24 can still be processed and eliminated (i.e., not displayed) during processing by the graphics hardware 18. Accordingly, a timer may be reset (timer = 0) before starting to analyze the sorted occluder candidates (step S.194). After setting the timer, each occluder candidate may be analyzed in order to determine if it occludes cells within the view frustum (step S.196).
[0212] If the occluder candidate is determined to fully occlude one or more cells (step S.198; Yes), then each of the fully occluded cells may be culled (step S.200). As illustrated in FIG. 22A, a cell may be determined to be fully occluded if the cell is within the shadow volume of the occluder candidate (i.e., the points defining the cell are contained within all edges of the occluder candidate (OC-1)). In contrast, as illustrated in FIG. 22B, a cell may be determined not to be fully occluded if the points defining the cell are outside at least one edge of the occluder candidate (OC-1) and if the cell does not intersect any one of the edges of the occluder candidate.
[0213] If any cells are determined to be partially occluded (step S.202; Yes), then an analysis may be performed to determine if any of such cells are fully occluded by a combination of occluders. If cells are fully occluded by a combination of occluders, then any such cells may be culled (step S.204). After inspecting each occluder candidate, a determination is made as to whether the predetermined time period has expired (step S.206). If the timer has not expired (step S.206; No), then additional occluder candidates are examined. Otherwise, if the timer has expired (step S.206; Yes), the process of inspecting occluder candidates is terminated.
[0214] As indicated above, partially occluded cells may be further inspected to determine if they are fully occluded by a combination of occluders (see step S.204). For each partially blocked cell, an inspection can be made for neighboring occluders that, in combination with the occluder candidate, fully occlude the cell. As illustrated in FIG. 22C, a neighboring occluder (OC-2) may be identified by searching along the intersecting edge of the occluder candidate (OC-1) that intersects the cell. For each neighboring occluder that has a common edge with the occluder candidate along the intersecting edge, the other edges of the neighboring occluder may be analyzed to determine if the cell is occluded. If the points defining the cell are all within the other edges of the neighboring occluder (the case represented in the example of FIG. 22C), then the cell is fully occluded by the combination of the occluders. As a result, the cell is culled (step S.204).
[0215] If an intersecting edge is detected with the neighboring occluder, then additional searching may be performed for other neighboring occluders along that intersecting edge. This process may continue until the cell is determined to be fully occluded or until no additional neighboring occluders are located. In cases where an intersecting edge is identified along an edge of a neighboring occluder and no other neighboring occluders are located with respect to that edge, the cell may be determined to be only partially occluded and the process may terminate.
[0216] For example, as illustrated in FIG. 22D, a cell may first be determined to be partially occluded by an occluder candidate (OC-1). An inspection along the intersecting edge of the occluder candidate (OC-1) would reveal a neighboring occluder candidate (OC-2). To determine if the cell is fully occluded, the other edges of the neighboring occluder (OC-2) may be analyzed. With the example of FIG. 22D, such an analysis would reveal that the points defining the cell are within two of the other edges of the occluder (OC-2). However, an intersecting edge also exists with respect to the neighboring occluder (OC-2). Thus, the cell is not fully occluded by the combination of the identified occluders (OC-1 and OC-2). A search along the intersecting edge of the neighboring occluder (OC-2) would reveal that there are no additional neighboring occluders. Therefore, in the example of FIG. 22D, the process would terminate with the cell being identified as being only partially occluded.
[0217] Another example is illustrated in FIG. 22E. In this example, the cell would first be determined to be partially occluded by an occluder candidate (OC-1). An inspection along the intersecting edge of the occluder candidate (OC-1) would then reveal a neighboring occluder candidate (OC-2). To determine if the cell is fully occluded by the neighboring occluder (OC-2), the other edges of the neighboring occluder may be analyzed. In the case of FIG. 22E, such an analysis would reveal that the points defining the cell are within all of the other edges of the occluder (OC-2) and that no other intersecting edges exist. Therefore, the cell is fully occluded by the identified occluders (OC-1 and OC-2) and the cell may be culled. While another neighboring occluder (OC-3) exists, processing may terminate once it is determined that the cell is fully occluded and, thus, there is no need to search or analyze additional neighboring occluders.
[0218] FIG. 23 illustrates an exemplary flow diagram for performing dynamic tessellation, consistent with embodiments of the invention. The exemplary flow diagram of FIG. 23 may be performed during run-time operations to perform dynamic tessellation using, for example, the system environment 10 of FIG. 1 or the network environment 50 of FIG. 2. For purposes of illustration, however, reference will be made to the embodiment of FIG. 1 to describe the flow diagram of FIG. 23.
[0219] As illustrated in FIG. 23, the dynamic tessellation elements for each visible cell within the view frustum are first identified (step S.220). As indicated above, curved and rounded elements may be filtered and stored as dynamic tessellation elements in the VR file(s). Accordingly, when inspecting each visible cell, graphics system 24 may readily identify and single out those elements stored in the VR file(s) as dynamic tessellation elements. Consistent with embodiments of the invention, one or more types of dynamic tessellation elements may be stored in the VR file(s), such as cone element types, torus element types, and surface-of-revolution types.
[0220] For each identified dynamic tessellation element, a filtering process may be performed based on the screen size of the element or object (step S.224). The filtering process may be performed so that only identified elements of a significant or predetermined size in the next frame will be dynamically tessellated. To determine or estimate the screen size of an element, one or more factors may be considered, including the actual size of the element and/or the distance of the element from the viewer or camera position. By inspecting or estimating the screen size of an element, graphics system 24 can determine whether to filter the element so that it will be dynamically tessellated for the next frame or whether to reject the element so that it will not be dynamically tessellated and/or displayed. As a result, elements that are too small or insignificant for dynamic tessellation (such as elements that have a screen size of only a few pixels) may be identified and rejected by graphics system 24.
[0221] Various techniques may be employed to filter dynamic tessellation elements. For instance, in accordance with an embodiment of the invention, the screen size for an element may be estimated and compared with a predetermined threshold. If the estimated screen size is below the predetermined threshold, then the element may be rejected. Otherwise, if the estimated screen size is equal to or greater than the predetermined threshold, the element may be filtered so that it will be dynamically tessellated and rendered in the next frame. Optionally, elements determined to have a screen size that is below a predetermined threshold may have their tessellation level set to zero so that they are not displayed.
[0222] Embodiments consistent with the invention may estimate or calculate a screen size for each type of dynamic tessellation element. For example, if the identified element is a cone element type (see, for example, FIG. 10A), then the screen size (Sc) may be computed as follows: Sc = (50 x H)/d, where H represents the visible radius of the cone and d represents the distance between the viewer or camera position and the center point (C) along the main axis of the cone. Referring to FIG. 10A, the visible radius H of a cone may be computed by taking the distance between the center point (C) and either the center point of the top (T) or the center point of the bottom (B) of the cone. If the screen size Sc of the cone element type is determined to be less than 0.3, then the element may be rejected. Otherwise, if the screen size Sc of the cone is equal to or greater than 0.3, then the element may be further processed for dynamic tessellation. The above-noted example assumes a scale size that is measured in centimeters. If a different scale size is used, then the exemplary algorithm for computing the screen size (Sc) and the exemplary thresholds may be adjusted accordingly.
[0223] Similar processing may be performed for identified torus element types. For example, if the identified element is a torus element type (see, for example, FIG. 10B), then the screen size (ST) may be computed as follows: ST = (50 x V)/d, where V represents the visible radius of the torus and d represents the distance between the viewer or camera position and the center point (C) of the torus. Referring to FIG. 10B, the visible radius V of a torus may be calculated by summing the internal and outer radii (R1 and R2) of the torus. With a torus element type, the element may be rejected if the screen size ST of the element is determined to be less than 0.3. Otherwise, if the screen size ST of the torus is equal to or greater than 0.3, then the element may be filtered for dynamic tessellation. Once again, the above-noted example assumes a scale size that is measured in centimeters. If a different scale size is used, then the exemplary algorithm for computing the screen size (ST) and the exemplary thresholds may be adjusted.
[0224] Surface-of-revolution element types may also be filtered, consistent with embodiments of the invention. For a surface-of-revolution element (see, for example, FIG. 10C), the screen size (Ss) may be computed as follows: Ss = (50 x V)/(DA + Dc), where V represents the visible radius of the element, DA represents the distance between the viewer or camera position (E) and the center point (CA) of the arc or ellipse of the element, and Dc represents the distance between the viewer or camera position (E) and the center point (Cs) of the surface-of-revolution element. Referring to FIG. 10C, the visible radius V may be calculated by summing the radii of the arc or ellipse (RA1 and RA2) with the radius of the revolution (Rs). With a surface-of-revolution element type, the element may be rejected if the screen size Ss of the element is determined to be less than 0.03. Otherwise, if the screen size Ss of the element is equal to or greater than 0.03, then the surface-of-revolution element may be filtered for dynamic tessellation. Once again, the above-noted example assumes a scale size that is measured in centimeters. If a different scale size is used, then the exemplary algorithm for computing the screen size (Ss) and the exemplary thresholds may be adjusted accordingly.
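For illustration, the following sketch applies the screen-size filter for the cone case (centimeter scale assumed); the torus and surface-of-revolution cases follow the same pattern with their own visible radii, distances and thresholds. The names are illustrative only.

```cpp
#include <cmath>

// Sketch of the screen-size filter of paragraph [0222] for a cone element.
struct Vec3 { double x, y, z; };

static double distance(const Vec3& a, const Vec3& b)
{
    const double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}

// Returns true if the cone is large enough on screen to be dynamically
// tessellated; 'visibleRadius' is H (distance from the cone center to the
// center of its top or bottom) and 'center' is the point C on the main axis.
bool filterCone(const Vec3& viewer, const Vec3& center, double visibleRadius)
{
    const double d  = distance(viewer, center);
    const double Sc = (50.0 * visibleRadius) / d;   // Sc = (50 x H) / d
    return Sc >= 0.3;                               // reject elements below the threshold
}
```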
[0225] Referring again to FIG. 23, for each of the filtered elements, a tessellation level may be computed (step S.228). The tessellation level may depend on or relate to the screen size of the element or object. In accordance with embodiments of the invention, the tessellation level determined during runtime operations may control the manner or level of detail by which an element is rendered. Thus, the tessellation level may be based on a predetermined scale or related to a value which indicates the number of triangles used for rendering the element or object. Further, as indicated above, if the screen size of an element is determined to be below a predetermined threshold, then the tessellation level for the element may be set to zero so that the element will not be displayed.
[0226] Consistent with embodiments of the invention, a tessellation level may be computed for each type of dynamic tessellation element. For example, in order to determine the tessellation level for a cone element type (see, for example, FIG. 10A), the tessellation level (Lc) may be computed as follows: Lc = ((500 x R)/d) x Q, where R represents the larger of the two radii (Rt or Rb) of the cone, d represents the distance between the viewer or camera position and the center point (C) along the main axis of the cone, and Q represents a quality level metric. The quality level Q may be user defined to permit control or adjustment of the level of detail in the rendered frame. By way of example, a default value (such as 1) may be assigned to the quality level Q to represent normal processing. By increasing or decreasing the quality level Q from the default value, a user can adjust the level of detail in the rendered frame so that a dynamically tessellated cone appears with lesser or greater detail. The above-noted example assumes a scale size that is measured in centimeters. If a different scale size is used, then the exemplary algorithm for computing the tessellation level (Lc) may be adjusted accordingly.
[0227] The tessellation level for torus element types can also be determined, consistent with embodiments of the invention. For instance, to dynamically tessellate a torus element type (see, for example, FIG. 10B), a horizontal tessellation level (LH) and a vertical tessellation level (Lv) may be computed. The horizontal tessellation level (LH) may be used for dynamically tessellating the end(s) of the torus and may be computed as follows: LH = ((50 x V)/d) x Q, where V represents the sum of the two radii (R1 and R2) of the torus, d represents the distance between the viewer or camera position and the center point (C) of the torus, and Q represents a quality level metric. The vertical tessellation level (Lv) may be used for dynamically tessellating along the torus and may be computed as follows: Lv = ((150 x V)/d) x Q, where V represents the sum of the two radii (R1 and R2) of the torus, d represents the distance between the viewer or camera position and the center point (C) of the torus, and Q represents a quality level metric. For both the horizontal and vertical tessellation levels, the quality level Q may be defined by a user to permit control or adjustment of the level of detail in the rendered frame. Once again, by increasing or decreasing the quality level Q from the default value (such as a default value of 1), a user can adjust the level of detail in the rendered frame so that a dynamically tessellated torus appears with lesser or greater detail. The above-noted example assumes a scale size that is measured in centimeters. If a different scale size is used, then the exemplary algorithms for computing the tessellation levels (LH and Lv) may be adjusted accordingly.
[0228] Surface-of-revolution elements may also be dynamically tessellated. For instance, in order to dynamically tessellate a surface-of-revolution element type (see, for example, FIG. 10C), a revolution tessellation level (LR) and an arc tessellation level (LA) may be computed. The revolution tessellation level (LR) may be used for dynamically tessellating about the revolution of the element and may be computed as follows: LR = ((αs x Rs x 20,000)/(Ds)^2) x Q, where αs represents the angle of the revolution, Rs represents the radius of the revolution, Ds represents the distance between the viewer or camera position (E) and the center point (Cs), and Q represents a quality level metric. The arc tessellation level (LA) may be used for dynamically tessellating along the arc of the element and may be computed as follows: LA = ((αA x (RA1 + RA2) x 20,000)/(DA)^2) x Q, where αA represents the angle of the profile or ellipse, RA1 and RA2 represent the radii of the profile or ellipse, DA represents the distance between the viewer or camera position (E) and the center point (CA) of the profile or ellipse, and Q represents a quality level metric. For both tessellation levels, the quality level Q may be defined by a user to permit control or adjustment of the level of detail in the rendered frame. Thus, by increasing or decreasing the quality level Q from the default value (such as a default value of 1), a user can adjust the level of detail in the rendered frame so that the dynamically tessellated element appears with lesser or greater detail. Once again, the above-noted example assumes a scale size that is measured in centimeters. If a different scale size is used, then the exemplary algorithms for computing the tessellation levels (LR and LA) may be adjusted.
[0229] In accordance with an embodiment of the invention, the tessellation level for a dynamically tessellated element may be restricted to an upper value and/or a lower value. To restrict the tessellation level of dynamically tessellated elements, upper and/or lower clamp levels may be predefined. For example, for cone type elements that are dynamically tessellated, an upper clamp level (such as 36) may be predefined. With such an upper clamp level, the tessellation level for a cone element may be restricted and reset to 36 in cases where the computed tessellation level is greater than the upper clamp level.
[0230] Clamp levels may also be predefined for torus type elements. For instance, upper and lower clamp levels may be predefined for the horizontal tessellation level LH and the vertical tessellation level Lv. By way of example, an upper clamp level of 36 and a lower clamp level of 3 may be predefined for the horizontal tessellation level LH. Further, by way of example, an upper clamp level of 64 and a lower clamp level of 1 may be predefined for the vertical tessellation level Lv. For both the horizontal and vertical tessellation levels, the tessellation level may be restricted and reset when the computed value for the tessellation level exceeds the predefined upper or lower clamp levels.
[0231] The tessellation levels for a surface-of-revolution element may also be restricted by predefined clamp levels. For example, upper and lower clamp levels may be predefined for the revolution tessellation level LR and the arc tessellation level LA. By way of example, an upper clamp level of 64 and a lower clamp level of 8 may be predefined for the revolution tessellation level LR. Further, by way of example, an upper clamp level of 32 and a lower clamp level of 4 may be predefined for the arc tessellation level LA. For both the revolution and arc tessellation levels, the tessellation level may be restricted and reset when the computed value for the tessellation level exceeds the predefined upper or lower clamp levels.
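A minimal sketch of the run-time level computation with clamping for the cone and torus cases is shown below (centimeter scale assumed); the lower clamp of 1 used for the cone is an assumption, since only an upper clamp level is given above, and all names are illustrative.

```cpp
#include <algorithm>
#include <cmath>

// Sketch of the run-time tessellation-level computation with clamping, as in
// paragraphs [0226], [0227], [0229] and [0230]. Q is the user-defined quality
// level (default 1).
static int clampLevel(double level, int lower, int upper)
{
    return std::max(lower, std::min(upper, static_cast<int>(level)));
}

// Cone: Lc = ((500 x R) / d) x Q, restricted by an upper clamp level of 36.
int coneTessellationLevel(double largerRadius, double distanceToViewer, double quality = 1.0)
{
    const double lc = (500.0 * largerRadius / distanceToViewer) * quality;
    return clampLevel(lc, 1, 36);      // the lower bound of 1 is an assumption
}

// Torus: LH = ((50 x V) / d) x Q clamped to [3, 36],
//        LV = ((150 x V) / d) x Q clamped to [1, 64],
// where V is the sum of the two torus radii.
void torusTessellationLevels(double visibleRadius, double distanceToViewer,
                             int& levelH, int& levelV, double quality = 1.0)
{
    levelH = clampLevel((50.0  * visibleRadius / distanceToViewer) * quality, 3, 36);
    levelV = clampLevel((150.0 * visibleRadius / distanceToViewer) * quality, 1, 64);
}
```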
[0232] Referring again to FIG. 23, for each element to be dynamically tessellated, a determination may be made as to whether the tessellation level for the next frame to be rendered is the same as the tessellation level used in a previous frame (step S.232). For this purpose, the vertex data for dynamically tessellated elements may be buffered in memory to facilitate such an analysis. Further, the tessellation levels for previously rendered dynamic tessellation elements may be stored in a table or file. If the element was not rendered in the previous frame at the same tessellation level (step S.232; No), then a vertex array may be computed for rendering the dynamically tessellated element according to the tessellation level for the next frame (step S.238). However, if the element was previously rendered at the same tessellation level (step S.232; Yes), then the stored vertex data for rendering the element in the previous frame may be used (step S.236). By using the stored vertex data, the element can be rendered faster in the next frame without the need to use any processing resources for computing the array. As further described below, the vertex data may be stored in a memory buffer such as video, AGP or backend memory (see FIG. 24A).

[0233] To generate vertex data based on the tessellation level (step S.238), various techniques may be employed. For example, the number of triangles for rendering an element may be computed for each type of dynamic tessellation element as a function of the tessellation level. Consistent with embodiments of the invention, Table 1 includes exemplary relationships between the computed tessellation level and the number of triangle segments used to render various types of dynamic tessellation elements.
TABLE 1

Dynamic tessellation element type    Number of triangle segments
Open cone                            2Lc
Closed cone                          4Lc (2Lc for the side surface plus Lc per closed end)
Open torus                           2Lv x LH
Closed torus                         (2Lv x LH) + 2Lv
Surface of revolution                2LR x LA
[0234] In Table 1, an open cone refers to a cone element in which both ends of the cone are rendered as an open surface. In such cases, the side surface of the cone element may be rendered with 2Lc number of triangles. A closed cone refers to a cone element in which both ends of the cone are rendered with a solid or closed surface (such as a cap). Closed cones may be rendered using 2Lc number of triangles for the side surface and Lc triangles for each end of the cone. Thus, a total of 4Lc triangles (2Lc + Lc + Lc) may be used to render a closed cone. Of course, if only one end of a cone element is closed, then a total of 3Lc triangles (2Lc + Lc) may be used to render the cone.
[0235] Torus elements may also be rendered in a similar fashion. For instance, in the table, an open torus refers to a torus element in which both ends of the torus are rendered as an open surface. In such cases, the side surface of the torus element may be rendered with (2Lv x LH) number of triangles. On the other hand, a closed torus refers to a torus element in which both ends of the torus are rendered with a solid or closed surface (such as a cap). A closed torus may be rendered using (2Lv x LH) number of triangles for the side surface and Lv triangles for each end of the torus. Thus, a total of (2Lv x LH) + 2Lv triangles may be used to render a closed torus. Of course, if only one end of a torus element is closed and the other end is open, then a total of (2Lv x LH) + Lv triangles may be used to render the torus.
[0236] Various techniques may be used to render dynamic tessellation elements according to the computed tessellation level(s). For example, to render a cone element type, the top and bottom ellipses of the cone (see FIG. 10A) may be divided by Lc number of points placed about the perimeter of each ellipse. Thereafter, a set of triangles may be defined for rendering the cone by connecting the points of the top and bottom ellipses. The triangle segments may be defined as a triangle list or triangle strip. For an open cone, this will result in the cone being rendered with 2Lc number of triangles. For a closed cone, the above-noted process will result in the cone being rendered with 4Lc number of triangles, defined as a single triangle list or defined with up to three triangle strips.
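The following is a minimal sketch, in the spirit of the paragraph above, of tessellating the side surface of an open cone with 2Lc triangles; it is not the code of Appendix C, and it assumes circular (rather than elliptical) cross-sections with the cone axis along +z.

```cpp
#include <cmath>
#include <vector>

// Tessellate the side surface of an open cone with 2 x Lc triangles by
// connecting Lc points on the bottom and top circles (paragraph [0236]).
struct Vertex { float x, y, z; };

std::vector<Vertex> tessellateOpenCone(float bottomRadius, float topRadius,
                                       float height, int levelLc)
{
    std::vector<Vertex> triangles;               // triangle list, 3 vertices per triangle
    const float step = 2.0f * 3.14159265f / levelLc;

    for (int i = 0; i < levelLc; ++i) {
        const float a0 = i * step, a1 = (i + 1) * step;
        const Vertex b0 { bottomRadius * std::cos(a0), bottomRadius * std::sin(a0), 0.0f };
        const Vertex b1 { bottomRadius * std::cos(a1), bottomRadius * std::sin(a1), 0.0f };
        const Vertex t0 { topRadius    * std::cos(a0), topRadius    * std::sin(a0), height };
        const Vertex t1 { topRadius    * std::cos(a1), topRadius    * std::sin(a1), height };

        // Two triangles per segment connecting the bottom and top circles.
        triangles.push_back(b0); triangles.push_back(b1); triangles.push_back(t1);
        triangles.push_back(b0); triangles.push_back(t1); triangles.push_back(t0);
    }
    return triangles;                            // 2 x Lc triangles in total
}
```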
[0237] Similar techniques may be used for rendering other types of dynamic tessellation elements. For example, to render a torus element type, the inner ellipse or circle of the torus (having a radius R2; see FIG. 10B) may be divided by LH number of points placed about the perimeter of the ellipse. Then, Lv number of inner ellipses may be defined at equal intervals about the center (C) of the torus, with the center of each ellipse being positioned at a distance equal to the larger radius (R1) from the center (C) of the torus. Finally, a set of triangles may be defined for rendering the torus by connecting the points of the ellipses. The triangle segments may be defined as a triangle list or triangle strip. For an open torus, this will result in the torus being rendered with 2Lv x LH number of triangles. For a closed torus, this process will result in the torus being rendered with (2Lv x LH) + 2Lv number of triangles, defined as a single triangle list or defined with up to three triangle strips.
[0238] Surface-of-revolution element types may also be dynamically tessellated and rendered according to the computed tessellation levels. For example, to render a surface-of-revolution element type, the arc of the profile or ellipse of the element may be divided by LA points. Then, LR arc profile segments may be defined at equal intervals about the center (Cs; see FIG. 10C) of the surface-of-revolution. Finally, a set of triangles may be defined for rendering the surface-of-revolution by connecting the points of the arc profile segments. As a result, the surface-of-revolution is rendered with 2LR x LA triangles, defined as a single triangle list or as a single triangle strip.
[0239] Various techniques may be used for rendering dynamic tessellation elements, including those described above. Such techniques may be implemented through software-based components. By way of a non-limiting example, Appendix C includes exemplary programming code prepared in C++ for rendering a cone element. The exemplary programming code of Appendix C generates triangle segments for a cone element based on a determined tessellation level for the element. By way of further example, Appendix D includes exemplary programming code prepared in C++ for tessellating torus or elbow elements. In the exemplary code of Appendix D, triangle segments for an elbow or torus element are generated based on determined tessellation levels. Moreover, in Appendix E, exemplary C++ programming code is provided for generating triangle segments for rendering a surface-of-revolution element based on determined tessellation levels for the element.
[0240] Consistent with embodiments of the invention, memory management operations may be performed to improve graphics rendering and system performance. For instance, memory management operations may be performed for allocating and handling data stored in video memory and/or accelerated graphics port (AGP) memory. Further, one or more rendering modules may be provided with the graphics system (such as graphics system 24 of FIG. 1) so that rendering operations are optimized for the particular type of graphics hardware employed by the system. A software-based rendering module may be provided for each type of graphics hardware (GeForce-type graphics cards, etc.) supported by the system. The specifics of the graphics hardware may be automatically detected when the VR file(s) are loaded or may be specified by the user during system set-up. Depending on the type of graphics hardware used during run-time operations, the system may select the most suitable rendering module so that rendering operations can be performed efficiently.

[0241] By way of example, for systems employing a GeForce-type graphics card, an optimized rendering module may be provided that includes a custom memory manager. During system set-up operations, the custom memory manager may load and store vertex array data from the VR file(s) into the video memory of the graphics card. If video memory is not available, then the data may be stored in the system or AGP memory. In either case, storing as much vertex array data as possible in the video memory of the graphics card can reduce accesses to system memory and improve the efficiency of run-time operations.
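By way of a non-limiting illustration of the module-selection step, a renderer factory might inspect the renderer string reported by the driver (e.g., via glGetString(GL_RENDERER)) and fall back to a generic path for unrecognized hardware. The class and module names below are illustrative assumptions, not the actual implementation:

#include <memory>
#include <string>

// Abstract rendering module; concrete modules would wrap the hardware-specific
// vertex-array handling and memory management described above. (Illustrative only.)
struct RenderModule {
    virtual ~RenderModule() = default;
    virtual void uploadVertexArrays() = 0;  // e.g., place vertex arrays in video/AGP memory
    virtual void drawFrame() = 0;           // issue the OpenGL calls for one frame
};

struct GeForceModule : RenderModule {       // optimized path with a custom memory manager
    void uploadVertexArrays() override { /* load VR file vertex arrays into video memory */ }
    void drawFrame() override { /* draw from resident vertex arrays */ }
};

struct GenericModule : RenderModule {       // fallback path for unrecognized hardware
    void uploadVertexArrays() override { /* keep vertex arrays in system memory */ }
    void drawFrame() override { /* draw from system-memory vertex arrays */ }
};

// Select the most suitable module for the detected hardware (or a user setting).
std::unique_ptr<RenderModule> selectRenderModule(const std::string& rendererString)
{
    if (rendererString.find("GeForce") != std::string::npos)
        return std::make_unique<GeForceModule>();
    return std::make_unique<GenericModule>();
}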
[0242] Referring to FIGS. 24A-24C, exemplary systems and methods for performing memory management operations will be described, consistent with embodiments of the invention. The memory management operations described below may be performed to provide efficient rendering of complex virtual environments based on data from VR file(s). The exemplary features of the memory management operations, such as those described below with reference to FIGS. 24B and 24C, may be performed by one or more software-based modules.
[0243] As illustrated in FIG. 24A, memory management operations may be performed using a video memory 254 and an AGP memory 256. Video memory 254 may be implemented with the video memory of a graphics card (such as a graphics card provided as part of graphics hardware 18 in FIG. 1). AGP memory 256 may be implemented with memory reserved in the system environment or platform (such as reserved memory in computing platform 20 or memory 30 in FIG. 1). Further, consistent with embodiments of the invention, direct access to AGP memory 256 may be provided to the graphics card through an AGP bus or system bus. Such access by the graphics card may be performed without involvement or use of the system CPU 260 (such as a CPU of computing platform 20 in FIG. 1).
[0244] As illustrated in FIG. 24A, a backend memory 252 may also be reserved or defined. Backend memory 252 may be allocated using video memory 254 and/or AGP memory 256. For example, to handle complex virtual environments represented by large or massive models, backend memory 252 may have a total capacity of approximately 4 MB. In contrast, video memory 254 and AGP memory 256 may be approximately 16 MB in total memory size. During run-time operations, backend memory 252 may be used as a first-in, first-out (FIFO) queue to overwrite vertex arrays or data representing previously rendered elements with data representing other elements to be rendered to the user.
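As a minimal sketch of the FIFO behavior described above (assuming a fixed-size buffer and ignoring alignment and per-element bookkeeping), backend memory 252 may be modeled as a ring allocator whose write cursor wraps around and overwrites the oldest vertex data; the class below is illustrative rather than the actual memory manager:

#include <cstddef>
#include <vector>

// Very small model of a FIFO "backend" buffer: allocations are handed out
// sequentially, and when the end of the buffer is reached the write cursor
// wraps around, overwriting the vertex data of elements rendered earlier.
class BackendBuffer {
public:
    explicit BackendBuffer(std::size_t capacityBytes)
        : storage_(capacityBytes), cursor_(0) {}

    // Returns a pointer to 'bytes' of space, or nullptr if the request can
    // never fit. Contents past the wrapped cursor are considered overwritable.
    void* allocate(std::size_t bytes)
    {
        if (bytes > storage_.size()) return nullptr;          // larger than the whole buffer
        if (cursor_ + bytes > storage_.size()) cursor_ = 0;   // wrap: overwrite oldest data
        void* p = storage_.data() + cursor_;
        cursor_ += bytes;
        return p;
    }

private:
    std::vector<unsigned char> storage_;
    std::size_t cursor_;
};

// Example: a 4 MB backend buffer, as suggested above.
// BackendBuffer backend(4u * 1024u * 1024u);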
[0245] FIG. 24B is an exemplary flow diagram of memory management operations that may be performed during system set-up (i.e., during loading of the VR file(s)). As illustrated in FIG. 24B, during initialization or system set-up, video memory 254, AGP memory 256 and backend memory 252 may be allocated or reserved (step S.242). As part of this process, all non-dynamic tessellation elements defined in the VR file(s) may be loaded and stored into video memory 254 and/or AGP memory 256 (step S.244). For example, all non-dynamic tessellation elements defined in the octree file (such as octree file 52) may be loaded and stored in video memory 254 and/or AGP memory 256. If there is insufficient memory to store all of the non-dynamic tessellation elements in video memory 254 and/or AGP memory 256, then any additional elements may be stored in system memory (such as memory 30 in FIG. 1) and later transferred by system CPU 260 into backend memory 252 when needed to render a particular frame.

[0246] If there is available memory after storing all of the non-dynamic tessellation elements into video memory 254 and/or AGP memory 256 (step S.246; Yes), then dynamic tessellation elements defined in the VR file(s) may be stored in the remaining storage areas of the video memory and/or AGP memory as needed or permitted (step S.248). For example, in preparation for rendering the first frame, dynamic tessellation elements may be pre-loaded into memory. Dynamic tessellation elements may be loaded according to a default or initial position of the camera for the first frame to be rendered. As run-time operations are initiated and performed, additional memory management operations may be performed, as further described below.
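A hedged sketch of the pre-loading pass of steps S.242 through S.246 follows; the pool and structure names are illustrative assumptions. Each non-dynamic tessellation element is assigned to the fastest memory pool with room, with system memory as the overflow:

#include <cstddef>
#include <vector>

enum class Pool { Video, Agp, System };

struct ElementArrays {
    std::size_t sizeBytes;  // size of the element's vertex/index data
    Pool pool;              // where the data ended up
};

// Assign each non-dynamic tessellation element to the fastest pool that can hold
// it: video memory first, then AGP memory, then system memory (from which data
// is later copied into the backend buffer when needed for a frame).
void preloadNonDynamicElements(std::vector<ElementArrays>& elements,
                               std::size_t videoBytesFree, std::size_t agpBytesFree)
{
    for (ElementArrays& e : elements) {
        if (e.sizeBytes <= videoBytesFree)    { e.pool = Pool::Video;  videoBytesFree -= e.sizeBytes; }
        else if (e.sizeBytes <= agpBytesFree) { e.pool = Pool::Agp;    agpBytesFree   -= e.sizeBytes; }
        else                                  { e.pool = Pool::System; }
    }
}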
[0247] FIG. 24C is an exemplary flow diagram of memory management operations that may be performed during run-time operations. Consistent with embodiments of the invention, the features of FIG. 24C may be performed for rendering frames of a complex virtual environment. For example, the exemplary memory management operations of FIG. 24C may be performed to facilitate the rendering of dynamic tessellation elements in each frame.
[0248] During run-time operations, a simulation loop may be executed to simulate and render each frame based on input from a user (see, for example, FIG. 7). During the simulation phase, collision detection and other operations may be performed to determine the viewer or camera position for the frame to be rendered. Based on the viewer position for the next frame, visibility and culling operations may be performed as part of a rendering phase to identify the visible elements for the frame. As indicated above, visible elements may include non-dynamic tessellation elements and dynamic tessellation elements. Consistent with embodiments of the invention, when displaying each rendered frame, all of the visible non-dynamic tessellation elements that are defined in the video or AGP memory may first be displayed by the graphics hardware on a display terminal. As part of this process, if some of the visible non-dynamic tessellation elements are stored in system memory (i.e., due to insufficient video or AGP memory during preloading), then any such elements may be copied to the backend memory to permit the graphics hardware to display them to the user. Thereafter, each of the dynamic tessellation elements to be displayed in the frame may be handled in accordance with the features of FIG. 24C.
[0249] As illustrated in FIG. 24C, for each dynamic tessellation element in the frame, a determination may be made whether the dynamic tessellation element was rendered in a previous frame (step S.250). For this purpose, the vertex data for dynamically tessellated elements may be buffered in memory (such as the video, AGP or backend memory) to facilitate such an analysis. Further, the tessellation levels for previously rendered dynamic tessellation elements may be stored in a table or file. If the element was previously rendered (step S.250; Yes), then an analysis can be made to determine if the tessellation level has changed (step S.252). For example, after determining the tessellation level of the dynamic tessellation element for the next frame, the tessellation level may be compared to the tessellation level for the same element in the previous frame. If there is no change in the tessellation level (step S.252; No), then the previously stored or buffered vertex data for rendering the dynamic tessellation element may be identified and used for the next frame (step S.254). However, if there is a change in the tessellation level (step S.252; Yes) or if the element was not previously rendered (step S.250; No), then an analysis may be performed to estimate the amount of memory space required for storing the triangle segments for the dynamic tessellation element (step S.256).
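By way of a non-limiting illustration of steps S.250 through S.254, the per-element check may be reduced to a lookup in a cache that maps each element to the tessellation level at which it was last rendered; the container and function names below are illustrative assumptions:

#include <cstdint>
#include <unordered_map>

// Cache of the tessellation level at which each dynamic tessellation element
// was last rendered (element id -> level). Names are illustrative.
using TessCache = std::unordered_map<std::uint32_t, int>;

// Returns true if the buffered vertex data from the previous frame can be
// reused for this element (steps S.250-S.254); otherwise the caller must
// retessellate the element at 'newLevel' (step S.256 onward), and the cache
// is updated accordingly.
bool canReuseVertexData(TessCache& cache, std::uint32_t elementId, int newLevel)
{
    auto it = cache.find(elementId);
    if (it != cache.end() && it->second == newLevel)
        return true;                 // rendered before at the same level: reuse buffer
    cache[elementId] = newLevel;     // new element or changed level: rebuild vertex array
    return false;
}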
[0250] To estimate the memory requirements (step S.256), various approaches may be used. For example, based on the tessellation level, the number of triangles for rendering the dynamic tessellation element may be computed. As indicated above, the number of triangles can be computed directly from the tessellation level depending on the element type (open cone, closed cone, open torus, closed torus, surface-of-revolution, etc.). Relationships, such as the exemplary relationships provided in Table 1, may be used to compute the number of triangles based on the determined tessellation level(s) for the element. With the computed number of triangles, an estimate of the memory requirements can be determined based on the number of bytes required per triangle or triangle strip.
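Using the per-vertex and per-index byte counts discussed below (32 bytes per vertex and 4 bytes per index), the estimate of step S.256 for a triangle strip may be sketched as follows; the helper name is an illustrative assumption:

#include <cstddef>

// Bytes required for a triangle strip describing 'triangles' triangles: a strip
// of n vertices describes n - 2 triangles, so n = triangles + 2 vertices and
// n indices are needed.
std::size_t stripBytes(std::size_t triangles,
                       std::size_t bytesPerVertex = 32, std::size_t bytesPerIndex = 4)
{
    std::size_t n = triangles + 2;
    return n * bytesPerVertex + n * bytesPerIndex;
}

// For the closed cone worked through below (one 100-triangle side strip and two
// 50-triangle cap strips): stripBytes(100) + 2 * stripBytes(50) == 3672 + 3744 == 7416 bytes.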
[0251] For example, triangle lists or triangle strips may be defined using vertex arrays and index arrays. Each vertex defined in the vertex array may require a predetermined amount of memory (such as 32 bytes), and each index or pointer in the index array may require another predetermined amount of memory (such as 4 bytes). Thus, for a single triangle defined by three vertices, the memory requirements would be 108 bytes = (3 x 32 bytes) + (3 x 4 bytes). With triangle strips, each triangle is built using two vertices from a previous triangle plus one new vertex. As a result, n - 2 triangles can be constructed from n vertices with a triangle strip. Thus, if a triangle strip represents three triangles, n = 5 and the memory requirements for the triangle strip would be 180 bytes = (5 x 32 bytes) + (5 x 4 bytes). By analyzing the exact combination of segments (triangles and/or triangle strips) for each dynamic tessellation element, an estimate of the total memory requirements for the element can be computed. By way of example, assume a dynamic tessellation element relates to a closed cone that is rendered by three triangle strips, including one triangle strip of 100 triangles for the side surface and one triangle strip of 50 triangles for each of the closed end surfaces. In such a case, the total memory requirements for the closed cone element would be computed as 7416 bytes = [(102 x 32 bytes) + (102 x 4 bytes)] + 2 x [(52 x 32 bytes) + (52 x 4 bytes)].

[0252] After determining the memory requirements for the dynamic tessellation element (step S.256), a determination is made whether there is sufficient available memory in the video or AGP memory (step S.258). If there is sufficient memory space (step S.258; Yes), then the dynamic tessellation element is tessellated by the system CPU using the available memory space in the video or AGP memory (step S.260). If the tessellation level of the element is less than the level used in the previous frame, the same memory location in the video or AGP memory may be used for tessellating the element for the next frame, and any free or additional space in memory may be flagged for other elements to be rendered in the frame. If there is not sufficient memory space in the video or AGP memory (step S.258; No), then the dynamic tessellation element may be tessellated by the system CPU using available system memory (step S.262). Thereafter, the vertex array data for the tessellated element may be copied and stored in the backend memory to facilitate the subsequent display of the element by the graphics hardware (step S.264).

[0253] After identifying or generating the vertex data for the dynamic tessellation element (steps S.254, S.260 or S.264), commands are issued to the graphics hardware by the graphics engine to cause the display of the element in the rendered frame (step S.266). As indicated above, OpenGL commands may be used by the graphics system to cause the graphics hardware to display visible elements and objects in the rendered frame. The dynamic tessellation element may be displayed by the graphics hardware based on the vertex data stored in the video, AGP and/or backend memory. Consistent with embodiments of the invention, the display of each dynamic tessellation element after identifying or generating the vertex data may be performed to facilitate re-use of the backend memory for other elements to be rendered and displayed. This is because any memory space used in the backend memory for the dynamic tessellation element may be freed for use as soon as the element is displayed. Alternatively, all visible dynamic tessellation elements may be prepared in video, AGP and/or backend memory before commands are issued to the graphics hardware to permit their display. In such a case, the graphics hardware would not display the visible dynamic tessellation elements until the vertex data for all of the elements have been identified or prepared in the video, AGP, and/or backend memory.
[0254] As further illustrated in FIG. 24C, the above-described memory management operations may be repeated for each dynamic tessellation element of the frame to be rendered. Thus, if there are additional dynamic tessellation elements (step S.268; Yes), then processing loops back to step S.250. Otherwise, after analyzing and processing all dynamic tessellation elements (step S.268; No), the process may terminate until the visible dynamic tessellation elements for the next frame are identified and prepared for display, at which point the entire process of FIG. 24C may be re-initiated for each element (starting at step S.250).
[0255] Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. For example, in connection with the network environment 50 of FIG. 2, various training applications may be implemented to provide interactive displays to multiple users. Such training applications may be implemented with a pointer to permit a trainer or any user to highlight visible objects or guide users through the complex virtual environment. The pointer may be a predetermined 2D or 3D object (such as an arrow) that is displayed in the rendered frame according to input from the trainer or user. For the trainer or user controlling the pointer, the pointer may always be displayed at the center of the rendered frame. This technique may provide better visualization and immersion effects for the user, especially for navigating through the complex virtual environment. Also, the size of the pointer may be scaled according to the distance from the viewer position to the collision point with an object along the line-of-sight. For other users observing the pointer, the pointer may be displayed in the rendered frame according to their distance and/or angle from the position of the pointer or trainer in the complex environment.
[0256] To perform operations in a rendered frame, other types of pointers may be selected and controlled by a user. For instance, predetermined pointers such as a question mark symbol ("?") may be displayed to permit a user to get information or specifications (such as weight, dimensions, cost, etc.) for objects represented in the complex environment. Other predetermined pointers may also be provided to permit a user to perform specific functions when selected. For example, predetermined pointers may be displayed in rendered frames to permit a user to rotate or move objects, open or close doors, and/or perform other functions.
[0257] In addition, the invention is not limited to the particulars of the embodiments disclosed herein. For example, the individual features of each of the disclosed embodiments may be combined or added to the features of other embodiments. In addition, the steps of the disclosed methods herein may be combined or modified without departing from the spirit of the invention claimed herein. Accordingly, it is intended that the specification and embodiments disclosed herein be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
APPENDIX A
// Tests whether sphere s1, swept along displacement v, intersects sphere s2.
bool Sphere::Intersect_(Sphere s1, Vector v, Sphere s2)
{
    float fA = VecLengthSquared(v);
    Vector cdiff = VecSubtract(s2.center, s1.center);
    float fC = VecLengthSquared(cdiff);
    float sR = s1.r + s2.r;
    float sRsqr = sR * sR;
    if (fA > 0)
    {
        float fB = VecDot(cdiff, v);
        if (fB <= 0)
        {
            if ((-fA) <= fB)
                return ((fA*fC) - (fB*fB)) <= (fA*sRsqr);
            else
                return ((fA + 2*fB) + fC) <= sRsqr;
        }
    }
    return fC <= sRsqr;
}
APPENDIX B
/// Test a moving point (given as a segment) against a unit sphere at the origin.
inline bool vrSegmentUnitSphereAtOriginTest(const VRVector3D &start,
    const VRVector3D &end, VRVector3D *returnedPoint, float *returnedT)
{
    VR_ENTER(vrSegmentUnitSphereAtOriginTest);

    // a quick box test
    if (start.x < -1.0f && end.x < -1.0f) return false;
    if (start.x >  1.0f && end.x >  1.0f) return false;
    if (start.y < -1.0f && end.y < -1.0f) return false;
    if (start.y >  1.0f && end.y >  1.0f) return false;
    if (start.z < -1.0f && end.z < -1.0f) return false;
    if (start.z >  1.0f && end.z >  1.0f) return false;

    // compute dir, a unit vector in the direction of the segment
    VRVector3D dir = end;
    dir = vrVecSubtract(dir, start);
    float invLen = 1.0f / vrVecLength(dir);
    dir = vrVecScalarMult(dir, invLen);

    // compute minimum distance to collision, d
    float dot = vrVecDot(start, dir);
    float det = 1.0f + dot*dot - vrVecDot(start, start);
    if (det < 0.0f) return false;   // segment misses sphere
    float d = -(vrVecDot(dir, start) + sqrtf(det));
    if (d < 0.0f) return false;     // segment is either start solid or misses
    float t = d * invLen;           // convert back to 0-1 t param
    if (t > 1.0f) return false;     // collides after the end of the segment
    if (returnedT)
        *returnedT = t;
    if (returnedPoint)
    {
        *returnedPoint = dir;
        (*returnedPoint) = vrVecScalarMult((*returnedPoint), d);
        (*returnedPoint) = vrVecAdd((*returnedPoint), start);
        // assert(fabs(vrVecLengthSquared(*returnedPoint) - 1.0f) < 0.1f);
    }
    return true;
    VR_LEAVE;
}
/// Test a moving triangle against a unic sphere at the origin . inline bool vrϋnitSp ereAtOriginTriangleTest ( const VRVector3D δvelocity, //for the criangle const VRVector3D δA, const VRVector3D £B, const VRVector3D &C, //triangle bool backFaces,bool frontFaces, // which faces to accept
VRVector3D δreturnedPoint, RVectorSD {.returnedNormal, loat sreturnedT)
{
VRJS-NTER (vrUnitSphereAtOriginTriangleTest ) ; static const float tiι.y=l--.-10£; retumedT=1.01 ; // start off che end
VRVecto 3D edgeAC-C; edgeAC = vrVecSubtracc (edgeAC, A);
VRVector3D edgeAB=B; edgeAB = vrVecSubt act (edgeAB, A); returnedNor al=edgeAB; returnedNormal = vrVecCross (returnedNormal, edgeAC); float deno =wrVecDot (velocity, returnedNormal) ,-
if (
(!backPaces&sdenom>0) 11 // not accepting back faces (!frontFaces&&denom<0) ) //not accepting front faces
{ return false;
} bool doFace-»fabs(deno )>tiny;
VRVector3D Av=A;
Av = vrVecAdd(Av, velocity);
VRVector3D Bv=B;
Bv = vrVecAdd(Bv, velocity);
VRVector3D Cv=C;
Cv = vrVecAdd(Cw, velocity) ;
// test the 3 vercs VRVector3D retp; float rett; if (vrSegmentUnitSphereAtOriginTes (A,Av, sretp, srett) ) { if (rett<returnedT)
Figure imgf000114_0001
} if (vrSegmentϋnicSphereAtOriginTest (B, Bv, Sretp, Srett) ) if (rett<retαrnedT)
{ recurnedT=retc; returnedPoint=retp;
)
} if (vrSegmentUnitSphereAtOriginTest(C,Cv, retp, rett) ) if (rett<returnedτ)
{ returnedT=rett; retαrnedPoint«retp; ) )
// now tes t che three edges VRVector3D start,end; float ts,te; float inv enAC=l . Of/vrVecLength (edgeAC) ; edgeAC - vrVecScalarMult (edgeAC, invLenAC) ,- fcs=-vrVecDo (A, edgeAC) ; (Av, edgeAC) ;
Figure imgf000115_0001
(ts>0. 0 I | te>0.0f ) s // che line of impact is off this edge (ts*invLenAC<1.0f | | te*invLenAC<1.0f ) ) // che line of impact is of. this edge
!-tart=edgeAC; start=vrVecScalarMul (start, ts) s art=vrVecAdd (star , A) ; end=edgeAC; end=vrVecScalarMult(end, te) ; end=vrVecAdd(end, Av) ,- if (vrSegment-JnitSphereAtOriginTest (start, end, Sretp, rett) )
{ if (rett<returnedT) t float test=( (te-ts) *rett+ts) "invLenAC; if (test>0.0f&Stest<1.0f)
{ returnedT-rett ; returnedPoint=retp; 1 > ) ) float irw enAB=l . Of /vrVecLength (edgeAB) ; edgeAB=vrVecScalarMul ( edgeAB, invLenAB) ; ts=-vrVecDot (A, edgeAB) ; te=-vrVecDot (Av, edgeAB) ; if (
(ts>0.0f U te>0.0f & // the l ine of impact is off this edge
( CS*inv enAB<1..0f I | teainvLenAB<1.0f ) ) // the line of impact is of this edge { start=edgeAB; start=vrVecScalarMult (start, ts) ; start=vrV--cAdd (start, A) ; en =e ge ; end-=vrVecScalarMult (end, te) ; end=vrVecAdd(Av, end); if (vrSegmentϋnitSphereAtOriginTest (start, nd, sretp, Srett) )
{ if (rett<returnedT)
{ float test= ( (te-ts ) *rett>hts) "invLenAB; if (test>O . OfSStest<1.0f )
( returnedT=rett; returnedPoint=retp;
} } }
VRVector3D edgeCB-B; edgeCB=vrVacSubtrac (edgeCB, C) ; float invLenCB=l - O /vrVecLengt ( edgeCB) ; edgeCB=vrVecScalarMult (edgeCB, invLenCB) ; ts=-vrVecDot (C, edgeCB) ; ce=-vrVecDo (Cv, edgeCB) ; if (
(ts>0.0fI |te>0.0f)SS // che line of impact is off this edge (ts*inv enCB<1.0f | |te»invLenCB<1.0f) ) // che line of Impact is off this edge i start=<edgeCB; start=vrVecScalarMult (start, ts) ; sta t=vrVecAd (start, C) ; end=edgeCB; end=vrVecScalar ul (end,te) ; end=vrVecAdd (end, Cv) ; if (vrSegmentϋnitSp ereAtOriginTest (start, nd, sretp, Srett) )
{ if (rett<returnedT) ( float test= ( (te-ts) *rett+ts) '"invLenCB; if (test>0.0 sstest<1.0f)
( retumedT=rett; returnedPoint=retp; > 1 }
)
// and finally the surface of the triangle if (doFace)
{
VRVectorSD perpCB=edgeCB; perpCB = vrVecCross (perpCB, returnedNormal); float invLenPerpCB-1. O /vrVecLength (perpCB) ; perpCB=vrVecSσalarMult (perpCB, invLenPerpCB) ;
≤tart=edgeCB; starε=vrVecScalar ul (start, -vrVecDot (C, edgeCB) ) ; VRVactor3D - temp=perpCB; temp=vrVecScalarMult (temp, -vrVecDo (C, erpCB) ) ; start=vrVecAd (start, temp) ; start=vrVecAd (start, C) ; end=edgeCB; end=vrVec≤calar ult (end, -vrVecDot (Cv, edgeCB) ) ; temp^perpCB; temp=vrVecScalarMult (temp, -vrVecDot (Cv, erpCB) ) ; end=vrVecAdd(end, temp) ; end=vrVecAdd(end, Cv) ; if (vrSegmentUnitSphereAtOriginTest (start, end, retp, rett) )
{ if (rett<returnedT) {
// sec At Be cc to the triangle at the time of contact
VRVector3D At=velocity; At=-vrVecScalarMult(At, rett) ; VRVector3D Bt=At; VRVector3D Ct=At; At=vrVecAdd(At, A) ; BfvrVecAdd(B , B) ;• Ct«vrVecAdd(Ct, C) ;
VRVector3D edgePA--At; edgePA=vrVecSubtrac (edgePA, retp) ; VRVector3D edgePB=Bt; edgePB=vrVecSubtract (edgePB, retp) ; VRVector3D edgePC=Ct; edgePC=vrVecSubtract (edgePC, retp) ;
// now see if the hit spot is in the triangle temρ=edgePA; temp = vrVecCross (temp, edgePB); if (vrVecDot (temp, returnedNormal) >=0. Of)
{ temp=edgePC; temp=vrVecCross (temp, edgePA) ; if (vrVecDot (temp, eturnedNormal) >=0. Of )
I te p-edgePB; tempsvrVecCross (temp, edgePC) ,- if (vrVecDot (temp, returnedNormal) >=0. Of) returnedT=rett; returnedPoint=retp;
} if (returnedT<"-1.0f)
< returnedNormal--vrVecScalarMult (returnedNormal, - .Of) ; VRVeσtor3D tvel-velocity; tvel-vrVecSαalar ult (tvel, -returnedT) ; returnedPoint= vrVecAdd(returnedPoint, tvel); return true;
} return false; VR LEAVE;
APPENDIX C
void VRCone : : endTessellation ( VRVertex *va, // Vertex array zo fill
VRlndexArray **ia, // Index arrays to fill const int base_va, // stare in vertex array const int base ia // 5care in index arrays
)
I
VR_ENTE (VRCone: :endTessellation) ; va+=base_va; ia+=base_ia; VRlndexArray *iarray=* (ia++) ; endpoints:
Figure imgf000120_0001
VRColorRθG8B8A8 hite={ (char) 255, (char) 255, (char) 255, (char) 255 };
VRVertex v; int vc=0; // vertex counter ttifdef VRJTLATΪENING if (pNumSegs=-2)
{
/// Special case: flattening !
VRVector3D tovie l=vrVecSubtract (pBase, pViewPos) ; vrVecNormalize(Stoviewl) ;
VRVector3D dirl=vrVecCross (toviewl,pAxis) ; vrVecNor-nalizet&dirl) ;
VRVector3D tovie 2=vrVecSubtract (pTop,pViewPos) ; vrVecNormaliae (stovie 2) ;
VRVector3D dir2=vrVeσCross (tovie 2,pAxis) ; vrVecNormalize (&dir2) ; float coefl=pFlatCoef; float coef2=l-coefl; v.normal. x=dirl.x*coefl-tovie l.x"coef2; v.normal. y=dirl.ywcoef1-toviewl.ycoef2; v.normal. z=dirl. z*coe l-tovie l .2*coef2; vrVecNormaliaet&v.normal) ;
Figure imgf000120_0002
v. ormal.x=dir2.x*coefl-tovie 2.x*coef2; v.normal . y-dir2. y*coefl-toview2. *coe£2; v.normal. z-dir2. z*coefl-toview2. ≥+coe£2; vrVecNoπmali2e(&v. normal) ; .xyz .x=dir2.x* TopR+pϊop.x; .xyz . y=dir2. y*pTopR+pTo . y; .xyz . z-dir2. z *pTop +pTo . z; vrVertexEnvMapSolid(v) ; * (va+-ι-)=v;
χ*coef2; *coef2; zwcoef2;
Figure imgf000121_0001
vrVecNegate (Sdιr2) ; v.normal.x=dir2.x*coefl-toview2.x*coef2; v.normal .y=dir . yσoefl-coview2. *coef ; v.normal- 2=dir2. z*coe l-toview2.2*coef2; vrVecNormalize(Sv.normal) ; v.xyz. x=dir2. x*pTopR+pTop.κ; .xyz.y=dir2.y*ρTopR+pTop.y; v.xyz. z=dιr2. zwpTopR+pTop. z; vrVertexEnvMapSolιd(v) ;
»(va++)=v;
/// Indices of the billboarded quad iarray->setPrimitiveType (VR_TRIANGLE_LIST) ; iarray->ge Indices ()->setLength( 6) ; "" unssgneu sat *indices-iarray->getϊndices ()->lock(0, 6) ; indices [0] =base_va; indices [1] indices [ j indices [3] indicesi _ indices [5]
Figure imgf000121_0002
iarray->getlndices ( ) ->ιinlock ( ) ; return;
1 ftendif
ASSUME (pNumSegs>=3, R_INVALID_OP, "endTesselation: invalid number of segments");
/ /// Caloulace che vertices; ffiir (snt i=0; i<=pNumSegs;i++)
{
// Precalculate angle and trig stuff: fliat alpha=2*i»PI/pNumSegs; fliat cosalpha=cosf (alpha) ; fliat sinalpha=sinf (alpha) ; v.normal=vrVector3D (cosalpha, 0, sinalpha) ; vrVecMatrixMult (s (v. normal) ,pMatrix) ; v.κyz«-vrVector3D (pBaseR*cosalpha, CpBaseR+sinalpha) ; vrVεcMatrixMult (s (v. xyz) , pMatrix) ; v. κyz--.vrVecAdd ( . xya, pBase ) ; vrVertexEnvMapSolid(v) ; * (va++) =v; v. xyz=vrVector3D (pTopR*cosalpha, h,pTopR*sinalpha) ; vrVecMatrxx ult (& (v.xyz) ,p atrix) ; v. xyz-vrVecAdd (v.xy2,pBase) ; vrVertexEnvMapSolid ( ) ; * (va++) =v;
} vc=i*2; snt top^center; snt base_center;
$ (pTopDynCapped) { top_center~vc; v.normal=vrVector3D(0, 1,0) ; vrVecMatrixMult (& (v. normal) ,pMatrix) ; .xyz=vrVector3D (0, h, 0) ; vrVecMatrixMul ( (v.xyz) ,pMatriχ) ; v.χyz=vrVecAdd(v.xyz,pBase) ; vrVertexEnvMapSolid(v) ; * (va++)=v;
/// Update normals for these vertices: fir (snt i=0;i<=pNu»Segs;i++)
{
// Precalculate angle and trig stuff: fliat alpha=2*i*PI/pNumSegs; fliat σosalpha=cosf (alpha) ; fliat sinalpha=sinf (alpha) ; ;
Figure imgf000122_0001
vrVertexEnvMapSolid (v) ; * (va++) =«v; > vc+= (i+l) ; ) sf (pBaseDynCapped) I base_center-vc; v.normal=vrVector3D (0,-1,0) ; vrVecMatrixMul (S (v.normal) ,pMatrix) ; v.xyz=vrVeαtor3D (0,0,0) ; vrVecMatrixMult (& (v. xyz) ,pMatrix) ; v. xyz=vrVecAd ( .xyz,pBase) ;
VrVertexEnvMapSolid (v) ;
* (va++)»v;
/// Update normals for these vertices: fir (snt i=0; i<=ρNumSegs; i++)
// Precalcula te angle and trig stuff: fliat alpha-2*i*Pi pNumSegs; fliat cosalpha=cos (alpha) ; fliat sinalpha=sinf (alpha) ; v. xyz=vrVector3D (pBaseR*cosalpha, 0, pBaseR-sinalpha) ; vr t ( (v. xyz) , pMatrix) ; v.
Figure imgf000122_0002
(v. xy2,pBase) ; vrVertexEnvMapSoli (v) ; " (va++) «v; }
/// Calculate the indices: iarray->setPrimitiveType (VR TRIANGLE_STRIP) ; iarray->getlndices () ->setLengt (pNumSegs*2+2) ; unssgneu snt *indices=iarray~>getlndices ()->lock(0,pNumSegs*2 τ2) ; fir (i=0;i<pNumSegs*2+2;i+τ) w (indices++) =base_va+i; iarray->getlndices ( ) ->unlock ( ) ; sf (pTopDynCapped)
{ iarray=* (ia++) ; iarray->setPrimitiveType (VR_TRIANGLE_FAN) ; iarray->getlndices ( ) ->setLeήgth (pNumSegst2 ) ; unssgneu snt *indices=iarray->getlndices ()->lock(0,pNumSegs+2) ; fir (snt i=0;i<=pNumSegs+l;i++)
* (indices++) =base_va+toρ_center+i; iarray->get Indices ()->unlock() ; ) sf (pBaseDynCapped) { iarray=* (ia++) ; iarray->setPrimiciveType(VR ΪRIANGLEJFAN) ; iarray->getlndices ( ) ->setLerJgch (pNumSegs+2) ; unssgneu snt *indiees=iarray->getlndices ()->lock(0,pNumSegs+2) ; fir (snt i=0;i<=ρNumSegs+l,-i++)
*(indices-*-+)=base_va+base>_center+i; iarray->getlndices ( ) ->unlock ( ) ; }
VR LEAVE;
}
APPENDIX D
void VRElbow: : endTessellation (
VRVertex "Λ,va, // Vertex array to fill
VRlndexArray **ia, // Index arrays to fill const int ba≤e_va, // start in vertex array const int base_ia // start in index arrays ) i
VR_ENT£R(VRElbo : .-endTessellation) ; va+=base_va;
VRlndexArray *iarray=ia [base_ia] ; const VRColorR8G8B8A8 white={ (char) 255, (char) 255, (char) 255, (char) 255 } ; tifdef VR__FLATTENING if (pNumVertSegs=-=2) f
VRMatrix m; VRVertex V; VRQuaternion q; VRVector3D flatdir; VRVector3D dir; for (int i*0;i<-pNuroHorizSegs;i++)
{
/// optimized: matrix is only dependent of i q=vrQuaternion (pAxis, i*pAngle/ ( loat)pNυmHoriaSegs) ; vrQuaternionToMatrix (q, sm) ;
VRVecto 3D pos=vrVector3D (0, 0,0); vrVecMatrixMult (Spos,pProfileOrient) ; pos^vrVecAd (pos, pS art) ; vrVecMatrixMult (&pos,m) ; pos=vrVecAdd(pos,pCenter) ; flatdir=vrVecSubtract (pos,pViewPos) ; vrVecNormalize (Sflatdir) ;
VRVector3D dir=vrVector30(0, 1,0) ; vrVecMatrixMult (Sdir,pProfileOrient) ; vrVecMatrixMul (Sdir,m) ; dir=vrVecCross (dir, flatdir) ; vrVecϋ-ormalize(Sdir) ; // slow too, buz can 't avoid it : (
. v.normal«dir; v.κyz.x»dir.x*pRadius+pos.x; •
Figure imgf000125_0001
vrVecNegate (& (v.normal) ) ; v. xyz . =-dir .x»pRadiusfpos .κ; v.:<yκ .y«=-dir. y*pRad.Lus+pos .y; v.xyz . ε=-di . z*pRadiusγpos . z; vrVertexEnvMapSolid(v) - * (va+r)=v; /// Indices of the billboarded elbow iarray->setPrimitiveType (VR_TRIANGLE_LIST) ; iarray->getlndices ( ) ->setLength (pNumHorizSegs*6) ; unsigned int *indices==iarray->getlndices ( ) -->lock (Q, pNumHorizSegs*6) ; int n=0; for (i=0;i<pNumHori2Segs;i+t,n+=6)
{ indices [n] =base_va+i*2 ; indices [n+1] =base_va+i*2+1; indices [n+2] - ase^va .•i,r2+2; indices|;n+3]=indices[n+2] ; indices [n+4]=indices [n+1] ; indices [n+5Ϊ=base_va+iw2+3;
} iarray->getlndices ( ) ->αnloc ( ) ; return; }
#endif
/// calculate the vertices for (int i=0;i<=pNumHorizSegs;i++)
{
/// optimized: matrix is only dependent of i
VRQuaternion q=vrQuaternion (pAxis, i*pAngle/ (float) pNumHorizSegs) ;
VR atrix κι;
VRVertex v; vrQuaternionToMatrix (q, m) ; for (int 3=0; j<pNumVertSegs; ++) { v.xyz=-vrVector3D (pRadius,"cosf (j *2*PI/pNumVertSegs) , 0, pRadius*sinf ( *2*PI/pNumVertSegs) ) ; v. normal=vrVector3D (cosf (j *2*PI/pNumVertSegs) , 0, sin (j *2*PI/pNumVertSegs) ) ; vrVecMatrixMul ( S (v. xyz) , pProfileOrient) ; vrVecMatrixMult ( (v. normal) ,pProfileθrient) ; v. xyz-vrVecAdd(v . xyz,pStart) ; vrVecMatrixMul ( (v. xyz) ,m) ; vrVecMatrixMul ( (v. normal) ,m) ; v.κyz>-vrVecAdd (v.xyz,pCenter) ; vrVertexEnvMapSolid(v) ;
* (va++) =v;
/// Calculate the indices: iarray->setPrimitiveType(VR__TRIANGLE STRIP) ; iarray->getIndices ( ) -2-setLengt ( ( (pNumVertSegs+1) *2τi) *ρNumHori2Segs) ; unsigned int *indices=iarray->getlndices () ->loc (0,pNumHorizSegs* ( (pNumVertSegs+1) *2+1) ) ; int nc=0; for (int k=0; k<pNumHorizSegs; k++) for (int l=-0; l<:=pNumVertSegs; l++)
{
* (indice≤++) =»base_va+k*pNumVertSegs+l%pNumVertSegs;
* (indices++) =base~va+ (k+1) *pNumVertSegs+l%pNumVertSegs; nc+=2; ~
}
* (indices++) =base_va+ (k+l) *pNumVertSegs; nc++; ) iarray->getlndices ( ) ->unloc ( ) ; VR LEAVE;
APPENDIX E
VRVertex '"va, // Vertex array to fill
VRlndexArray **ia, // Index arrays to fill const int base_va, // start in vertex array const int base_i // start in index arrays
)
VRJ2NTBR (VRSweptArc: : endTessellation) ; if (pNumArcSegs === 0 I I pNumSweepSegs ==■ 0) return;
VRVertex sva [4096] ; va+=base_va;
VRlndexArray *iarray = ia [base_ia] ;
// Build vertex array: for (int i = 0; i <= pNumArcSeg-s ; i++)
{
// Generate one arc: float alpha = pArc≤tartAngle + (i * (float) pArcSweepAngle/ (float) pNumArcSegs) ; float cosalpha = cos (alpha); float sinalpha = sinf (alpha); float rlcosa = pArcRadiusl * cosalpha; float r2sina = pArcRadius2 * sinalpha;
VRVertex v; v.xy2 = vrVector3D(r2sina, 0, rlcosa); vrVecMatrixMult (Sv.κyz, pArcRotation) ; v.xyz = vrVecAdd (v.xyz, pArcCenter) ; vrVecMatrixMult (Sv.κyz, plnvMatrix) ; float n2 = (rlcosa * rlcosa) + (r2sina * r2sina) ; v.normal = vrVector3D (rlcosa * rlcosa / n2, r2sina » r2sina / a2,
0); svafi] = v;
) for (i = 1; i <= pNumSweepSegs; iτ+) float alpha = i w (float) pSweepAngle / (float) pNumSweepSegs; for (int j = 0; j <- pNumArcSegs; jτ-ι-)
{
VRVertex v; v =• svaij]; vrVecRotateYC&v.xyz, alpha); vrVecRotateY(Sv.normal, alpha); Sva[i* (pNuιtιArcSegs+1) + j] = v; 1 ) for (i = 0; i < (pNumArcSegs T 1) * (pNumSweepSegs + 1); i++) {
VRVertex v = sva[i); vrVecMatrixMul ( Sv. xyz, pMatrix) ; v. xyz = vrvecAdd fv. κy≥, pSweepCenter) ; vrVertexEnvMapSolid (v) ; sva [i) = v; memcp (va, ≤va, sizeof (VRVertex) * (pNumArcSegs+1) * (pNumSweepSegs+1) ) ;
// Build index array: iarray->≤etPrlmitiveType (VR_TRIANGLE_LIST) ; iarray->getlndice≤ ( ) ->setLengt ( (pNumSweepSegs) * pNumArcSegs * 6); unsigned int *idx - iarray->getlndices ()->lock(Q, (pNumSweepSegs) * pNumArcSegs * 6) ; int index = 0; for (int s = 0; s < pNumSweepSegs; sf+)
{ for (int a = 0; a < pNumArcSegs; a++)
I idx [index+*] <= base_va + s* (pNumArcSegstl) T a; idxfindex-H-J = base~va + s* (pNumArcSegs+1) + a + 1; ids[index++] = ba≤e~va + (sτl) * (pNumArcSegs+1) + a; idx[index++] = base_va + s* (pNumArcSegs+1) + a + 1; idx [index.-.-] = base_va + (s+1)* (pNumArσSegs+1) + a; idx[indexτ+] = base_va + (s+1) * (pNumArcSegs+1) + a + 1; ) ) iarray->getlndice≤ ()->unlock() ;
VR LEAVE;

Claims

A1. A method for generating virtual reality (VR) file(s) for facilitating interactive displays of a complex virtual environment, the method comprising: providing a three-dimensional (3D) model of the complex environment, the 3D model comprising data defining elements represented in the complex environment; and analyzing the data of the 3D model to generate VR file(s) including a collision detection hierarchy, the collision detection hierarchy being generated by: filtering the data of the 3D model to identify dynamic tessellation elements and non-dynamic tessellation elements; generating a plurality of triangles representing the identified dynamic tessellation elements and non-dynamic tessellation elements; performing a grid subdivision on the plurality of triangles to define a plurality of cubes; creating a bounding box tree for each cube, each bounding box tree comprising a plurality of bounding boxes; and allocating the plurality of triangles to the bounding boxes. A2. A method according to Claim A1 , wherein filtering comprises identifying dynamic tessellation elements in the 3D model based on dynamic tessellation element types, the dynamic tessellation element types including at least one of a cone element type, a torus element type, and a surface-of- revolution element type. A3. A method according to Claim A2, wherein the method further comprises reformatting the data representing each dynamic tessellation element based on the identified dynamic tessellation element type.
A4. A method according to Claim A1, wherein generating a plurality of triangles comprises: identifying non-dynamic tessellation elements defined by polygons having more than three line segments; and triangulating polygons of the identified non-dynamic tessellation elements that have more than three line segments so that each polygon is represented by triangles.
A5. A method according to Claim A1, wherein generating a plurality of triangles comprises: computing a tessellation level for each of the identified dynamic tessellation elements; determining the number of triangles for each dynamic tessellation element based on the computed tessellation level; and triangulating each dynamic tessellation element based on the determined number of triangles so that the dynamic tessellation element is represented by triangles.
A6. A method according to Claim A1, wherein the method further comprises: analyzing the data of the 3D model to generate, as part of the VR file(s), data defining an octree, the octree comprising spatially partitioned cells containing the identified dynamic tessellation elements and non-dynamic tessellation elements of the complex virtual environment. A7. A method according to Claim A6, wherein analyzing the 3D model to define an octree comprises constructing the octree by spatially partitioning the identified dynamic tessellation elements and non-dynamic tessellation elements into cells.
A8. A method according to Claim A7, wherein analyzing the 3D model to define an octree further comprises storing the identified data defining the dynamic tessellation elements and the non-dynamic tessellation elements in an octree file in accordance with predetermined data formats.
A9. A method according to Claim A8, wherein storing comprises reformatting and storing identified dynamic tessellation elements based on dynamic tessellation element types, the dynamic tessellation element types including at least one of a cone element type, a torus element type, and a surface-of-revolution element type.
A10. A method according to Claim A8, wherein the method further comprises: generating a list of occluder candidates based on the data of the 3D model; and storing the occluder candidates as part of the octree file.
A11. A method according to Claim A10, wherein generating the list of occluder candidates comprises: identifying candidate occluders based on the 3D model data; determining the area of each candidate occluder; and selecting a predetermined portion of the identified candidate occluders based on the determined areas to generate the list of occluder candidates. A12. A method according to Claim A11 , wherein identifying candidate occluders comprises: analyzing triangles represented in the complex environment to identify triangles with common edges; merging triangles with common edges into polygons; determining if each polygon resulting from the merging of triangles is convex; and storing each polygon determined to be convex as a candidate occluder.
A13. A method according to Claim A7, wherein analyzing the 3D model to define an octree comprises: identifying non-dynamic tessellation elements defined by polygons having more than three line segments; and triangulating polygons of the identified non-dynamic tessellation elements that have more than three line segments so that each polygon is represented by triangles.
A14. A method according to Claim A13, wherein analyzing the 3D model to define an octree further comprises: determining if a cell of the octree divides one or more triangles representing a non-dynamic tessellation element; and if a cell divides one or more triangles, stretching the cell to define a stretched cell that fully contains each divided triangle.
A15. A method according to Claim A1 , wherein providing a 3D model comprises generating the 3D model from a two-dimensional (2D) model. A16. A method according to Claim A1 , wherein the VR file(s) comprise a collision detection file, and wherein the method further comprises storing the collision detection hierarchy as part of the collision detection file.
A17. A method for generating virtual reality (VR) file(s) for facilitating interactive displays of a complex virtual environment, the method comprising: providing a three-dimensional (3D) model of the complex environment, the 3D model comprising data defining elements represented in the complex environment; and analyzing the data of the 3D model to generate VR file(s) including an octree file, the octree file comprising data defining an octree of spatially partitioned cells, each cell of the octree containing one or more elements of the complex environment, the octree file being generated by: filtering the data of the 3D model to identify dynamic tessellation elements and non-dynamic tessellation elements; generating a plurality of triangles representing the identified nondynamic tessellation elements; and defining cells of the octree by spatially partitioning the identified dynamic tessellation elements and triangles representing the non-dynamic tessellation elements.
A18. A method according to Claim A17, wherein the method further comprises: generating a list of occluder candidates based on the data of the 3D model; and storing the occluder candidates as part of the octree file. A19. A method according to Claim A18, wherein generating the list of occluder candidates comprises: identifying candidate occluders based on the 3D model data; determining the area of each candidate occluder; and selecting the identified candidate occluders with the largest determined areas to generate the list of occluder candidates.
A20. A method according to Claim A19, wherein identifying candidate occluders comprises: analyzing triangles represented in the complex environment to identify triangles with common edges; merging triangles with common edges into polygons; determining if each polygon resulting from the merging of triangles is convex; and storing each polygon determined to be convex as a candidate occluder.
A21. A method according to Claim A17, wherein generating comprises: identifying non-dynamic tessellation elements defined by polygons having more than three line segments; and triangulating polygons of the identified non-dynamic tessellation elements that have more than three line segments so that each polygon is represented by triangles.
A22. A method according to Claim A17, wherein defining cells of the octree comprises: determining if a cell of the octree divides one or more triangles representing a non-dynamic tessellation element; and if a cell is determined to divide one or more triangles, stretching the cell to define a stretched cell that fully contains each divided triangle.
A23. A method according to Claim A17, wherein defining cells of the octree comprises: defining a single cell that contains all of the identified dynamic tessellation elements and triangles of the non-dynamic tessellation elements; partitioning the single cell into a predetermined number of partitioned cells, each of the partitioned cells being smaller than the single cell; and repeating the partitioning of cells until a cell contains fewer than a first predetermined number of dynamic tessellation elements or a second predetermined number of triangles of the non-dynamic tessellation elements.
A24. A method according to Claim A17, wherein the method further comprises allocating dynamic tessellation elements to the cells of the octree such that each dynamic tessellation element is allocated only to cells within which the dynamic tessellation element is fully contained.
A25. A method according to Claim A24, wherein allocating comprises defining a bounding box that fully contains a dynamic tessellation element and comparing the bounding box to the cells of the octree to identify the lowest level cell of the octree that fully contains the bounding box.
A26. A method according to Claim A17, wherein the method further comprises sorting the dynamic tessellation elements and triangles of the nondynamic tessellation elements that are contained in each cell of the octree according to material type. A27. A method according to Claim A17, wherein the method further comprises building triangle strips from the triangles of the non-dynamic tessellation elements that are contained in each cell of the octree.
A28. A method according to Claim A17, wherein the method further comprises storing data defining the dynamic tessellation elements and the nondynamic tessellation elements in the octree file in accordance with predetermined data formats.
A29. A method according to Claim A28, wherein storing comprises reformatting and storing identified dynamic tessellation elements based on dynamic tessellation element types, the dynamic tessellation element types including at least one of a cone element type, a torus element type, and a surface-of-revolution element type.
A30. A method according to Claim A28, wherein storing comprises generating vertex array data for defining triangles representing the non-dynamic tessellation elements.
A31. A method according to Claim A17, wherein the VR file(s) include a collision detection file and wherein the method further comprises: analyzing the data of the 3D model to generate a collision detection hierarchy; and storing the collision detection hierarchy as part of the collision detection file.
A32. A method according to Claim A31 , wherein analyzing the 3D model data to generate a collision detection hierarchy comprises: generating a plurality of triangles representing the identified dynamic tessellation elements and non-dynamic tessellation elements; performing a grid subdivision on the plurality of triangles to define a plurality of cubes; creating a bounding box tree for each cube, each bounding box tree comprising a plurality of bounding boxes; and allocating the plurality of triangles to the bounding boxes.
A33. A method according to Claim A32, wherein generating a plurality of triangles comprises: computing a tessellation level for each of the identified dynamic tessellation elements; determining the number of triangle segments for each dynamic tessellation element based on the computed tessellation level; and triangulating each dynamic tessellation element based on the determined number of triangles so that the dynamic tessellation element is represented by triangles.
A34. A method according to Claim A17, wherein providing a 3D model comprises generating the 3D model from a two-dimensional (2D) model.
A35. A computer program product for generating virtual reality (VR) file(s) for facilitating interactive displays of a complex virtual environment, the computer program product comprising a computer readable medium embodying a computer program, the computer program comprising instructions that are executable by a system to perform a method for generating VR file(s) based on data from a massive model representing the complex environment, the method comprising: analyzing the data of the massive model to generate VR file(s), the VR file(s) including data defining an octree and data defining a collision detection hierarchy, the octree comprising a plurality of cells containing elements of the complex environment, and the collision detection hierarchy comprising a plurality of bounding boxes containing elements of the complex environment, wherein the octree is generated by: filtering the data of the massive model to identify dynamic tessellation elements and non-dynamic tessellation elements; generating a plurality of triangles representing the identified nondynamic tessellation elements; and defining cells of the octree by spatially partitioning the identified dynamic tessellation elements and triangles representing the non-dynamic tessellation elements.
A36. A computer program product according to Claim A35, wherein defining cells comprises: determining if a cell of the octree divides one or more triangles representing a non-dynamic tessellation element; and if a cell divides one or more triangles, stretching the cell to define a stretched cell that fully contains each divided triangle.
A37. A computer program product according to Claim A35, wherein the collision detection hierarchy is generated by: generating a plurality of triangles representing the identified dynamic tessellation elements and non-dynamic tessellation elements; performing a grid subdivision on the plurality of triangles to define a plurality of cubes; creating a bounding box tree for each cube, each bounding box tree comprising a plurality of bounding boxes; and allocating the plurality of triangles to the bounding boxes.
A38. A computer program product according to Claim A35, wherein the method further comprises: generating a list of occluder candidates based on the data of the massive model; and storing the occluder candidates in the VR file(s) with the octree.
A39. A computer program product according to Claim A38, wherein generating the list of occluder candidates comprises: identifying candidate occluders based on the massive model data; determining the area of each candidate occluder; and selecting identified candidate occluders based on the determined areas to generate the list of occluder candidates.
A40. A computer program product according to Claim A39, wherein identifying candidate occluders comprises: analyzing triangles represented in the complex environment to identify triangles with common edges; merging triangles with common edges into polygons; determining if each polygon resulting from the merging of triangles is convex; and storing each polygon determined to be convex as a candidate occluder.
A41. In a system for providing interactive displays of a complex virtual environment, a converter component for generating virtual reality (VR) file(s), the converter component comprising: means for accessing a massive model of the complex environment, the massive model including data defining elements represented in the complex environment; and means for generating VR file(s) based on the massive model data of the complex environment, the VR file(s) including an octree file and a collision detection file; wherein the octree file comprises data defining an octree of spatially partitioned cells, each cell of the octree containing one or more elements of the complex virtual environment, and wherein the octree file further comprises data defining a plurality of occluder candidates; and wherein the collision detection file comprises a collision detection hierarchy of bounding boxes that contain one or more elements of the complex virtual environment.
A42. In the system according to Claim A41 , wherein the converter component further comprises: means for filtering the data of the massive model to identify dynamic tessellation elements and non-dynamic tessellation elements represented in the complex virtual environment; and means for storing the identified data defining the dynamic tessellation elements and the non-dynamic tessellation elements in the octree file in accordance with predetermined data formats.
A43. In the system according to Claim A41 , wherein the generating means comprises means for constructing the octree by spatially partitioning the identified dynamic tessellation elements and non-dynamic tessellation elements into cells. A44. In the system according to Claim A41 , wherein the converter component further comprises: means for analyzing the data of the massive model to generate the plurality of occluder candidates.
A45. In the system according to Claim A44, wherein the analyzing means comprises: means for identifying candidate occluders based on the massive model data; means for determining the area of each candidate occluder; and means for selecting the identified candidate occluders based on the determined areas to provide the list of occluder candidates.
A46. In the system according to Claim A41 , wherein the converter component further comprises: means for filtering the data of the massive model to identify dynamic tessellation elements and non-dynamic tessellation elements represented in the complex virtual environment; means for generating a plurality of triangles representing the identified dynamic tessellation elements and non-dynamic tessellation elements; means for performing a grid subdivision on the plurality of triangles to define a plurality of cubes; means for creating a bounding box tree for each cube, each bounding box tree comprising a plurality of bounding boxes; and means for allocating the plurality of triangles to the bounding boxes.
A47. In the system according to Claim A46, wherein the means for generating a plurality of triangles comprises: means for computing a tessellation level for each of the identified dynamic tessellation elements; means for determining the number of triangle segments for each dynamic tessellation element based on the computed tessellation level; and means for triangulating each dynamic tessellation element based on the determined number of triangles so that the dynamic tessellation element is represented by triangles.
A48. In the system according to Claim A41 , further comprising a simulation component, the simulation component being adapted to receive user input and perform simulation operations based on the VR file(s).
A49. In the system according to Claim A48, further comprising a graphics system, the graphics system being adapted to render frames of the complex virtual environment based on the VR file(s) and in response to the simulation operations performed by the simulation component.
A50. A method for generating virtual reality (VR) file(s) for facilitating interactive displays of a complex virtual environment, the method comprising: providing a three-dimensional (3D) model of the complex virtual environment, the 3D model comprising data defining elements represented in the complex virtual environment; and analyzing the data of the 3D model to generate VR file(s) including an octree file and collision detection file, wherein the octree file comprises data defining an octree of spatially partitioned cells, each cell of the octree containing one or more elements of the complex virtual environment, and wherein the collision detection file comprises a collision detection hierarchy of bounding boxes that contain one or more elements of the complex virtual environment.
A51. A method according to Claim A50, further comprising: filtering the data of the 3D model to identify dynamic tessellation elements and non-dynamic tessellation elements represented in the complex virtual environment; and storing the identified data defining the dynamic tessellation elements and the non-dynamic tessellation elements in the octree file in accordance with predetermined data formats.
A52. A method according to Claim A51 , wherein analyzing comprises constructing the octree by spatially partitioning the identified dynamic tessellation elements and non-dynamic tessellation elements into cells.
A53. A method according to Claim A50, further comprising: analyzing the data of the 3D model to generate a list of occluder candidates; and storing the occluder candidates as part of the octree file.
A54. A method according to Claim A50, wherein providing a 3D model comprises generating the 3D model from a two-dimensional (2D) model.
B1. A method for performing simulation to facilitate the rendering of frames of a complex virtual environment, the method comprising: detecting, for a next frame to be rendered, input from a user; determining, based on the user input, desired movements from a current position to a desired position within the complex virtual environment; performing, using a collision detection hierarchy of bounding boxes that contain one or more elements of the complex environment, a predictive collision detection operation to determine if a collision will occur with an element represented in the complex environment if the desired movements are performed between the current position and the desired position; if a collision is detected, calculating an adjusted position based on the desired movements in the complex environment and setting a viewer position for rendering the next frame to the adjusted position; and if a collision is not detected, setting a viewer position for rendering the next frame to the desired position.
B2. A method according to Claim B1, wherein determining desired movements comprises calculating a path between the current position and the desired position based on the user input.
B3. A method according to Claim B2, wherein performing a predictive collision detection operation comprises: identifying bounding boxes of the collision detection hierarchy that intersect with the desired movements based on the calculated path between the current position and the desired position; and analyzing elements contained in each identified bounding box to determine if a collision will occur if the desired movements are performed.
B4. A method according to Claim B3, wherein the user is represented by an avatar in the complex virtual environment and movements of the avatar in rendered frames of the complex environment are controlled based on movements indicated by the user input.
B5. A method according to Claim B4, wherein the method further comprises approximating the avatar with a bounding volume, and wherein identifying intersecting bounding boxes comprises detecting bounding boxes that intersect with the bounding volume of the avatar when the bounding volume is moved between the current position and the desired position.
B6. A method according to Claim B4, wherein analyzing elements to determine if a collision will occur comprises detecting elements in each identified bounding box that intersect with the bounding volume of the avatar when the bounding volume is moved between the current position and the desired position.
B7. A method according to Claim B6, wherein calculating an adjusted position comprises: determining a position of the avatar at the earliest point of contact with an intersecting element represented in the complex virtual environment; and performing a sliding motion operation from the position of the earliest point of contact to determine an adjusted position of the avatar.
B8. A method according to Claim B7, wherein performing a sliding motion operation comprises: calculating a tangent plane at the position of the earliest point of contact with respect to the bounding volume of the avatar; projecting the desired position of the avatar onto the tangent plane; and setting an adjusted position for the avatar at approximately the projected point of the desired position on the tangent plane.
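The sliding motion of Claim B8 can be pictured with a few lines of vector arithmetic. The sketch below assumes a spherical bounding volume for the avatar; the remaining motion is projected onto the plane through the sphere centre that is parallel to the tangent plane at the contact point, which keeps the volume from penetrating the surface. The spherical assumption and the choice of plane offset are simplifications for illustration, not requirements of the claim.

    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };
    static Vec3 add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    static Vec3 sub(Vec3 a, Vec3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static Vec3 scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
    static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3 normalize(Vec3 v)      { float l = std::sqrt(dot(v, v)); return scale(v, 1.0f / l); }

    // Sliding motion in the spirit of Claim B8 for a spherical bounding volume:
    // the tangent-plane normal points from the contact point toward the sphere centre,
    // and the remaining motion is projected so its into-the-surface component is removed.
    static Vec3 slide(Vec3 centreAtContact, Vec3 contactPoint, Vec3 desired) {
        Vec3 n = normalize(sub(centreAtContact, contactPoint));   // tangent-plane normal
        Vec3 remaining = sub(desired, centreAtContact);           // motion still to perform
        Vec3 tangential = sub(remaining, scale(n, dot(remaining, n)));
        return add(centreAtContact, tangential);                  // adjusted avatar position
    }

    int main() {
        // Avatar sphere resting on the floor at the origin, trying to move forward and down.
        Vec3 adjusted = slide({0, 1, 0}, {0, 0, 0}, {1, -1, 0});
        std::printf("adjusted position: %.2f %.2f %.2f\n", adjusted.x, adjusted.y, adjusted.z);
        return 0;
    }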
B9. A method according to Claim B6, wherein calculating an adjusted position comprises: determining a position of the avatar at the earliest point of contact with an element represented in the complex virtual environment; and setting an adjusted position for the avatar at approximately the position of the earliest point of contact.
B10. A method according to Claim B3, wherein a moving object is represented in the complex virtual environment and movements of the moving object in rendered frames of the complex environment are controlled based on movements indicated by the user input.
B11. A method according to Claim B10, wherein the method further comprises approximating the moving object with a bounding volume, and further wherein identifying intersecting bounding boxes comprises detecting bounding boxes that intersect with the bounding volume of the moving object when the bounding volume is moved between the current position and the desired position.
B12. A method according to Claim B10, wherein analyzing elements to determine if a collision will occur comprises detecting elements in each identified bounding box that intersect with the bounding volume of the moving object when the bounding volume is moved between the current position and the desired position.
B13. A method according to Claim B12, wherein calculating an adjusted position comprises: determining a position of the moving object at the earliest point of contact with an intersecting element represented in the complex virtual environment; and setting an adjusted position for the moving object at approximately the position of the earliest point of contact.
B14. A method according to Claim B1, wherein the method further comprises accessing the collision detection hierarchy from virtual reality (VR) files, the VR file(s) being generated from a three-dimensional (3D) model of the complex virtual environment.
B15. A method according to Claim B14, wherein the 3D model is a massive model.
B16. A method for performing simulation operations to facilitate the rendering of frames of a complex virtual environment, the simulation operations being performed as part of a simulation loop to provide interactive displays of the complex environment to a user, the method comprising: detecting, for a next frame to be rendered, input from a user; determining, based on the user input, desired movements from a current position to a desired position within the complex virtual environment; predicting, using a collision detection hierarchy of bounding boxes that contain one or more elements of the complex environment, whether a collision will occur with an element in the complex environment based on the desired movements between the current position and the desired position; and calculating an adjusted position for rendering the next frame in response to the detection of a collision, wherein determining desired movements based on the user input comprises calculating a path between the current position and the desired position based on the user input, and further wherein predicting whether a collision will occur comprises: performing a proximity query based on the calculated path between the current position and the desired position to identify intersecting bounding boxes; and analyzing elements contained in each identified bounding box to determine if a collision will occur based on the desired movements.
B17. A method according to Claim B16, wherein the method further comprises setting a viewer position for rendering the next frame to the desired position if a collision is not detected.
B18. A method according to Claim B16, wherein the method further comprises setting a viewer position for rendering the next frame to the adjusted position if a collision is detected.
B19. A method according to Claim B16, wherein the user is represented by an avatar in the complex virtual environment and movements of the avatar in rendered frames of the complex environment are controlled based on desired movements indicated by the user input.
B20. A method according to Claim B19, wherein the method further comprises approximating the avatar with a bounding volume, and further wherein performing a proximity query comprises detecting bounding boxes that intersect with the bounding volume of the avatar when the bounding volume is moved between the current position and the desired position.
B21. A method according to Claim B19, wherein analyzing elements to determine if a collision will occur comprises detecting elements in each identified bounding box that intersect with the bounding volume of the avatar when the bounding volume is moved between the current position and the desired position.
B22. A method according to Claim B21, wherein calculating an adjusted position comprises: determining a position of the avatar at the earliest point of contact with an intersecting element represented in the complex virtual environment; and performing a sliding motion operation from the position of the earliest point of contact to determine an adjusted position of the avatar.
B23. A method according to Claim B22, wherein performing a sliding motion operation comprises: calculating a tangent plane at the position of the earliest point of contact with respect to the bounding volume of the avatar; projecting the desired position of the avatar onto the tangent plane; and setting an adjusted position for the avatar at approximately the projected point of the desired position on the tangent plane.
B24. A method according to Claim B21, wherein calculating an adjusted position comprises: determining a position of the avatar at the earliest point of contact with an element represented in the complex virtual environment; and setting an adjusted position for the avatar at approximately the position of the earliest point of contact.
B25. A method according to Claim B16, wherein a moving object is represented in the complex virtual environment and movements of the moving object in rendered frames of the complex environment are controlled based on movements indicated by the user input.
B26. A method according to Claim B25, wherein the method further comprises approximating the moving object with a bounding volume, and further wherein performing a proximity query comprises detecting bounding boxes that intersect with the bounding volume of the moving object when the bounding volume is moved between the current position and the desired position.
B27. A method according to Claim B25, wherein analyzing elements to determine if a collision will occur comprises detecting elements in each identified bounding box that intersect with the bounding volume of the moving object when the bounding volume is moved between the current position and the desired position.
B28. A method according to Claim B27, wherein calculating an adjusted position comprises: determining a position of the moving object at the earliest point of contact with an intersecting element represented in the complex virtual environment; and setting an adjusted position for the moving object at approximately the position of the earliest point of contact.
B29. A method according to Claim B16, wherein the method further comprises accessing the collision detection hierarchy from virtual reality (VR) files, the VR file(s) being generated from a three-dimensional (3D) model of the complex virtual environment.
B30. A method according to Claim B29, wherein the 3D model is a massive model.
B31. A computer program product for providing interactive displays of a complex virtual environment, the computer program product comprising a computer readable medium embodying a computer program, the computer program comprising instructions that are executable by a system to perform a simulation method to facilitate the rendering of frames of the complex environment, the method comprising: determining, based on user input for a frame to be rendered, desired movements from a current position to a desired position within the complex virtual environment; performing, using a collision detection hierarchy of bounding boxes that contain one or more elements of the complex environment, a predictive collision detection operation to determine if a collision will occur with an element represented in the complex environment if the desired movements are performed between the current position and the desired position; and in response to performing the collision detection operation, setting a viewer position for the frame to be rendered.
B32. A computer program product according to Claim B31 , wherein the method further comprises: if a collision is detected, calculating an adjusted position based on the desired movements in the complex environment and setting the viewer position for rendering the next frame to the adjusted position; and if a collision is not detected, setting the viewer position for rendering the next frame to the desired position.
B33. A computer program product according to Claim B31 , wherein determining desired movements based on the user input comprises calculating a path between the current position and the desired position based on the user input.
B34. A computer program product according to Claim B33, wherein performing the predictive collision detection operation comprises: performing a proximity query based on the calculated path between the current position and the desired position to identify intersecting bounding boxes, and analyzing elements contained in each identified bounding box to determine if a collision will occur based on the desired movements.
B35. A computer program product according to Claim B34, wherein the user is represented by an avatar in the complex virtual environment and movements of the avatar in rendered frames of the complex environment are controlled based on movements indicated by the user input.
B36. A computer program product according to Claim B35, wherein the method further comprises approximating the avatar with a bounding volume, and further wherein performing a proximity query comprises detecting bounding boxes that intersect with the bounding volume of the avatar when the bounding volume is moved between the current position and the desired position.
B37. A computer program product according to Claim B36, wherein analyzing elements to determine if a collision will occur comprises detecting elements in each identified bounding box that intersect with the bounding volume of the avatar when the bounding volume is moved between the current position and the desired position.
B38. A computer program product according to Claim B37, wherein the method further comprises: if a collision is detected, calculating an adjusted position based on the desired movements in the complex environment and setting the viewer position to the adjusted position for rendering the next frame.
B39. A computer program product according to Claim B38, wherein calculating an adjusted position comprises: determining a position of the avatar at the earliest point of contact with an intersecting element represented in the complex virtual environment; and performing a sliding motion operation from the position of the earliest point of contact to determine an adjusted position of the avatar.
B40. A computer program product according to Claim B39, wherein performing a sliding motion operation comprises: calculating a tangent plane at the position of the earliest point of contact with respect to the bounding volume of the avatar; projecting the desired position of the avatar onto the tangent plane; and setting an adjusted position for the avatar at approximately the projected point of the desired position on the tangent plane.
B41. A computer program product according to Claim B38, wherein calculating an adjusted position comprises: determining a position of the avatar at the earliest point of contact with an element represented in the complex virtual environment; and setting an adjusted position for the avatar at approximately the position of the earliest point of contact.
B42. A computer program product according to Claim B34, wherein a moving object is represented in the complex virtual environment and movements of the moving object in rendered frames of the complex environment are controlled based on desired movements indicated by the user input.
B43. A computer program product according to Claim B42, wherein the method further comprises approximating the moving object with a bounding volume, and further wherein performing a proximity query comprises detecting bounding boxes that intersect with the bounding volume of the moving object when the bounding volume is moved between the current position and the desired position.
B44. A computer program product according to Claim B42, wherein analyzing elements to determine if a collision will occur comprises detecting elements in each identified bounding box that intersect with the bounding volume of the moving object when the bounding volume is moved between the current position and the desired position.
B45. A computer program product according to Claim B44, wherein the method further comprises: if a collision is detected, calculating an adjusted position based on the desired movements in the complex environment and setting the viewer position to the adjusted position for rendering the next frame.
B46. A computer program product according to Claim B45, wherein calculating an adjusted position comprises: determining a position of the moving object at the earliest point of contact with an intersecting element represented in the complex virtual environment; and setting an adjusted position for the moving object at approximately the position of the earliest point of contact.
B47. In a system for providing interactive displays of a complex virtual environment, a simulation component for performing simulations based on virtual reality (VR) file(s), the VR file(s) including a collision detection hierarchy of bounding boxes that contain elements represented in the complex environment, the simulation component comprising: means for determining, based on input from a user, desired movements from a current position to a desired position within the complex virtual environment; means for detecting, using the collision detection hierarchy of bounding boxes, whether a collision will occur with an element represented in the complex virtual environment based on the desired movements from the current position to the desired position; and means for calculating an adjusted position when it is determined that a collision will occur based on the desired movements indicated by the user input.
B48. In a system according to Claim B47, wherein the simulation component further comprises: means for setting a viewer position for rendering a frame to the adjusted position if a collision is detected; and means for setting a viewer position for rendering a frame to the desired position if a collision is not detected.
B49. In a system according to Claim B47, wherein the means for detecting whether a collision will occur comprises: means for performing a proximity query to identify intersecting bounding boxes when simulating the desired movements between the current position and the desired position; and means for analyzing elements contained in each identified bounding box to determine if a collision will occur based on the desired movements.
B50. In a system according to Claim B49, wherein the user is represented by an avatar in the complex virtual environment and movements of the avatar in rendered frames of the complex environment are controlled based on desired movements indicated by the user input.
B51. In a system according to Claim B50, wherein the simulation component further comprises means for approximating the avatar with a bounding volume, and further wherein the means for performing a proximity query comprises means for detecting bounding boxes that intersect with the bounding volume of the avatar when the bounding volume is moved between the current position and the desired position.
B52. In a system according to Claim B50, wherein the means for analyzing elements to determine if a collision will occur comprises means for detecting elements in each identified bounding box that intersect with the bounding volume of the avatar when the bounding volume is moved between the current position and the desired position.
B53. In a system according to Claim B52, wherein the means for calculating an adjusted position comprises: means for determining a position of the avatar at the earliest point of contact with an intersecting element represented in the complex virtual environment; and means for performing a sliding motion operation from the position of the earliest point of contact to determine an adjusted position of the avatar.
B54. In a system according to Claim B53, wherein means for performing a sliding motion operation comprises: means for calculating a tangent plane at the position of the earliest point of contact with respect to the bounding volume of the avatar; means for projecting the desired position of the avatar onto the tangent plane; and means for setting an adjusted position for the avatar at approximately the projected point of the desired position on the tangent plane.
B55. In a system according to Claim B52, wherein the means for calculating an adjusted position comprises: means for determining a position of the avatar at the earliest point of contact with an element represented in the complex virtual environment; and means for setting an adjusted position for the avatar at approximately the position of the earliest point of contact.
B56. In a system according to Claim B47, further comprising a graphics system, the graphics system being adapted to render frames of the complex virtual environment based on the results of the simulations performed by the simulation component.
B57. In a system according to Claim B47, further comprising means for generating the VR file(s) based on a three-dimensional (3D) model of the complex virtual environment.
B58. A method for simulating movement of an avatar in a complex virtual environment, the method comprising: determining, based on input from a user, desired movements of the avatar from a current position to a desired position within the complex virtual environment; and detecting, using a collision detection hierarchy of bounding boxes, whether a collision will occur with an element represented in the complex virtual environment if the avatar is moved from the current position to the desired position, wherein detecting includes: performing a proximity query based on the desired movements of the avatar to identify bounding boxes that intersect with the avatar when the avatar is moved to the desired position; and analyzing elements contained in each identified bounding box to determine if a collision will occur with the avatar.
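The proximity query of Claim B58 amounts to a pruned traversal of the collision detection hierarchy. The sketch below assumes a binary hierarchy of axis-aligned bounding boxes and bounds the avatar's motion conservatively by a single box enclosing both sphere positions; the CdNode layout and the sweptBound helper are assumptions for illustration. The element ids it returns would then be tested exactly, as recited in the claim.

    #include <memory>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Aabb { Vec3 min, max; };

    static bool overlaps(const Aabb& a, const Aabb& b) {
        return a.min.x <= b.max.x && a.max.x >= b.min.x &&
               a.min.y <= b.max.y && a.max.y >= b.min.y &&
               a.min.z <= b.max.z && a.max.z >= b.min.z;
    }

    // Node of a (hypothetical) binary collision detection hierarchy; leaves hold element ids.
    struct CdNode {
        Aabb box;
        std::vector<int> elementIds;                        // non-empty only at leaves
        std::unique_ptr<CdNode> left, right;
    };

    // Conservative bound of the avatar's swept bounding volume between the current
    // and desired positions: a box enclosing both sphere positions plus the radius.
    static Aabb sweptBound(const Vec3& from, const Vec3& to, float radius) {
        auto lo = [&](float a, float b) { return (a < b ? a : b) - radius; };
        auto hi = [&](float a, float b) { return (a > b ? a : b) + radius; };
        return { { lo(from.x, to.x), lo(from.y, to.y), lo(from.z, to.z) },
                 { hi(from.x, to.x), hi(from.y, to.y), hi(from.z, to.z) } };
    }

    // Proximity query: collect element ids from every bounding box the swept volume touches.
    static void proximityQuery(const CdNode* node, const Aabb& query, std::vector<int>& out) {
        if (!node || !overlaps(node->box, query)) return;   // prune this subtree
        out.insert(out.end(), node->elementIds.begin(), node->elementIds.end());
        proximityQuery(node->left.get(), query, out);
        proximityQuery(node->right.get(), query, out);
    }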
B59. A method according to Claim B58, wherein the method further comprises: determining, in response to the detection of a collision, an adjusted position for the avatar in the complex virtual environment.
B60. A method according to Claim B59, wherein determining an adjusted position comprises: determining a position of the avatar at the earliest point of contact with an intersecting element represented in the complex virtual environment; calculating a tangent plane at the point of contact with respect to a bounding volume for the avatar; projecting the desired position of the avatar onto the tangent plane; and setting an adjusted position for the avatar at approximately the projected point of the desired position on the tangent plane.
B61. A method according to Claim B59, wherein the method further comprises: setting a viewer position for rendering a frame to the adjusted position when a collision is detected.
B62. A method according to Claim B58, wherein the method further comprises: setting a viewer position for rendering a frame to the desired position if a collision is not detected.
C1. In a virtual reality (VR) system for providing displays of a complex virtual environment based on VR file(s), the VR file(s) including data defining dynamic tessellation elements and non-dynamic tessellation elements of the complex virtual environment, a method for performing memory management operations, the method comprising: allocating video memory in graphics hardware of the VR system and allocating accelerated graphics port (AGP) memory as part of system memory of the VR system; pre-loading, prior to simulating and rendering frames of the complex environment, all non-dynamic tessellation elements defined in the VR file(s) into memory, wherein pre-loading includes pre-loading non-dynamic tessellation elements into at least one of the video memory and the AGP memory; determining, after pre-loading all non-dynamic tessellation elements, whether memory space is available in at least one of the video memory and the AGP memory; and if memory space is determined to be available, pre-loading dynamic tessellation elements defined in the VR file(s) into the available memory space of the video memory or the AGP memory.
C2. In a VR system according to Claim C1 , wherein pre-loading dynamic tessellation elements comprises pre-loading dynamic tessellation elements in accordance with an initial frame of the complex environment to be rendered and displayed to a user.
C3. In a VR system according to Claim C1, wherein pre-loading all non-dynamic tessellation elements comprises: determining whether there is sufficient memory space in at least one of the video memory and the AGP memory for all of the non-dynamic tessellation elements; and when it is determined that there is not sufficient memory space in the video memory or the AGP memory, pre-loading any additional non-dynamic tessellation elements defined in the VR file(s) to available system memory of the VR system.
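A compact C++ sketch of the pre-loading order recited in Claims C1 and C3: every non-dynamic tessellation element is placed in video memory first, then AGP memory, then system memory, and dynamic tessellation elements are pre-loaded only into whatever fast memory remains. The Pool and Placement types are illustrative stand-ins; actual allocation would go through the graphics API, which the claims do not prescribe.

    #include <cstddef>
    #include <vector>

    struct Pool {                                   // illustrative fixed-capacity memory pool
        std::size_t capacity = 0;
        std::size_t used = 0;
        bool reserve(std::size_t bytes) {
            if (used + bytes > capacity) return false;
            used += bytes;
            return true;
        }
    };

    struct Element { std::size_t sizeBytes; bool dynamicTessellation; };
    enum class Placement { Video, Agp, System };

    // Pre-loading in the spirit of Claims C1 and C3.
    static std::vector<Placement> preload(const std::vector<Element>& elements,
                                          Pool& video, Pool& agp) {
        std::vector<Placement> placement(elements.size(), Placement::System);
        for (std::size_t i = 0; i < elements.size(); ++i) {        // pass 1: non-dynamic elements
            if (elements[i].dynamicTessellation) continue;
            if (video.reserve(elements[i].sizeBytes))      placement[i] = Placement::Video;
            else if (agp.reserve(elements[i].sizeBytes))   placement[i] = Placement::Agp;
            // else: remains in system memory (Claim C3)
        }
        for (std::size_t i = 0; i < elements.size(); ++i) {        // pass 2: dynamic, if space remains
            if (!elements[i].dynamicTessellation) continue;
            if (video.reserve(elements[i].sizeBytes))      placement[i] = Placement::Video;
            else if (agp.reserve(elements[i].sizeBytes))   placement[i] = Placement::Agp;
            // else: left to be tessellated on demand at render time
        }
        return placement;
    }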
C4. In a VR system according to Claim C1 , wherein the method further comprises buffering, in at least the video memory or the AGP memory, vertex data for rendered dynamic tessellation elements.
C5. In a VR system according to Claim C4, wherein the method further comprises: determining, for a dynamic tessellation element to be rendered, whether the dynamic tessellation element was rendered in a previous frame at a tessellation level that is the same as a tessellation level for the element in a frame to be rendered; and when it is determined that the dynamic tessellation element was rendered in a previous frame at the same tessellation level, using the buffered vertex data for the previous rendering of the dynamic tessellation element to facilitate the display of the frame to be rendered.
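The buffering and reuse recited in Claims C4 and C5 can be pictured as a small cache keyed by element id: if an element was last tessellated at the same tessellation level, its buffered vertex data are returned unchanged; otherwise the element is re-tessellated. The cache layout and the placeholder tessellate routine are assumptions for this sketch.

    #include <cstddef>
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    struct Vertex { float x, y, z; };

    struct CachedTessellation {
        int level = -1;                       // tessellation level the buffer was built for
        std::vector<Vertex> vertices;         // stands in for a buffer in video/AGP memory
    };

    class TessellationCache {
    public:
        const std::vector<Vertex>& vertices(std::uint32_t elementId, int level) {
            CachedTessellation& entry = cache_[elementId];
            if (entry.level != level) {       // level changed: re-tessellate and re-buffer
                entry.vertices = tessellate(elementId, level);
                entry.level = level;
            }
            return entry.vertices;            // same level as the previous frame: reuse buffer
        }

    private:
        // Placeholder; a real implementation would emit the triangles of the cone,
        // torus or surface-of-revolution element at the requested tessellation level.
        static std::vector<Vertex> tessellate(std::uint32_t, int level) {
            return std::vector<Vertex>(static_cast<std::size_t>(3 * (level + 1)));
        }
        std::unordered_map<std::uint32_t, CachedTessellation> cache_;
    };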
C6. In a VR system according to Claim C1 , wherein the method further comprises: estimating, for a dynamic tessellation element to be rendered, memory space requirements for storing rendering data for the dynamic tessellation element, the rendering data comprising vertex data defining one or more triangles representing the dynamic tessellation element; and determining whether there is sufficient memory space available in the video memory or the AGP memory for storing the rendering data based on the estimated memory space requirements.
C7. In a VR system according to Claim C6, wherein the method further comprises: using available system memory of the VR system for storing the rendering data of the dynamic tessellation element when it is determined that there is not sufficient memory space in at least one of the video memory and the AGP memory; and after dynamically tessellating the element, copying the rendering data from the system memory to a backend memory.
C8. In a VR system according to Claim C7, wherein allocating the video memory and the AGP memory comprises allocating the backend memory using a portion of at least one of the video memory and the AGP memory.
C9. In a VR system according to Claim C6, wherein estimating the memory space requirement comprises: computing the number of triangles for rendering the dynamic tessellation element based on a tessellation level for the element in the frame to be rendered; and estimating the memory space requirements as a function of the computed number of triangles and a predetermined number of bytes that are required for storing data defining the computed number of triangles.
C10. In a VR system according to Claim C9, wherein computing the number of triangles comprises computing the number of triangles from the tessellation level in accordance with dynamic tessellation element types, wherein the dynamic tessellation element types include at least one of an open cone element type, a closed cone element type, an open torus element type, a closed torus element type, and a surface-of-revolution element type.
C11. In a VR system according to Claim C9, wherein triangles are stored as part of a triangle list defined by vertex array data and index array data, and further wherein estimating the memory space requirements comprises determining the number of bytes required for storing vertex array data and index array data that define the computed number of triangles as part of the triangle list.
C12. In a VR system according to Claim C9, wherein triangles are stored as part of a triangle strip defined by vertex array data and index array data, and further wherein estimating the memory space requirements comprises determining the number of bytes required for storing vertex array data and index array data that define the computed number of triangles as part of the triangle strip.
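Claims C9 through C12 describe estimating memory from a triangle count derived from the tessellation level and element type, and from the bytes needed for the corresponding vertex and index arrays. The sketch below follows that shape, but the per-type triangle formulas, the 32-byte vertex layout and the 16-bit indices are placeholders chosen for illustration; the actual figures are not specified in the claims.

    #include <cstddef>

    enum class ElementType { OpenCone, ClosedCone, OpenTorus, ClosedTorus, SurfaceOfRevolution };

    // Illustrative triangle counts per tessellation level (placeholders only).
    static std::size_t triangleCount(ElementType type, int level) {
        std::size_t n = static_cast<std::size_t>(level);
        switch (type) {
            case ElementType::OpenCone:            return 2 * n;              // side wall only
            case ElementType::ClosedCone:          return 4 * n;              // side wall plus caps
            case ElementType::OpenTorus:           return 2 * n * n;
            case ElementType::ClosedTorus:         return 2 * n * n + 2 * n;
            case ElementType::SurfaceOfRevolution: return 2 * n * n;
        }
        return 0;
    }

    // Bytes for the vertex and index arrays of a triangle list or triangle strip,
    // in the spirit of Claims C11 and C12 (assumed: 32 bytes per vertex, 16-bit indices).
    static std::size_t estimateBytes(std::size_t triangles, bool asStrip) {
        const std::size_t bytesPerVertex = 32;
        const std::size_t bytesPerIndex  = 2;
        // A list uses three vertices/indices per triangle; a strip adds roughly one per
        // triangle after the first two (worst case, with no vertex sharing assumed).
        std::size_t vertices = asStrip ? triangles + 2 : 3 * triangles;
        std::size_t indices  = asStrip ? triangles + 2 : 3 * triangles;
        return vertices * bytesPerVertex + indices * bytesPerIndex;
    }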
C13. A method for performing memory management operations in a virtual reality (VR) system, the VR system being adapted to provide displays of a complex virtual environment based on VR file(s), the VR file(s) including data defining dynamic tessellation elements and non-dynamic tessellation elements, the method comprising: buffering, in memory of the VR system, vertex data for rendered dynamic tessellation elements; determining, for a dynamic tessellation element to be rendered, whether the dynamic tessellation element was rendered in a previous frame at a tessellation level that is the same as a tessellation level for the element in a frame to be rendered; and when it is determined that the dynamic tessellation element was rendered in a previous frame at the same tessellation level, using the buffered vertex data for the previous rendering of the dynamic tessellation element to facilitate the display of the frame to be rendered.
C14. A method according to Claim C13, wherein the method further comprises allocating video memory in graphics hardware of the VR system and allocating accelerated graphics port (AGP) memory as part of system memory of the VR system.
C15. A method according to Claim C14, wherein buffering comprises buffering vertex data in at least one of the video memory and the AGP memory.
C16. A method according to Claim C14, wherein the method further comprises: estimating, for a dynamic tessellation element to be rendered, memory space requirements for storing rendering data for the dynamic tessellation element, the rendering data comprising vertex data defining one or more triangles representing the dynamic tessellation element; and determining whether there is memory space available in the video memory or the AGP memory for storing the rendering data based on the estimated memory space requirements.
C17. A method according to Claim C16, wherein the method further comprises: using available system memory of the VR system for storing the rendering data of the dynamic tessellation element when it is determined that there is not sufficient memory space in the video memory or the AGP memory; and after dynamically tessellating the element, copying the rendering data from the system memory to a backend memory.
C18. A method according to Claim C17, wherein allocating the video memory and the AGP memory comprises allocating the backend memory using a portion of at least one of the video memory and the AGP memory.
C19. A method according to Claim C16, wherein estimating the memory space requirement comprises: computing the number of triangles for rendering the dynamic tessellation element based on a tessellation level for the element in the frame to be rendered; and estimating the memory space requirements as a function of the computed number of triangles and a predetermined number of bytes that are required for storing data defining the computed number of triangles.
C20. A method according to Claim C19, wherein computing the number of triangles comprises computing the number of triangles from the tessellation level in accordance with dynamic tessellation element types, wherein the dynamic tessellation element types include at least one of an open cone element type, a closed cone element type, an open torus element type, a closed torus element type, and a surface-of-revolution element type.
C21. A method according to Claim C19, wherein triangles are stored as part of a triangle list defined by vertex array data and index array data, and further wherein estimating the memory space requirements comprises determining the number of bytes required for storing vertex array data and index array data that define the computed number of triangles as part of the triangle list.
C22. A method according to Claim C19, wherein triangles are stored as part of a triangle strip defined by vertex array data and index array data, and further wherein estimating the memory space requirements comprises determining the number of bytes required for storing vertex array data and index array data that define the computed number of triangles as part of the triangle strip.
C23. A method according to Claim C14, wherein the method further comprises: pre-loading, prior to simulating and rendering frames of the complex environment, all non-dynamic tessellation elements defined in the VR file(s) into memory, wherein pre-loading includes pre-loading non-dynamic tessellation elements into at least one of the video memory and the AGP memory; determining, after pre-loading all non-dynamic tessellation elements, whether memory space is available in the video memory or the AGP memory; and if memory space is determined to be available, pre-loading dynamic tessellation elements defined in the VR file(s) into the available memory space of the video memory or the AGP memory.
C24. A method according to Claim C23, wherein pre-loading dynamic tessellation elements comprises pre-loading dynamic tessellation elements in accordance with an initial frame of the complex environment to be rendered and displayed to a user.
C25. A method according to Claim C23, wherein pre-loading all non-dynamic tessellation elements comprises: determining whether there is sufficient memory space in the video memory or the AGP memory for all of the non-dynamic tessellation elements; and when it is determined that there is not sufficient memory space in the video memory or the AGP memory, pre-loading any additional non-dynamic tessellation elements defined in the VR file(s) to available system memory of the VR system.
C26. A computer program product for providing interactive displays of a complex virtual environment, the computer program product comprising a computer readable medium embodying a computer program, the computer program comprising instructions that are executable by a system to perform a method for performing memory management operations, the method comprising: allocating video memory in graphics hardware of the VR system and allocating accelerated graphics port (AGP) memory as part of system memory of the VR system; estimating, for a dynamic tessellation element to be rendered, memory space requirements for storing rendering data for the dynamic tessellation element, the rendering data comprising vertex data defining one or more triangles representing the dynamic tessellation element; determining whether there is sufficient memory space available in at least one of the video memory and the AGP memory for storing the rendering data based on the estimated memory space requirements; and using the available memory space in at least one of the video memory and the AGP memory for dynamically tessellating the element when it is determined that there is sufficient memory space for the rendering data.
C27. A computer program product according to Claim C26, wherein the method further comprises: using available system memory of the VR system for storing the rendering data of the dynamic tessellation element when it is determined that there is not sufficient memory space in the video memory or the AGP memory; and after dynamically tessellating the element, copying the rendering data from the system memory to a backend memory.
C28. A computer program product according to Claim C27, wherein allocating the video memory and the AGP memory comprises allocating the backend memory using a portion of at least one of the video memory and the AGP memory.
C29. A computer program product according to Claim C26, wherein the method further comprises: buffering the vertex data for rendered dynamic tessellation elements in at least one of the video memory and the AGP memory.
C30. A computer program product according to Claim C29, wherein the method further comprises: determining, for a dynamic tessellation element to be rendered, whether the dynamic tessellation element was rendered in a previous frame at a tessellation level that is the same as a tessellation level for the element in a frame to be rendered; and when it is determined that the dynamic tessellation element was rendered in a previous frame at the same tessellation level, using the buffered vertex data for the previous rendering of the dynamic tessellation element to facilitate the display of the frame to be rendered.
C31. A computer program product according to Claim C26, wherein estimating the memory space requirement comprises: computing the number of triangles for rendering the dynamic tessellation element based on a tessellation level for the element in the frame to be rendered; and estimating the memory space requirements as a function of the computed number of triangles and a predetermined number of bytes that are required for storing data defining the computed number of triangles.
C32. A computer program product according to Claim C31 , wherein computing the number of triangles comprises computing the number of triangles from the tessellation level in accordance with dynamic tessellation element types, wherein the dynamic tessellation element types include at least one of an open cone element type, a closed cone element type, an open torus element type, a closed torus element type, and a surface-of-revolution element type.
C33. A computer program product according to Claim C31, wherein triangles are stored as part of a triangle list defined by vertex array data and index array data, and further wherein estimating the memory space requirements comprises determining the number of bytes required for storing vertex array data and index array data that define the computed number of triangles as part of the triangle list.
C34. A computer program product according to Claim C31, wherein triangles are stored as part of a triangle strip defined by vertex array data and index array data, and further wherein estimating the memory space requirements comprises determining the number of bytes required for storing vertex array data and index array data that define the computed number of triangles as part of the triangle strip.
C35. A computer program product according to Claim C26, wherein the method further comprises: pre-loading, prior to simulating and rendering frames of the complex environment, all non-dynamic tessellation elements defined in VR file(s) into memory, wherein pre-loading includes pre-loading non-dynamic tessellation elements into at least one of the video memory and the AGP memory; determining, after pre-loading all non-dynamic tessellation elements, whether memory space is available in the video memory or the AGP memory; and if memory space is determined to be available, pre-loading dynamic tessellation elements defined in the VR file(s) into the available memory space of the video memory or AGP memory.
C36. A computer program product according to Claim C35, wherein pre-loading dynamic tessellation elements comprises pre-loading dynamic tessellation elements in accordance with an initial frame of the complex environment to be rendered and displayed to a user.
C37. A computer program product according to Claim C35, wherein pre-loading all non-dynamic tessellation elements comprises: determining whether there is sufficient memory space in the video memory or the AGP memory for all of the non-dynamic tessellation elements; and when it is determined that there is not sufficient memory space in the video memory or the AGP memory, pre-loading any additional non-dynamic tessellation elements defined in the VR file(s) to available system memory of the VR system.
C38. A system for performing memory management operations, the system comprising: means for buffering, in memory of a virtual reality (VR) system, vertex data for rendered dynamic tessellation elements; means for determining, for a dynamic tessellation element to be rendered, whether the dynamic tessellation element was rendered in a previous frame at a tessellation level that is the same as a tessellation level for the element in a frame to be rendered; and means for using the buffered vertex data for the previous rendering of the dynamic tessellation element to facilitate the display of the frame to be rendered when it is determined that the dynamic tessellation element was rendered in a previous frame at the same tessellation level.
C39. A system according to Claim C38, wherein the system further comprises means for allocating video memory in graphics hardware of the VR system and allocating accelerated graphics port (AGP) memory as part of system memory of the VR system.
C40. A system according to Claim C39, wherein the buffering means comprises means for buffering the vertex data in at least one of the video memory and the AGP memory.
C41. A system according to Claim C39, wherein the system further comprises: means for estimating, for a dynamic tessellation element to be rendered, memory space requirements for storing rendering data for the dynamic tessellation element, the rendering data comprising vertex data defining one or more triangles representing the dynamic tessellation element; and means for determining whether there is sufficient memory space available in the video memory or the AGP memory for storing the rendering data based on the estimated memory space requirements.
C42. A system according to Claim C41 , wherein the system further comprises: means for using available system memory of the VR system for dynamically tessellating the element when it is determined that there is not sufficient memory space for the rendering data; and means for copying the rendering data from the system memory to a backend memory after dynamically tessellating the element.
C43. A system according to Claim C42, wherein the means for allocating the video memory and the AGP memory comprises means for allocating the backend memory using a portion of at least one of the video memory and the AGP memory.
C44. A system according to Claim C41 , wherein the means for estimating the memory space requirement comprises: means for computing the number of triangles for rendering the dynamic tessellation element based on a tessellation level for the element in the frame to be rendered; and means for estimating the memory space requirements as a function of the computed number of triangles and a predetermined number of bytes that are required for storing data defining the computed number of triangles.
C45. A system according to Claim C44, wherein the means for computing the number of triangles comprises means for computing the number of triangles from the tessellation level in accordance with dynamic tessellation element types, wherein the dynamic tessellation element types include at least one of an open cone element type, a closed cone element type, an open torus element type, a closed torus element type, and a surface-of-revolution element type.
C46. A system according to Claim C44, wherein triangles are stored as part of a triangle list defined by vertex array data and index array data, and further wherein the means for estimating the memory space requirements comprises means for determining the number of bytes required for storing vertex array data and index array data that define the computed number of triangles as part of the triangle list.
C47. A system according to Claim C44, wherein triangles are stored as part of a triangle strip defined by vertex array data and index array data, and further wherein the means for estimating the memory space requirements comprises means for determining the number of bytes required for storing vertex array data and index array data that define the computed number of triangles as part of the triangle strip.
C48. A system according to Claim C39, wherein the system further comprises: means for pre-loading, prior to simulating and rendering frames of the complex environment, all non-dynamic tessellation elements defined in VR file(s) into memory, wherein the pre-loading means comprises means for pre-loading non-dynamic tessellation elements into at least one of the video memory and the AGP memory; means for determining, after pre-loading all non-dynamic tessellation elements, whether memory space is available in the video memory or the AGP memory; and means for pre-loading dynamic tessellation elements defined in the VR file(s) into the available memory space of the video memory or the AGP memory when memory space is determined to be available.
C49. A system according to Claim C48, wherein the means for pre-loading dynamic tessellation elements comprises means for pre-loading dynamic tessellation elements in accordance with an initial frame of the complex environment to be rendered and displayed to a user.
C50. A system according to Claim C48, wherein the means for pre-loading all non-dynamic tessellation elements comprises: means for determining whether there is sufficient memory space in the video memory or the AGP memory for all of the non-dynamic tessellation elements; and means for pre-loading any additional non-dynamic tessellation elements defined in the VR file(s) to available system memory of the VR system when it is determined that there is not sufficient memory space in the video memory or the AGP memory.
D1. A method for rendering frames of a complex virtual environment based on an octree file, the octree file including data defining an octree for the complex environment that includes spatially partitioned cells, the cells of the octree containing dynamic tessellation elements and non-dynamic tessellation elements, the method comprising: determining, using the octree for the complex virtual environment, cells that are visible based on a viewer position for a frame to be rendered; identifying, in each of the visible cells, dynamic tessellation elements; and dynamically tessellating the identified dynamic tessellation elements in the visible cells, wherein dynamically tessellating the identified dynamic tessellation elements comprises: determining, for each identified dynamic tessellation element, a tessellation level; and rendering each identified dynamic tessellation element according to the determined tessellation level.
D2. A method according to Claim D1, wherein determining visible cells comprises: determining a view frustum for the frame to be rendered based on the viewer position; traversing the octree to identify non-visible cells that are outside of the view frustum; and culling the identified non-visible cells of the octree.
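The frustum culling traversal of Claim D2 is commonly implemented with a conservative box-versus-plane test: a cell is discarded only when it lies entirely behind some frustum plane, and surviving cells are visited recursively. The sketch below uses that standard test; the Plane representation and the OctreeCell layout are assumptions for illustration.

    #include <array>
    #include <memory>
    #include <vector>

    struct Vec3  { float x, y, z; };
    struct Plane { Vec3 n; float d; };        // points with dot(n, p) + d >= 0 are inside
    struct Aabb  { Vec3 min, max; };

    struct OctreeCell {
        Aabb bounds;
        std::array<std::unique_ptr<OctreeCell>, 8> children;
        std::vector<int> elementIds;
    };

    // A cell is outside the frustum if its "positive vertex" for some plane lies behind that plane.
    static bool outsideFrustum(const Aabb& box, const std::array<Plane, 6>& frustum) {
        for (const Plane& pl : frustum) {
            Vec3 p { pl.n.x >= 0 ? box.max.x : box.min.x,
                     pl.n.y >= 0 ? box.max.y : box.min.y,
                     pl.n.z >= 0 ? box.max.z : box.min.z };
            if (pl.n.x * p.x + pl.n.y * p.y + pl.n.z * p.z + pl.d < 0) return true;
        }
        return false;
    }

    // Traversal in the spirit of Claim D2: cull subtrees outside the view frustum,
    // collect everything else as potentially visible.
    static void collectVisible(const OctreeCell* cell, const std::array<Plane, 6>& frustum,
                               std::vector<const OctreeCell*>& visible) {
        if (!cell || outsideFrustum(cell->bounds, frustum)) return;
        visible.push_back(cell);
        for (const auto& child : cell->children)
            collectVisible(child.get(), frustum, visible);
    }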
D3. A method according to Claim D2, wherein determining visible cells further comprises: analyzing a list of occluder candidates to identify occluder candidates that are within the view frustum; processing the occluder candidates that are within the view frustum to identify cells of the octree that are fully occluded by one or more occluder candidates; and culling the identified cells that are fully occluded.
D4. A method according to Claim D3, wherein processing the occluder candidates comprises: determining a quality metric for each of the occluder candidates that are within the view frustum; sorting the occluder candidates in descending order in accordance with the determined quality metric for each occluder candidate; and analyzing, over a predetermined time period, the sorted occluder candidates to identify cells that are fully occluded by the occluder candidates.
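The occluder processing of Claim D4 sorts the candidates by a quality metric and spends only a bounded amount of time per frame testing them. The sketch below shows that control flow; the particular metric (projected area over squared distance) and the time budget are assumptions, since the claim only requires some quality metric and some predetermined time period.

    #include <algorithm>
    #include <chrono>
    #include <vector>

    struct Occluder { int id; float area; float distance; };

    // Illustrative quality metric: larger and closer occluders first.
    static float qualityMetric(const Occluder& o) {
        return o.area / (1.0f + o.distance * o.distance);
    }

    // Processing in the spirit of Claim D4. 'tryOccluder' is a caller-supplied callback
    // that tests one candidate against the octree and culls any fully occluded cells.
    template <typename TryOccluderFn>
    static void processOccluders(std::vector<Occluder> candidates, TryOccluderFn tryOccluder,
                                 std::chrono::microseconds budget) {
        std::sort(candidates.begin(), candidates.end(),
                  [](const Occluder& a, const Occluder& b) {
                      return qualityMetric(a) > qualityMetric(b);   // descending quality
                  });
        auto start = std::chrono::steady_clock::now();
        for (const Occluder& o : candidates) {
            if (std::chrono::steady_clock::now() - start > budget) break;   // time period exhausted
            tryOccluder(o);
        }
    }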
D5. A method according to Claim D4, wherein analyzing the sorted occluder candidates comprises: determining if each of the sorted occluder candidates fully occludes cells that are within the view frustum; and if an occluder candidate is determined to fully occlude one or more cells, culling each fully occluded cell.
D6. A method according to Claim D4, wherein analyzing the sorted occluder candidates comprises: determining if each of the sorted occluder candidates partially occludes cells that are within the view frustum; and if an occluder candidate is determined to partially occlude one or more cells, analyzing neighboring occluder candidates to determine if each partially occluded cell is fully occluded by a combination of occluder candidates.
D7. A method according to Claim D1, wherein the method further comprises: estimating a screen size for each identified dynamic tessellation element; determining if the estimated screen size is equal to or greater than a predetermined threshold; and filtering dynamic tessellation elements that are determined to have an estimated screen size equal to or greater than the predetermined threshold so that only filtered dynamic tessellation elements are dynamically tessellated and displayed to a user.
D8. A method according to Claim D7, wherein estimating a screen size comprises estimating a screen size based on dynamic tessellation element types, the dynamic tessellation element types including at least one of a cone element type, a torus element type, and a surface-of-revolution element type.
D9. A method according to Claim D7, wherein determining a tessellation level comprises setting a tessellation level to zero for identified dynamic tessellation elements that have a screen size less than the predetermined threshold.
D10. A method according to Claim D1 , wherein determining a tessellation level comprises determining a tessellation level based on one or more dynamic tessellation element types, the dynamic tessellation element types including a cone element type, a torus element type or a surface-of-revolution element type.
D11. A method according to Claim D10, wherein determining a tessellation level further comprises applying one or more predefined clamp levels to restrict a value of the determined tessellation level, wherein the clamp levels are predefined with respect to one or more dynamic tessellation element types.
D12. A method according to Claim D1, wherein rendering each identified element according to the determined tessellation level comprises: determining the number of triangles for rendering each dynamic tessellation element based on the determined tessellation level; and generating vertex data defining the determined number of triangles for rendering the dynamic tessellation element.
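Claims D7 through D11 describe choosing a tessellation level from an estimated screen size, skipping elements below a threshold and clamping the level per element type. The sketch below follows that shape; the projection formula, the 4-pixel threshold, the square-root growth rule and the clamp values are all assumptions for illustration and are not taken from the claims.

    #include <algorithm>
    #include <cmath>

    enum class ElementType { Cone, Torus, SurfaceOfRevolution };

    // Rough screen-size estimate (projected radius in pixels) for an element's bounding sphere.
    static float estimateScreenSize(float boundingRadius, float distance, float focalLengthPixels) {
        return distance > 0.0f ? focalLengthPixels * boundingRadius / distance : 1e9f;
    }

    // Tessellation level selection in the spirit of Claims D7-D11: below the threshold the
    // element gets level 0 (and is filtered out); otherwise the level grows with screen size
    // and is clamped per element type.
    static int tessellationLevel(ElementType type, float screenSizePixels) {
        const float thresholdPixels = 4.0f;                            // assumed (Claim D9)
        if (screenSizePixels < thresholdPixels) return 0;
        int level = static_cast<int>(std::sqrt(screenSizePixels));     // assumed growth rule
        int clamp = (type == ElementType::Torus) ? 24 : 16;            // assumed clamps (Claim D11)
        return std::min(level, clamp);
    }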
D13. A method according to Claim D12, wherein the method further comprises storing, in a buffer memory, the vertex data for rendered dynamic tessellation elements.
D14. A method according to Claim D1 , wherein the method further comprises: determining whether an identified dynamic tessellation element was rendered in a previous frame at a tessellation level that is the same as the determined tessellation level for the frame to be rendered; and using stored vertex data of the previous rendering of the dynamic tessellation element at the same tessellation level to facilitate the display of the frame to be rendered.
D15. A method for performing rendering operations for providing interactive displays of a complex virtual environment, the rendering operations being performed as part of a simulation loop, the method comprising: determining, using an octree of cells for the complex virtual environment, cells that are visible based on a viewer position for a frame to be rendered, the cells of the octree containing elements of the complex environment; identifying, in each of the visible cells, dynamic tessellation elements; and dynamically tessellating the identified dynamic tessellation elements in the visible cells, wherein determining visible cells based on the viewer position comprises: determining a view frustum for the frame to be rendered based on the viewer position; traversing the octree to identify non-visible cells that are outside of the view frustum; and culling the identified non-visible cells of the octree.
D16. A method according to Claim D15, wherein dynamically tessellating the identified dynamic tessellation elements comprises: determining, for each identified dynamic tessellation element, a tessellation level; and rendering each identified dynamic tessellation element according to the determined tessellation level.
D17. A method according to Claim D16, wherein the method further comprises: estimating a screen size for each identified dynamic tessellation element; determining if the estimated screen size is equal to or greater than a predetermined threshold; and filtering dynamic tessellation elements that are determined to have an estimated screen size equal to or greater than the predetermined threshold so that only filtered dynamic tessellation elements are dynamically tessellated and displayed to a user.
D18. A method according to Claim D17, wherein estimating a screen size comprises estimating a screen size based on dynamic tessellation element types, the dynamic tessellation element types including at least one of a cone element type, a torus element type, and a surface-of-revolution element type.
D19. A method according to Claim D17, wherein determining a tessellation level comprises setting a tessellation level to zero for identified dynamic tessellation elements that have a screen size less than the predetermined threshold.
D20. A method according to Claim D16, wherein determining a tessellation level comprises determining a tessellation level based on one or more dynamic tessellation element types, the dynamic tessellation element types including a cone element type, a torus element type or a surface-of-revolution element type.
D21. A method according to Claim D20, wherein determining a tessellation level further comprises applying one or more predefined clamp levels to restrict a value of the determined tessellation level, wherein the predefined clamp levels are predefined with respect to the dynamic tessellation element types.
D22. A method according to Claim D16, wherein rendering each identified element according to the determined tessellation level comprises: determining the number of triangles for rendering each dynamic tessellation element based on the determined tessellation level; and generating vertex data defining the determined number of triangles for rendering the dynamic tessellation element.
D23. A method according to Claim D22, wherein the method further comprises storing, in a buffer memory, the vertex data for rendered dynamic tessellation elements.
D24. A method according to Claim D16, wherein the method further comprises: determining whether an identified dynamic tessellation element was rendered in a previous frame at a tessellation level that is the same as the determined tessellation level for the frame to be rendered; and using stored vertex data for the previous rendering of the dynamic tessellation element at the same tessellation level to facilitate the display of the frame to be rendered.
D25. A method according to Claim D15, wherein determining visible cells further comprises: analyzing a list of occluder candidates to identify occluder candidates that are within the view frustum; processing the occluder candidates that are within the view frustum to identify cells of the octree that are fully occluded by one or more occluder candidates; and culling the identified cells that are fully occluded.
D26. A method according to Claim D25, wherein processing the occluder candidates comprises: determining a quality metric for each of the occluder candidates that are within the view frustum; sorting the occluder candidates in accordance with the determined quality metric for each occluder candidate; and analyzing, over a predetermined time period, the sorted occluder candidates to identify cells that are fully occluded by the occluder candidates.
D27. A method according to Claim D26, wherein analyzing the sorted occluder candidates comprises: determining if each of the sorted occluder candidates fully occludes cells that are within the view frustum; and if an occluder candidate is determined to fully occlude one or more cells, culling each fully occluded cell.
D28. A method according to Claim D26, wherein analyzing the sorted occluder candidates comprises: determining if each of the sorted occluder candidates partially occludes cells that are within the view frustum; and if an occluder candidate is determined to partially occlude one or more cells, analyzing neighboring occluder candidates to determine if each partially occluded cell is fully occluded by a combination of occluder candidates.
D29. A computer program product for providing interactive displays of a complex virtual environment, the computer program product comprising a computer readable medium embodying a computer program, the computer program comprising instructions that are executable by a system to perform a rendering method to facilitate the display of frames of the complex environment, the method comprising: determining, using an octree of cells that spatially partition the complex virtual environment, elements of the complex environment that are visible based on a viewer position for a frame to be rendered, the elements of the complex environment including dynamic tessellation elements and non-dynamic tessellation elements; and for each visible element that is a dynamic tessellation element, dynamically tessellating the element by: determining a tessellation level for the visible dynamic tessellation element based on the viewer position for the frame to be rendered; and generating vertex data for rendering the visible dynamic tessellation element based on the determined tessellation level.
D30. A computer program product according to Claim D29, wherein cells of the octree contain one or more elements of the complex environment and further wherein determining visible elements comprises: determining a view frustum for the frame to be rendered based on the viewer position; traversing the octree to identify non-visible cells that are outside of the view frustum; and culling the identified non-visible cells of the octree.
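Claim D30 (like claim D48) traverses the octree and culls cells whose bounding boxes fall outside the view frustum. A minimal sketch of such a traversal follows, assuming six frustum planes and the conventional plane/box "positive vertex" test; the data layout is a generic choice, not taken from the application.

```cpp
// Minimal sketch of octree frustum culling: cells whose bounding boxes fall
// outside the view frustum are culled together with all of their children.
// The plane/AABB test and the data layout are conventional assumptions.
#include <array>
#include <memory>
#include <vector>

struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; };          // n . p + d >= 0 means "inside"
using Frustum = std::array<Plane, 6>;

struct OctreeNode {
    Vec3 boxMin, boxMax;                                  // cell bounding box
    bool visible = false;
    std::vector<std::unique_ptr<OctreeNode>> children;    // up to 8 child cells
};

// Returns true if the box is at least partially on the inner side of every plane.
bool boxIntersectsFrustum(const Vec3& mn, const Vec3& mx, const Frustum& f) {
    for (const Plane& p : f) {
        // Pick the box corner farthest along the plane normal ("positive vertex").
        Vec3 v{p.n.x >= 0 ? mx.x : mn.x,
               p.n.y >= 0 ? mx.y : mn.y,
               p.n.z >= 0 ? mx.z : mn.z};
        if (p.n.x * v.x + p.n.y * v.y + p.n.z * v.z + p.d < 0)
            return false;                   // completely outside this plane
    }
    return true;
}

void frustumCull(OctreeNode& node, const Frustum& frustum) {
    node.visible = boxIntersectsFrustum(node.boxMin, node.boxMax, frustum);
    if (!node.visible) return;              // whole subtree is culled
    for (auto& child : node.children)
        frustumCull(*child, frustum);
}

int main() {
    OctreeNode root{{-10.f, -10.f, -10.f}, {10.f, 10.f, 10.f}};
    // A simple axis-aligned "frustum" (a box of half-size 20) for illustration.
    Frustum f{{ {{1.f, 0.f, 0.f}, 20.f}, {{-1.f, 0.f, 0.f}, 20.f},
                {{0.f, 1.f, 0.f}, 20.f}, {{0.f, -1.f, 0.f}, 20.f},
                {{0.f, 0.f, 1.f}, 20.f}, {{0.f, 0.f, -1.f}, 20.f} }};
    frustumCull(root, f);
    return root.visible ? 0 : 1;
}
```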
D31. A computer program product according to Claim D30, wherein determining visible elements further comprises: analyzing a list of occluder candidates to identify occluder candidates that are within the view frustum; processing the occluder candidates that are within the view frustum to identify cells of the octree that are fully occluded by one or more occluder candidates; and culling the identified cells that are fully occluded.
D32. A computer program product according to Claim D31, wherein processing the occluder candidates comprises: determining a quality metric for each of the occluder candidates that are within the view frustum; sorting the occluder candidates in accordance with the determined quality metric for each occluder candidate; and analyzing, over a predetermined time period, the sorted occluder candidates to identify cells that are fully occluded by the occluder candidates.
D33. A computer program product according to Claim D32, wherein analyzing the sorted occluder candidates comprises: determining if each of the sorted occluder candidates fully occludes cells that are within the view frustum; and if an occluder candidate is determined to fully occlude one or more cells, culling each fully occluded cell.
D34. A computer program product according to Claim D32, wherein analyzing the sorted occluder candidates comprises: determining if each of the sorted occluder candidates partially occludes cells that are within the view frustum; and if an occluder candidate is determined to partially occlude one or more cells, analyzing neighboring occluder candidates to determine if each partially occluded cell is fully occluded by a combination of occluder candidates.
D35. A computer program product according to Claim D29, wherein the method further comprises: estimating a screen size for each identified dynamic tessellation element; determining if the estimated screen size is equal to or greater than a predetermined threshold; and filtering dynamic tessellation elements that are determined to have an estimated screen size equal to or greater than the predetermined threshold so that only filtered dynamic tessellation elements are dynamically tessellated and displayed to a user.
D36. A computer program product according to Claim D35, wherein determining a tessellation level comprises setting a tessellation level to zero for identified dynamic tessellation elements that have a screen size less than the predetermined threshold.
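Claims D35 and D36 filter dynamic tessellation elements by their estimated screen size and assign tessellation level zero below a threshold. The sketch below estimates the projected size of an element's bounding sphere and maps it to a level; the projection formula, the 20-pixel threshold, and the level cap are illustrative assumptions.

```cpp
// Minimal sketch of the screen-size filter: the projected size of an element's
// bounding sphere decides whether it is dynamically tessellated at all, and at
// which level. The formula, threshold and level mapping are assumptions.
#include <algorithm>
#include <cmath>
#include <cstdio>

// Approximate on-screen diameter, in pixels, of a bounding sphere.
float estimateScreenSize(float radius, float distance, float fovY, int viewportHeight) {
    if (distance <= radius) return static_cast<float>(viewportHeight);  // viewer at/inside the element
    float projected = radius / (distance * std::tan(fovY * 0.5f));
    return projected * viewportHeight;
}

// Elements below the threshold get tessellation level zero; larger elements
// get a level that grows with their apparent size (capped here at 4).
int tessellationLevel(float screenSize, float threshold) {
    if (screenSize < threshold) return 0;
    return std::min(4, static_cast<int>(screenSize / threshold));
}

int main() {
    const float fovY = 1.047f;      // ~60 degrees, in radians
    const int   height = 768;       // viewport height in pixels
    const float threshold = 20.0f;  // assumed pixel threshold
    const float distances[] = {2.0f, 10.0f, 50.0f, 200.0f};
    for (float distance : distances) {
        float size = estimateScreenSize(/*radius=*/0.5f, distance, fovY, height);
        std::printf("distance %6.1f -> %6.1f px -> level %d\n",
                    distance, size, tessellationLevel(size, threshold));
    }
    return 0;
}
```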
D37. A computer program product according to Claim D29, wherein generating vertex data for rendering visible dynamic tessellation elements comprises: determining the number of triangles for rendering each dynamic tessellation element based on the determined tessellation level; and generating vertex data defining the determined number of triangles for rendering the dynamic tessellation element.
D38. A computer program product according to Claim D29, wherein the method further comprises storing, in a buffer memory, the vertex data for rendered dynamic tessellation elements.
D39. A computer program product according to Claim D29, wherein the method further comprises: determining whether a visible dynamic tessellation element was rendered in a previous frame at a tessellation level that is the same as the determined tessellation level for the frame to be rendered; and using stored vertex data of the previous rendering of the dynamic tessellation element at the same tessellation level to facilitate the display of the frame to be rendered.
D40. In a system for providing interactive displays of a complex virtual environment, a graphics system for rendering frames based on virtual reality (VR) file(s), the VR file(s) including data defining a list of occluder candidates and an octree of cells that contain one or more elements of the complex environment, the graphics system comprising: means for determining, using the octree for the complex virtual environment, cells that are not visible based on a viewer position for a frame to be rendered; means for culling all cells of the octree that are determined not to be visible; means for analyzing the list of occluder candidates to determine cells of the octree that are occluded based on the viewer position; means for culling all cells of the octree that are determined to be occluded; and means for rendering elements contained in the visible, non-culled cells of the octree to facilitate the display of the frame to a user.
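Claim D40 (and claim D49) recite the overall per-frame pipeline: frustum culling, then occlusion culling, then rendering of the surviving cells. The outline below shows only that structure; the culling and rendering functions are stubs standing in for the techniques sketched above.

```cpp
// High-level outline of the per-frame rendering pipeline: frustum culling,
// occlusion culling, then drawing whatever remains visible. The stubs and
// names are assumptions; only the ordering is the point.
#include <vector>

struct Cell { bool visible = true; /* elements, bounding box, ... */ };
struct Camera { /* position, orientation, projection, ... */ };

void cullAgainstFrustum(std::vector<Cell>&, const Camera&) { /* see frustum sketch */ }
void cullOccludedCells(std::vector<Cell>&, const Camera&)  { /* see occluder sketch */ }
void renderCell(const Cell&) { /* dynamic + non-dynamic tessellation elements */ }

void renderFrame(std::vector<Cell>& cells, const Camera& camera) {
    cullAgainstFrustum(cells, camera);   // drop cells outside the view frustum
    cullOccludedCells(cells, camera);    // drop cells hidden behind occluders
    for (const Cell& cell : cells)
        if (cell.visible)
            renderCell(cell);            // only visible, non-culled cells are drawn
}

int main() {
    std::vector<Cell> cells(4);
    Camera camera;
    renderFrame(cells, camera);
    return 0;
}
```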
D41. A system according to Claim D40, wherein the graphics system further comprises: means for identifying, in each of the visible cells of the octree, dynamic tessellation elements; means for determining, for each identified dynamic tessellation element, a tessellation level; and means for generating vertex data for rendering each identified dynamic tessellation element with a level of detail consistent with the determined tessellation level.
D42. A system according to Claim D41, wherein the means for generating vertex data comprises: means for determining the number of triangles for rendering each dynamic tessellation element based on the determined tessellation level; and means for generating vertex data defining the determined number of triangles for rendering the dynamic tessellation element.
D43. A system according to Claim D41, wherein the system further comprises means for storing, in a buffer memory, the vertex data for rendered dynamic tessellation elements.
D44. A system according to Claim D41, wherein the graphics system further comprises: means for determining whether a visible dynamic tessellation element was rendered in a previous frame at a tessellation level that is the same as the determined tessellation level for the frame to be rendered; and means for using stored vertex data of the previous rendering of the dynamic tessellation element at the same tessellation level to facilitate the display of the frame to be rendered.
D45. A system according to Claim D41, wherein the graphics system further comprises: means for estimating a screen size for each identified dynamic tessellation element; means for determining if the estimated screen size is equal to or greater than a predetermined threshold; and means for filtering dynamic tessellation elements that are determined to have an estimated screen size equal to or greater than the predetermined threshold so that only filtered dynamic tessellation elements are dynamically tessellated and displayed to a user.
D46. A system according to Claim D45, wherein the means for determining a tessellation level comprises means for setting a tessellation level to zero for identified dynamic tessellation elements that have a screen size less than the predetermined threshold.
D47. A system according to Claim D40, wherein the means for analyzing the list of occluder candidates comprises: means for determining a quality metric for each occluder candidate that is contained within a visible cell; means for sorting the occluder candidates according to the determined quality metric for each occluder candidate; and means for analyzing, over a predetermined time period, each of the sorted occluder candidates to detect cells that are fully occluded in the frame to be rendered.
D48. A system according to Claim D40, wherein the means for determining cells that are not visible comprises: means for determining a view frustum for the frame to be rendered based on the viewer position; and means for traversing the octree to identify non-visible cells that are outside of the view frustum.

D49. A method for rendering frames of a complex virtual environment based on an octree of cells that spatially partition the complex virtual environment, the cells of the octree containing one or more elements of the complex environment, the method comprising: determining, using the octree for the complex virtual environment, cells that are not visible based on a viewer position for the frame to be rendered; culling all cells of the octree that are determined not to be visible; analyzing a list of occluder candidates to determine cells of the octree that are occluded based on the viewer position; culling all cells of the octree that are determined to be occluded; and rendering elements contained in visible cells of the octree to display the frame to a user, wherein the elements contained in the visible cells include dynamic tessellation elements and non-dynamic tessellation elements.
D50. A method according to Claim D49, further comprising: identifying, in each visible cell of the octree, dynamic tessellation elements; determining, for each identified dynamic tessellation element, a tessellation level; and rendering each identified element with a level of detail corresponding to the determined tessellation level.
D51. A method according to Claim D50, wherein rendering each identified dynamic tessellation element comprises: determining the number of triangles for rendering a dynamic tessellation element based on the determined tessellation level; and generating vertex data defining the determined number of triangles for rendering the dynamic tessellation element.

D52. A method according to Claim D51, wherein the method further comprises storing, in a buffer memory, the vertex data for rendered dynamic tessellation elements.
D53. A method according to Claim D50, wherein the method further comprises: determining whether a visible dynamic tessellation element was rendered in a previous frame at a tessellation level that is the same as the determined tessellation level for the frame to be rendered; and using stored vertex data of the previous rendering of the dynamic tessellation element at the same tessellation level to facilitate the display of the frame to be rendered.
D54. A method according to Claim D50, wherein the method further comprises: estimating a screen size for each identified dynamic tessellation element; determining if the estimated screen size is equal to or greater than a predetermined threshold; and filtering dynamic tessellation elements that are determined to have an estimated screen size equal to or greater than the predetermined threshold so that only filtered dynamic tessellation elements are dynamically tessellated and displayed to a user.
D55. A method according to Claim D54, wherein determining a tessellation level comprises setting a tessellation level to zero for identified dynamic tessellation elements that have a screen size less than the predetermined threshold.

D56. A method according to Claim D49, wherein analyzing a list of occluder candidates comprises: determining a quality metric for each occluder candidate that is contained within a visible cell; sorting the occluder candidates according to the determined quality metric for each occluder candidate; and analyzing, over a predetermined time period, each of the sorted occluder candidates to detect cells that are fully occluded in the frame to be rendered.
PCT/IB2002/005579 2001-12-21 2002-12-20 Virtual reality files for complex environments WO2003054804A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2002353394A AU2002353394A1 (en) 2001-12-21 2002-12-20 Virtual reality files for complex environments

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US10/023,997 2001-12-21
US10/023,996 US6809738B2 (en) 2001-12-21 2001-12-21 Performing memory management operations to provide displays of complex virtual environments
US10/023,999 US6791549B2 (en) 2001-12-21 2001-12-21 Systems and methods for simulating frames of complex virtual environments
US10/023,818 2001-12-21
US10/023,999 2001-12-21
US10/023,818 US20030117398A1 (en) 2001-12-21 2001-12-21 Systems and methods for rendering frames of complex virtual environments
US10/023,997 US20030117397A1 (en) 2001-12-21 2001-12-21 Systems and methods for generating virtual reality (VR) file(s) for complex virtual environments
US10/023,996 2001-12-21

Publications (2)

Publication Number Publication Date
WO2003054804A2 true WO2003054804A2 (en) 2003-07-03
WO2003054804A3 WO2003054804A3 (en) 2003-11-13

Family

ID=27487243

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2002/005579 WO2003054804A2 (en) 2001-12-21 2002-12-20 Virtual reality files for complex environments

Country Status (2)

Country Link
AU (1) AU2002353394A1 (en)
WO (1) WO2003054804A2 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999041705A1 (en) * 1998-02-17 1999-08-19 Sun Microsystems, Inc. Visible-object determination for interactive visualization
EP1139296A2 (en) * 2000-03-31 2001-10-04 Kabushiki Kaisha Toshiba Apparatus and method for obtaining shape data of analytic surface approximate expression

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
GUEZIEC A ET AL: "A framework for streaming geometry in VRML" IEEE COMPUTER GRAPHICS AND APPLICATIONS, MARCH-APRIL 1999, IEEE, USA, vol. 19, no. 2, pages 68-78, XP000927826 ISSN: 0272-1716 *
JIMENEZ P ET AL: "3D collision detection: a survey" COMPUTERS & GRAPHICS, APRIL 2001, ELSEVIER, UK, vol. 25, no. 2, pages 269-285, XP004232467 ISSN: 0097-8493 *
KLOSOWSKI J T ET AL: "Efficient collision detection using bounding volume hierarchies of k-DOPs" IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, JAN.-MARCH 1998, IEEE, USA, vol. 4, no. 1, pages 21-36, XP002237414 ISSN: 1077-2626 *
PAJAROLA R: "Large scale terrain visualization using the restricted quadtree triangulation" PROCEEDINGS VISUALIZATION '98 (CAT. NO.98CB36276), PROCEEDINGS OF VISUALIZATION '98, RESEARCH TRIANGLE PARK, NC, USA, 18-23 OCT. 1998, pages 19-26, 515, XP010321005 1998, Piscataway, NJ, USA, IEEE, USA ISBN: 0-8186-9176-X *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118608730A (en) * 2024-07-25 2024-09-06 歌尔股份有限公司 Collision detection method, apparatus, and readable storage medium

Also Published As

Publication number Publication date
AU2002353394A8 (en) 2003-07-09
AU2002353394A1 (en) 2003-07-09
WO2003054804A3 (en) 2003-11-13

Similar Documents

Publication Publication Date Title
US6791549B2 (en) Systems and methods for simulating frames of complex virtual environments
US6809738B2 (en) Performing memory management operations to provide displays of complex virtual environments
US20030117397A1 (en) Systems and methods for generating virtual reality (VR) file(s) for complex virtual environments
US9852538B2 (en) System and method of reducing transmission bandwidth required for visibility-event streaming of interactive and non-interactive content
Tang et al. Interactive continuous collision detection between deformable models using connectivity-based culling
Teschner et al. Collision detection for deformable objects
US8350846B2 (en) Updating ray traced acceleration data structures between frames based on changing perspective
US6750859B2 (en) Size conditioned visibility search system and method
US6300965B1 (en) Visible-object determination for interactive visualization
CA2225017C (en) Method and apparatus for rapidly rendering computer generated images of complex structures
US20030117398A1 (en) Systems and methods for rendering frames of complex virtual environments
US20120236001A1 (en) Tree Insertion Depth Adjustment Based on View Frustrum and Distance Culling
WO2002045025A9 (en) Multiple processor visibility search system and method
WO2002054351A2 (en) Using ancillary geometry for visibility determination
Hastings et al. Optimization of large-scale, real-time simulations by spatial hashing
Weller et al. kDet: Parallel constant time collision detection for polygonal objects
JP4829885B2 (en) Method and system for identifying proximity regions between several digitally simulated geometric objects
JPH05266213A (en) High-speed display system for three-dimensional space data
US7050053B2 (en) Geometric folding for cone-tree data compression
Lau et al. A collision detection framework for deformable objects
WO2003054804A2 (en) Virtual reality files for complex environments
Huagen et al. A parallel collision detection algorithm based on hybrid bounding volume hierarchy
Weller et al. Kinetic Separation Lists for Continuous Collision Detection of Deformable Objects.
Lau et al. Large a collision detection framework for deformable objects
Sunar et al. Accelerating virtual walkthrough with visual culling techniques

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SC SD SE SG SK SL TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP