WO2003083822A1 - Three dimensional volumetric display input and output configurations - Google Patents

Three dimensional volumetric display input and output configurations

Info

Publication number
WO2003083822A1
WO2003083822A1 (PCT/US2003/002341)
Authority
WO
WIPO (PCT)
Prior art keywords
display
recited
volumetric
dimensional
volume
Prior art date
Application number
PCT/US2003/002341
Other languages
French (fr)
Inventor
Gordon Kurtenbach
George Fitzmaurice
Ravin Balakrishnan
Original Assignee
Silicon Graphics, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US10/183,945 external-priority patent/US7554541B2/en
Priority claimed from US10/183,944 external-priority patent/US7324085B2/en
Priority claimed from US10/183,968 external-priority patent/US7205991B2/en
Priority claimed from US10/183,966 external-priority patent/US7839400B2/en
Priority claimed from US10/188,765 external-priority patent/US7138997B2/en
Priority claimed from US10/183,970 external-priority patent/US6753847B2/en
Application filed by Silicon Graphics, Inc. filed Critical Silicon Graphics, Inc.
Priority to AU2003214910A priority Critical patent/AU2003214910A1/en
Publication of WO2003083822A1 publication Critical patent/WO2003083822A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00: Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/50: Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033: Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346: Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041: Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/046: Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by electromagnetic means
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04812: Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815: Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons

Definitions

  • the present invention is directed to input and output configurations for three-dimensional volumetric displays and, more particularly, to input configurations that allow the content of a three-dimensional volumetric display output configuration to be affected by actions by a user operating within an input configuration.
  • the present invention is directed to a system for managing data within a volumetric display and, more particularly, to a system that uses volume windows to manage data within a volumetric display.
  • the present invention is directed to providing two-dimensional (2D) widgets in three-dimensional (3D) displays and, more particularly, to mapping a 2D widget into a volumetric display at a position where it can be easily used, such as on the outside surface of the volumetric display inside an enclosure for the display.
  • the present invention is directed to providing graphical user interface widgets or interface elements that are viewable from different viewpoints in a volumetric display and, more particularly, to a system where a widget is produced that can be viewed and operated from multiple viewpoints.
  • the present invention is directed to a system for rotating a class of three-dimensional (3D) displays called volumetric displays and, more particularly, to a system that allows a user to rotate the display to view different parts of the scene within the display without having to move or walk around the display.
  • 3D: three-dimensional
  • the present invention is directed to a system that allows users to point at objects within a volumetric display system, and, more particularly to a system that allows a number of different pointing approaches and pointing tools.
  • A class of three-dimensional (3D) displays, called volumetric displays, is currently undergoing rapid advancement.
  • the types of displays in this class include holographic displays, swept volume displays and static volume displays.
  • Volumetric displays allow for three-dimensional (3D) graphical scenes to be displayed within a true 3D volume.
  • Such displays can take many shapes, including cylinders, globes, domes, cubes, arbitrary shapes, etc., with a dome being a typical shape. Because the technology of these displays is undergoing rapid development, those of skill in the art are concentrating on the engineering of the display itself. As a result, the man-machine interface, that is, the input/output configurations with which people interact with these types of displays, is receiving scant attention.
  • volumetric displays allow a user to view different parts of a true 3D scene
  • the act of viewing the different parts typically requires that the user physically move around (or over) the display or that the display be moved or rotated in front of the user.
  • graphical objects may also move relative to the user.
  • When the display is relatively stationary or when it is relatively moving, the user may need to interact with the display.
  • what the user needs is an effective mechanism for interacting with the display.
  • 3D volumetric displays require mechanisms for the general management and placement of data within these types of displays. What is needed is a system for managing the volume(s) in a volumetric display.
  • graphical user interface elements, sometimes called widgets, may also move relative to the user.
  • the widget is a two-dimensional (2D) interface, such as a menu, a file tree, a virtual keyboard, or a display/view of a two-dimensional document, such as a list or spreadsheet.
  • 2D: two-dimensional
  • a solution is to place the 2D widgets anywhere within the display. This can result in the intermingling of widgets and data, which may not be desirable. Additionally, if the 2D widget is placed in the 3D scene space, complex 3D selection techniques may be needed to avoid selecting scene elements when the widget is the intended target.
  • the act of viewing the different parts typically requires that the user physically move around the display. For example, if the user wants to view the backside of a scene including a building, the user must move to the backside of the display to see the back of the building. This movement is typically performed by the user walking around the display. Requiring the user to physically move around the display for an extended period of time is probably not the best way to work with these types of displays. Some movements may also be impractical, such as moving above the display for a view from above.
  • the user may need to interact with the display by pointing to something, such as a model object to, for example, paint the object, or to select the object for some function such as to move the object or select a control on an interface of the object.
  • the object to which the user needs to point may be at any level within the display from the surface of the display adjacent the enclosure to the farthest distance within the display from the enclosure or the user.
  • It is an aspect of the present invention to establish a spatial relationship between the volumetric pointer and the user's body position, specifically the position of their hands. Movements of the hands and body position have a significant spatial congruence with the volumetric pointer/pointers.
  • the above aspects can be attained by a system that allows a number of 3D volumetric display configurations, such as dome, cubical and cylindrical volumetric display enclosures, to interact with a number of different input configurations, for example, a three-dimensional position sensing system, a planar position sensing system and a non-planar position sensing system.
  • the user interacts with the input configurations, such as by moving a stylus on a sensing grid formed on an enclosure surface. This interaction affects the content of the volumetric display, for example, by moving a cursor within the 3D display space of the volumetric display.
  • volume windows have the typical functions, such as minimize, resize, etc., that operate in a volume.
  • Application data, such as a surface texture of a model, is assigned to the windows responsive to which applications are assigned to which windows in a volume window data structure.
  • Input events, such as a mouse click, are assigned to the windows responsive to whether they are spatial or non-spatial. Spatial events are assigned to the window surrounding the event and non-spatial events are assigned to the active or working window or to the root.
  • the above aspects can be attained by a system that places user interface widgets in positions in a 3D volumetric display where they can be used with ease and directness.
  • the widgets are placed on the shell or outer edge of a volumetric display, in a ring around the outside bottom of the display, in a plane within the display and/or at the user's focus of attention.
  • Virtual 2D widgets are mapped to volumetric display voxels and control actions in the 3D volume are mapped to controls of the widgets.
  • a system that provides a volumetric widget that can be viewed and interacted with from any location around a volumetric display.
  • a widget can be provided by duplicating the widget for each user, by providing a widget with multiple viewing surfaces or faces, by rotating the widget and by orienting a widget toward a location of the user.
  • the above aspects can be attained by a system that allows a user to physically rotate a three-dimensional volumetric display enclosure with a corresponding rotation of the display contents. This allows the user to remain in one position while being able to view different parts of the displayed scene from different viewpoints.
  • the display contents can be rotated in direct correspondence with the display enclosure or with a gain that accelerates the rotation of the contents with respect to the physical rotation of the enclosure. Any display widgets in the scene, such as a virtual keyboard, can be maintained stationary with respect to the user while scene contents rotate.
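A rough Python sketch of the rotational-gain behavior described above; this is illustrative only, assumes a single rotational degree of freedom, and all names (content_angle, enclosure_delta, gain) are hypothetical:

```python
import math

def update_rotation(content_angle, widget_angle, enclosure_delta, gain=1.0):
    """Rotate the display contents with the enclosure while widgets stay put.

    content_angle, widget_angle: current orientations in radians.
    enclosure_delta: physical rotation of the enclosure since the last update.
    gain: 1.0 gives direct correspondence; >1.0 accelerates the contents'
    rotation with respect to the physical rotation of the enclosure.
    """
    content_angle = (content_angle + gain * enclosure_delta) % (2 * math.pi)
    # Display widgets such as a virtual keyboard are kept stationary with
    # respect to the user, so widget_angle is deliberately left unchanged.
    return content_angle, widget_angle
```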
  • the above aspects can be attained by a system that creates a user manipulable volumetric pointer within a volumetric display.
  • the user can point by aiming a beam, positioning an input device in three-dimensions, touching a surface of the display enclosure, inputting position coordinates, manipulating keyboard direction keys, moving a mouse, etc.
  • the cursor can take a number of different forms, including a point, a graphic such as an arrow, a volume, a ray, a bead, a ring and a plane.
  • the user designates an input position and the system maps the input position to a 3D position within the volumetric display.
  • the system also determines whether any object has been designated by the cursor and performs any function activated in association with that designation.
  • Figure 1001 shows a volumetric display.
  • Figures 1002, 1003 and 1004 depict 3D to 3D system configurations.
  • Figures 1005, 1006 and 1007 depict 2D to 3D configurations.
  • Figure 1008 shows a non-planar to 3D configuration.
  • Figures 1009, 1010, 1011, 1012, 1013 and 1014 show configurations with physical intermediaries.
  • Figure 1015 depicts components of the system.
  • Figures 1016A, 1016B, 1016C and 1016D illustrate digitizer embodiments.
  • Figures 1017, 1018A and 1018B show a dome shaped digitizer.
  • Figure 1019 depicts the operations of the system.
  • Figure 2001 depicts a volumetric display.
  • Figure 2002 shows a user managing the volume with a gesture-controlled plane.
  • Figures 2003a and 2003b illustrate volume management through space compression.
  • Figures 2004a and 2004b show object movement spatial volume management.
  • Figure 2005 depicts the components of the present invention.
  • Figure 2006 shows row volume windows.
  • Figure 2007 shows column volume windows.
  • Figure 2008 depicts pie or wedge shaped volume windows.
  • Figures 2009a and 2009b illustrate views of cubic volume windows.
  • Figures 2010a and 2010b show arbitrary shaped windows.
  • Figures 2011a, 2011b, 2012a and 2012b show different types of volume display strategies for volumes windows.
  • Figure 2013 illustrates volume window controls.
  • Figure 2014 depicts operations of a volume manager in initiating a volume window.
  • Figure 2015 shows operations of an application manager.
  • Figure 2016 depicts a data structure used in volume management.
  • Figure 3001 depicts a volumetric display system.
  • Figure 3002 illustrates alternatives in arranging 2D widgets with a volumetric display.
  • Figure 3003 shows a user's position and gaze range.
  • Figure 3004 depicts components of the present invention.
  • Figure 3005 illustrates the volumetric nature of voxels.
  • Figures 3006A and 3006B depict configurations of voxels within a display.
  • Figure 3007 depicts mapping from a 2D virtual representation of a widget to a volumetric voxel version of the widget.
  • Figure 3008 shows the operations involved in interacting with the widget.
  • Figure 4001 depicts a volumetric display.
  • Figure 4002 shows a user viewpoint moving with respect to a planar user interface (UI) element (top view).
  • UI: user interface
  • Figure 4003 depicts the hardware of the present invention.
  • Figure 4004 shows view ranges of widget faces of a volumetric widget arranged to allow any location to view the widget.
  • Figures 4005A - 4005D depict omni-directional volumetric widgets.
  • Figure 4006 shows a volumetric display with an array of user location detectors.
  • Figures 4007A - 4007C show a volumetric widget with faces corresponding to and oriented toward user locations.
  • Figure 4008 is a flowchart of operations that prevent the faces of a volumetric widget from occluding each other.
  • Figures 4009A - 4009C depict a sequence of face movements to eliminate facial occlusion.
  • Figure 4010 shows clustering viewpoints.
  • Figures 4011A - 4011C show back and forth rotation of a volumetric widget.
  • Figure 4012 depicts selection operations for a rotating widget.
  • Figure 4013 shows a volumetric widget having a rotating part and a stationary control part.
  • Figure 5001 depicts a user rotating an enclosure and the corresponding display with one degree of freedom.
  • Figure 5002 shows rotations with two or three degrees of freedom.
  • Figures 5003 and 5004 illustrate components of a rotating enclosure.
  • Figure 5005 depicts the digital hardware of the present invention.
  • Figure 5006 shows the operations associated with rotating an enclosure and display contents.
  • Figures 5007A - 5007C and 5008A - 5008C depict unity and positive rotational gain, respectively.
  • Figure 5009 illustrates operations with respect to widget objects within a rotating display.
  • Figures 5010A - 5010C show maintaining widgets stationary with respect to a user while scene objects rotate.
  • Figure 5011 depicts time based or spatial rotation operations.
  • Figure 6001 depicts a volumetric display.
  • Figures 6002A-6002B show tablet input devices associated with the display.
  • Figure 6002C shows tablets with regions corresponding to the volumetric display.
  • Figure 6003 illustrates a surface restricted cursor.
  • Figures 6004A and 6004B show user interaction with the volumetric display.
  • Figures 6005A and 6005B show 3D interaction with the volumetric display.
  • Figure 6006 shows pointing with a beam.
  • Figures 6007A - 6007C show floating cursors.
  • Figure 6008 depicts hardware of the invention.
  • Figures 6009A - 6009D illustrate several types of digitizer displays.
  • Figure 6010 depicts a vector based cast ray.
  • Figures 6011A - 6011C show planar based cast rays.
  • Figure 6012 shows a surface tangent cast ray.
  • Figures 6013A and 6013B depict a fixed relationship between an input device and a ray based cursor.
  • Figure 6014 shows a cursor of intersecting rays.
  • Figures 6015A - 6015C show a bead cursor.
  • Figure 6016 depicts a ring cursor.
  • Figure 6017 illustrates a cone cursor.
  • Figure 6018 shows a cylinder cursor.
  • Figures 6019A and 6019B show a plane cursor.
  • Figures 6020A and 6020B illustrate a region of influence.
  • Figures 6021A and 6021B depict cursor guidelines.
  • Figure 6022 depicts object control with a ray.
  • Figure 6023 shows user-following track pads.
  • Figure 6024 illustrates the operations for a floating or surface cursor.
  • Figure 6025 illustrates operations for a ray pointer.
  • Figures 6026A and 6026B illustrate additional pointers.
  • Volumetric displays allow a user to have a true three-dimensional (3D) view of a scene 1012 and are typically provided in the form of a dome 1014, as depicted in figure 1001.
  • the user 1016, as can be surmised from figure 1001, can move about the dome 1014 to view different parts of the scene 1012. From a particular arbitrary viewpoint or position, a user may want to interact with the scene or content within the volumetric display.
  • a 3D volumetric input space is mapped to a 3D volumetric display space.
  • the user's hand 1030 is tracked via a glove or a set of cameras in a volume 1032 directly below the display volume 1034.
  • a virtual representation of the hand 1036, or some other type of position indicator, such as a cursor, is superimposed into the 3D output volumetric display 1034.
  • the 3D display 1050 is surrounded by a 3D input space 1052, created by a 3D volume input system, such as the Flock of Birds system from Ascension Technology Corporation.
  • the user's hand 1054 including a position indicator/sensor, is mapped to a cursor 1056 or some other position indicator representation, such as a virtual hand, within the display 1050.
  • the position sensor also produces a vector that indicates which direction the sensor is pointing.
  • the vector can be used to create a cursor in the enclosure at a fixed position along the vector.
  • the system infers an input vector based on the position of the input device and the center of the display. This spatial relationship or correspondence between the input space, output space and user position is dynamically updated as the user moves about the display. That is, the input/output space is automatically compensated/reconfigured.
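A minimal sketch of this inferred-vector mapping, assuming positions are 3D coordinates and that the cursor sits at a fixed (hypothetical) offset along the vector from the input device toward the display center:

```python
import numpy as np

def cursor_from_device(device_pos, display_center, offset=0.1):
    """Infer an input vector from the device position toward the display
    center and place the cursor a fixed distance along that vector.
    Recomputing this per frame gives the automatic compensation as the
    user (and device) move about the display."""
    device = np.asarray(device_pos, dtype=float)
    direction = np.asarray(display_center, dtype=float) - device
    norm = np.linalg.norm(direction)
    if norm == 0.0:
        return device  # degenerate case: device exactly at the center
    return device + (direction / norm) * offset
```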
  • Another configuration is to use half-silvered mirrors 1070 (see figure 1004) to combine the volumetric image 1072 with the user's view of their hands in a hand movement volume. This way, the user sees their hands operating within the display.
  • Another alternative is to use a camera to capture the user's hands in the input space and superimpose them onto the volumetric display space.
  • Another alternative is an augmented-reality system where the user has a see-through, head mounted display (LCD) which is being tracked. As the user moves the position and orientation of their head, graphics are presented on the LCD display and are aligned with real-world objects.
  • Another solution is to map a planar 2D input space into a 3D output space. This is particularly useful in controlling some subset of the 3D volumetric display space. For example, a standard 2D digitizing tablet or digitizer 1090 (see figure 1005) or a regular mouse can be mapped to control aspects of the 3D scene, such as moving 3D objects along two dimensions.
  • a further solution is to map a planar 2D input space to a planar 2D space within the 3D output space of the display, as depicted in figure 1006.
  • the system maps the input space of a digitizing tablet 1110 and the tilt/orientation of the tablet as sensed by a tilt/orientation sensor 1112 to a corresponding planar space 1114 in the display 1116.
  • the angle of the plane 1114 is responsive to the sensor 1112.
  • When the display enclosure 1130 has planar surfaces (e.g., a cubic enclosure), the enclosure surface is used as the planar input device, as depicted in figure 1007. It is also possible to use a transparent digitizer superimposed over an LCD display.
  • Still another solution is to map a non-planar 2D input space to a 3D output space.
  • the system uses the display enclosure 1140 as the input space (i.e., the enclosure is a transparent digitizing input surface).
  • the system has a surface that detects and tracks a variety of input devices.
  • A digital stylus 1180, as shown in figure 1010, where a point and an orientation can be input, or a Rockin'Mouse shaped device 1190, as shown in figure 1011 (see U.S. Patent 6,115,028), also allowing a point and an orientation to be input, is used.
  • a surface fitting wireless mouse such as a curved (concave) bottom mouse, can be used with a curved surface output configuration. This type of mouse can also be park-able using electrostatic, magnetic or some other sticky method of removably adhering the mouse to the display surface. Using a mouse has the advantage of buttons and form factors with which people are familiar. In this situation, the surface to display mapping discussed above is performed.
  • input devices 1200 such as buttons, keyboards, sliders, touch-pads, mice and space-ball type devices, etc.
  • the input devices, such as buttons for up, down, forward, backward, left and right motions, allowing multiple degrees of freedom, are used to control the position of a cursor in the same way such buttons control the position of a cursor in a 2D system.
  • the input devices 1210, 1212, 1214 may need to be "repeated" (i.e., have more than one of each along the perimeter) to allow for simultaneous use by many users, or for use from any position the user may be standing/sitting at, as shown in figure 1013.
  • the mounting platform 1220 that houses these devices could be made moveable (rotatable) around the display, as depicted in figure 1014, so that users can easily bring the required device within reach by simply moving the platform.
  • These devices typically communicate wirelessly by radio or infrared signals.
  • the position of the movable device also provides information about the user's position or viewpoint.
  • the present invention is typically embodied in a system as depicted in figure 1015 where physical interface elements 1230, such as a rotary dome position encoder, infrared user position detectors, a keyboard, touch sensitive dome enclosure surface, mouse, beam pointer, beam pointer with thumbwheel, stylus and digitizer pad or stylus and stylus sensitive dome enclosure surface, stylus with pressure sensor, flock-of-birds, etc., are coupled to a computer 1232, such as a server class machine.
  • the computer 1232 uses a graphical creation process, such as the animation package MAYA available from Alias
  • This process, using position inputs from the input configurations as discussed herein, also creates the virtual interface elements, such as a virtual hand, a 3D point cursor, a 3D volume cursor, a pointing beam, a bead, etc.
  • the display output, including the scene and interface elements, is provided to a volumetric display apparatus configuration 1234, such as one that will produce a 3D holographic display, as discussed herein.
  • a dome shaped enclosure 1250 has a dome shaped digitizing tablet as depicted in figure 1016A.
  • the dome shaped enclosure 1256 (see figure 1016B) is used with a rectangular or cylindrical shaped digitizing tablet 1258.
  • a cylindrical or cubical enclosure 1260 is used with a cylindrical or cubical digitizer surface.
  • the enclosure 1264 is dome shaped (or cubical or cylindrical) and the digitizing surface 1266 is planar as depicted in figure 1016D.
  • a digitizer 1280 determines a position of a stylus or pointer 1282 relative to a surface 1284, such as a transparent dome surface, having a checkerboard-type, closely spaced positional grid 1286 thereon when seen from above.
  • a processor 1288 determines the coarse position of the pointer relative to the grid by sampling the grid lines through a set of multiplexers 1290 and 1292.
  • An error correction system 1294 generates and outputs a true position of the pointer 1282 relative to the surface 1284 to a computer system 1232 (see figure 1015).
  • the pointer 1282 typically includes an electromagnetic transducer for inducing a signal in the positional grid 1286 and the processor 1288 is coupled to the positional grid 1286 for sensing the signal and generating the coarse position of the pointer 1282.
  • the transducers also allow the determination of a vector from grid signals that indicates in which direction the pointer 1282 is pointing. Touch sensitive input surfaces operate in a similar fashion.
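The sensing chain described above might be approximated as in the sketch below, assuming the multiplexers expose a signal strength per grid line and that error correction reduces to a per-cell calibration lookup; all names are hypothetical:

```python
def coarse_position(row_signals, col_signals):
    """Pick the row and column with the strongest induced signal to get a
    coarse pointer cell, as the processor does by sampling the grid lines
    through the multiplexers."""
    row = max(range(len(row_signals)), key=lambda i: row_signals[i])
    col = max(range(len(col_signals)), key=lambda j: col_signals[j])
    return row, col

def corrected_position(row, col, correction_table):
    """Apply a per-cell calibration offset (standing in for the error
    correction system) to produce the true surface position."""
    dx, dy = correction_table.get((row, col), (0.0, 0.0))
    return row + dx, col + dy
```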
  • the positional grid 1286 can be applied to a surface of an enclosure, such as a dome shaped enclosure 1310, as depicted in figures 1018A and 1018B.
  • Figures 1018A and 1018B (an exploded view) show a section 1312 of the dome surface including an inner substrate 1314 and an outer substrate 1316 between which is sandwiched the grid 1318.
  • the substrates comprise transparent materials, such as glass or plastic.
  • the computer system 1232 performs a number of operations as depicted in figure 1019.
  • the operations include obtaining 1330 the coordinate systems of the input device and the volumetric display. The range of the coordinate systems is also obtained so that out-of-space conditions can be determined.
  • the system samples 1332 positional outputs of the input device, such as the digitizer, mouse, flock-of-birds, etc., to obtain the location of the user's input. This information can also include information about where the user is pointing.
  • This position (and orientation if desired) is mapped 1334 into a 3D position within the volumetric display using the coordinate system (and the orientation vector, if needed).
  • the cursor or other position indicating representation is drawn 1336 at the mapped position within the volumetric display.
  • the mapping may involve determining a position on the surface that is being touched by a digitizing stylus, projecting a ray into the enclosure from the touch position where the ray is oriented by the pointing vector of the input stylus and positioning the cursor at a variable or fixed position along the ray.
  • Another mapping causes relative motion of a 3D input device such as a glove to be imparted to a cursor when a motion function is activated.
  • Other mappings as discussed in the related applications are possible.
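Pulling the operations of figure 1019 together, one sampling pass might look like the sketch below, with the device and display drivers abstracted as callables; every name here is a hypothetical stand-in:

```python
def update_cursor(sample_input, map_to_display, draw_cursor, cursor_offset=0.05):
    """Sample the input device (1332), map the result to a 3D display
    position (1334) and draw the cursor there (1336)."""
    surface_pos, pointing_vec = sample_input()  # e.g. stylus touch + orientation
    base = map_to_display(surface_pos)          # coordinate-system mapping (1330/1334)
    # One possible mapping: cast a ray into the enclosure from the touch
    # position, oriented by the stylus pointing vector, and place the
    # cursor a fixed distance along that ray.
    cursor = tuple(b + v * cursor_offset for b, v in zip(base, pointing_vec))
    draw_cursor(cursor)
    return cursor
```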
  • When a digitizing enclosure surface is the input configuration, the operations allow the user to interact with a surface of a three-dimensional (3D) volumetric display and affect the 3D content of the display responsive to the interaction.
  • the interaction involves the user manipulating the stylus in a sensing region of the digitizing grid, the mapping of the stylus position to a 3D display position and the creation of a cursor at a 3D display position.
  • The cursor, in one of a number of different possibilities, is created at a distance offset from a tip of the stylus along a pointing vector of the stylus.
  • the cursor can be used to perform typical functions such as selecting, painting, dragging/dropping, etc.
  • the present invention has been described with respect to input configurations where commands are input through position sensing type devices, such as a mouse, a pointer, touch sensitive surface, etc. It is also possible to use other types of input configurations, such as non-spatial configurations.
  • One non-spatial input space or configuration is a conventional voice or speech recognition system. In this configuration a voice command, such as "down", is recognized and the selected object or volume is moved accordingly, in this case down. The object is moved down in the display space at a constant slow rate until it reaches the bottom or until another command, such as "stop", is input and recognized.
  • A user position sensing system inputs the user position; the position is used to determine the relative position of the active object with respect to the user, or the vector pointing from the user to the object. This vector is used to determine a direction for object movement. To move the object closer, it is moved along the vector toward the user, that is, in a negative direction. Again, the motion continues until a blocking object is encountered or another command is recognized.
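A sketch of how such recognized commands might drive object motion, assuming simple vector positions, a z-up coordinate convention and an illustrative step size (all hypothetical):

```python
import numpy as np

def apply_voice_command(command, obj_pos, user_pos, step=0.01):
    """Move the selected object per recognized command: 'down' descends at
    a constant slow rate; 'closer' moves along the user-to-object vector
    in the negative direction, i.e. toward the user."""
    obj_pos = np.asarray(obj_pos, dtype=float)
    if command == "down":
        return obj_pos + np.array([0.0, 0.0, -step])  # assumes z is up
    if command == "closer":
        to_object = obj_pos - np.asarray(user_pos, dtype=float)
        direction = to_object / np.linalg.norm(to_object)
        return obj_pos - direction * step  # negative direction along the vector
    return obj_pos  # "stop" or unrecognized: no motion
```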
  • Another non-spatial input configuration uses non-speech sounds, such as tones from a conventional multifrequency tone generator. Each multifrequency combination corresponds to a command and a conventional tone recognition system is used to convert the sounds to commands.
  • the input space or configuration could also use conventional eye-tracking/head-tracking technologies, alone or in combination with other input configurations.
  • Volumetric displays allow a user to have a true three-dimensional (3D) view of a 3D scene 2012 and are typically provided in the form of a dome 2014, as depicted in figure 2001.
  • the user 2016, as can be surmised from figure 2001, can move about the dome 2014 to view different parts of the scene 2012. From a particular arbitrary viewpoint or position, a user may want to interact with one or more regions within or scenes/content within the volumetric display.
  • A volume window (VW) is a volume region within the volumetric display delimited from other parts/regions (or volumes) of the volumetric display by 3D bounding boxes or 3D boundaries, to allow users and the system to distinguish between the windows.
  • the volumetric window boundaries can be delineated in a number of ways.
  • each VW can (1) have a wireframe border along the edges of the volume, (2) have a wireframe with a thickness, as in a "bezeled" edge, (3) have a background color within the volume that differs from the empty space within the 3D display, or (4) have a "floor" that is a solid color or pattern, with the "roof" of the VW outlined in a wireframe or a solid pattern.
  • the display and management system preferably will allow visibility of a volume from any viewpoint.
  • volumetric display has a VW in the shape of a cube.
  • One face of a VW could be "open" while the remaining faces of the cube appear opaque.
  • the contents of the VW are only visible when the user can see the "open" side of the VW.
  • Volume windows can be active or inactive. This allows the system, for example, to direct input events to a target (active) volume window when multiple VWs are available.
  • a portion of the active volume window is preferably "highlighted" to differentiate it among the other VWs.
  • the border bezel or titlebar of an active volume window may turn a brighter/darker color compared to the other VWs.
  • the solutions also provide mechanisms for general management of the placement of the volumes within the display and the data within the volumes.
  • the user is allowed to define volumes, delineate sub-portions of a working volume, divide the working volume into sub-volumes, move volumes within the display, compact or compress volumes, and establish relationships between volumes.
  • parent and sibling volumes can be defined such that when an act is performed on the parent, the siblings react.
  • When a parent VW is closed, all of the sibling VWs are also closed.
  • Another example has a sibling VW attached to the border of a parent VW. When the parent VW moves, so does the sibling VW, etc.
  • the solutions include extending basic operations of 2D window managers, such as drag/drop to operate with volumetric user interfaces.
  • the solutions also allow users to interact with the volumes (or display space).
  • the users can use gestures to delineate sub-portions of the working volume.
  • Figure 2002 shows a user 2030 using gestures to specify an operational plane 2032 within the volume 2034; if the command associated with the plane is "divide volume", the effect is to divide the volume into two sub-volumes or volume windows.
  • In this gesture operation, the positions of the hands are sensed using a touch sensitive surface of the display 2034 or a 3D position sensing system, and a virtual plane is created between the contact points.
  • a more specialized use is to create more space within the working volume, either by compacting objects within the space or by moving objects out of the way.
  • a gesture can be given to "crush" the 3D scene 2040 along a specific horizontal plane 2042 (see figures 2003a and 2003b).
  • the main elements of the scene 2040 would still be visible along the floor of the dome display 2044 to provide context while allowing manipulation access to objects 2046 along the "back" of the scene.
  • a volume window can be iconified in the same way.
  • another style of 2D window management can be employed where the volumes are tiled instead of overlapping or cascaded.
  • the full screen is used whenever possible, and growing one edge of a volume window shrinks the adjoining window by the same amount. Windows never overlap in this tiling situation.
  • Another gesture command would cause objects being pushed to shrink in size to create more space.
  • Another example, as shown in figures 2004a and 2004b, is a partition or scale of the 3D scene using a "separation gesture" where the user specifies a start position 2060 (figure 2004a) with their hands together and then separates the hands to make space 2062 (figure 2004b). This has the effect of using virtual planes to part the 3D space, either translating the two halves or scaling the two halves (essentially scaling the scene to fit in the existing space).
  • To determine which volume window should receive an input event, the system performs a "pick detection".
  • the window manager cycles through its parent windows, passing along the input event and essentially asking if any window is interested. Since each window knows its bounding box, it can determine if the event occurred in its 3D spatial volume. Ultimately, the system can determine if an event happened outside any volume window (e.g., it started on the "desktop"). The system can behave differently for events that fall outside of VWs (e.g., perform some window management functions).
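A minimal pick-detection sketch, assuming the default CUBIC bounding boxes described later (bottom-front and top-back corners); the window attributes are hypothetical stand-ins:

```python
def pick_detection(windows, event_pos):
    """Cycle through the parent volume windows asking whether any is
    interested: each window knows its bounding box, so a containment test
    against its 3D spatial volume decides. Returns None for events that
    happened outside any volume window (i.e., on the "desktop")."""
    x, y, z = event_pos
    for w in windows:
        (x0, y0, z0), (x1, y1, z1) = w.bounding_box  # bottom-front, top-back
        if x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1:
            return w
    return None
```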
  • the present invention is typically embodied in a system, as depicted in figure 2005, where physical interface elements 2080, such as a rotary dome position encoder, infrared user position detectors, a keyboard, touch sensitive dome enclosure surface, mouse, beam pointer, beam pointer with thumbwheel, stylus and digitizer pad or stylus and stylus sensitive dome enclosure surface, stylus with pressure sensor, flock-of-birds, etc., are coupled to a computer 2082, such as a server class machine.
  • the computer 2082 uses a graphical creation process, such as the animation package MAYA available from Alias
  • This process, using position inputs from the input configurations, also creates virtual interface elements, such as a virtual hand, a 3D point cursor, a 3D volume cursor, a pointing beam, a bead, etc., suitable for manipulating the volume windows.
  • the display output, including the volume windows, scenes and interface elements, etc., is provided to a volumetric display apparatus configuration 2084, such as one that will produce a 3D holographic display, as discussed herein.
  • a volumetric display space can be divided into a number of sub-volumes or volume windows in a number of different ways or with different organizations.
  • Figure 2006 depicts a cubic volumetric display space 2100 having a basic division into planar cubic type "row" windows 2102 while figure 2007 depicts division of a cubic display space 2120 into planar "column” windows 2122.
  • Figure 2008 shows a dome or cylinder display space 2140 divided into pie wedge volume windows 2142.
  • These sub-volumes can be created using the planar definition approach mentioned previously with respect to figure 2002. A user could manually define the volumetric space and create all of these subvolumes. In this approach, the system senses the position and orientation of one or a pair of pointing devices which are used to define a plane within the volume creating two windows.
  • the plane is then designated as a bounding plane between two volumes. However, in practice, this is preferably done by the system adhering to a particular space management policy selected by the user.
  • the user selects an icon representing the type of volume, designates a 3D origin or a default origin, and the system draws the predefined volume at the origin.
  • the system would have a new slice predefined and shrink the pre-existing slices by some percentage to make room, such that the resizing of the slices is done by the system. This shrinking would happen more in a "fully tiled" policy. Otherwise, when someone requests a new VW, it would overlap with existing VWs. In a free-for-all overlapping policy, the system would "crush" or deflate the volumes into a 2D representation or smaller 3D representation and push them to the edge of the volumetric display (or to some designated taskbar).
  • Figures 2009a and 2009b show a dome shaped volumetric display 2160 including a volumetric display space 2162 having three cubic shaped volume windows 2164, 2166 and 2168. If the space 2162 is considered to be a root volume window, the spaces 2164, 2166 and 2168 are volume windows. These volume windows are created by the application used to view the data being selected (e.g., selecting a file and issuing the "open" command, causes the system to determine what application to use to display the data).
  • a data file of the application has information such as preferred shape, size and position that is set as a default or that the application retained from the last time this VW was opened. Creating new volumes typically occurs from within the application.
  • the user also can specify the scale as well as the position of the new volume.
  • Preferably, the new volumes are positioned in the center of the volume.
  • the (0,0,0) coordinate origin will preferably be in the center of the base of the display. The particular placement of the origin is not as important as is the establishment of a standard.
  • Figures 2010a and 2010b illustrate a dome shaped volumetric display 2180 including a volumetric display space 2182 including two volume windows, a curved-box 2184 and an oval-tube 2186.
  • These shapes can be chosen by a user picking from a list of pre-defined volumes from within an application. Thus, the user just needs to specify the position and scale of the volumes.
  • Simple volumes can be constructed using standard practices found in 3D graphics programs, such as Alias
  • an existing 3D object can be selected and a new volume space can be defined based on the shape of the selected object.
  • Figures 2011a, 2011b, 2012a and 2012b depict additional volumes managed by the present invention.
  • the volumes do not intersect and are in a "tiled" configuration, and the management system preferably allows the volume 2192 to be seen through the volume 2190 when the user is viewing the display from above (figure 2011a).
  • A working window, if enlarged, would result in the shrinking of the displayed portion of an abutting volume. This is in contrast to the situation shown in figures 2012a and 2012b, where the volume windows overlap and the (active/working) window 2194 takes display precedence over volume window 2196 and is the input focus.
  • A volume window has height, width and depth. It is possible for a volume window to essentially have a minimal depth, such that it is one voxel deep and is a 2D window with 3D characteristics.
  • Figure 2013 illustrates a volume window 2210, actually a visible bounding box of a volume window, having an attached volume activation region 2212 that acts like the title bar at the top of a typical 2D window.
  • the title bar will also have an optional text label (e.g., name of data file and/or application) and other graphic marks signifying status information and/or to identify the application running within the volume.
  • the title or activation bar 2212 is typically attached to the volume to which it is assigned and conforms to the shape of the volume.
  • the title bar 2212 signals the orientation of the volume and what side of the volume is the front.
  • the title bar can be inside or outside the volume to which it is assigned.
  • the title bar is a volume and preferably has a high priority for display such that it may only be clipped in limited circumstances.
  • the title bar also preferably has a preferred front "face" of the data volume where it appears, and the volume is assigned the highest precedence or priority in the display memory/data structure so that it is completely displayed.
  • When the title bar 2212 is selected, the volume 2210 becomes the active working volume. Dragging the title bar will also perform a move volume window operation.
  • Within the activation region 2212 are four controls that could be considered among the typical controls for a volume window. These controls include a move volume window control 2214, a maximize volume window control 2216, a minimize volume window control 2218 and a resize volume window control 2220. These controls function in a manner similar to the controls in a 2D window display system.
  • the move volume control 2214, when activated, allows the user to select the volume and move it to another location in a drag and drop type operation.
  • a pointer device is used to select and activate the control 2214 by, for example, having a beam, created responsive to the pointer device, intersect the control.
  • a button on the pointer, when activated, causes the selection of the control intersected by the beam.
  • the volume activation region of a window intersected by the beam becomes the volume selected for moving when a pointer device button is depressed. Once the volume window is selected, it moves with the movement of the beam until the button is released, similar to the drag and drop of a 2D window.
  • the depth of the moving volume window along the beam as it is moved is typically controlled by another control device, such as a thumb wheel on the pointer device.
  • the position of the bounding box for the window is updated in the volume window data structure.
  • the user can swing the beam to move the volume transversely and use the thumb wheel to move the window closer to or further away from the user.
  • In this title bar move operation, a 3D volume is moved in three dimensions in accordance with a 3D input vector or two separate 2D input vectors.
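A sketch of the beam-based move described above, assuming the dragged window simply sits at a thumbwheel-controlled depth along a normalized beam (names hypothetical); on button release, the result would be written back to the bounding box position in the volume window data structure:

```python
import numpy as np

def beam_drag_position(beam_origin, beam_direction, depth):
    """Place the dragged volume window at a controllable depth along the
    pointer beam: swinging the beam moves the window transversely, while
    the thumb wheel changes `depth` to move it closer or farther."""
    d = np.asarray(beam_direction, dtype=float)
    d = d / np.linalg.norm(d)  # normalize so depth is in display units
    return np.asarray(beam_origin, dtype=float) + d * depth
```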
  • a resize control allows the volume window to be resized.
  • the size can be automatically changed through automatic functions similar to increments in a zoom operation or the user can use an input device to resize the window by dragging the sides of the volume window.
  • a drag of a corner of a volume window causes the volume to expand in 3D.
  • When the resizing encounters an abutting window, the resizing can be stopped with a warning being displayed to the user, or the display space allocated to the abutting window can be clipped.
  • the portions of the volume window data structure defining a size and a position of a bounding box are updated during resizing.
  • the maximize operation 2216 expands the volume in three dimensions until it "fills" the volumetric display.
  • the expansion is according to a policy, such as center VW in the display space and expand until the VW contacts the outside edge of the display space.
  • the policy could expand only two or even one of the dimensions, or the user could be allowed to designate dimensions to expand.
  • For the minimize operation 2218, the system substitutes a mini-3D icon for the VW and preferably places the icon at a designated position, such as the origin of the VW or on the bottom of the display space.
  • An alternative is to display only the task bar at a preferred position.
  • the bitmap for the icon in the window data structure is obtained and placed in the display space as noted above.
  • Operations with volume windows can be divided into a number of different tasks, such as initialization of a window, performing tasks within the window, etc.
  • the initialization or open operations, as depicted in figure 2014, include an application, such as the manager of a 3D drawing program, requesting the allocation of one or more volumetric drawing windows.
  • This is an application started within the volumetric display and it asks for display space.
  • the request can include a preferred position or placement for the window within the volumetric display which can be associated with the current position of a cursor, a preferred size, etc.
  • the volume manager allocates 2240 a VW data structure (see figure 2016) for each of the volumetric windows requested by the application.
  • the volume manager places 2242 the volume window in the volume window data structure, links it to the root volume, sets a permission in the data structure indicating that the application can perform functions in the allocated window and informs the application of the location of the data structure.
  • the manager determines 2244 the placement and size of the window responsive to default parameters and any currently active volumetric windows. For example, if the default or requested size would overwrite an existing volume, the volume being initiated can be scaled down or moved by the manager so that there is no overlap with an existing volume window. As in common overlapping 2D windowing systems, a new window request is always granted and placed on top of existing windows. The scaling of nearby windows occurs if the system is employing a tiling policy for VWs and follows conventions as in 2D operations but in three dimensions. One approach is to push existing VWs to the outer perimeter of the volumetric display, reducing any empty space between VWs. This would grab as much free space as possible before having to scale existing VWs.
  • the system places 2246 a visible boundary for the bounding box in the display around the volume determined by the manager to be allocated to the initiated volume window.
  • If the bounding box has a predefined shape, the bounding box can be drawn and highlighted by the manager, or the request can be passed on to the application, which can perform the operation if a VW has an arbitrary shape.
  • the application directs all application events, in this case drawing events, such as paint brush movement, to the volume window(s) allocated.
  • In order to perform application functions, such as drawing in a volume window, the application, such as a drawing application, sends a request, in this case a draw request, to a drawing manager.
  • the request includes a volume window identifier (ID), a command or function to be performed, such as DrawLine, and a location where the command is to be performed.
  • the drawing manager checks 2260 to see if the unique volume window ID of the request is valid and to see if the application is allowed to draw in the identified volume window by comparing the identification in the graphics port of the volume window data structure. The manager then checks 2262 to see if any of the window is visible (that is, it has not been minimized) again by accessing the data structure.
  • the manager maps 2264 the location associated with the request from the application coordinate space to device space taking into account the current position or location of the specified volume window (boundingBoxPositionVW and orientationVW). That is, the application location is mapped to a corresponding location in the window of the display. From the list of active windows, the manager then determines or computes 2266 which regions of the specified window are visible. The draw request is then executed 2268 only for valid visible regions of the specified volumetric window.
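These checks and mappings might be sketched as follows; the manager, request and window fields are hypothetical stand-ins for the structures described here:

```python
def handle_draw_request(manager, request):
    """Sketch of the drawing-manager path (operations 2260-2268)."""
    vw = manager.lookup(request.window_id)
    # 2260: validate the VW ID and the application's permission to draw,
    # by comparing identification in the VW's graphics port.
    if vw is None or vw.graphics_port.owner != request.app_id:
        return
    # 2262: skip windows that are not visible at all (e.g., minimized).
    if not vw.visible:
        return
    # 2264: map the request location from application coordinates to
    # device space using boundingBoxPositionVW and orientationVW.
    device_loc = vw.to_device_space(request.location)
    # 2266/2268: compute the visible regions of the window given the other
    # active windows and execute the draw only in valid visible regions.
    for region in manager.visible_regions(vw):
        if region.contains(device_loc):
            manager.execute(request.command, device_loc)
            break
```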
  • the volume manager uses a data structure such as depicted in figure 2016.
  • This data structure is a list data structure having a root node 2280 and can take the shape of a linear list or a tree of VW nodes 2282, as shown in the example.
  • the root node 2280 includes the following fields.
  • the List is a list of pointers to the volume windows (nodes) of the display.
  • the shape type defines the particular type of display, such as dome, cube, cylinder, etc.
  • the shape has associated width, height and depth dimensions.
  • the operations are pointers to operations that can be performed or are valid in the volumetric display and include any parameters that may be set by default for the operation. The operations for move, maximize and minimize discussed previously would typically be included but are not shown.
  • Each of the volume windows includes the following fields.
  • the VgraphicPort defines the application graphics port in which the volume window is drawn. This structure defines the volume in which the drawing can occur, the volume window's visibility region, clipping region, etc. Fields for making the volume visible (hidden or visible) and highlighting the volumes are included.
  • TitleBarStructure contains information to position, display and handle the move, maximize, minimize, and resize volume window functionality.
  • the "Front" of the VW is determined, in part, by the orientationVW information.
  • the VolRgnHandle structures define a pointer to a volumetric region. Note that this region can be defined as an arbitrary shape. By default, the VolRgnHandle consists of a CUBIC shape with six values: bottomFrontX, bottomFrontY, bottomFrontZ and topBackX, topBackY, topBackZ.
  • the contentVRgn defines the space the volume window owns, relative to the application. All of the region may or may not be visible within the volumetric display (depending on the position of the VW and other VWs).
  • the updateVRgn specifies which portion of the entire contentVRgn the application must refresh and redraw. While the VW can be any shape, a bounding box will be defined that minimally surrounds the shape. Thus, boundingBoxPositionVW specifies the absolute position of the VW relative to the (0, 0, 0) origin of the volumetric display.
  • the orientation of the volumetric window is defined by the OrientationHandle which specifies the central axis or spine of the volume window as well as the "front" region of the volume window.
  • the central axis, by default, is a vertical vector which matches the (0, 0, 0) coordinate axis of the volumetric display.
  • ShapeType is a set of known volume window shapes (e.g., CUBIC, PIE, CYLINDER, EXTRUDED_SURFACE, ARBITRARY).
  • 2Dicon is a 2D or 3D bitmap image used to represent the VW when it is minimized.
  • nextVW points to the next VW in the WindowManager's VW list.
  • ParentVW by default is the RootVW. However, if subVWs are defined, then the parentVW will not be the RootVW but instead the true owner of the subVW.
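  • The fields above might be collected as follows; a minimal sketch assuming Python dataclasses, with hypothetical field spellings:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VolRgn:                          # VolRgnHandle target: CUBIC by default
    bottom_front: tuple = (0, 0, 0)    # bottomFrontX, bottomFrontY, bottomFrontZ
    top_back: tuple = (1, 1, 1)        # topBackX, topBackY, topBackZ

@dataclass
class VWNode:
    graphics_port: object = None               # VgraphicPort: drawable volume, visibility
    title_bar: object = None                   # TitleBarStructure: move/max/min/resize
    content_rgn: VolRgn = field(default_factory=VolRgn)  # contentVRgn: space the VW owns
    update_rgn: VolRgn = field(default_factory=VolRgn)   # updateVRgn: portion to redraw
    bounding_box_position: tuple = (0, 0, 0)   # boundingBoxPositionVW, from display origin
    orientation: tuple = (0, 1, 0)             # OrientationHandle: spine and "front"
    shape_type: str = "CUBIC"                  # CUBIC, PIE, CYLINDER, EXTRUDED_SURFACE, ...
    icon: bytes = b""                          # 2Dicon shown when minimized
    next_vw: Optional["VWNode"] = None         # nextVW in the manager's list
    parent_vw: Optional["VWNode"] = None       # parentVW (RootVW unless a subVW)

@dataclass
class RootNode:
    window_list: List[VWNode] = field(default_factory=list)  # pointers to all VWs
    shape_type: str = "DOME"                   # display shape: dome, cube, cylinder, ...
    dimensions: tuple = (1, 1, 1)              # width, height, depth
    operations: dict = field(default_factory=dict)  # move, maximize, minimize, ...
```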
  • the volume manager uses the input event and an assignment policy to determine which volume window receives the event. For example, one policy is to send all events to the application corresponding to the window that encloses the spatial location of the event or cursor. If more than one window encloses the event, a priority policy is used, such as preferring the visible volume window. For input events that do not have an inherent spatial position, for example keyboard events, the events are sent to the window that currently has the designated input focus, such as the working or active window. When the cursor or input focus is not in a VW, the event is sent to the root.
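  • A sketch of such an assignment policy, with hypothetical window records:

```python
def route_event(event, windows, focus=None, root="RootVW"):
    """Send spatial events to the enclosing window (preferring a visible one),
    non-spatial events (e.g., keyboard) to the input-focus window."""
    if event.get("position") is None:            # no inherent spatial position
        return focus or root
    hits = [w for w in windows if w["contains"](event["position"])]
    if not hits:
        return root                              # cursor outside every volume window
    hits.sort(key=lambda w: w["visible"], reverse=True)   # priority: visible VW first
    return hits[0]["name"]

windows = [{"name": "vw1", "visible": True,
            "contains": lambda p: all(0 <= c <= 10 for c in p)}]
print(route_event({"position": (5, 5, 5)}, windows))   # vw1
print(route_event({"position": None}, windows))        # RootVW (no focus set)
```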
  • Volume windows can be related hierarchically such that a window can have volume sub-windows. It is preferred that all sub-windows obey the operations of the parent in the hierarchy. For example, if a parent window is deleted, all children of the parent are also deleted. If a parent gets moved, all of the children are moved by the same amount and in the same direction. Alternatively, a subVW does not necessarily move with the parentVW; however, if a parentVW is minimized or closed, the subVW does comply. A parent may or may not "clip" the display of its children against its own bounding box. That is, children may exist outside of the volume of the parent. A child preferably inherits properties or attributes of the parent volumetric window.
  • Volumetric displays allow a user to have a true three-dimensional view of a scene 3012 and are typically provided in the form of a dome 3014, as depicted in figure 3001.
  • The user 3016, as can be surmised from figure 3001, can move about the dome 3014 to view different parts of the scene 3012.
  • A planar 2D widget 3018 within the volumetric display, which may have icons, controls, etc. within it, can be in a position such that it is difficult for the user to access.
  • FIG. 3002 depicts a widget 3042 housed in a horizontal plane positioned on a bottom of the display enclosure, or on the volumetric display system "desktop.” The plane could also be positioned vertically or at an arbitrary angle depending on the needs of the user.
  • Another alternative is to conventionally determine the user's position and/or eye gaze, as depicted in figure 3003, and position or arrange the 2D widgets within or outside the focus of attention as needed.
  • Widgets that require the user's attention (i.e., alert widgets) can be placed directly within the user's focus of attention.
  • Status information that is needed but not critical can appear on the periphery of the user's eye gaze, perhaps surrounding the object that is the user's current focus of attention.
  • Widgets can be placed in depth to assign priorities to them. For example, an Alert dialog box may be of a higher priority than another dialog box, causing the Alert dialog box to be placed in front of the other dialog box, which is "pushed back" in depth (stacked).
  • the present invention is typically embodied in a system as depicted in figure 3004 where physical interface elements 3050, such as a rotary dome position encoder, infrared user position detectors, a keyboard, touch sensitive dome enclosure surface, mouse, pointer, etc. are coupled to a computer 3052, such as a server class machine.
  • the computer 3052 uses a graphical creation process, such as the animation package MAYA available from Alias|Wavefront, Inc., to create a three-dimensional (3D) scene including the 2D widgets discussed herein.
  • the display output, including the scene and widgets is provided to a conventional volumetric display apparatus 3054, such as one that will produce a 3D holographic display.
  • 2D widgets can be represented within a computer system in a number of different ways.
  • a typical way is to represent the widget as a two-dimensional display map of pixels that have a color value and possibly a control value associated with each of the two-dimensional positions within a virtual image area the widget typically occupies.
  • the widget is mapped from the virtual positions to actual display positions responsive to the position of the widget specified by the system.
  • the system position is often controllable by the user, such as allowing a user to move a GUI to different places on a display with a point and drag type command or action.
  • a volumetric display is comprised of voxels or volume pixels where each voxel has a 3D position as well as a voxel height, width and depth.
  • Figures 3005A and 3005B depict a portion of a plane 3070 of voxels from the front (3005A) and side (3005B) in a volumetric display 3072. The positions of voxels within the display are typically determined with reference to a center of the display having the coordinates (0,0,0).
  • Voxels within the display can be arranged in a number of different ways as depicted in figures 3006A and 3006B, where figure 3006A shows concentric layers 3090 and 3092 of voxels and figure 3006B shows rectilinearly stacked layers 3094, 3096, 3098 and 3100 of voxels.
  • voxels 3102, 3104, 3106, 3108 and 3110 and voxels 3112, 3114, 3116, 3118 and 3120 are surface voxels that might be used for part of a 2D widget displayed on the outside surface of the display inside the enclosure.
  • the programming interface to a volumetric display may have abstractions in which the 3D display space is defined as a collection of voxels that are discrete, cubically shaped, and individually addressable sub-portions of the display space.
  • the display software may translate these discrete voxels into a continuous representation that is more compatible with the display rendering hardware.
  • In displaying a 2D widget within a volumetric display, the pixels of the virtual image must be mapped to corresponding voxels. This can be accomplished by a mapping between the 2D virtual representation and a "layer" of voxels in an appropriate location in the display, such as on the "surface" of the display.
  • a control portion of a 2D widget such as part of a trashcan icon, might be mapped to the voxels 3112-3120 in figure 3006B.
  • the mapping of the 2D widget to the voxels is performed continuously or is updated at the refresh rate of the volumetric display. These mapping operations are shown in figure 3007.
  • the voxels used for display need not be limited to displaying a widget.
  • One or more widgets can be displayed in a plane.
  • the entire 2D desktop work space typically presented to a user on a display, such as a CRT or LCD, can be converted into a three-dimensional plane.
  • the plane can be at the bottom of the volumetric display or at any desired angle or position within the volumetric display.
  • the workspace can also be divided among several planes with different windows/icons/controls tiled or cascaded.
  • the mapping of the virtual representation of the widget starts with obtaining 3132 the pixel based image of the 2D widget, which is essentially a 2D window pixel map of a portion of a 2D desktop.
  • Alternatively, the 2D representation of the entire workspace is obtained.
  • the pixels of the shape of the widget are then mapped 3134 to the voxels of the display, where the voxels are typically offset from the center of the display such that an x coordinate of a 2D pixel maps to a 3D voxel at x + (x offset), the y coordinate of the 2D pixel maps to the 3D voxel at y + (y offset), and the z coordinate of the voxel is 0 + (z offset).
  • This can create a widget that has a 3D surface or a volume. Note that scaling may occur in this mapping such that the widget is either made "larger" or "smaller" as compared to the virtual map; a sketch of this mapping follows.
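  • A minimal sketch of the offset-and-scale mapping described above; the dictionary representation of pixels and voxels is an assumption made for brevity:

```python
def map_widget_to_voxels(pixels, offset, scale=1):
    """Map a 2D pixel map {(x, y): value} to voxels at a fixed depth using the
    x + (x offset), y + (y offset), 0 + (z offset) rule, with optional scaling."""
    dx, dy, dz = offset
    return {(x * scale + dx, y * scale + dy, dz): value
            for (x, y), value in pixels.items()}

widget = {(0, 0): "red", (1, 0): "red", (0, 1): "blue"}
print(map_widget_to_voxels(widget, offset=(10, 20, 5), scale=2))
```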
  • Mapping can also be from a linear "plane" in which the 2D widget is represented to voxels that may form a curved surface.
  • the mapping uses conventional coordinate translation techniques to determine the effects for each voxel to allow the 2D widget to be curved in the volumetric display space. This mapping is particularly appropriate for displays with voxels arranged as depicted in figure 3006B.
  • the texture of the 2D interface is mapped 3136 to the 3D surface of the user interface. In performing this mapping, the interface typically takes precedence over other display values of the voxels that may have been set by the scene of the display.
  • For example, a pull-down menu overwrites the scene values where they overlap. It is also possible to combine the values of the scene and user interface in some way, such as by averaging the scene and interface values, so that both are visible, though this is not preferred.
  • the widgets can also be texture mapped.
  • the texture mapping procedure includes first having the system determine whether each voxel in the display intersects a surface of the 3D widget. If it does, the system maps the voxel position into a (u,v) local surface position of a texture map for the widget. Using the local surface position, the system samples the texture map for the widget surface. The value of the sample is then assigned to the voxel. When the 3D widget is more than one voxel deep, and depending on the surface intersected, the mapping may sample a front, back or side texture for the widget.
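  • A sketch of this per-voxel sampling for the simple case of a planar widget surface; PlanarSurface and its (u, v) convention are assumptions for illustration:

```python
class PlanarSurface:
    """Widget surface on an axis-aligned plane at depth z0; (u, v) = (x, y)."""
    def __init__(self, z0):
        self.z0 = z0
    def to_uv(self, pos):
        x, y, z = pos
        return (x, y) if z == self.z0 else None   # None: voxel misses the surface

def texture_voxels(voxel_positions, surface, texture):
    out = {}
    for pos in voxel_positions:
        uv = surface.to_uv(pos)                    # voxel -> local (u, v) coordinate
        if uv is not None:
            u, v = uv
            out[pos] = texture[v][u]               # sample the single 2D texture map
    return out

texture = [["a", "b"], ["c", "d"]]                 # 2 x 2 texel map of the 2D widget
print(texture_voxels([(0, 1, 5), (1, 1, 5), (0, 0, 9)], PlanarSurface(5), texture))
```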
  • the present invention obtains the texture information from a single, 2D texture map of the original 2D widget. That is, only one texture map of the 2D widget is needed to translate it into voxel space.
  • Additional 3D characteristics can be obtained from the 2D widgets. For example, shading is commonly used on 2D widgets to give the visual impression of depth. A 3D surface for a widget is derived by analyzing this shading information such that these shaded 2D widgets actually have true depth in the 3D display. Also, pseudo-2D widget behavior is realized as real 3D behavior in the 3D volume. For example, depressing a push button widget actually moves the button in depth in the 3D display. Another approach to giving 2D widgets volume is, rather than synthesizing the depth aspect of a widget, simply to determine it by convention.
  • the convention could be to surround each 2D widget or collection of 2D widgets in the 3D display with a 3D widget "frame" which would give the edge of the widget thickness and thus make viewing and accessing from extreme angles easier.
  • the frame of a 2D window automatically is given thickness in 3D volumetric displays.
  • the texture of the widget takes on the shape of the surface of the widget. Because the surface can be enlarged or otherwise changed in configuration during the mapping, the texture mapping may use conventional processes to stretch or morph the texture for the surface.
  • The mapping of a widget may go from a linear shape to a curved shape associated with the surface of a dome; conventional processes are used to warp or morph the widget shape and/or texture into the desired shape, such as to make a curved edge of a menu window appear straight in a polar type coordinate system.
  • the widget is ready for use or interaction with the users.
  • This interaction occurs within the operations associated with creating and projecting a scene within the volumetric display. That is, the GUI operations may be at the beginning or end of a scene projection operation or in the middle based on an interrupt.
  • the operations form a loop in which the 2D virtual display is updated 3150. This update may occur because the user has activated a pull down menu in the display, the system has moved the display because of a spatial conflict or a cursor/pointer has been moved into the display by the user to make a control selection or for a number of other reasons. The update occurs as previously discussed with respect to figure 3007.
  • the updated display is mapped 3152 to the desired position and voxels within the volumetric display and the voxel data is output 3154 to the volumetric display system.
  • A determination is made as to whether a control type input has been input 3156, such as by the user positioning a pointer at a 3D position in or over the widget and activating a selection device, such as a button of a mouse or a touch sensitive portion of the display enclosure. If a control type input has been input, the system determines 3158 whether the pointer lies within, or the touched part of the enclosure lies over, a control portion of the 2D display. This is accomplished by essentially comparing the coordinates of the pointer or of the touch to the coordinates of the control specified in the virtual map of the 2D widget.
  • the touch position is translated to the nearest voxels along a surface normal of the display enclosure and then the voxels so selected are mapped as noted above. If a control has been selected and activated, the system performs 3160 the function of the control.
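  • A sketch of this update/interaction loop (steps 3150-3160), with the per-step work passed in as callables since the real operations depend on the display hardware:

```python
def gui_loop(frames, events, update_widget, map_to_voxels, hit_control, run_control):
    outputs = []
    for frame in range(frames):
        pixels = update_widget(frame)          # step 3150: update the 2D virtual display
        outputs.append(map_to_voxels(pixels))  # steps 3152/3154: map and output voxels
        event = events.get(frame)              # step 3156: control-type input this frame?
        if event is not None and hit_control(event):   # step 3158: over a control?
            run_control(event)                          # step 3160: perform the function
    return outputs

log = []
gui_loop(3, {1: (5, 5)},
         update_widget=lambda f: {"frame": f},
         map_to_voxels=lambda p: p,
         hit_control=lambda e: True,
         run_control=log.append)
print(log)   # [(5, 5)] -- the control selected on frame 1 was performed
```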
  • the present invention has been described with respect to taking a 2D representation of a widget and mapping its texture representation into a 3D widget that has volume. It is also possible to construct 3D widget representations, such as a 3D slider, and map them more directly.
  • the present invention has been described with respect to activating a control associated with a cursor or pointer intersecting a voxel corresponding to a control, by ray-casting from the pointer toward the center of the display and selecting the first control that has voxels intersected by the ray.
  • the controls discussed herein have been active controls in which the user activates the control. Other types of controls can also be involved, such as dwell controls, which are typically used to display help information in a "bubble".
  • the input discussed herein has included pointing inputs. However, the input can be text from a keyboard that is entered in a window of a widget.
  • A volumetric-based widget can be reconstituted/created using a core library of drawing primitives (such as draw line, fill rectangle, draw text) that has been tailored to work on the volumetric display.
  • the present invention also includes a hybrid display system including a volumetric display 3054 and conventional 2D displays 3056, such as LCD or CRT screens (see figure 3004).
  • a hybrid display has a spherically shaped volumetric display (figure 3001) with a traditional LCD display mounted and viewable as the floor of the display, replacing or in addition to the widget display 3042 of figure 3002.
  • the 2D widgets may reside on the LCD display - which also serves as part of the display enclosure.
  • small touch-sensitive LCD panels may be arranged along the base rim of the spherically or cubically shaped volumetric display and serve as a displayable exterior surface on the enclosure, replacing or in addition to the widget display 3038 of figure 3002.
  • One additional example is a hybrid configuration in which images are projected onto the volumetric enclosure using a traditional digital projector (often used to project computer displays onto large screens for presentations). While the 2D widgets may be presented on these traditional 2D displays serving as part of the volumetric enclosure, software libraries and infrastructure treat these display spaces either as separate, separately addressable logical displays or as part of the single, logical voxel space of the volumetric display.
  • Volumetric displays allow a user to have a true three-dimensional view of a scene 4012 and are typically provided in the form of a dome 4014, as depicted in figure 4001.
  • the user 4016, as can be surmised from figure 4001, can move about the dome 4014 to view different parts of the scene 4012.
  • As the user moves around the display, a planar UI widget 4036 within the volumetric display 4038 relatively turns such that it is no longer viewable by the user, as depicted in the top view of figure 4002.
  • the solutions include the production and display of a volumetric graphic user interface element or widget or omni-viewable widget.
  • a volumetric widget or omni- viewable widget is one that can be viewed and interacted with from any user location or viewpoint around a volumetric display.
  • One solution that provides a volumetric widget is to replicate a planar widget several times around the volumetric display so that the user will always have a readable view of the contents of the widget. This solution can result in a cluttered display.
  • Another solution is to provide a multifaceted widget where a face of the widget is always readable by the viewer. This can result in a widget that takes up a greater volume of the display.
  • a further solution is to provide a widget that rotates to facilitate viewing from any viewpoint. This can result in a widget that is readable only part of the time.
  • a further solution is to track a position of the user and orient the widget to face the user's location. This solution requires tracking technology.
  • An additional solution is to combine two or more of the solutions discussed above. Each of these solutions provides a volumetric or omni-viewable widget and will be discussed in more detail below.
  • the present invention is typically embodied in a system as depicted in figure 4003 where physical interface elements 4050, such as a rotary dome position encoder, infrared user position detectors, a keyboard, etc. are coupled to a computer 4052.
  • the computer 4052 uses a graphical creation process, such as the animation package MAYA available from Silicon Graphics, Inc., to create a three- dimensional (3D) scene including virtual interface elements, such as the volumetric widgets discussed herein, and move them about in the scene automatically or based on some user control input.
  • the display output, including the scene and widgets, is provided to a conventional volumetric display apparatus 4054, such as one that will produce a 3D holographic display.
  • a display 4070 can include a volumetric widget that comprises multiple duplicate copies 4072, 4074, 4076, 4078, 4080 and 4082 of a virtual interface element, such as a graphical slider or icon toolbox.
  • Each of the elements 4072 - 4082 has a viewing angle range with, for example, element 4080 having a range 4084 and element 4082 having a range 4086.
  • the elements are arranged or positioned in such a way that from any point around the display 4070 a user is within the acceptable viewing angle range of one of the elements. This can be accomplished by providing a sufficient number of elements or by arranging the elements more or less deeply within the display such that the ranges of the adjacent elements overlap at or near the surface of the display 4070.
  • Omni-viewable widgets can be created by forming widgets with multiple faces or multiple surfaces as depicted in figures 4005A - 4005D.
  • the faces are typically exact duplicates showing the same information in the same way.
  • Figure 4005A depicts a cubical widget 4100 with six faces, each face containing a duplicate of the contents to be displayed by the widget.
  • Figure 4005B depicts an octagonal solid widget 4102 with eight faces, each face displaying the same contents.
  • Figure 4005C depicts a tent type widget 4104 with two faces facing in opposite directions, each displaying the same contents. This type of widget can also be rotated back and forth, as indicated by the arrows, to allow the viewing ranges of the two displays to intersect all user viewpoint positions.
  • Figure 4005D depicts a globular or ball shaped widget 4106 with identical faces arranged on the surface of the globe.
  • Other shapes of multiple face widgets are possible such as pyramidal and cylindrical.
  • Each face does not have to be an exact duplicate; a face may be specialized for a particular viewpoint. For example, if a widget is a direction widget (showing the compass directions), each face is not a literal duplicate; rather, each face shows the directions appropriate to its viewpoint.
  • each user can be provided with a position indicator as part of an input device, such as that provided by a conventional 3D input glove.
  • Input devices located around the display can also be used by the users to register their locations or viewpoints.
  • a camera and an object detection system could also be used.
  • Another alternative, as depicted in figure 4006, is to provide an array of conventional infrared detectors 4120 arranged in a circumferential band below or at the bottom of a volumetric display enclosure 4122. In this approach those detectors that are active indicate the presence of a user.
  • a conventional interface between the computer 4052 (see figure 4003) and the detectors allows the computer to conventionally detect the number and positions of users positioned around the display 4122.
  • Another variation is to use audio microphones to detect the position of users based on where the sound is coming from.
  • the computer can create an omni-viewable widget that includes an interface element for each user, as depicted in figures 4007A - 4007C.
  • When the system detects two users A and B on opposite sides of the display enclosure 4140, as depicted in figure 4007A, two widget elements 4142 and 4144 are created and positioned (as shown by the dashed lines) to face the users.
  • Figure 4007B shows two users A and B in different positions than in figure 4007A and widget elements 4146 and 4148 positioned to face these different positions.
  • Figure 4007C shows three users A, B and C and three corresponding user-facing widget elements 4150, 4152 and 4154.
  • the system may need to prevent the widgets from overlapping each other and thereby obscuring display contents in the overlapped areas. This is discussed below with respect to figure 4008.
  • the system determines 4170 (see figure 4008) the number of users and their positions or viewpoints using a position determination system such as previously discussed. For each viewpoint, an identical widget element is created and oriented 4172 tangentially to the surface of the display enclosure at the position of the corresponding viewpoint around the circumference (that is, perpendicular to the surface normal at that point).
  • The centroid of the oriented widget elements is determined 4174.
  • Widget elements that have surfaces that overlap or intersect are incrementally moved 4176 away from the centroid, radially along their normals, until no intersections exist (a sketch of this placement loop follows the figure walkthrough below).
  • the sequence of figures 4009A - 4009C shows these operations in more detail.
  • a widget with three widget elements 4190, 4192 and 4194 is created in the center of the display 4196 for three user viewpoints 4198, 4200 and 4202.
  • the widgets are moved along their respective normals 4206, 4208 and 4210 until widget 4192 no longer intersects with widget 4194, as shown in figure 4009B.
  • widget 4194 stops moving.
  • widgets 4190 and 4192 still intersect.
  • Widgets 4190 and 4192 are moved along their respective normals 4206 and 4208 until they no longer intersect, as shown in figure 4009C. Rather than move only those widgets that intersect, it is possible to move all the widgets by the same incremental amount until no intersections exist.
  • Other placement algorithms may be used to achieve a similar non-overlapping result.
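  • One such placement loop, sketched with viewpoints given as angles around the display; the widget size test and step amount are assumptions:

```python
import math

def place_widgets(viewpoint_angles, max_radius, widget_size, step=0.05):
    # step 4172: one element per viewpoint, facing outward along its normal
    widgets = [{"angle": a, "r": 0.0} for a in viewpoint_angles]   # start at centroid
    center = lambda w: (w["r"] * math.cos(w["angle"]),
                        w["r"] * math.sin(w["angle"]))
    def intersect(a, b):
        (ax, ay), (bx, by) = center(a), center(b)
        return math.hypot(ax - bx, ay - by) < widget_size
    # steps 4174/4176: push intersecting elements radially away from the centroid
    moved = True
    while moved:
        moved = False
        for i, a in enumerate(widgets):
            for b in widgets[i + 1:]:
                if intersect(a, b) and a["r"] < max_radius and b["r"] < max_radius:
                    a["r"] += step
                    b["r"] += step
                    moved = True
    return [center(w) for w in widgets]

print(place_widgets([0.0, 2.1, 4.2], max_radius=1.0, widget_size=0.4))
```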
  • Viewpoints can also be grouped into viewpoint clusters, with a widget element created for each of the clusters. This is depicted in figure 4010, where five user viewpoints V4001 - V4005 are shown.
  • the system measures the angles between the viewpoints and compares them to the viewing range of the particular widget element being created. If the angle is less than the range, the viewpoints can be included within the same cluster. For example, the angle between viewpoints V4001 and V4002 and between viewpoints V4002 and V4005 is greater than the viewing range of the widget element being used. The angle between V4003 and V4004 is also too great.
  • the angle between V4002 and V4003 and between V4004 and V4005 is less than the range, so these viewpoints can be grouped into two clusters C4001 and C4002, while V4001 is allocated to its own cluster C4003. Once the clusters are determined, the average of the positions or angles of the viewpoints in each cluster is used to determine the angular positions W4001, W4002 and W4003 of the widget elements.
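  • A sketch of this clustering, assuming viewpoints given as angles in degrees and ignoring wrap-around at 360 degrees for brevity:

```python
def cluster_viewpoints(angles, view_range):
    """Group sorted viewpoint angles into clusters whose neighbors are closer
    together than the widget element's viewing range; one element per cluster."""
    clusters = []
    for a in sorted(angles):
        if clusters and a - clusters[-1][-1] < view_range:
            clusters[-1].append(a)      # within range: join the current cluster
        else:
            clusters.append([a])        # too far from the last viewpoint: new cluster
    # the widget element is placed at the average angle of each cluster
    return [sum(c) / len(c) for c in clusters]

print(cluster_viewpoints([10, 170, 200, 40, 300], view_range=45))  # [25.0, 185.0, 300.0]
```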
  • the widgets can also be continuously rotated.
  • the rotation can be a revolution through 360 degrees or the rotation can rock back and forth through a fixed number of degrees to favor certain viewpoints.
  • the rocking is depicted in figures 4011A - 4011C.
  • a widget 4220 in the display enclosure 4224 is rotated back and forth between the viewpoints of users A and B.
  • In figure 4011A the widget 4220 is oriented toward user A, in figure 4011B the widget is oriented between users A and B, and in figure 4011C the widget is oriented toward user B.
  • This rotation between view points is performed by determining the user view points and redrawing the widget in sequential and incremental angular positions between the view points.
  • the rotation in either mode can "jump ahead" resulting in certain viewpoints being ignored or covered at different rates.
  • the widget can be rotated along an arbitrary axis or multiple axes.
  • When a rotating widget is selected by the user for an operation, such as by the user positioning a pointer at the moving widget and performing a selection operation, such as pressing a selection button on the pointer, the widget needs to stop rotating so the user can review the widget for a desired period of time or otherwise interact with it.
  • the rotation of a widget in either a back and forth motion or in a full circular motion includes a display 4240 of the widget at a particular rotary position (see figure 4012).
  • the system determines 4242 whether the widget has been selected. Selection can occur in a number of different ways including the positioning of a 3D cursor on the widget and the activation of a selection button on a device, such as a 3D "mouse". If the widget has not been selected, the position of the widget is updated 4244 and it is displayed 4240 in its new rotary position. If the widget has been selected, the widget is flagged 4246 as selected so that other users cannot select the widget. The selected widget can also be displayed in a way that indicates it has been selected, such as by highlighting it.
  • the widget can then optionally be oriented 4248 toward the user using the input from (or input vector of) a selecting device to determine which user has selected the widget and the user location as indicated by the location detectors.
  • the correlation between location and selection device can be created by having the users register their location and their input device or by using input devices whose position around the volumetric display is known.
  • the system then performs the interactions 4250 with the widget as commanded by the user. If the user is only reviewing the contents of the widget then there is no positive interaction.
  • the user can use the selecting device to move a cursor to select a control of the widget, thereby positively interacting with the widget. Once the interaction is finished or a time-out period expires, the widget is deselected 4252 and the rotation of the widget continues.
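  • A sketch of this select/deselect cycle (steps 4240-4252); the event encoding and rotation step are assumptions:

```python
def widget_cycle(widget, events, steps, degrees_per_step=5):
    for step in range(steps):
        event = events.get(step)
        if event == "select" and not widget["selected"]:
            widget["selected"] = True    # step 4246: flag so other users cannot select
            # step 4248 (optional): orient the widget toward the selecting user here
        elif event in ("done", "timeout"):
            widget["selected"] = False   # step 4252: deselect and resume rotation
        if not widget["selected"]:       # step 4244: update the rotary position
            widget["angle"] = (widget["angle"] + degrees_per_step) % 360
        # step 4240: (re)display the widget at widget["angle"]
    return widget

print(widget_cycle({"angle": 0, "selected": False}, {3: "select", 6: "done"}, 10))
```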
  • The above describes a rotating widget whose contents have controls, making it appropriate to orient the widget toward the user when the user desires to interact with it. It is also possible to provide a widget that includes a control with which the user can interact while the widget is rotating, without having the widget stop rotating and orient toward the user.
  • Such a widget 4270 is depicted in figure 4013.
  • This widget 4270 includes a rotating portion 4272 that can indicate the function of the widget via a label and a stationary portion 4274 that includes the control 4276.
  • the control 4276 is a slider bead that the user slides up and down to perform some function, such as an object scale slider used to change the scale of a selected object in the display.
  • Another example is a push-button implemented by a deformable sphere.
  • the rotating widget has been described with respect to rotating into view of the users with the widget remaining at the same position within the display. It is also possible for the widget to rotate in the display by traveling around the circumference of the display or the interior of the display enclosure much like a world rotates on a globe. In general any rotation pivot point or any path can be used to move a widget.
  • These paths are computed such that they route the widgets around objects that would occlude the widgets being viewed from certain viewpoints.
  • One method for defining a path within a volumetric display along which to move a widget is to define two concentric rings along the base of the volumetric display: an inner ring and an outer ring. The widget traverses along the inner ring until its position plus a fixed delta amount intersects an object. With a collision imminent, the widget transitions to the outer ring until it is able to return to the inner ring, having passed the object. Additional "outer" rings can be defined if traversal along the current ring is not valid (e.g., it intersects an object).
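  • A sketch of this two-ring traversal; the angular step, ring names, and the blocked() predicate are assumptions:

```python
def advance_widget(ring, angle, delta, blocked):
    ahead = (angle + delta) % 360
    if not blocked("inner", ahead):
        return "inner", ahead    # path clear: stay on (or return to) the inner ring
    if not blocked("outer", ahead):
        return "outer", ahead    # collision imminent: detour along the outer ring
    return ring, angle           # both rings blocked: wait (more rings could be added)

# An object occupies the inner ring between 30 and 60 degrees.
blocked = lambda ring, a: ring == "inner" and 30 <= a <= 60
ring, angle = "inner", 0
for _ in range(12):
    ring, angle = advance_widget(ring, angle, 10, blocked)
    print(ring, angle)   # detours to the outer ring at 30-60, then returns
```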
  • widget movement is "content-dependent".
  • widget placement may be content-dependent such that a widget or widgets are not only oriented or duplicated but placed in the scene to minimize the occlusion of them by other content in the scene.
  • Omni-viewable widgets can also be of value in other virtual environments and display systems.
  • virtual environments that use head-mounted displays could benefit from omni-viewable widgets placed in a virtual world.
  • In any application presenting different viewpoints of a 3D scene, omni-viewable widgets can be viewed and operated from multiple viewpoints.
  • the present invention can be generalized to 2D displays (e.g., a display where a widget is viewable and operable from the top or bottom of the display). For example, consider two people sitting face-to-face at a table with an LCD display in the table surface between them. Each widget is replicated and oriented toward each person. If four people gather around the display, four versions of the widget are produced.
  • The discussion above has concerned volumetric widgets that display data contents to the user. It is also possible to provide not only widgets that output contents but also widgets that allow input to the graphics system using input data fields, cursor activatable controls, etc.
  • the present invention is directed to a system that allows a user to physically rotate an enclosure for a three-dimensional (3D) volumetric display to thereby rotate the scene displayed within the enclosure, so the user can view different parts of the scene without being required to physically move around the display (or to navigate a virtual camera type view of the scene).
  • a combination of physical rotation of the display and manipulation of the virtual camera, either through turning the display or through some other input device, is also useful; for example, turning the physical display to rotate the contents of the display while at the same time moving a mouse to magnify the contents.
  • Figure 5001 illustrates the user 5010 spinning the enclosure 5012 about a center or vertical axis 5014 by physically pushing the enclosure with his hands in contact with the enclosure 5012. As the enclosure moves so does the scene displayed therein.
  • Figure 5002 depicts an enclosure 5030 that can be rotated with two degrees of freedom 5032 and 5034. This enclosure 5030 can also be rotated like a ball, thus permitting three degrees of rotation.
  • the control interface, that is, the ability of the user to control the rotation of the display contents, is accessible from any direction that the user approaches the enclosure or from any viewpoint around the enclosure. That is, the interface is omni-viewpoint directionally accessible and controllable.
  • the rotation or movement of the displayed scene with the rotation or movement of the enclosure can be accomplished in a number of different ways.
  • the display mechanism, such as a holographic projector, can be physically rotated with the enclosure.
  • Such an embodiment where the display apparatus is inside the enclosure requires that the display interface, particularly, the power and display inputs, be coupled through a conventional rotating connection.
  • the power and display inputs can also be supplied electromagnetically similar to the way that smart cards are supplied with power and I/O data.
  • By rotating the entire display mechanism the contents need not be redrawn each time the display is moved.
  • the displayed scene can be virtually rotated responsive to the rotation of the enclosure. That is, as the enclosure is rotated the scene can be redrawn to correspond to the enclosure rotation.
  • the virtual rotation of the scene within the enclosure does not require that the display apparatus be rotated and a rotational or electromagnetic coupling is not needed. Also, some combination of virtual rotation and physical rotation can be used.
  • the enclosure mechanism can be made as depicted in figures 5003 and 5004, where figure 5003 shows a side view and figure 5004 shows a top view.
  • the enclosure mechanism 5050 includes a transparent plastic enclosure 5052 mounted on a rotating base 5054.
  • Where the display apparatus, such as the holographic projector, is not rotated, the rotating base 5054 would be transparent and the displayed scene would be projected through the rotating base 5054 from a fixed base 5056 in which the display apparatus would be housed.
  • a bearing 5058 couples the rotating base 5054 to the fixed base 5056 and allows the base 5054 to rotate about the vertical axis of the enclosure mechanism 5050.
  • the mechanism 5050 also includes a conventional rotary encoder 5060 that outputs the rotational position of the enclosure 5052 as it rotates.
  • the transparent base 5054 need not exist if a support bearing is on the periphery of the enclosure rather than in the center as shown.
  • rotational sensors like those used for conventional track balls can be used to sense rotation in the two dimensions.
  • a similar scheme can be used but with an additional sensor to sense twisting of the ball (rotation in the third dimension).
  • the rotary encoder 5060/5070 (see figure 5005) is coupled to a sensor A/D (Analog to Digital) converter 5072 that supplies the rotational position to a computer 5074.
  • the computer 5074 supplies the scene to be displayed to the display apparatus 5076.
  • the scene processing system being executed by the computer 5074 is a conventional system, such as the animation package MAYA available from Alias|Wavefront, Inc., that can rotate computer generated graphical scenes or objects to different positions or viewpoints responsive to scene rotational position inputs.
  • Scene displays may also contain virtual widgets, such as three dimensionally positioned cursors, virtual keyboards, 3D manipulators, dialog boxes, etc. that are displayed within the enclosure. If such widgets exist within the scene, it may be necessary to adjust the position of the widgets within the scene as the scene rotates. For example, a virtual keyboard should always face the user even when the scene is rotated so that the keyboard is available for use by the user. However, a 3D cursor typically needs to remain at its last position within the scene as the scene rotates.
  • the computer 5074 also adjusts the positions of interface widgets, as necessary, within the scene responsive to the rotation as will be discussed in more detail later herein.
  • the computer 5074 need not determine scene positions but still determines and makes widget position adjustments as needed. In this embodiment, by limiting the redrawing to only widgets that need to remain stationary or rotate at a different rate than the scene, computational resources are more effectively used.
  • the rotation of the scene within the enclosure is typically a 1 for 1 rotation. That is, if the enclosure is rotated 10 degrees the displayed scene is rotated 10 degrees. It is also possible to rotate the displayed scene at a rate that is faster or slower than the rotation rate of the enclosure. Negative scene rotation with respect to enclosure rotation is also possible. Preferably, the user can set the rotational gain discussed above.
  • the operations 5090 (see figure 5006) performed by the computer 5074 start with initialization operations 5092 and 5094 where the physical and virtual rotational positions are set to zero.
  • the system determines 5096, from the output of the encoder, whether the enclosure has been rotated. If not, the system loops back and waits for a rotational movement of the enclosure. If the enclosure has been rotated, the amount of and direction of virtual rotation, vr, is determined 5098 followed by applying 5100 a rotational gain function g to the physical rotation, rp.
  • When the gain g is set to one, the virtual rotation matches the physical rotation; when the gain is set to a value greater than 1, the virtual rotation exceeds the physical rotation.
  • Figures 5007A - 5007C and figures 5008A - 5008C depict rotation based on gains of 1.0 and 4.5, respectively.
  • Figures 5007A-5007C depict a unity gain where the display follows the physical rotation of the enclosure 5154.
  • When figures 5007A - 5007C are compared to figures 5008A - 5008C, it can be seen that when the enclosure 5172 is rotated by 10 degrees from figure 5008A to figure 5008B, the object 5174 is rotated by 45 degrees. Similarly, when the enclosure is rotated to a 20-degree rotational position, the object is rotated by 90 degrees as depicted in figure 5008C.
  • the virtual scene can also be constantly rotated, with the physical rotation adjusting the rate at which the scene is being rotated. For example, consider a virtual scene consisting of a swirling water current. Physically rotating the enclosure in the same direction of the water current speeds up the water flow. Rotating in the opposite direction, slows the water down.
  • the object is rotated by the inverse of the gain-adjusted physical rotation.
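  • A sketch of the gain relationship and the inverse rotation of pinned widgets; the loop structure and variable names are assumptions:

```python
def virtual_rotation(physical_degrees, gain=1.0):
    """vr = g(rp): unity gain follows the enclosure; gain > 1 amplifies it;
    a negative gain rotates the scene opposite to the enclosure."""
    return gain * physical_degrees

scene_angle, widget_angle = 0.0, 0.0
for physical_delta in (10, 10):           # enclosure turned 10 degrees, twice
    vr = virtual_rotation(physical_delta, gain=4.5)
    scene_angle += vr                     # scene follows the gain-adjusted rotation
    widget_angle -= vr                    # pinned widgets get the inverse rotation
print(scene_angle)   # 90.0, as in figure 5008C (20 degrees physical, gain 4.5)
```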
  • the relative rotation of a scene object and the widgets allows scene objects to be rotated in front of the user while widgets remain stationary with respect to the user or a world coordinate system. This is discussed in more detail below with respect to figures 5010A - 5010C.
  • Figures 5010A, 5010B and 5010C depict the inverse rotation of widgets relative to enclosure and scene rotation.
  • Figure 5010A depicts a scene object F; three widgets w1, w2 and w3 that need to be maintained at the same or constant relative position with respect to the user; the enclosure 5222; and an enclosure orientation indicator 5224.
  • the base orientation, widget orientation and scene object orientations are all at zero degrees.
  • In figure 5010B, the enclosure 5222 has been rotated by 45 degrees so that the scene object F has also been rotated by 45 degrees relative to the center of rotation, which is at the center of the display.
  • the widget positions have remained at zero degrees (that is, the widgets have been rotated -45 degrees with respect to the enclosure).
  • The widgets are discussed above as being adjusted continuously as the contents are rotated. It is also possible to adjust the widgets based on a rotational threshold. For example, a keyboard UI widget need not always face the user but can be oriented with respect to the user with some tilt. When the tilt gets too large, such as above 10 degrees, the widget is rotated back to facing the user. Note that this need not happen all at once in the next display refresh cycle. Instead, the rotation can happen in increments until the final rotation amount is reached, which prevents a visually jarring discontinuity by smoothly animating the widget to its final rotation position.
  • the relative positioning of widgets with respect to scene objects as the enclosure and scene objects rotate can be extended to be used with portions of the display contents that are designated.
  • one or more three-dimensional sections/parts/segments/sub-volumes of a volumetric display can be designated to remain stationary with respect to the user as the scene/enclosure rotate.
  • a scene of a landscape may be partitioned such that the landscape being modified rotates with the enclosure while a segment holding landscape elements, such as trees, bushes, rocks, remains stationary. This would facilitate the user selecting landscape elements and placing them in desired positions.
  • Other effects can be created with different gains set for different segments of the display. Different rotational gains can also be set for different objects.
  • A secondary input stream can control different sub-portions of the display while the display enclosure is being rotated. For example, suppose the user uses a mouse input device to click on and hold a window in place while, with the other hand, they rotate the display. In this case, the window would not rotate with the display. This can be accomplished by assigning each object in the display a rotational gain of one and adjusting the gain of the selected window to negative one.
  • Figure 5011 depicts an embodiment where the rotation is time sensitive.
  • the system performs a mode test. If the mode is set to a non-time based rotation mode, the system virtually rotates 5236 the display contents in the direction of the physical rotation corresponding to the rotation of the enclosure, as previously discussed. If the mode is set to a time based rotation, the system rotates the display contents and redraws 5238 the display continuously until the time value set for the time based rotation expires or is reached.
  • When in time mode, rotating the base affects only the temporal position of the time-based media (e.g., an animation or a movie). For example, rotating the enclosure clockwise by some unit amount may "scrub" the 2D/3D movie forward by one frame (i.e., to the next time increment); rotating the enclosure counter-clockwise by a unit decrements the 2D/3D movie by one frame.
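  • A sketch of this scrubbing behavior, assuming one frame per fixed unit of rotation and clamping at the ends of the media:

```python
def scrub(frame, encoder_delta_degrees, unit_degrees=10, total_frames=240):
    """Clockwise rotation advances the movie one frame per unit of rotation;
    counter-clockwise rotation steps it back (truncating partial units)."""
    frame += int(encoder_delta_degrees / unit_degrees)   # int() truncates toward zero
    return max(0, min(total_frames - 1, frame))          # clamp to the media length

f = scrub(100, 30)    # 30 degrees clockwise -> frame 103
f = scrub(f, -10)     # 10 degrees counter-clockwise -> frame 102
print(f)
```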
  • the present invention has been described with the rotation of the enclosure causing the contents of the display to rotate. It is possible for the rotation of the enclosure to be applied to a specific object (or objects) designated by the user in the scene such that rotation of the enclosure causes the object to rotate about the object center of mass. Additionally, the center of rotation could be varied. For example, normally, the center of rotation is the central axis of the display. However, an object could be selected and then rotating the enclosure rotates the scene about the object's center axis. Note that rotation of the display contents around any arbitrary axis could be controlled by rotating the display enclosure.
  • the rotational offset of the enclosure could be used to control the rate at which the display contents are rotated. For example, rotating the enclosure 10 degrees to the right makes the display contents rotate at a rate of 10 degrees per second. Rotating the enclosure an additional 10 degrees increases the rotational rate to 20 degrees per second.
  • This can be accomplished using rate-control functions like those of joysticks that control the rate of movement of virtual objects.
  • Design variations of joysticks, such as "spring-loaded" behavior, isotonic vs. isometric operation, and different rate control mappings, can also be applied.
  • Position relative to the enclosure can be used to control rotation of the display. For example, suppose a virtual head is displayed in the volumetric display. Suppose the user approaches the display from the "back of the head" viewpoint. Touching the enclosure on this side causes the display content to rotate 180 degrees so the face of the head faces the user. This can be accomplished by designating a reference for an object and, when a touch occurs, rotating the contents to align the reference with the touch. Rather than using touch to signal position, voice or thermal sensing or any other position sensors could also be used.
  • Typical volumetric displays being manufactured today have mechanical components whose inertia could cause them to distort when the enclosure is rotated quickly, and distortion of the mechanical components can distort the display contents. These distortions can be measured and the display compensated for them.
  • the present invention has been described with respect to actually rotating the enclosure when rotating the scene.
  • the present invention has been described with respect to using a shaft encoder and the enclosure rotating about a centrally positioned shaft. It is possible for the rotational bearing for the enclosure to be on the periphery of the enclosure and the encoder to be mounted to sense the rotation of the periphery of the enclosure. It is possible to provide a ring around the bottom of the display enclosure that can be rotated to rotate the display contents and to thereby not need to rotate the entire enclosure. It is possible to sense a rotational rate force being applied to the enclosure or a circumferential ring via a rate controller sensor and virtually rotate the displayed scene accordingly. That is, the rotational force is sensed, the enclosure or ring does not actually rotate and the display rotates proportional to the force sensed.
  • the present invention has been described using rotary type encoders to sense rotational motion of the display. It is possible to use other types of sensors such as yaw, pitch and roll sensors to sense rotational force. It is also possible to mount roller wheels/balls around the periphery of the enclosure, sense the rotation of the roller wheels/balls and rotate the display contents accordingly without rotating the enclosure.
  • the volumetric display has been described as a dome or ball, however, other shapes, such as cubes, pyramids, etc., can be used for such displays.
  • the volumetric display also need not have a complete enclosure and the display contents can be projected into an open volume or a partially enclosed volume.
  • a proxy object can serve as a representation of the dome.
  • This proxy object can be manipulated (i.e., rotated) and cause the corresponding actions to occur on the volumetric display.
  • Floor sensors can serve to indicate a rotational amount or user position.
  • the size of display may not be desktop scale but could be smaller (e.g., wristwatch or PDA) or much larger (e.g., room scale as in viewing a car) or even much larger (e.g., as in an amusement park ride scale).
  • The volumetric display can be mounted on a mechanical armature or trolley that can sense its position and orientation in space (i.e., a "spatially-aware" display like the Chameleon).
  • For example, consider a volumetric display for viewing the internals of a human body.
  • the volumetric display shows a human liver.
  • the operator then physically moves the volumetric display 16 inches up and 5 inches to the left.
  • As the display moves, internal structures such as the stomach and lungs are displayed until the operator finally stops moving the display when the heart appears. This can be accomplished by sampling the position of the trolley.
  • For each sampled position, the system finds a corresponding point in a 3D display map. The contents of the display map corresponding to the volume of the display at the corresponding point are transferred to the volumetric display.
  • Volumetric displays allow a user to have a true three-dimensional (3D) view of a scene 6012 and are typically provided in the form of a dome 6014, as depicted in figure 6001.
  • the user 6016, as can be surmised from figure 6001, can move about the dome 6014 to view different parts of the scene 6012. From a particular arbitrary viewpoint, a user may want to select an object 6018 within the scene of the volumetric display, and this may be difficult to do with traditional interface tools.
  • a first solution is to restrict movement of a cursor type volumetric pointer to a designated plane 6030 within the volumetric display and to use a two-dimensional input device 6033, such as a stylus pad or mouse, to input motion of the cursor on the plane.
  • In one embodiment, a stylus and digitizer pad form the input device.
  • the orientation of the plane 6030 in the display can be controlled by the pitch of the pad and the direction of the pad, using sensors for sensing pitch and direction.
  • Another solution (see figures 6002C and 6002D) uses a digitizer tablet 6033 that has designated regions mapping to regions of the volumetric display 6034.
  • a tablet 6035 may have a cross-section marked on it, such as "Front" 6036 and "Back" 6037. Placing the stylus in one of these regions maps the cursor to the corresponding position on the outer shell of the volumetric display 6034.
  • having a "Top” 6038 and "Front" 6039 region delineated on the tablet 6040 can position the cursor in 3-space by selecting two points (one in the "Top” region and one in the "Front” region) with the stylus.
  • Another solution is to restrict the cursor 6041 to moving along the outer surface 6042 of the display as depicted in figure 6003.
  • the cursor 6041 travels along the surface 6042 at a point that is the closest point on the surface 6042 to a stylus 6044 even when the stylus 6044 is lifted from the surface 6042.
  • a surface moving cursor can also be controlled using a touch sensitive display enclosure as well as the arrow keys of a keyboard, a mouse and other 2D input devices.
  • a convention is used to designate what is selected. The convention limits the designation to objects on the surface of the enclosure, to objects vertically under the point of touch, to a closest object, to objects orthogonal to the surface at the cursor, etc. Objects within the range of influence of the cursor would typically be shown as being within that influence by, for example, being highlighted.
  • the surface moving cursor can also be used to tumble the contents of the display. For example, as the cursor moves over the top of the display as depicted in figure 6003 the contents of the display are locked to the cursor and, thus the contents "tumble" within the display enclosure.
  • Figure 6004A and 6004B show a user 6050 touching the display 6052 at two points and the pointing convention being the creation of vertical virtual planes which the user can move by moving the points of touch to, for example, push aside objects that the virtual planes encounter.
  • a further solution is to allow a user to manipulate a cursor (a flying or floating volumetric pointer) within the three-dimensional (3D) space of the volumetric display using a three-dimensional input device, such as the tilt mouse set forth in U.S. Patent 6,115,028, the Flock of Birds system from Ascension Technology Corporation, etc.
  • Figures 6005A and 6005B sequentially depict a user moving a 3D input device 6072 in space adjacent to the display 6074 and a cursor 6076 in the display moving in correspondence thereto.
  • Another solution is to allow the user to point at an object 90 to be selected using a three dimensional pointing device 6092, such as a beam pointer, to thereby point to the object 6090 to be selected using a visible volumetric pointer ray 6094.
  • a three dimensional pointing device 6092 such as a beam pointer
  • An alternative solution is to partition the volumetric space into a 3D grid and use pushbuttons to advance or retard a cursor in each dimension (e.g., using the arrow keys on the keyboard with or without modifier keys moves the cursor to the next cell in the 3D grid).
  • selecting an object can be done by determining a traversal sequence through the volume using a heuristic algorithm. For example, consider a volume space that is partitioned into a stack of thin slices or "slabs". A scan algorithm could search for objects starting at the top left of the slab space, scanning across from left to right, row by row, until the bottom of the slab is reached. This same scan is performed for each progressively deeper slice of the volumetric space. The net effect of this algorithm is to make each object in the volume addressable by defining a sequence of objects and having the user jump to the next or previous object using a "next" key.
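  • A sketch of the slab-scan ordering, assuming objects given as (x, y, z) positions with z increasing away from the viewer:

```python
def scan_order(object_positions, slab_depth):
    """Order objects slab by slab (front to back), then row by row (top to
    bottom), then left to right, so a "next" key can step through all of them."""
    return sorted(object_positions,
                  key=lambda p: (p[2] // slab_depth, p[1], p[0]))

objs = [(5, 2, 0), (1, 1, 9), (0, 0, 0), (3, 2, 9)]
print(scan_order(objs, slab_depth=8))
# [(0, 0, 0), (5, 2, 0), (1, 1, 9), (3, 2, 9)] -- front slab scanned first
```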
  • the cursor being moved can perform a number of different functions including designating/selecting objects, changing the display of objects in the display, such as by applying paint or dye, moving objects within the display and other functions typical of cursors used in 2D displays.
  • the cursor can be a volumetric point cursor, such as a small object in the scene like the 3D arrow 6110 depicted in figure 6007A. While this has the advantage of an easily understood metaphor, because 2D arrows are used with conventional 2D displays, such cursors can suffer from being obscured by other objects in the line of sight in conventional displays, where it is often difficult to perceive where in the depth dimension the cursor resides. This problem is alleviated in volumetric displays due to the enhanced depth perception and the user's wider field of view. Further, since volumetric displays allow easy scene rotation, this in turn increases the efficiency of pointing in 3D with a point volumetric cursor.
  • the cursor can also be a 3D volume cursor 6112 as depicted in figure 6007B.
  • a volumetric volume cursor can enhance depth perception.
  • the volume cursor shape could be cubic, spherical, cylindrical, cross, arrow or arbitrary shapes like a 3D shovel, tire tube or irregularly shaped object. While depth perception is not a problem with volumetric displays, volume cursors nonetheless afford certain advantageous properties when used with volumetric displays. First, if the volume cursor is made semitransparent, objects behind the cursor can still be seen. Second, the volumetric nature of the cursor can enable volume operations such as selecting multiple objects at once.
  • the cursor can also be a depth controllable type cursor, such as a bead cursor 6114 as depicted in figure 6007C.
  • a bead type depth cursor allows the user to control the bead of the cursor using two different modes of interaction.
  • the cursor is positioned by pointing a beam 6116 at an object, and the position of the cursor 6114 along the beam is adjusted with a position control, such as a slider, a thumbwheel, a pressure sensor, etc.
  • An interface control, such as a button, can then be used to activate the cursor's function at that position.
  • the depth type cursor could also have the cursor be a stick or wand shape rather than the bead shape shown in figure 6007C.
  • the stick or wand could be divided into segments with a different cursor function allocated to each segment, making it a smart cursor. For example, assume that the cursor has two segments: a delete segment and a modify segment. During operations, when the "delete" segment contacts an object and the control is activated, the delete function is performed, while when the "modify" segment contacts the object, the object is modified according to a predetermined function when the control is activated.
  • a cursor used for entry of text (2D or 3D) into the volumetric display would preferably have an I-beam shape.
  • a convention sets the lay of a line of text
  • the present invention is typically embodied in a system as depicted in figure 8 where physical interface elements 6130, such as a rotary dome position encoder, infrared user position detectors, a keyboard, touch sensitive dome enclosure surface, mouse, beam pointer, beam pointer with thumbwheel, stylus and digitizer pad or stylus and stylus sensitive dome enclosure surface, stylus with pressure sensor, etc. are coupled to a computer 6132, such as a server class machine.
  • the computer 6132 uses a graphical creation process, such as the animation package MAYA available from Alias|Wavefront, Inc., to create three-dimensional (3D) scene elements.
  • This process also creates the virtual interface elements, such as the 3D point cursor, 3D volume cursor, beam, bead, etc. discussed herein.
  • the display output, including the scene and interface elements, is provided to a conventional volumetric display apparatus 6134, such as one that will produce a 3D holographic display.
  • Pointing to objects within a volumetric display can be effectuated using a number of different volumetric systems as depicted in figures 6009A-6009D. These systems operate using the technology included in conventional stylus and digitizing tablet or pad input devices.
  • This type of technology includes transparent and flexible digitizers capable of sensing and outputting not only the position of the stylus but also the angle (vector) of the stylus with respect to the digitizer surface and the distance the stylus is located from the surface.
  • These types of styli and digitizers are also capable of inputting a control action, such as is required for activating a control, via switches included within the stylus and sensed by the digitizer/tablet via pressure transducers and via multiple coils.
  • a transparent digitizer 6150, for example, a transparent plastic with an embedded wire sensing grid, can cover the display enclosure 6152.
  • the digitizer 6150 senses a stylus 6154 and provides a position of the stylus to a computer 6156.
  • the computer 6156 produces a volumetric scene, along with determining a position of a cursor within the display, and outputs the scene with cursor therein to the display system 6158, which produces the scene including the cursor within the display enclosure 6152.
  • the digitizer 6160 is spaced from the enclosure 6162 and can take the shape of a box or a cylinder.
  • the digitizer 6164 and enclosure 6166 can be box or cylindrically shaped (see also figures 6002A and 6002B).
  • the transparent digitizer 6168 is spaced from the enclosure 6170 and takes the shape of a familiar rectangular tablet.
  • cursor position can be based on a three-dimensional input, such as provided by a digitizing glove, or based on a pointing device such as a beam pointer. In most applications, the beam can be considered a preview of what will be selected once a control, such as a button, is used to select the object.
  • Beam pointing can be divided into a number of categories: vector based, planar based, tangent based beam pointing, object pointing or snap-to-grid.
  • an orientation input vector 6190 for a stylus 6192 with respect to the display enclosure 6194 is determined.
  • This vector 6190 is used to cast a ray or beam 6196 where the ray can be coincident with the vector or at some offset with respect to the vector.
  • the ray 6196 can be invisible or preferably made visible within the volumetric display to aid in the pointing process.
  • the cast ray or vector is used to determine which voxels within the display to highlight to make the ray visible. Once the path of the ray 6196 is known, a determination can be made as to any objects that the ray encounters or intersects, as sketched below.
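As one illustration of this step, the following Python sketch walks sample points along a cast ray, as would be done both to pick the voxels to highlight and to detect the first intersected object. The scene contents, step size and bounding-sphere hit test are illustrative assumptions rather than the patent's prescribed method:

```python
import math

def ray_voxels(origin, direction, length, step=0.5):
    """Sample points along the cast ray; each sample would be quantized to
    the nearest voxel and highlighted so the ray is visible in the volume.
    `direction` is assumed to be unit length."""
    n = max(1, int(length / step))
    dx, dy, dz = direction
    return [(origin[0] + dx * i * step,
             origin[1] + dy * i * step,
             origin[2] + dz * i * step) for i in range(n + 1)]

def first_hit(origin, direction, objects, step=0.5, length=20.0):
    """Walk the ray and report the first object whose bounding sphere it
    enters; `objects` are hypothetical (center, radius, name) triples."""
    for p in ray_voxels(origin, direction, length, step):
        for center, radius, name in objects:
            if math.dist(p, center) <= radius:
                return name
    return None

scene = [((0.0, 0.0, 5.0), 1.0, "virtual object 6198")]
print(first_hit((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), scene))  # -> virtual object 6198
```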
  • An object, such as virtual object 6198, hit by a ray can, if desired, be selected when a control, such as a button on the stylus, is activated.
  • the ray 6196 may change properties (such as direction, or shape) when hitting or passing through an object. For example, a ray passing through a container of water may simulate the bending effect of a light ray in water.
  • a ray is cast orthogonal to a designated reference plane from a contact point of a stylus with a tablet surface.
  • Figure 6011A illustrates a ray 6220 cast from a stylus contact point 6222 to a bottom plane 6224 of the display enclosure.
  • Figure 6011B shows a ray 6226 cast from a contact point 6228 to an arbitrary user defined plane 6230.
  • the reference plane can be specified by the input of planar coordinates by the user or with a plane designation device (see figures 6019A and 6019B).
  • a cast ray 6232 can be used to select a first virtual object 6234 that the ray encounters.
  • a ray 6250 (see figure 6012) is cast orthogonal to a plane 6252 that is tangent to a digitizer display enclosure 6253 at a point of contact 6254 of a stylus 6256 with the digitizer. Once again any object encountered by the cast ray 6250 can be selected.
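For a spherical dome enclosure, the ray cast orthogonal to the tangent plane at the contact point is simply the inward surface normal at that point. A short Python sketch under that assumption (the dome center and contact coordinates are illustrative):

```python
import math

def tangent_cast_ray(contact, center):
    """Ray origin and direction for a ray cast orthogonal to the plane
    tangent to a spherical enclosure at the stylus contact point: the
    direction is the inward surface normal, i.e. from the contact point
    toward the dome center."""
    v = tuple(c - p for c, p in zip(center, contact))
    norm = math.sqrt(sum(x * x for x in v))
    return contact, tuple(x / norm for x in v)

origin, direction = tangent_cast_ray(contact=(0.0, 3.0, 4.0), center=(0.0, 0.0, 0.0))
print(direction)  # -> (0.0, -0.6, -0.8), pointing into the display
```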
  • the point of contact from which the ray is cast or projected is determined by the position of the stylus.
  • This point from which a ray is cast orthogonal to the surface of the display can be designated using other devices, such as a mouse or the arrow keys on a keyboard.
  • moving a mouse on a mouse pad adjacent to the display 6253 can move a ray projection point cursor in "two dimensions" on the surface of the display. That is, the ray projection point cursor is a surface moving cursor.
  • the mouse pad has a front side, a back side, a right side and a left side and the display 6253 has corresponding sides.
  • the ray projection point cursor is moved along the surface of the display 6253 from front to back in a proportional movement. This is accomplished by sampling the 2D inputs from the mouse and moving the cursor along the surface in the same direction and the same distance, unless a scale factor is used to adjust the distance moved on the display surface.
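A minimal sketch of this proportional surface movement, assuming the display surface is addressed by 2D coordinates (u for left/right, v for front/back) and that an optional scale factor may be applied; the coordinate bounds are hypothetical:

```python
def move_surface_cursor(cursor_uv, mouse_delta, scale=1.0, bounds=(100.0, 100.0)):
    """Move a surface-restricted ray-projection cursor by a sampled 2D mouse
    delta, in the same direction and (optionally scaled) distance. The cursor
    lives in the display's 2D surface coordinates (u = left/right,
    v = front/back), clamped to the surface's extent."""
    u = min(max(cursor_uv[0] + scale * mouse_delta[0], 0.0), bounds[0])
    v = min(max(cursor_uv[1] + scale * mouse_delta[1], 0.0), bounds[1])
    return (u, v)

print(move_surface_cursor((50.0, 50.0), (4.0, -2.0), scale=0.5))  # -> (52.0, 49.0)
```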
  • the ray is projected from the cursor into the display orthogonal to the surface at the point of the cursor.
  • selection using a beam can be performed in a number of different ways.
  • the beam can automatically select an object which it encounters, or a particular position along the beam can be selected using a cursor, such as a bead or stick, as previously mentioned.
  • the position of the bead can be controlled by a position device, such as a thumb wheel or a secondary device. It is also possible to fix the bead a predetermined distance along the beam and allow the position of the stylus to indicate the position of the bead as shown in figures 6013A and 6013B.
  • Figure 6013A shows the stylus 6270 in contact with the enclosure 6272 and the bead 6274 positioned within the display along the ray 6276.
  • Figure 6013B shows the stylus 6270 at a distance from the enclosure 6272 and the bead 6274 in the display at a same constant distance from the stylus 6270 along the ray 6276.
  • Another bead based selection mechanism is shown in figure 6014.
  • a bead 6290 is created at an intersection of a primary beam 6292 and a secondary beam 6294 cast by separate styli 6296 and 6298.
  • the secondary beam 6294 specifies the position along the primary beam 6292 where the cursor is created based on a region of influence or angular tracking and intersection designation.
  • the present invention allows the size of the bead to be changed as depicted in figures 6015A and 6015B.
  • a user changes or initiates a change in the size of the bead 6310 using an input device, such as a thumbwheel on a stylus 6314.
  • the size can be continuously varied until it is of a size desired by the user as depicted in figure 6015B.
  • once the bead cursor has reached the desired size, it can be positioned surrounding or contacting an object or objects 6316 that the user desires to select (and excluding undesired objects 6318), as depicted in figure 6015C.
  • the enlarged bead cursor can be shown with the original size bead 6310 as an opaque object therein, to allow the user to see the position of the center of the cursor, with the surrounding semitransparent volume cursor 6320 enclosing the embedded objects 6316 that have been selected.
  • a cursor associated with a ray can take other volume geometric shapes in addition to the bead or stick shapes previously mentioned. As depicted in figure 6016, the cursor can take the shape of a ring 6340 allowing the cursor to select a swept volume 6342 when the stylus is moved from an initial position 6344 to a final position 6346.
  • the ring 6340 (and volume 6342) can be made semitransparent or opaque as needed for a particular operation. Objects inside the volume can be selected for a functional operation or the swept volume could itself be acted on when a function is initiated.
  • the cursor can also take the shape of a cast cone 6360 as depicted in figure 6017, where the cone can be semitransparent and objects within or contacting the cone can be selected.
  • the cone can have its apex 6362 at the surface of the enclosure as shown or at some user desired position along the orientation vector of the input device as specified by an input device, such as a stylus thumbwheel.
  • the volume cursor associated with a cast ray can take the shape of a semitransparent voxel cylinder or shaft 6380 centered on the cast ray and optionally terminated by the bead 6384 as depicted in figure 6018.
  • Figure 6018 also depicts a situation where the objects within the shaft 6380 are rendered transparent so the user can see inside or through objects 6386 and 6387 within the display. Essentially a window into an object is created.
  • the transparent hole created by the shaft 6380 stops at the bead 6384.
  • the position of the bead 6384 along the ray 6382 is adjustable and the bottom of the shaft can have a planar or some other shape.
  • the cursor used for selecting or designating within a volumetric display can also take other shapes, such as the shape of a display spanning plane as depicted in figures 6019A and 6019B.
  • Such an input plane 6400 can be specified by a rule or convention and an input device 6402 that can be "parked" at a location on the enclosure and that includes a mechanism for specifying location and orientation, such as the mechanism found within styluses that can be used to designate a contact point and a vector.
  • the rule could, for example, specify that the plane must be orthogonal to a bottom 6404 of the enclosure 6406, pass through the point of contact and be parallel with the vector.
  • the plane in addition to acting as a cursor can be used in combination with a ray to form a cursor where the cursor would be formed at an intersection of the plane and the ray.
  • the selection mechanism with respect to cast rays can also include a region of influence that automatically selects objects within a specified and variable distance of the cast ray as shown in figures 6020A and 6020B.
  • four objects 6420-6426 are within the selection region 6427 of the ray 6428 while one object 6430 is not.
  • a "spread" function is also used which is a spatial nearest neighbor heuristic. Based on the currently selected object has it's nearest neighbor determined, etc.
  • Figure 6020B shows the same objects but with only object 6420 being within the region of influence.
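The region-of-influence test reduces to comparing each object's perpendicular distance from the cast ray against the selection radius. A Python sketch, with illustrative object positions standing in for objects 6420 and 6430:

```python
import math

def within_influence(ray_origin, ray_dir, point, radius):
    """True when a point lies within the region of influence: its
    perpendicular distance to the cast ray is at most `radius`
    (ray_dir is assumed unit length)."""
    v = tuple(p - o for p, o in zip(point, ray_origin))
    t = max(0.0, sum(a * b for a, b in zip(v, ray_dir)))   # closest-point parameter
    closest = tuple(o + t * d for o, d in zip(ray_origin, ray_dir))
    return math.dist(point, closest) <= radius

objects = {"6420": (0.2, 0.1, 3.0), "6430": (2.5, 0.0, 3.0)}
ray_o, ray_d = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
print([n for n, p in objects.items() if within_influence(ray_o, ray_d, p, 1.0)])
# -> ['6420']
```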
  • as depicted in figures 6021A and 6021B, the guidelines 6442 are preferably semitransparent voxels that allow objects behind the guidelines in the user's line of sight to be seen. The guidelines are particularly useful when the cursor is obscured by an object.
  • the cursor and its location designation apparatus such as a stylus can be used to control objects.
  • an object selected by a ray/bead can be moved by activating a move function and moving the location of the bead with the stylus.
  • Another possible motion is a rotational motion where an object 6460 rotates as the stylus 6462 selecting the object rotates, as depicted in figure 6022. Note that the object can rotate about any arbitrary axis; however, most applications will preferably rotate the object about the axis defined by the input ray.
  • Figure 6023 depicts virtual track pads 6480 and 6482 on the display surface that can be used with a surface cursor or a ray.
  • the track pad could also be used to set positions along a ray.
  • the track pads can move with a user as the user moves around the display.
  • the pointing operations involve obtaining 6500 input values from the input device where the input values are the raw output values of the input device (for example, stylus/pad or glove).
  • the system then combines 6502 the input values with enclosure shape and/or position. This allows the system to take into account the shape of the enclosure to use in deriving a positional coordinate. For example, when a 2D input tablet is essentially stretched over a dome shaped enclosure, the tablet only reports a 2D position. However, this position value is combined with the knowledge of the shape of the dome to derive or map to a 3D position (i.e., a point in three space which is on the dome). This shape and/or position information allows the correct mapping between input and output spaces to occur. Note that not all of the different embodiments make use of the shape of the enclosure. For example, when the input device senses its 3D location, the shape of the enclosure does not need to be known. However, the position of the display relative to the sensing volume of the 3D input device needs to be known. Hence, this operation also factors in display and enclosure position.
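As a concrete illustration of this mapping step, the sketch below converts a 2D reading from a digitizer stretched over a hemispherical dome into a 3D point on the dome surface. The (azimuth, polar angle) parameterization and the radius are assumptions made for the example, not the patent's prescribed encoding; a real tablet would report grid coordinates that are first converted to these angles:

```python
import math

def dome_point(u, v, radius=10.0):
    """Map a 2D digitizer reading to a 3D point on a hemispherical dome.
    Here u is the azimuth (0..2*pi around the dome) and v is the polar
    angle (0 at the apex, pi/2 at the rim)."""
    return (radius * math.sin(v) * math.cos(u),
            radius * math.sin(v) * math.sin(u),
            radius * math.cos(v))            # dome apex along +z

p = dome_point(math.pi / 4, math.pi / 3)
print(p)  # a point in three space that lies on the dome surface
```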
  • a cursor position metaphor for the input and output spaces is applied 6504. This is used because the cursor control techniques can be much more than simply a 3D position in space but may involve metaphors such as "ray-casting" that use additional information. For example, if a stylus based ray with a depth controllable bead is used, the ray is projected from the positional coordinate of the contact point of the stylus with the enclosure along the orientation vector of the stylus. The depth of the bead set by the depth control device (slider, thumbwheel, etc.) is used to determine a point along the ray from the contact point at which to create the bead cursor.
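A sketch of the bead placement under that metaphor; the contact point, orientation vector and depth value are illustrative, with the depth standing in for the slider or thumbwheel reading:

```python
def bead_position(contact, orientation, depth):
    """Place the bead cursor `depth` units along the ray projected from the
    stylus contact point along the stylus orientation vector (assumed unit
    length); `depth` would come from the depth control device."""
    return tuple(c + depth * o for c, o in zip(contact, orientation))

# Contact point on the enclosure and an inward-pointing stylus vector
# (illustrative values); the thumbwheel sets how deep the bead sits.
print(bead_position((0.0, 3.0, 4.0), (0.0, -0.6, -0.8), depth=2.5))
# -> (0.0, 1.5, 2.0)
```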
  • the applied metaphor involves transforming or mapping the input device coordinates (such as the coordinates of the stylus above or on the surface of the enclosure) into volumetric display surface coordinates and finding the closest point (voxel) on the display surface to the input coordinates as the position.
  • the input device coordinates can also be taken directly from a 3D input, such as the 3D position of a glove in a space adjacent to the display enclosure.
  • the display has a back and a front.
  • a cursor is at some position in the display.
  • the input device, such as a non-vector flock-of-birds sensor, has a button that activates the "move" function of the device. If the cursor is at some arbitrary position in the display and the user is standing in front of the display, when the input device is activated and moved toward the display, the cursor moves from front to back. If the user turns off the activation, moves to the rear of the display, activates the device and moves the device toward the display, the cursor will move from the back to the front of the display. That is, movement of the input device away from the user will always move the cursor away from the user.
  • the metaphor in this situation requires that the movements of the cursor be matched in orientation and distance to the movement of the glove, unless a scale factor is involved where, for example, movement distance of the cursor is scaled to 1/2 of the movement of the glove.
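This clutched, scaled relative mapping might look like the following sketch, where `active` models the move control being held and the 1/2 scale factor matches the example above:

```python
def relative_move(cursor, device_pos, last_device_pos, active, scale=0.5):
    """Clutched relative mapping: while the move control is held (`active`),
    the cursor moves in the same direction as the device, with the distance
    optionally scaled (here to 1/2); releasing the control lets the user
    reposition the device without moving the cursor."""
    if not active:
        return cursor
    return tuple(c + scale * (p - q)
                 for c, p, q in zip(cursor, device_pos, last_device_pos))

cursor = (0.0, 0.0, 0.0)
cursor = relative_move(cursor, (1.0, 0.0, 2.0), (0.0, 0.0, 0.0), active=True)
print(cursor)  # -> (0.5, 0.0, 1.0)
```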
  • the metaphor may also involve separate input surfaces being used depending on the shape of the display. For example, a cylinder can be comprised of 2 separate input surfaces: one for the top of the cylinder, and one for the side.
  • a mapping function between the input coordinate space for the volumetric display and the addressable voxel space within the volumetric display can be defined for a desired metaphor.
  • the cursor is then positioned 6506 (see figure 6024) at the appropriate position within the volumetric display.
  • a point cursor is just a point in 3-space and has no orientation or volume information. If the point is coincident with an object, contact exists. Cursors like volume cursors, influence cursors, ring cursors, etc., can require orientation information as well as volume information.
  • the points comprising volume objects in the display space need to be compared to the points comprising the oriented volume cursor to determine if contact exists.
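A sketch of such a contact test for a spherical volume cursor; real cursors with an orientation (rings, cones, etc.) would first transform the object points into the cursor's local frame, which is omitted here:

```python
import math

def sphere_cursor_contact(cursor_center, cursor_radius, object_points):
    """Volume-cursor contact test: compare the points comprising an object
    against a spherical volume cursor."""
    return any(math.dist(p, cursor_center) <= cursor_radius for p in object_points)

# A point cursor is the degenerate case: a zero-radius sphere that only
# contacts an object when coincident with one of its points.
obj = [(1.0, 1.0, 1.0), (1.2, 1.0, 1.0)]
print(sphere_cursor_contact((1.1, 1.0, 1.0), 0.2, obj))  # -> True
print(sphere_cursor_contact((1.1, 1.0, 1.0), 0.0, obj))  # -> False
```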
  • a determination 6510 is made as to whether this is a smart cursor or a control having a particular function that has been activated. If so, the system determines 6512 whether the function has been activated, such as by the depression of a button on a stylus. If the function has been activated, it is performed 6514 and the system continues inputting input device coordinates.
  • a determination 6632 is made concerning an origin of the ray that is to be cast, as depicted in figure 6025.
  • in a stylus contact type mode (see figures 6010, 6011A and 6013B), a transformation similar to that performed for the surface restricted cursor is performed.
  • in a vector mode, a closest point on the surface of the display along the vector is determined.
  • the system then casts 6634 a ray.
  • in the vector mode, the ray corresponds to the vector.
  • in planar mode, a search is performed for a point on the reference plane at which an orthogonal ray will intersect the ray's point of origin.
  • the system determines whether an object has been contacted 6636.
  • the ray is the selection mechanism (see figure 6011C)
  • conventional ray casting object detection along the ray is performed and the first object encountered, if any, is flagged.
  • the bead is treated like a volume cursor as discussed above.
  • a plane such as depicted in figure 6019
  • This virtual plane can have an orientation and position in object space.
  • objects that intersect or come in contact with the virtual plane are also activated.
  • when the virtual plane moves in position or orientation, the activated objects move a corresponding distance and direction proportional to the motion of the virtual plane.
  • releasing the virtual plane also deactivates the objects currently in contact with it. If the resulting virtual plane motion causes activated objects to be moved beyond the imaging chamber of the display, their data structures are still affected even though they are not visible in the display.
  • Alternative strategies for plane operation are also possible.
  • the system also includes permanent or removable storage, such as magnetic and optical discs, RAM, ROM, etc. on which the process and data structures of the present invention can be stored and distributed.
  • the processes can also be distributed via, for example, downloading over a network such as the Internet.
  • the rays of the present invention have been shown as typical beam or pencil type rays.
  • the rays can also take other shapes, even fanciful ones like the corkscrew 6520 of figure 6026A and the lightning bolt 6522 of figure 6026B.
  • a surface restricted cursor could produce a target ray.


Abstract

The present invention is a system that allows a number of 3D volumetric display (6042) or output configurations to interact with a number of different input configurations. The user interacts via the input configurations, such as by moving a digitizing stylus (6044) on the sensing grid formed on a dome enclosure surface. The present invention is a system that manages a volumetric display (6042) using volume windows. When initiated by an application a volume window is assigned to the application in a volume window data structure. The present invention is a system that places 2D user interface widgets in optimal positions in a 3D volumetric display (6042) where they can be easily used based on the knowledge users have about traditional 2D display systems. Virtual 2D widgets are mapped to volumetric voxels of the 3D display system. The present invention is a widget display system for a volumetric or true three-dimensional (3D) display that provides a volumetric or omni-viewable widget that can be viewed and interacted with from any location around the volumetric display. The present invention is a system that allows a user to physically rotate a 3D volumetric display (6042) enclosure with a corresponding rotation of the display contents. The present invention is a system that creates a volumetric display (6042) and a user controllable volumetric pointer within the volumetric display. The user designates an input position and the system maps the input position to a 3D cursor position (6041) within the volumetric display (6042).

Description

TITLE OF THE INVENTION
THREE DIMENSIONAL VOLUMETRIC DISPLAY INPUT AND OUTPUT
CONFIGURATIONS
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application is related to and claims priority to U.S. provisional application entitled User Interfaces For Volumetric Displays having serial number 60/350,952 (S&H Docket 1252.1054P), by Kurtenbach et al, filed January 25, 2002. This application is also related to U.S. application entitled Three Dimensional Volumetric Display Input And Output Configurations having serial number 10/183,970 (S&H Docket 1252.1054), by Kurtenbach et al, filed June 28, 2002, to U.S. application entitled Volume Management System For Volumetric Displays, having serial number 10/183,966 (S&H Docket 1252.1065), by Kurtenbach et al, filed June 28, 2002, to U.S. application entitled Widgets Displayed And Operable On A Surface Of A Volumetric Display Enclosure, having serial number 10/183,945 (S&H Docket 1252.1066), by Fitzmaurice et al, filed June 28, 2002, to U.S. application entitled Graphical User Interface Widgets Viewable And Readable From Multiple Viewpoints In A Volumetric Display, having serial number 10/183,968 (S&H Docket 1252.1067), by Fitzmaurice et al, filed June 28, 2002, to U.S. application entitled A System For Physical Rotation of Volumetric Display Enclosures To Facilitate Viewing, having serial number 10/188,765 (S&H Docket 1252.1068), by Balakrishnan et al, filed June 28, 2002, and to U.S. application entitled Techniques For Pointing To Locations Within A Volumetric Display, having serial number 10/183,944 (S&H Docket 1252.1069), by Balakrishnan et al, filed June 28, 2002, all of which are incorporated by reference herein.
BACKGROUND OF THE INVENTION
1. Field of the Invention
[0002] The present invention is directed to input and output configurations for three-dimensional volumetric displays and, more particularly, to input configurations that allow the content of a three-dimensional volumetric display output configuration to be affected by actions by a user operating within an input configuration.
[0003] The present invention is directed to a system for managing data within a volumetric display and, more particularly, to a system that uses volume windows to manage data within a volumetric display.
[0004] The present invention is directed to providing two-dimensional (2D) widgets in three-dimensional (3D) displays and, more particularly, to mapping a 2D widget into a volumetric display at a position where it can be easily used, such as on the outside surface of the volumetric display inside an enclosure for the display.
[0005] The present invention is directed to providing graphical user interface widgets or interface elements that are viewable from different viewpoints in a volumetric display and, more particularly, to a system where a widget is produced that can be viewed and operated from multiple viewpoints.
[0006] The present invention is directed to a system for rotating a class of three-dimensional (3D) displays called volumetric displays and, more particularly, to a system that allows a user to rotate the display to view different parts of the scene within the display without having to move or walk around the display.
[0007] The present invention is directed to a system that allows users to point at objects within a volumetric display system, and, more particularly to a system that allows a number of different pointing approaches and pointing tools.
2. Description of the Related Art
[0008] A class of three-dimensional (3D) displays, called volumetric displays, is currently undergoing rapid advancement. The types of displays in this class include holographic displays, swept volume displays and static volume displays. Volumetric displays allow for three-dimensional (3D) graphical scenes to be displayed within a true 3D volume. Such displays can take many shapes including cylinders, globes, domes, cubes, an arbitrary shape, etc. with a dome being a typical shape. Because the technology of these displays is undergoing rapid development those of skill in the art are concentrating on the engineering of the display itself. As a result, the man-machine interface to or input/output configurations with which people interface with these types of displays is receiving scant attention.
[0009] While the volumetric displays allow a user to view different parts of a true 3D scene, the act of viewing the different parts typically requires that the user physically move around (or over) the display or that the display be moved or rotated in front of the user. As the display moves relative to the user, graphical objects may also move relative to the user. When the display is relatively stationary or when it is relatively moving, the user may need to interact with the display. As a result, what the user needs is an effective mechanism for interacting with the display.
[0010] While the volumetric displays allow a user to view different parts of a true 3D scene, the act of viewing the different parts typically requires that the user physically move around (or over) the display or that the display be moved or rotated in front of the user. As the display moves relative to the user, graphical objects may also move relative to the user. When the display is relatively stationary or when it is relatively moving, the user may need to interact with the display. Because users will interact with these displays in unexpected ways, like conventional 2D displays, 3D volumetric displays require mechanisms for the general management and placement of data within these types of displays. What is needed is a system for managing the volume(s) in a volumetric display.
[0011] While the volumetric displays allow a user to view different parts of a true 3D scene, the act of viewing the different parts typically requires that the user physically move around (or over) the display or that the display be moved or rotated in front of the user. As the display moves relative to the user, graphical user interface elements, sometimes called widgets, may also move relative to the user. This is a particular problem when the widget is a two-dimensional (2D) interface, such as a menu, a file tree, a virtual keyboard, or a display/view of a two dimensional document, such as a list or spreadsheet. Assuming that a volumetric display system needs to make use of these two-dimensional widgets, the question arises as to where to place these widgets to allow the user to interact with them.
[0012] A solution is to place the 2D widgets anywhere within the display. This can result in the intermingling of widgets and data, which may not be desirable. Additionally, complex 3D selection techniques may be needed if the 2D widget is placed in the 3D scene space to avoid selecting scene elements when the widget is intended.
[0013] What is needed is a system that will optimally place two-dimensional widgets on or in a volumetric display to allow direct and simple interaction.
[0014] While the volumetric displays allow a user to view different parts of a true 3D scene, the act of viewing the different parts typically requires that the user physically move around (or over) the display or that the display be moved or rotated in front of the user. As the display moves relative to the user, graphical user interface elements, sometimes called widgets, may also move relative to the user. This is a particular problem when the widget is a two dimensional interface, such as a virtual keyboard, or a display/view of a two dimensional document, such as a list or spreadsheet.
[0015] What is needed is a system that will provide user interface elements that are viewable and operable from whatever viewpoint a user takes around a volumetric display.
[0016] While the volumetric displays allow a user to view different parts of a true 3D scene, the act of viewing the different parts typically requires that the user physically move around the display. For example, if the user wants to view the backside of a scene including a building, the user must move to the backside of the display to see the back of the building. This movement is typically performed by the user walking around the display. Requiring the user to physically move around the display for an extended period of time is probably not the best way to work with these types of displays. And some movements may also be impractical, such as moving above the display for a view from above.
[0017] Another approach to viewing the different parts of the display is for the user to virtually navigate around the scene using camera navigation techniques. For large complicated scenes, rendering the scene for each increment of camera navigation can be computationally expensive, with the result that slow refresh rates detract from the user's experience. What is needed is a system that will allow a user to physically rotate the display.
[0018] While the volumetric displays allow a user to view different parts of a true 3D scene, the act of viewing the different parts typically requires that the user physically move around (or over) the display or that the display be moved or rotated in front of the user. As the display moves relative to the user, graphical objects may also move relative to the user. When the display is relatively stationary or when it is relatively moving, the user may need to interact with the display by pointing to something, such as a model object to, for example, paint the object, or to select the object for some function such as to move the object or select a control on an interface of the object. The object to which the user needs to point may be at any level within the display from the surface of the display adjacent the enclosure to the farthest distance within the display from the enclosure or the user. As a result, the user needs a mechanism for pointing to objects at different locations within a volumetric display. Today, those in the field do not appear to be concerned with this problem. Because many computer users are familiar with conventional interface tools and techniques, what is needed is a mechanism that will allow users to point at objects within the volumetric display in a situation where the viewpoint changes and that takes advantage of the learned behavior of users with respect to two-dimensional (2D) display interfaces, such as the 2D mouse driven cursor.
SUMMARY OF THE INVENTION
[0019] It is an aspect of the present invention to provide effective mechanisms for a user to interact with content of the three-dimensional volumetric display.
[0020] It is also an aspect of the present invention to provide input and output configurations for a three-dimensional volumetric display.
[0021] It is another aspect of the present invention to provide dome, cubical and cylindrical output configurations.
[0022] It is also an aspect of the present invention to provide input configurations that allow a 3D volumetric input space to be mapped to the 3D volumetric display, a planar 2D input space to be mapped to the 3D volumetric display, a planar 2D input space to be mapped to a planar 2D space within the 3D volumetric display, and a non-planar 2D input space to be mapped to the 3D volumetric display.
[0023] It is an aspect of the present invention to provide a system that manages data within a volumetric display using volume windows.
[0024] It is another aspect of the present invention to allow users to use simple and familiar modes of operation in operating with volume windows.
[0025] It is also an aspect of the present invention to provide functions that support operations on volume windows.
[0026] It is an aspect of the present invention to provide a system that allows 2D widgets or graphical user interfaces to be used in a 3D volumetric display.
[0027] It is another aspect of the present invention to position widgets within a volumetric display at positions where they are useful for direct and simple interaction.
[0028] It is also an aspect of the present invention to provide 3D widgets in a volumetric display that can be used in much the same way 2D widgets are used in conventional 2D display systems.
[0029] It is an aspect of the present invention to place the widgets on an outside surface of a volumetric display inside a protective enclosure.
[0030] It is an aspect of the present invention to place the widgets on a surface within the volumetric display, such as the "floor" of the display, a back plane, or a non-planar surface, so that they can be used in much the same way as in conventional 2D display systems.
[0031] It is an aspect of the present invention to provide widgets that can be used in a volumetric display where one or more users can view the display and the widgets from different viewpoints and locations around the display.
[0032] It is another aspect of the present invention to orient the widgets to the users by tracking the location of the users.
[0033] It is also an aspect of the invention to provide omni-directionally viewable widgets.
[0034] It is an additional aspect of the present invention to replicate planar widgets, providing a widget for each user or cluster of users.
[0035] It is a further aspect of the present invention to provide widgets that rotate so that all users can view the widgets.
[0036] It is an aspect of the present invention to provide a system that allows a user of a three-dimensional volumetric display to remain in one position while rotating the display, either directly by hand or through a rotation mechanism that could include a motor drive, so that scene objects are rotated within the display and can be seen from different viewpoints.
[0037] It is another aspect of the present invention to allow a user to rotate a display enclosure and thereby rotate the contents of the display.
[0038] It is also an aspect of the present invention to maintain orientation of widgets in the display with respect to the user while display contents rotate.
[0039] It is a further aspect of the present invention to allow a gain to be applied to the virtual rotation of the display with respect to the physical rotation of the enclosure.
[0040] It is an additional aspect of the present invention to allow users to intuitively control a dome or ball shaped display by placing their hands on the enclosure of the display and moving the display with their hands in a direction based on the intuitive hand control.
[0041] It is an aspect of the present invention to provide a system for pointing at objects within a volumetric display system.
[0042] It is an additional aspect of the present invention to allow a user to point at objects from different viewpoints around a volumetric display.
[0043] It is another aspect of the present invention to provide a number of different types of volumetric pointers each having a different volumetric geometry.
[0044] It is also an aspect of the present invention to provide a number of different ways in which to point within a volumetric display.
[0045] It is an aspect of the present invention to establish a spatial relationship between the volumetric pointer and the user's body position, specifically the position of their hands. Movements of the hands and body position have a significant spatial congruence with the volumetric pointer/pointers.
[0046] The above aspects can be attained by a system that allows a number of 3D volumetric display configurations, such as dome, cubical and cylindrical volumetric display enclosures, to interact with a number of different input configurations, for example, a three-dimensional position sensing system, a planar position sensing system and a non-planar position sensing system. The user interacts with the input configurations, such as by moving a stylus on a sensing grid formed on an enclosure surface. This interaction affects the content of the volumetric display, for example, by moving a cursor within the 3D display space of the volumetric display.
[0047] The above aspects can be attained by a system that manages a volumetric display using volume windows within a display space or main volume window or root volume. The volume windows have the typical functions, such as minimize, resize, etc, that operate in a volume. Application data, such as a surface texture of a model, is assigned to the windows responsive to which applications are assigned to which windows in a volume window data structure. Input events, such as a mouse click, are assigned to the windows responsive to whether they are spatial or non-spatial. Spatial events are assigned to the window surrounding the event and non-spatial events are assigned to the active or working window or to the root.
[0048] The above aspects can be attained by a system that places user interface widgets in positions in a 3D volumetric display where they can be used with ease and directness. The widgets are placed on the shell or outer edge of a volumetric display, in a ring around the outside bottom of the display, in a plane within the display and/or at the user's focus of attention. Virtual 2D widgets are mapped to volumetric display voxels and control actions in the 3D volume are mapped to controls of the widgets.
[0049] The above aspects can be attained by a system that provides a volumetric widget that can be viewed and interacted with from any location around a volumetric display. Such a widget can be provided by duplicating the widget for each user, by providing a widget with multiple viewing surfaces or faces, by rotating the widget and by orienting a widget toward a location of the user.
[0050] The above aspects can be attained by a system that allows a user to physically rotate a three-dimensional volumetric display enclosure with a corresponding rotation of the display contents. This allows the user to remain in one position while being able to view different parts of the displayed scene from different viewpoints. The display contents can be rotated in direct correspondence with the display enclosure or with a gain that accelerates the rotation of the contents with respect to the physical rotation of the enclosure. Any display widgets in the scene, such as a virtual keyboard, can be maintained stationary with respect to the user while scene contents rotate.
[0051] The above aspects can be attained by a system that creates a user manipulable volumetric pointer within a volumetric display. The user can point by aiming a beam, positioning an input device in three dimensions, touching a surface of the display enclosure, inputting position coordinates, manipulating keyboard direction keys, moving a mouse, etc. The cursor can take a number of different forms including a point, a graphic such as an arrow, a volume, a ray, a bead, a ring and a plane. The user designates an input position and the system maps the input position to a 3D position within the volumetric display. The system also determines whether any object has been designated by the cursor and performs any function activated in association with that designation.
[0052] These, together with other aspects and advantages which will be subsequently apparent, reside in the details of construction and operation as more fully hereinafter described and claimed, reference being had to the accompanying drawings forming a part hereof, wherein like numerals refer to like parts throughout.
BRIEF DESCRIPTION OF THE DRAWINGS
[0053] Figure 1001 shows a volumetric display.
[0054] Figures 1002, 1003 and 1004 depict 3D to 3D system configurations.
[0055] Figures 1005, 1006 and 1007 depict 2D to 3D configurations.
[0056] Figure 1008 shows a non-planar to 3D configuration.
[0057] Figures 1009, 1010, 1011, 1012, 1013 and 1014 show configurations with physical intermediaries.
[0058] Figure 1015 depicts components of the system.
[0059] Figures 1016A, 1016B, 1016C and 1016D illustrate digitizer embodiments.
[0060] Figures 1017, 1018A and 1018B show a dome shaped digitizer.
[0061] Figure 1019 depicts the operations of the system.
[0062] Figure 2001 depicts a volumetric display.
[0063] Figure 2002 shows a user managing the volume with a gesture-controlled plane.
[0064] Figures 2003a and 2003b illustrate volume management through space compression.
[0065] Figures 2004a and 2004b show object movement spatial volume management.
[0066] Figure 2005 depicts the components of the present invention.
[0067] Figure 2006 shows row volume windows.
[0068] Figure 2007 shows column volume windows.
[0069] Figure 2008 depicts pie or wedge shaped volume windows.
[0070] Figures 2009a and 2009b illustrate views of cubic volume windows.
[0071] Figures 2010a and 2010b show arbitrary shaped windows.
[0072] Figures 2011a, 2011b, 2012a and 2012b show different types of volume display strategies for volume windows.
[0073] Figure 2013 illustrates volume window controls.
[0074] Figure 2014 depicts operations of a volume manager in initiating a volume window.
[0075] Figure 2015 shows operations of an application manager.
[0076] Figure 2016 depicts a data structure used in volume management.
[0077] Figure 3001 depicts a volumetric display system.
[0078] Figure 3002 illustrates alternatives in arranging 2D widgets with a volumetric display.
[0079] Figure 3003 shows a user's position and gaze range.
[0080] Figure 3004 depicts components of the present invention.
[0081] Figure 3005 illustrates the volumetric nature of voxels.
[0082] Figures 3006A and 3006B depict configurations of voxels within a display.
[0083] Figure 3007 depicts mapping from a 2D virtual representation of a widget to a volumetric voxel version of the widget.
[0084] Figure 3008 shows the operations involved in interacting with the widget.
[0085] Figure 4001 depicts a volumetric display.
[0086] Figure 4002 shows a user viewpoint moving with respect to a planar user interface (Ul) element (top view).
[0087] Figure 4003 depicts the hardware of the present invention.
[0088] Figure 4004 shows view ranges of widget faces of a volumetric widget arranged to allow any location to view the widget.
[0089] Figures 4005A - 4005D depict omni-directional volumetric widgets.
[0090] Figure 4006 shows a volumetric display with an array of user location detectors.
[0091] Figures 4007A - 4007C show a volumetric widget with faces corresponding to and oriented toward user locations.
[0092] Figure 4008 is a flowchart of operations that prevent the faces of a volumetric widget from occluding each other.
[0093] Figures 4009A - 4009C depict a sequence of face movements to eliminate facial occlusion.
[0094] Figure 4010 shows clustering viewpoints.
[0095] Figures 4011A - 4011C show back and forth rotation of a volumetric widget.
[0096] Figure 4012 depicts selection operations for a rotating widget.
[0097] Figure 4013 shows a volumetric widget having a rotating part and a stationary control part.
[0098] Figure 5001 depicts a user rotating an enclosure and the corresponding display with one degree of freedom.
[0099] Figure 5002 shows rotations with two or three degrees of freedom.
[00100] Figures 5003 and 5004 illustrate components of a rotating enclosure.
[00101] Figure 5005 depicts the digital hardware of the present invention.
[00102] Figure 5006 shows the operations associated with rotating an enclosure and display contents.
[00103] Figures 5007A - 5007C and 5008A - 5008C depict unity and positive rotational gain, respectively.
[00104] Figure 5009 illustrates operations with respect to widget objects within a rotating display.
[00105] Figures 5010A - 5010C show maintaining widgets stationary with respect to a user while scene objects rotate.
[00106] Figure 5011 depicts time based or spatial rotation operations.
[00107] Figure 6001 depicts a volumetric display.
[00108] Figures 6002A-6002B show tablet input devices associated with the display.
[00109] Figure 6002C shows tablets with regions corresponding to the volumetric display.
[00110] Figure 6003 illustrates a surface restricted cursor.
[00111] Figures 6004A and 6004B show user interaction with the volumetric display.
[00112] Figures 6005A and 6005B show 3D interaction with the volumetric display.
[00113] Figure 6006 shows pointing with a beam.
[00114] Figures 6007A - 6007C show floating cursors.
[00115] Figure 6008 depicts hardware of the invention.
[00116] Figures 6009A - 6009D illustrate several types of digitizer displays.
[00117] Figure 6010 depicts a vector based cast ray.
[00118] Figures 6011A - 6011C show planar based cast rays.
[00119] Figure 6012 shows a surface tangent cast ray.
[00120] Figures 6013A and 6013B depict a fixed relationship between an input device and a ray based cursor.
[00121] Figure 6014 shows a cursor of intersecting rays.
[00122] Figures 6015A - 6015C show a bead cursor.
[00123] Figure 6016 depicts a ring cursor.
[00124] Figure 6017 illustrates a cone cursor.
[00125] Figure 6018 shows a cylinder cursor.
[00126] Figures 6019A and 6019B show a plane cursor.
[00127] Figures 6020A and 6020B illustrate a region of influence.
[00128] Figures 6021A and 6021B depict cursor guidelines.
[00129] Figure 6022 depicts object control with a ray.
[00130] Figure 6023 shows user-following track pads.
[00131] Figure 6024 illustrates the operations for a floating or surface cursor.
[00132] Figure 6025 illustrates operations for a ray pointer.
[00133] Figures 6026A and 6026B illustrate additional pointers.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[00134] Volumetric displays allow a user to have a true three-dimensional (3D) view of a scene 1012 and are typically provided in the form of a dome 1014, as depicted in figure 1001. The user 1016, as can be surmised from figure 1001, can move about the dome 1014 to view different parts of the scene 1012. From a particular arbitrary viewpoint or position, a user may want to interact with the scene or content within the volumetric display.
[00135] There are a number of different solutions to this problem. These solutions involve creating input/output configurations for the volumetric display that define a spatial correspondence between an input space and an output space. The configurations also define a dynamically updatable spatial correspondence of the input and output spaces with the user.
[00136] In a first solution, a 3D volumetric input space is mapped to a 3D volumetric display space. In one configuration, as depicted in figure 1002, the user's hand 1030 is tracked via a glove or a set of cameras in a volume 1032 directly below the display volume 1034. A virtual representation of the hand 1036, or some other type of position indicator, such as a cursor, is superimposed into the 3D output volumetric display 1034. In a second configuration, as depicted in figure 1003, the 3D display 1050 is surrounded by a 3D input space 1052, created by a 3D volume input system, such as the Flock of Birds system from Ascension Technology Corporation. In this configuration, the user's hand 1054, including a position indicator/sensor, is mapped to a cursor 1056 or some other position indicator representation, such as a virtual hand, within the display 1050. The position sensor also produces a vector that indicates which direction the sensor is pointing. The vector can be used to create a cursor in the enclosure at a fixed position along the vector. Rather than using the vector produced by the position sensor, the system can infer an input vector based on the position of the input device and the center of the display. This spatial relationship or correspondence between the input space, output space and user position is dynamically updated as the user moves about the display. That is, the input/output space is automatically compensated/reconfigured. Another configuration is to use half-silvered mirrors 1070 (see figure 1004) to combine the volumetric image 1072 with the user's view of their hands in a hand movement volume. This way, the user sees their hands operating within the display. Another alternative is to use a camera to capture the user's hands in the input space and superimpose them onto the volumetric display space. Another alternative is an augmented-reality system where the user has a see-through, head mounted display (2D) which is being tracked. As the user moves the position and orientation of their head, graphics are presented on the LCD display and are aligned with real-world objects.
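The inferred-vector variant mentioned above is straightforward to sketch: the pointing direction is taken from the device position toward the display center, and the cursor is created a fixed distance along it. The positions and offset below are illustrative values, not ones prescribed by the patent:

```python
import math

def inferred_input_vector(device_pos, display_center):
    """Infer a pointing vector from the input device position toward the
    display center, as an alternative to the sensor-reported vector."""
    v = tuple(c - p for c, p in zip(display_center, device_pos))
    norm = math.sqrt(sum(x * x for x in v))
    return tuple(x / norm for x in v)

def cursor_along_vector(device_pos, vector, offset):
    """Create the cursor at a fixed offset along the (inferred) vector."""
    return tuple(p + offset * x for p, x in zip(device_pos, vector))

vec = inferred_input_vector((0.0, -20.0, 0.0), (0.0, 0.0, 0.0))
print(cursor_along_vector((0.0, -20.0, 0.0), vec, offset=15.0))  # -> (0.0, -5.0, 0.0)
```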
[00137] Another solution is to map a planar 2D input space into a 3D output space. This is particularly useful in controlling some subset of the 3D volumetric display space. For example, a standard 2D digitizing tablet or digitizer 1090 (see figure 1005) or a regular mouse can be mapped to control aspects of the 3D scene, such as moving 3D objects along two dimensions.
[00138] A further solution is to map a planar 2D input space to a planar 2D space within the 3D output space of the display, as depicted in figure 1006. In this situation, the system maps the input space of a digitizing tablet 1110 and the tilt/orientation of the tablet as sensed by a tilt/orientation sensor 1112 to a corresponding planar space 1114 in the display 1116. The angle of the plane 1114 is responsive to the sensor 1112. If the display enclosure 1130 has planar surfaces (e.g., a cubic enclosure), the enclosure surface is used as the planar input device, as depicted in figure 1007. It is also possible to use a transparent digitizer superimposed over an LCD display.
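A sketch of this tablet-to-plane mapping, simplified to a single tilt axis; a full implementation would use the complete orientation reported by the tilt/orientation sensor 1112, and the plane origin here is an illustrative assumption:

```python
import math

def plane_point(u, v, tilt, origin=(0.0, 0.0, 0.0)):
    """Map a 2D tablet coordinate (u, v) to the corresponding planar space in
    the display: the plane passes through `origin` and is tilted about the
    x axis by the sensed tablet angle `tilt` (a one-axis simplification)."""
    return (origin[0] + u,
            origin[1] + v * math.cos(tilt),
            origin[2] + v * math.sin(tilt))

print(plane_point(2.0, 3.0, math.radians(30.0)))
# -> the tablet point (2, 3) lifted onto the tilted plane in the display
```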
[00139] Still another solution is to map a non-planar 2D input space to a 3D output space. In this solution, as depicted in figure 1008, the system uses the display enclosure 1140 as the input space (i.e., the enclosure is a transparent digitizing input surface). In this embodiment, the position 1142 touched by the user or indicated by a pointing device, such as a stylus or surface fitting curved mouse, is mapped to a position in the display. This is a direct and compelling way to interact with these displays.
[00140] In addition to direct manipulation using the hands, the solutions described herein also provide physical intermediaries between the hands and the input space as described below.
[00141] Another solution, when the user desires to interact directly with the enclosure surface, is to deform a conventional deformable membrane surface 1160 that detects multiple pressure points and finger pinches 1162, as depicted in figure 1009. In this situation, the surface to display mapping discussed above is performed.
[00142] Instead of using the hands directly on the enclosure surface, the system has a surface that detects and tracks a variety of input devices. A digital stylus 1180, as shown in figure 1010, where a point and an orientation can be input or a Rockin'Mouse shaped device 1190, as shown in figure 1011 (see U.S. Patent 6,115,028) also allowing a point and an orientation to be input are used. A surface fitting wireless mouse, such as a curved (concave) bottom mouse, can be used with a curved surface output configuration. This type of mouse can also be park-able using electrostatic, magnetic or some other sticky method of removably adhering the mouse to the display surface. Using a mouse has the advantage of buttons and form factors with which people are familiar. In this situation, the surface to display mapping discussed above is performed.
[00143] The physical intermediaries also do not have to be on the enclosure itself as described below.
[00144] In an embodiment, input devices 1200, such as buttons, keyboards, sliders, touch-pads, mice and space-ball type devices, etc., are mounted along the perimeter of the display (see figure 1012). In this embodiment, the input devices, such as buttons for up, down, forward, backward, left and right motions, allowing multiple degrees of freedom, are used to control the position of a cursor much as such buttons control the position of a cursor in a 2D system. The input devices 1210, 1212, 1214 may need to be "repeated" (i.e., have more than one of each along the perimeter) to allow for simultaneous use by many users, or for use from any position the user may be standing/sitting at, as shown in figure 1013. Rather than having multiple input devices positioned around the display as depicted in figure 1013, the mounting platform 1220 that houses these devices could be made moveable (rotatable) around the display, as depicted in figure 1014, so that users can easily bring the required device within reach by simply moving the platform. These devices typically communicate wirelessly by radio or infrared signals. The position of the movable device also provides information about the user's position or viewpoint.
[00145] The present invention is typically embodied in a system as depicted in figure 1015 where physical interface elements 1230, such as a rotary dome position encoder, infrared user position detectors, a keyboard, touch sensitive dome enclosure surface, mouse, beam pointer, beam pointer with thumbwheel, stylus and digitizer pad or stylus and stylus sensitive dome enclosure surface, stylus with pressure sensor, flock-of-birds, etc. are coupled to a computer 1232, such as a server class machine. The computer 1232 uses a graphical creation process, such as the animation package MAYA available from Alias|Wavefront, Inc., to create three-dimensional (3D) scene elements. This process, using position inputs from the input configurations as discussed herein, also creates the virtual interface elements, such as a virtual hand, a 3D point cursor, a 3D volume cursor, a pointing beam, a bead, etc. The display output, including the scene and interface elements, is provided to a volumetric display apparatus configuration 1234, such as one that will produce a 3D holographic display and discussed herein.
[00146] The configurations that include a transparent digitizer or touch sensitive surface have a number of different shapes as depicted in figures 1016A - 1016D. In one embodiment, a dome shaped enclosure 1250 has a dome shaped digitizing tablet, as depicted in figure 1016A. In another embodiment, the dome shaped enclosure 1256 (see figure 1016B) is used with a rectangular or cylindrical shaped digitizing tablet 1258. In a further embodiment, as shown in figure 1016C, a cylindrical or cubical enclosure 1260 is used with a cylindrical or cubical digitizer surface. In a different embodiment, the enclosure 1264 is dome shaped (or cubical or cylindrical) and the digitizing surface 1266 is planar, as depicted in figure 1016D.
[00147] A digitizer 1280 (see figure 1017), such as described in U.S. Patent 5,854,449 incorporated by reference herein, determines a position of a stylus or pointer 1282 relative to a surface 1284, such as a transparent dome surface, having a checker board type closely spaced positional grid 1286 thereon when seen from above. A processor 1288 determines the coarse position of the pointer relative to the grid by sampling the grid lines through a set of multiplexers 1290 and 1292. An error correction system 1294 generates and outputs a true position of the pointer 1282 relative to the surface 1284 to a computer system 1232 (see figure 1015). The pointer 1282 typically includes an electromagnetic transducer for inducing a signal in the positional grid 1286 and the processor 1288 is coupled to the positional grid 1286 for sensing the signal and generating the coarse position of the pointer 1282. The transducers also allow the determination of a vector from grid signals that indicates in which direction the pointer 1282 is pointing. Touch sensitive input surfaces operate in a similar fashion.
[00148] The positional grid 1286 can be applied to a surface of an enclosure, such as a dome shaped enclosure 1310, as depicted in figures 1018A and 1018B. Figures 1018A and 1018B (an exploded view) show a section 1312 of the dome surface including an inner substrate 1314 and an outer substrate 1316 between which is sandwiched the grid 1318. The substrates comprise transparent materials, such as glass or plastic.
[00149] In using these input and output configurations the computer system 1232 (see figure 1015) performs a number of operations as depicted in figure 1019. The operations include obtaining 1330 the coordinate systems of the input device and the volumetric display. The range of the coordinate systems is also obtained so that out-of-space conditions can be determined. Next, the system samples 1332 positional outputs of the input device, such as the digitizer, mouse, flock-of-birds, etc., to obtain the location of the user's input. This information can also include information about where the user is pointing. This position (and orientation, if desired) is mapped 1334 into a 3D position within the volumetric display using the coordinate system (and the orientation vector, if needed). The cursor or other position indicating representation, such as a virtual hand, is drawn 1336 at the mapped position within the volumetric display. The mapping may involve determining a position on the surface that is being touched by a digitizing stylus, projecting a ray into the enclosure from the touch position, where the ray is oriented by the pointing vector of the input stylus, and positioning the cursor at a variable or fixed position along the ray. Another mapping causes relative motion of a 3D input device, such as a glove, to be imparted to a cursor when a motion function is activated. Other mappings, as discussed in the related applications, are possible.
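By way of illustration, the ray-based mapping just described can be sketched as follows. This is a minimal sketch in C++, assuming a hypothetical Vec3 type and a fixed cursor offset along the stylus pointing vector; the names are illustrative and not part of the specification.

#include <cmath>

struct Vec3 {
    double x, y, z;
};

Vec3 normalize(Vec3 v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Project a ray into the enclosure from the touch position along the
// stylus pointing vector and place the cursor a fixed distance along it.
Vec3 mapTouchToCursor(Vec3 touchPos, Vec3 pointingVec, double offset) {
    Vec3 dir = normalize(pointingVec);
    return {touchPos.x + offset * dir.x,
            touchPos.y + offset * dir.y,
            touchPos.z + offset * dir.z};
}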
[00150] The operations described with respect to figure 1019, when a digitizing enclosure surface is the input configuration, allow the user to interact with a surface of a three-dimensional (3D) volumetric display and affect the 3D content of the display responsive to the interaction. The interaction involves the user manipulating the stylus in a sensing region of the digitizing grid, the mapping of the stylus position to a 3D display position, and the creation of a cursor at that 3D display position. The cursor, in one of a number of different possibilities, is created at a distance offset from a tip of the stylus along a pointing vector of the stylus. The cursor can be used to perform typical functions such as selecting, painting, dragging/dropping, etc.
[00151] The present invention has been described with respect to input configurations where commands are input through position sensing type devices, such as a mouse, a pointer, a touch sensitive surface, etc. It is also possible to use other types of input configurations, such as non-spatial configurations. One non-spatial input space or configuration is a conventional voice or speech recognition system. In this configuration a voice command, such as "down", is recognized and the selected object or volume is moved accordingly; in this case, the object is moved down in the display space at a constant slow rate until it reaches the bottom or until another command, such as "stop", is input and recognized. For user centric commands, such as "move closer", a user position sensing system inputs the user position, and the position is used to determine the relative position of the active object with respect to the user, or the vector pointing from the user to the object. This vector is used to determine a direction for object movement. To move closer, the object is moved along the vector toward the user, that is, in the negative direction along the vector. Again, the motion would continue until a blocking object is encountered or another command is recognized.
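The "move closer" behavior can be sketched as a single step function, reusing the Vec3 and normalize helpers from the sketch above; the step size and names are assumptions for illustration only.

// Step the object along the user-to-object vector in the negative
// direction, i.e., toward the user. Repeated until a blocking object
// is encountered or another command is recognized.
Vec3 stepCloser(Vec3 objectPos, Vec3 userPos, double stepSize) {
    Vec3 toObject = {objectPos.x - userPos.x,
                     objectPos.y - userPos.y,
                     objectPos.z - userPos.z};
    Vec3 dir = normalize(toObject);
    return {objectPos.x - stepSize * dir.x,
            objectPos.y - stepSize * dir.y,
            objectPos.z - stepSize * dir.z};
}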
[00152] Another non-spatial input configuration uses non-speech sounds, such as tones from a conventional multifrequency tone generator. Each multifrequency combination corresponds to a command and a conventional tone recognition system is used to convert the sounds to commands.
[00153] The input space or configuration could also use conventional eye-tracking or head-tracking technologies, alone or in combination with other input configurations.
[00154] Volumetric displays allow a user to have a true three-dimensional (3D) view of a 3D scene 2012 and are typically provided in the form of a dome 2014, as depicted in figure 2001. The user 2016, as can be surmised from figure 2001, can move about the dome 2014 to view different parts of the scene 2012. From a particular arbitrary viewpoint or position, a user may want to interact with one or more regions or scenes/content within the volumetric display.
[00155] There are a number of different solutions to this problem. These solutions involve creating volume windows (VW) within the volumetric display or display space and allowing a user to manipulate these volume windows. A volume window is a volume region within the volumetric display delimited from other parts/regions (or volumes) of the volumetric display by 3D bounding boxes or 3D boundaries to allow users and the system to distinguish between the windows. The volume window boundaries can be delineated in a number of ways. For example, each VW can (1) have a wireframe border along the edges of the volume, (2) have a wireframe that itself has a thickness, as in a "bezeled" edge, (3) have a background color within the volume that differs from the color of the empty space within the 3D display, or (4) have a "floor" that is a solid color or pattern, with the "roof" of the VW outlined in a wireframe or a solid pattern. The display and management system preferably will allow visibility of a volume from any viewpoint.
[00156] However, some cases may necessitate that only part of a volumetric window is viewable, based on where the user is positioned around the volumetric display. For example, suppose a volumetric display has a VW in the shape of a cube. One face of the VW could be "open" while the remaining faces of the cube appear opaque. Thus, the contents of the VW are only visible when the user can see the "open" side of the VW.
[00157] Volume windows can be active or inactive. This allows the system, for example, to direct input events to a target (active) volume window when multiple VWs are available. A portion of the active volume window is preferably "highlighted" to differentiate it from the other VWs. For example, the border bezel or titlebar of an active volume window may turn a brighter/darker color compared to the other VWs.
[00158] The solutions also provide mechanisms for general management of the placement of the volumes within the display and the data within the volumes. The user is allowed to define volumes, delineate sub-portions of a working volume, divide the working volume into sub-volumes, move volumes within the display, compact or compress volumes, and establish relationships between volumes. For example, parent and sibling volumes can be defined such that when an act is performed on the parent, the siblings react. As another example, if a parent VW is closed, all of the sibling VWs are also closed. Another example has a sibling VW attached to the border of a parent VW; when the parent VW moves, so does the sibling VW, etc. The solutions include extending basic operations of 2D window managers, such as drag/drop, to operate with volumetric user interfaces.
[00159] The solutions also allow users to interact with the volumes (or display space). The users can use gestures to delineate sub-portions of the working volume. Figure 2002 shows a user 2030 using gestures to specify an operational plane 2032 within the volume 2034 and, if the command associated with the plane is "divide volume", the effect is to divide the volume into two sub-volumes or volume windows. In this gesture operation the positions of the hands are sensed using a touch sensitive surface of the display 2034 or a 3D position sensing system, and a virtual plane is created between the contact points (see the sketch below). A more specialized use is to create more space within the working volume, either by compacting objects within the space or by moving objects out of the way. For example, a gesture can be given to "crush" the 3D scene 2040 along a specific horizontal plane 2042 (see figures 2003a and 2003b). Here, the main elements of the scene 2040 would still be visible along the floor of the dome display 2044 to provide context while allowing manipulation access to objects 2046 along the "back" of the scene. When a 2D window is iconified and put on the "task bar", this could be considered an extreme "crush" action. A volume window can be iconified in the same way. Alternatively, another style of 2D window management can be employed where the volumes are tiled instead of overlapping or cascaded. Here, the full screen is used whenever possible and growing one edge of a volume window shrinks the adjoining window by the same amount; windows never overlap in this tiling situation. Another gesture command would cause objects being pushed to shrink in size to create more space. Another example, as shown in figures 2004a and 2004b, is a partition or scale of the 3D scene using a "separation gesture", where the user specifies a start position 2060 (figure 2004a) with their hands together and then separates the hands to make space 2062 (figure 2004b). This has the effect of using virtual planes to part the 3D space, either translating the two halves or scaling the two halves (essentially scaling the scene to fit in the existing space). Other operations, such as push out, delete, merge volumes, scale while preserving volume proportions, scale while not preserving volume proportions, "select" the volumes between the two planes defined by the two hands, or define a temporary volume by two planes positioned by the two hands (as described before), in which commands/operations are applied within or outside the temporary volume, can be carried out with a plane tool used in volume management. Other gesture based actions are possible, such as a "lasso" selection tool, where a user gestures a shape on the display enclosure (e.g., an irregularly shaped oval outlined by contacting the display surface) and this shape is projected into the volume display as a "selection region" using a projection convention, such as orthogonal to a cube volumetric display surface.
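The plane-creation step of the divide gesture might be sketched as follows, reusing the Vec3 and normalize helpers from above. The convention used here, a plane through the midpoint of the two contact points with its normal along the hand-to-hand direction, is an assumption chosen for illustration; the specification does not fix a particular convention.

// An operational plane specified by two sensed hand contact points.
struct Plane {
    Vec3 point;   // a point on the plane (midpoint of the contacts)
    Vec3 normal;  // unit normal (along the vector between the hands)
};

Plane planeBetweenContacts(Vec3 a, Vec3 b) {
    Vec3 mid = {(a.x + b.x) / 2.0, (a.y + b.y) / 2.0, (a.z + b.z) / 2.0};
    Vec3 n = {b.x - a.x, b.y - a.y, b.z - a.z};
    return {mid, normalize(n)};
}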
[00160] These techniques use the 3D display map of the volumetric display to determine when one volume or tool encounters an object or another volume by comparing the assignment of display voxels to objects, volumes and tools. When a gesture causes the same voxels to be provisionally assigned to different objects, volumes or tools, the system resolves the conflict by performing the appropriate function, such as moving a volume away from a plane being used to push aside objects and volumes.
[00161] Note that, in these operations, the system is performing a "pick detection". Depending on the type of input event (say, mouse down), the window manager cycles through its parent windows, passing along the input event and essentially asking if any window is interested. Since each window knows its bounding box, it can determine if the event occurred in its 3D spatial volume. Ultimately, the system can determine if an event happened outside any volume window (e.g., it started on the "desktop"). The system can behave differently for events (e.g., performing some window management functions) that fall outside of VWs.
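A minimal sketch of this pick-detection pass is given below, reusing the Vec3 type from above; the containment test and names are illustrative only, since an actual VW may have an arbitrary shape inside its bounding box.

#include <vector>

struct BoundingBox {
    Vec3 min, max;
    bool contains(Vec3 p) const {
        return p.x >= min.x && p.x <= max.x &&
               p.y >= min.y && p.y <= max.y &&
               p.z >= min.z && p.z <= max.z;
    }
};

struct VolumeWindow {
    BoundingBox box;
};

// Walk the window list and return the first window whose bounding box
// encloses the event position; a null result means the event happened
// on the "desktop" (root volume) and may be handled differently.
VolumeWindow* pickWindow(std::vector<VolumeWindow>& windows, Vec3 eventPos) {
    for (VolumeWindow& w : windows)
        if (w.box.contains(eventPos))
            return &w;
    return nullptr;
}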
[00162] The present invention is typically embodied in a system, as depicted in figure 2005, where physical interface elements 2080, such as a rotary dome position encoder, infrared user position detectors, a keyboard, touch sensitive dome enclosure surface, mouse, beam pointer, beam pointer with thumbwheel, stylus and digitizer pad or stylus and stylus sensitive dome enclosure surface, stylus with pressure sensor, flock-of-birds, etc., are coupled to a computer 2082, such as a server class machine. The computer 2082 uses a graphical creation process, such as the animation package MAYA available from Alias|Wavefront, Inc., to create volume windows and three-dimensional (3D) scene elements within the windows. This process, using position inputs from the input configurations, also creates virtual interface elements, such as a virtual hand, a 3D point cursor, a 3D volume cursor, a pointing beam, a bead, etc., suitable for manipulating the volume windows. The display output, including the volume windows, scenes and interface elements, etc., is provided to a volumetric display apparatus configuration 2084, such as one that will produce a 3D holographic display, as discussed herein.
[00163] A volumetric display space can be divided into a number of sub-volumes or volume windows in a number of different ways or with different organizations. Figure 2006 depicts a cubic volumetric display space 2100 having a basic division into planar cubic type "row" windows 2102, while figure 2007 depicts division of a cubic display space 2120 into planar "column" windows 2122. Figure 2008 shows a dome or cylinder display space 2140 divided into pie wedge volume windows 2142. These sub-volumes can be created using the planar definition approach mentioned previously with respect to figure 2002. A user could manually define the volumetric space and create all of these sub-volumes. In this approach, the system senses the position and orientation of one or a pair of pointing devices which are used to define a plane within the volume, creating two windows. The plane is then designated as a bounding plane between two volumes. However, in practice, this is preferably done by the system adhering to a particular space management policy selected by the user. The user selects an icon representing the type of volume and designates a 3D origin (or a default origin), and the system draws the predefined volume at the origin. For a "new slice" pie volume policy for a pie as in figure 2008, the system would have a new slice predefined and would shrink the pre-existing slices by some percentage to make room, such that the resizing of the slices is done by the system (see the sketch below). This shrinking would happen more in a "fully tiled" policy; otherwise, when someone requests a new VW, it would overlap with existing VWs. In a free-for-all overlapping policy, the system would "crush" or deflate the volumes into a 2D representation or smaller 3D representation and push them to the edge of the volumetric display (or to some designated taskbar).
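The "new slice" policy can be illustrated with the following sketch, in which existing pie wedges are uniformly shrunk so that a new wedge of the requested angular extent fits; the struct is a simplification of the volume window data structure described later, and all names are illustrative.

#include <vector>

struct PieSlice {
    double startAngle;  // degrees, measured around the display axis
    double sweep;       // angular extent of the wedge in degrees
};

// Shrink the pre-existing slices by a uniform factor to free exactly
// newSweep degrees, then append the new slice in the freed space.
void addSlice(std::vector<PieSlice>& slices, double newSweep) {
    double scale = (360.0 - newSweep) / 360.0;
    double angle = 0.0;
    for (PieSlice& s : slices) {
        s.sweep *= scale;
        s.startAngle = angle;
        angle += s.sweep;
    }
    slices.push_back({angle, newSweep});
}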
[00164] Figures 2009a and 2009b show a dome shaped volumetric display 2160 including a volumetric display space 2162 having three cubic shaped volume windows 2164, 2166 and 2168. If the space 2162 is considered to be a root volume window, the spaces 2164, 2166 and 2168 are volume windows. These volume windows are created by the application used to view the data being selected (e.g., selecting a file and issuing the "open" command, causes the system to determine what application to use to display the data). A data file of the application has information such as preferred shape, size and position that is set as a default or that the application retained from the last time this VW was opened. Creating new volumes typically occurs from within the application. For the default shapes, the user also can specify the scale as well as the position of the new volume. By default, it is preferred that the new volumes be positioned in the center of the volume. The (0,0,0) coordinate origin will preferably be in the center of the base of the display. The particular placement of the origin is not as important as is the establishment of a standard.
[00165] Figures 2010a and 2010b illustrate a dome shaped volumetric display 2180 including a volumetric display space 2182 including two volume windows, a curved-box 2184 and an oval-tube 2186. These shapes can be chosen by a user picking from a list of pre-defined volumes from within an application. Thus, the user just needs to specify the position and scale of the volumes. Simple volumes can be constructed using standard practices found in 3D graphics programs, such as Alias|Wavefront's Maya program. Creating an arbitrary shape can be performed by using an extrude command in this type of tool, where the user draws a 2D cross-section shape and then defines a second profile curve to "extrude" along. Also, an existing 3D object can be selected and a new volume space can be defined based on the shape of the selected object.
[00166] Figures 2011a, 2011b, 2012a and 2012b depict additional volumes managed by the present invention. In figures 2011a and 2011b the volumes do not intersect and are in a "tiled" configuration, and the management system preferably allows the volume 2192 to be seen through the volume 2190 when the user is viewing the display from above (figure 2011a). With a tiled strategy a working window, if enlarged, would result in the shrinking of the displayed portion of an abutting volume. This is in contrast to the situation shown in figures 2012a and 2012b, where the volume windows overlap and the (active/working) window 2194 takes display precedence over volume window 2196 and is the input focus.
[00167] The volumes discussed above all have height, width and depth. It is possible for a volume window to essentially have a minimal depth, such that it is one voxel deep and is a 2D window with 3D characteristics.
[00168] As in the 2D windows world, icons/controls can be designated for selection by the user for common operations. Figure 2013 illustrates a volume window 2210 (actually a visible bounding box of a volume window) having an attached volume activation region 2212 that acts like the title bar at the top of a typical 2D window. The title bar will also have an optional text label (e.g., the name of the data file and/or application) and other graphic marks signifying status information and/or identifying the application running within the volume. The title or activation bar 2212 is typically attached to the volume to which it is assigned and conforms to the shape of the volume. The title bar 2212 signals the orientation of the volume and which side of the volume is the front. The title bar can be inside or outside the volume to which it is assigned. If inside, it can have the same appearing/disappearing operation as the "start" bar in a typical 2D windows system, where the location of the cursor can cause the title bar to appear or disappear. The title bar is a volume and preferably has a high priority for display, such that it may only be clipped in limited circumstances. The title bar also preferably has a preferred front "face" of the data volume where it appears, and that volume is assigned the highest precedence or priority in the display memory/data structure so that it is completely displayed. When a pointer selects the title bar 2212, the volume 2210 becomes the active working volume. Dragging the title bar will also perform a move volume window operation. Within the activation region 2212 are four controls that could be considered among the typical controls for a volume window. These controls include a move volume window control 2214, a maximize volume window control 2216, a minimize volume window control 2218 and a resize volume window control 2220. These controls function in a manner similar to the controls in a 2D window display system.
[00169] The move volume control, when activated, allows the user to select the volume and move it to another location in a drag and drop type operation. In this operation, typically, a pointer device is used to select and activate the control 2214 by, for example, having a beam, created responsive to the pointer device, intersect the control. A button on the pointer, when activated, causes the selection of the control intersected by the beam. Similar to the 2D operation, when the move control is activated, the volume activation region of a window intersected by the beam becomes the volume selected for moving when a pointer device button is depressed. Once the volume window is selected, it moves with the movement of the beam until the button is released, similar to the drag and drop of a 2D window. The depth of the moving volume window along the beam is typically controlled by another control device, such as a thumb wheel on the pointer device. In performing the move, the position of the bounding box for the window is updated in the volume window data structure. Thus, the user can swing the beam to move the volume transversely and use the thumb wheel to move the window closer to or further away from the user. In this move operation a 3D volume is moved in three dimensions in accordance with a 3D input vector or two separate 2D input vectors.
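A sketch of the beam-based placement follows, reusing the Vec3 and normalize helpers from above; the device inputs (beam origin, beam direction, thumbwheel depth) are named here for illustration only.

// Place the dragged volume window along the pointer beam. Swinging
// the beam moves the window transversely; the thumb wheel changes the
// depth along the beam, moving the window closer to or further from
// the user.
Vec3 dragAlongBeam(Vec3 beamOrigin, Vec3 beamDir, double thumbwheelDepth) {
    Vec3 dir = normalize(beamDir);
    return {beamOrigin.x + thumbwheelDepth * dir.x,
            beamOrigin.y + thumbwheelDepth * dir.y,
            beamOrigin.z + thumbwheelDepth * dir.z};
}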
[00170] A resize control allows the volume window to be resized. The size can be changed automatically through functions similar to increments in a zoom operation, or the user can use an input device to resize the window by dragging the sides of the volume window. A drag of a corner of a volume window causes the volume to expand in 3D. When a window is being resized and a side encounters another window, the resizing can be stopped with a warning being displayed to the user, or the display space allocated to the abutting window can be clipped. The portions of the volume window data structure defining the size and position of the bounding box are updated during resizing.
[00171] The maximize operation 2216 expands the volume in three dimensions until it "fills" the volumetric display. In cases where the display shape is different from the volume window shape, the expansion is according to a policy, such as centering the VW in the display space and expanding until the VW contacts the outside edge of the display space. Of course, the policy could expand only two or even one of the dimensions, or the user could be allowed to designate the dimensions to expand. During the maximize operation the position and boundaries of the volume window bounding box are updated. The contents of the volume window are scaled in proportion to the change in volume of the volume window which occurs during the maximize operation.
[00172] In the minimize operation the system substitutes a mini-3D icon for the VW and preferably places the icon at a designated position, such as the origin of the VW or on the bottom of the display space. An alternative is to display only the task bar at a preferred position. In the minimize operation the bitmap for the icon in the window data structure is obtained and placed in the display space as noted above.
[00173] The management of volume windows can be divided into a number of different tasks, such as initialization of a window, performing tasks within the window, etc.
[00174] The initialization or open operations, as depicted in figure 2014, include an application, such as the manager of a 3D drawing program, requesting the allocation of one or more volumetric drawing windows. This is an application started within the volumetric display, and it asks for display space. The request can include a preferred position or placement for the window within the volumetric display, which can be associated with the current position of a cursor, a preferred size, etc. The volume manager allocates 2240 a VW data structure (see figure 2016) for each of the volumetric windows requested by the application. The volume manager then places 2242 the volume window in the volume window data structure, links it to the root volume, sets a permission in the data structure indicating that the application can perform functions in the allocated window and informs the application of the location of the data structure. The manager then determines 2244 the placement and size of the window responsive to default parameters and any currently active volumetric windows. For example, if the default or requested size would overwrite an existing volume, the volume being initiated can be scaled down or moved by the manager so that there is no overlap with an existing volume window. As in common overlapping 2D windowing systems, a new window request is always granted and placed on top of existing windows. The scaling of nearby windows occurs if the system is employing a tiling policy for VWs and follows conventions as in 2D operations, but in three dimensions. One approach is to push existing VWs to the outer perimeter of the volumetric display, reducing any empty space between VWs. This would grab as much free space as possible before having to scale existing VWs. The system then places 2246 a visible boundary for the bounding box in the display around the volume determined by the manager to be allocated to the initiated volume window. When the bounding box has a predefined shape, the bounding box can be drawn and highlighted by the manager, or the request can be passed on to the application, which can perform the operation if a VW has an arbitrary shape. Once the boundary is created, the application directs all application events, in this case drawing events, such as paint brush movement, to the volume window(s) allocated.
[00175] In order to perform application functions, such as drawing in a volume window, the application, such as a drawing application, sends a request, in this case a draw request, to a drawing manager. The request includes a volume window identifier (ID), a command or function to be performed, such as DrawLine, and a location where the command is to be performed. The drawing manager, as depicted in figure 2015, checks 2260 to see if the unique volume window ID of the request is valid and to see if the application is allowed to draw in the identified volume window by comparing the identification in the graphics port of the volume window data structure. The manager then checks 2262 to see if any of the window is visible (that is, it has not been minimized) again by accessing the data structure. If no part of the window is visible, no action occurs. However, the volumetric window will have a flag (updateVRgn) set so when the VW becomes visible or is no longer being clipped, the region will be redrawn. Using the data structure of the window, the manager maps 2264 the location associated with the request from the application coordinate space to device space taking into account the current position or location of the specified volume window (boundingBoxPositionVW and orientationVW). That is, the application location is mapped to a corresponding location in the window of the display. From the list of active windows, the manager then determines or computes 2266 which regions of the specified window are visible. The draw request is then executed 2268 only for valid visible regions of the specified volumetric window.
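The request-handling path can be summarized with the following sketch. The types echo the volume window data structure given below, but the function, field and container choices here are assumptions made for illustration, and the clipping to visible regions is left abstract.

#include <string>
#include <unordered_map>

struct DrawRequest {
    int windowId;        // unique volume window ID
    int applicationId;   // requesting application
    std::string command; // e.g., "DrawLine"
    Vec3 location;       // in application coordinates
};

struct VWState {
    int ownerApp;        // from the graphics port of the VW structure
    bool visible;
    bool updateDirty;    // corresponds to the updateVRgn flag
    Vec3 origin;         // boundingBoxPositionVW
};

bool handleDrawRequest(std::unordered_map<int, VWState>& windows,
                       const DrawRequest& req) {
    auto it = windows.find(req.windowId);
    if (it == windows.end() || it->second.ownerApp != req.applicationId)
        return false;                 // invalid ID or no draw permission
    VWState& vw = it->second;
    if (!vw.visible) {
        vw.updateDirty = true;        // redraw when it becomes visible
        return false;
    }
    // Map the application location into device space using the current
    // window position (orientation handling omitted for brevity).
    Vec3 devicePos = {req.location.x + vw.origin.x,
                      req.location.y + vw.origin.y,
                      req.location.z + vw.origin.z};
    // ... compute visible regions and execute req.command at devicePos
    (void)devicePos;
    return true;
}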
[00176] The volume manager uses a data structure such as depicted in figure 2016. This data structure is a list data structure having a root node 2280 and can take the shape of a linear list or a tree of VW nodes 2282, as shown in the example. The root node 2280 includes the following fields.
Volume Manager Data Structure
Struct VM {
List VW;
ShapeType shape;
Int hardwareWidth;
Int hardwareHeight;
Int hardwareDepth;
}
Operations: Initialize()
Create VW()
Name VW()
ShowVW()
HideVW()
HiliteVW()
BringToFront()
SendBehind()
MoveVW()
ResizeVW()
The List is a list of pointers to the volume windows (nodes) of the display. The shape type defines the particular type of display, such as dome, cube, cylinder, etc. The shape has associated width, height and depth dimensions. The operations are pointers to operations that can be performed or are valid in the volumetric display and include any parameters that may be set by default for the operation. The operations for move, maximize and minimize discussed previously would typically be included but are not shown.
[00177] Each of the volume windows includes the following fields.
Volume Window.
Struct VW {
VgraphicPort vport;
Boolean visibility;
Boolean hilited;
TitleBarStructure titlebar;
VolRgnHandle contentVRgn;
VolRgnHandle updateVRgn;
VolRgnHandle boundingBoxPositionVW;
OrientationHandle orientationVW;
ShapeType shape;
IconImage 2Dicon;
Handle nextVW;
Handle parentVW;
}
The VgraphicPort defines the application graphics port in which the volume window is drawn. This structure defines the volume in which the drawing can occur, the volume window's visibility region, clipping region, etc. Fields for making the volume visible (hidden or visible) and highlighting the volume are included. The TitleBarStructure contains information to position, display and handle the move, maximize, minimize and resize volume window functionality. The "front" of the VW is determined, in part, by the orientationVW information. The VolRgnHandle structures define a pointer to a volumetric region. Note that this region can be defined as an arbitrary shape. By default, the VolRgnHandle consists of a CUBIC shape with six values: bottomFrontX, bottomFrontY, bottomFrontZ and topBackX, topBackY, topBackZ. The contentVRgn defines the space the volume window owns, relative to the application. All of the region may or may not be visible within the volumetric display (depending on the position of the VW and other VWs). The updateVRgn specifies which portion of the entire contentVRgn the application must refresh and redraw. While the VW can be any shape, a bounding box will be defined that minimally surrounds the shape. Thus, boundingBoxPositionVW specifies the absolute position of the VW relative to the (0, 0, 0) origin of the volumetric display. The orientation of the volume window is defined by the OrientationHandle, which specifies the central axis or spine of the volume window as well as the "front" region of the volume window. The central axis, by default, is a vertical vector which matches the (0, 0, 0) coordinate axis of the volumetric display. ShapeType is a set of known volume window shapes (e.g., CUBIC, PIE, CYLINDER, EXTRUDED_SURFACE, ARBITRARY). 2Dicon is a 2D or 3D bitmap image used to represent the VW when it is minimized. nextVW points to the next VW in the window manager's VW list. ParentVW, by default, is the RootVW; however, if sub-VWs are defined, then the parentVW of a sub-VW will not be the RootVW but instead the true owner of the sub-VW.
[00178] When the computer operating system receives an input event, the volume manager uses the input event and an assignment policy to determine which volume window receives the event. For example, one policy is to send all events to the application corresponding to the window that encloses the spatial location of the event or cursor. If more than one window encloses the event, a priority policy is used, such as favoring the visible volume window. For input events that do not have an inherent spatial position, for example keyboard events, the events are sent to the window that currently has the designated input focus, such as the working or active window. When the cursor or input focus is not in a VW, the event is sent to the root.
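This assignment policy can be sketched as a small routing function, reusing the VolumeWindow and Vec3 sketches from above; the priority handling on overlap is simplified here to list order, which is an assumption.

// Route an input event: spatial events go to the first enclosing
// window (the list is assumed ordered by display precedence),
// non-spatial events go to the current input focus, and unclaimed
// events fall through to the root (signalled here by nullptr).
VolumeWindow* routeEvent(std::vector<VolumeWindow>& windows,
                         VolumeWindow* inputFocus,
                         bool eventHasPosition, Vec3 eventPos) {
    if (!eventHasPosition)
        return inputFocus;            // e.g., keyboard events
    for (VolumeWindow& w : windows)
        if (w.box.contains(eventPos))
            return &w;
    return nullptr;                   // send to the root volume
}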
[00179] Volume windows can be related hierarchically such that a window can have volume sub-windows. It is preferred that all sub-windows obey the operations of the parent in the hierarchy. For example, if a parent window is deleted, all children of the parent are also deleted. If a parent gets moved, all of the children are moved by the same amount and in the same direction. Alternatively, a subVW need not move with the parentVW; however, if a parentVW is minimized or closed, the subVW does comply. A parent may or may not "clip" the display of its children against its own bounding box. That is, children may exist outside of the volume of the parent. A child preferably inherits properties or attributes of the parent volumetric window.
[00180] Volumetric displays allow a user to have a true three-dimensional view of a scene 3012 and are typically provided in the form of a dome 3014, as depicted in figure 3001. The user 3016, as can be surmised from figure 3001, can move about the dome 3014 to view different parts of the scene 3012. From a particular viewpoint, a planar 2D widget 3018 within the volumetric display, which may have icons, controls, etc. within it, can be in a position such that it is difficult for the user to access.
[00181] There are a number of different solutions to this problem. One solution is to place the 2D widgets 3030 and 3032 on the inside surface of the volumetric display enclosure 3034, as depicted in figure 3002; that is, to reserve portions of the shell of the display for graphical user interfaces. Conventional pointing and manipulation techniques, such as touching a touch sensitive surface of the enclosure 3034, can be used to interact with the widgets. The widgets also do not get mixed up with data or other data based graphics within the volumetric display. This type of widget positioning may require that the widgets be semitransparent so that the user can see the graphics within the display underneath or behind the widgets. Another alternative, which prevents the user's view of the display contents from being obscured by the widgets, is to place the widgets 3036 and 3038 in a ring 3040 at the bottom of the display. A further alternative is to house the 2D widgets or GUI elements in a plane positioned in the display. Figure 3002 depicts a widget 3042 housed in a horizontal plane positioned on the bottom of the display enclosure, or on the volumetric display system "desktop." The plane could also be positioned vertically or at an arbitrary angle depending on the needs of the user. Another alternative is to conventionally determine the user's position and/or eye gaze, as depicted in figure 3003, and position or arrange the 2D widgets within or outside the focus of attention as needed. For example, widgets that require the user's attention (i.e., Alert widgets) would appear at the center of the user's eye gaze and at the front of the volume display. Status information that is needed but not critical can appear on the periphery of the user's eye gaze, perhaps surrounding the object that is the user's current focus of attention. Widgets can also be placed in depth to assign priorities to them. For example, an Alert dialog box may be of a higher priority than another dialog box, causing the Alert dialog box to be placed in front of the first dialog box while the first dialog box is "pushed back" in depth (stacked).
[00182] The present invention is typically embodied in a system as depicted in figure 3004 where physical interface elements 3050, such as a rotary dome position encoder, infrared user position detectors, a keyboard, touch sensitive dome enclosure surface, mouse, pointer, etc. are coupled to a computer 3052, such as a server class machine. The computer 3052 uses a graphical creation process, such as the animation package MAYA available from Alias|Wavefront, Inc., to create a three-dimensional (3D) scene including virtual interface elements, such as the two dimensional graphical user interface elements or widgets discussed herein. The display output, including the scene and widgets, is provided to a conventional volumetric display apparatus 3054, such as one that will produce a 3D holographic display.
[00183] 2D widgets can be represented within a computer system in a number of different ways. A typical way is to represent the widget as a two-dimensional display map of pixels that have a color value, and possibly a control value, associated with each of the two-dimensional positions within the virtual image area the widget typically occupies. In a 2D display system the widget is mapped from the virtual positions to actual display positions responsive to the position of the widget specified by the system. The system position is often controllable by the user, such as by allowing a user to move a GUI to different places on a display with a point and drag type command or action. A volumetric display is comprised of voxels or volume pixels, where each voxel has a 3D position as well as a voxel height, width and depth. Figures 3005A and 3005B depict a portion of a plane 3070 of voxels from the front (3005A) and side (3005B) in a volumetric display 3072. The positions of voxels within the display are typically determined with reference to a center of the display having the coordinates (0,0,0). The voxels within the display can be arranged in a number of different ways, as depicted in figures 3006A and 3006B, where figure 3006A shows concentric layers 3090 and 3092 of voxels and figure 3006B shows rectilinearly stacked layers 3094, 3096, 3098 and 3100 of voxels. In these examples voxels 3102, 3104, 3106, 3108 and 3110 and voxels 3112, 3114, 3116, 3118 and 3120 are surface voxels that might be used for part of a 2D widget displayed on the outside surface of the display inside the enclosure. Note that the programming interface to a volumetric display may have abstractions in which the 3D display space is defined as a collection of voxels that are discrete, cubically shaped, and individually addressable sub-portions of the display space. However, the display software may translate these discrete voxels into a continuous representation that is more compatible with the display rendering hardware.

[00184] In displaying a 2D widget within a volumetric display the pixels of the virtual image must be mapped to corresponding voxels. This can be accomplished by a mapping between the 2D virtual representation and a "layer" of voxels in an appropriate location in the display, such as on the "surface" of the display. For example, a control portion of a 2D widget, such as part of a trashcan icon, might be mapped to the voxels 3112-3120 in figure 3006B. In the best scenario the mapping of the 2D widget to the voxels is performed continuously, or is updated at the refresh rate of the volumetric display. These mapping operations are shown in figure 3007.
[00185] The voxels used for display need not be limited to displaying a widget. One or more widgets can be displayed in a plane. In fact, the entire 2D desktop work space typically presented to a user on a display, such as a CRT or LCD, can be converted into a three-dimensional plane. The plane can be at the bottom of the volumetric display or at any desired angle or position within the volumetric display. The workspace can also be divided among several planes with different windows/icons/controls tiled or cascaded.
[00186] The mapping of the virtual representation of the widget, as depicted in figure 3007, starts with obtaining 3132 the pixel based image of the 2D widget, which is essentially a 2D window pixel map of a portion of a 2D desktop. For the workspace, the 2D representation of the entire workspace is obtained. The pixels of the shape of the widget are then mapped 3134 to the voxels of the display, where the voxels are typically offset from the center of the display such that the x coordinate of a 2D pixel maps to a 3D voxel at x + (x offset), the y coordinate of the 2D pixel maps to the 3D voxel at y + (y offset), and the z coordinate of the voxel is 0 + (z offset). This can create a widget that has a 3D surface or a volume. Note that scaling may occur in this mapping, such that the widget is made either "larger" or "smaller" as compared to the virtual map. Because the mapping can be from a linear "plane" in which the 2D widget is represented to voxels that may form a curved surface, the mapping uses conventional coordinate translation techniques to determine the effects for each voxel, to allow the 2D widget to be curved in the volumetric display space. This mapping is particularly appropriate for displays with voxels arranged as depicted in figure 3006B. Next, the texture of the 2D interface is mapped 3136 to the 3D surface of the user interface. In performing this mapping, the interface typically takes precedence over other display values of the voxels that may have been set by the scene of the display. That is, if the user activates a control that pulls down a menu, and the position of the menu coincides with a scene element, such as a 3D graphic of a house, the pull down menu overwrites the scene values. It is also possible to combine the values of the scene and the user interface in some way, such as by averaging the scene and interface values, so that both are visible, though this is not preferred.
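The offset mapping for the rectilinear arrangement of figure 3006B can be sketched as follows; the array-of-arrays voxel store and 32-bit color values are assumptions made only to keep the example self-contained.

#include <cstdint>
#include <vector>

// Copy a 2D widget pixel map into one voxel layer at (x offset,
// y offset, z offset); widget values take precedence over any scene
// values already present in those voxels.
void mapWidgetToVoxels(
        const std::vector<std::vector<uint32_t>>& pixels,
        std::vector<std::vector<std::vector<uint32_t>>>& voxels,
        std::size_t xOff, std::size_t yOff, std::size_t zOff) {
    for (std::size_t y = 0; y < pixels.size(); ++y)
        for (std::size_t x = 0; x < pixels[y].size(); ++x)
            voxels[zOff][y + yOff][x + xOff] = pixels[y][x];
}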
[00187] The widgets can also be texture mapped. In general, the texture mapping procedure includes first having the system determine whether each voxel in the display intersects a surface of the 3D widget. If it does, the system maps the voxel position into a (u,v) local surface position of a texture map for the widget. Using the local surface position, the system samples the texture map for the widget surface. The value of the sample is then assigned to the voxel. When the 3D widget is more than one voxel deep, and depending on the surface intersected, the mapping may sample a front, back or side texture for the widget. The present invention obtains the texture information from a single 2D texture map of the original 2D widget. That is, only one texture map of the 2D widget is needed to translate it into voxel space.
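The per-voxel sampling step might look like the sketch below; the surface-intersection test and the voxel-to-(u,v) mapping are left abstract because they depend on the widget's shape, and nearest-neighbor sampling is an assumption.

// Sample the single 2D texture map of the original widget at a local
// surface position (u, v), each in [0, 1]; the returned value is then
// assigned to the intersecting voxel.
uint32_t sampleWidgetTexture(
        const std::vector<std::vector<uint32_t>>& texture,
        double u, double v) {
    std::size_t rows = texture.size();
    std::size_t cols = texture[0].size();
    std::size_t row = static_cast<std::size_t>(v * (rows - 1));
    std::size_t col = static_cast<std::size_t>(u * (cols - 1));
    return texture[row][col];  // nearest-neighbor sample
}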
[00188] Additional 3D characteristics can be obtained from the 2D widgets. For example, shading is commonly used on 2D widgets to give the visual impression of depth. A 3D surface for a widget can be derived by analyzing this shading information, such that these shaded 2D widgets actually have true depth in the 3D display. Also, pseudo-2D widget behavior is realized as real 3D behavior in the 3D volume. For example, depressing a push button widget actually moves the button in depth in the 3D display. Another aspect of giving 2D widgets volume is that, rather than synthesizing the depth aspect of a widget, it can simply be determined by convention. For example, the convention could be to surround each 2D widget or collection of 2D widgets in the 3D display with a 3D widget "frame", which would give the edge of the widget thickness and thus make viewing and accessing from extreme angles easier. An example of this is that the frame of a 2D window is automatically given thickness in 3D volumetric displays. As a result, the texture of the widget takes on the shape of the surface of the widget. Because the surface can be enlarged or otherwise changed in configuration during the mapping, the texture mapping may use conventional processes to stretch or morph the texture for the surface. Because the mapping of a widget may map from a linear shape to a curved shape associated with the surface of a dome, conventional processes are also used to warp or morph the widget shape and/or texture into the desired shape, such as to make a curved edge of a menu window appear straight in a polar type coordinate system.
[00189] Once the 2D widget is initially mapped into the display, the widget is ready for use or interaction with the users. This interaction occurs within the operations associated with creating and projecting a scene within the volumetric display. That is, the GUI operations may be at the beginning or end of a scene projection operation or in the middle based on an interrupt. As depicted in figure 3008, the operations form a loop in which the 2D virtual display is updated 3150. This update may occur because the user has activated a pull down menu in the display, the system has moved the display because of a spatial conflict or a cursor/pointer has been moved into the display by the user to make a control selection or for a number of other reasons. The update occurs as previously discussed with respect to figure 3007. The updated display is mapped 3152 to the desired position and voxels within the volumetric display and the voxel data is output 3154 to the volumetric display system. The determination is made as to whether a control type input has been input 3156, such as by the user positioning a pointer at a 3D position in or over the widget and activating a selection device, such as a button of a mouse or a touch sensitive portion of the display enclosure. If a control type input has been input, the system determines 3158 whether the pointer lies within or the touched part of the enclosure lies over a control portion of the 2D display. This is accomplished by essentially comparing the coordinates of the pointer or the coordinates of the touch to the coordinates of the control specified in the virtual map of the 2D widget. This involves mapping or performing a coordinate system translation of the voxel coordinates of the pointer position in display space to the corresponding coordinates in the 2D widget map space when a pointer is used. For a touch, the touch position is translated to the nearest voxels along a surface normal of the display enclosure and then the voxels so selected are mapped as noted above. If a control has been selected and activated, the system performs 3160 the function of the control.
[00190] The discussion above considered a single type of control, such as a button, being activated. For more complex controls, such as a slider, the loop of figure 3008 would include the conventional processes associated with complex 2D controls that govern widget behavior. In a slider, off-axis movements, such as a touch that is perpendicular to the slider orientation, are ignored, but continued contact during a touch sliding operation keeps the slide function active. These types of more complex functions can be supplied by one of ordinary skill in the art. The particular policies concerning when and how to constrain input depend on the type of control involved.
[00191] The present invention has been described with respect to taking a 2D representation of a widget and mapping its texture representation into a 3D widget that has volume. It is also possible to construct 3D widget representations, such as a 3D slider, and map them more directly. The present invention has also been described with respect to activating a control associated with a cursor or pointer intersecting a voxel corresponding to a control, by ray-casting the pointer toward the center of the display and selecting the first control that has voxels intersected by the ray. The controls discussed herein have been active controls, in which the user activates the control. Other types of controls can also be involved, such as dwell controls, which are typically used to display help information in a "bubble". The input discussed herein has included pointing inputs. However, the input can also be text from a keyboard that is entered in a window of a widget.
[00192] Note that translating a 2D widget to the 3D volumetric display may not require the use of texture maps. Instead, the volumetric-based widget can be reconstituted/created using a core library of drawing primitives (such as draw line, fill rectangle, draw text) that has been tailored to work on the volumetric display.
[00193] Because the present invention translates a conventional 2D widget into a volumetric-based widget through the use of texture mapping procedures, the procedures can also be reversed to translate or collapse volumetric-based widgets into a 2D widget representation.
[00194] The present invention also includes a hybrid display system including a volumetric display 3054 and conventional 2D displays 3056, such as LCD or CRT screens (see figure 3004). One style of hybrid display has a spherical-shaped volumetric display (figure 3001) with a traditional LCD display mounted and viewable as the floor of the display, replacing or in addition to the widget display 3042 of figure 3002. Here the 2D widgets may reside on the LCD display, which also serves as part of the display enclosure. Similarly, small touch-sensitive LCD panels may be arranged along the base rim of the spherically shaped or cubically-shaped volumetric display and serve as a displayable exterior surface on the enclosure, replacing or in addition to the widget display 3038 of figure 3002. One additional example is a hybrid configuration in which images are projected onto the volumetric enclosure using a traditional digital projector (often used to project computer displays onto large screens for presentations). While the 2D widgets may be presented on these traditional 2D displays serving as part of the volumetric enclosure, the software libraries and infrastructure treat these display spaces either as separate, separately addressable logical displays or as part of the single, logical voxel space of the volumetric display.
[00195] Volumetric displays allow a user to have a true three-dimensional view of a scene 4012 and are typically provided in the form of a dome 4014, as depicted in figure 4001. The user 4016, as can be surmised from figure 4001, can move about the dome 4014 to view different parts of the scene 4012. As the user 4016 moves (see figure 4002) to different viewpoints 4030, 4032 and 4034, a planar UI widget 4036 within the volumetric display 4038 turns, relative to the user, such that it is no longer viewable by the user, as depicted in the top view of figure 4002.
[00196] There are a number of different solutions to this viewability problem. The solutions include the production and display of a volumetric graphic user interface element or widget or omni-viewable widget. A volumetric widget or omni- viewable widget is one that can be viewed and interacted with from any user location or viewpoint around a volumetric display. One solution that provides a volumetric widget is to replicate a planar widget several times around the volumetric display so that the user will always have a readable view of the contents of the widget. This solution can result in a cluttered display. Another solution is to provide a multifaceted widget where a face of the widget is always readable by the viewer. This can result in a widget that takes up a greater volume of the display. A further solution is to provide a widget that rotates to facilitate viewing from any viewpoint. This can result in a widget that is readable only part of the time. A further solution is to track a position of the user and orient the widget to face the user's location. This solution requires tracking technology. An additional solution is to combine two or more of the solutions discussed above. Each of these solutions provides a volumetric or omni-viewable widget and will be discussed in more detail below.
[00197] The present invention is typically embodied in a system as depicted in figure 4003, where physical interface elements 4050, such as a rotary dome position encoder, infrared user position detectors, a keyboard, etc., are coupled to a computer 4052. The computer 4052 uses a graphical creation process, such as the animation package MAYA available from Silicon Graphics, Inc., to create a three-dimensional (3D) scene including virtual interface elements, such as the volumetric widgets discussed herein, and move them about in the scene automatically or based on some user control input. The display output, including the scene and widgets, is provided to a conventional volumetric display apparatus 4054, such as one that will produce a 3D holographic display.
[00198] As depicted in figure 4004, a display 4070 can include a volumetric widget that comprises multiple duplicate copies 4072, 4074, 4076, 4078, 4080 and 4082 of a virtual interface element, such as a graphical slider or icon toolbox. Each of the elements 4072 - 4082 has a viewing angle range with, for example, element 4080 having a range 4084 and element 4082 having a range 4086. The elements are arranged or positioned in such a way that from any point around the display 4070 a user is within the acceptable viewing angle range of one of the elements. This can be accomplished by providing a sufficient number of elements or by arranging the elements more or less deeply within the display such that the ranges of the adjacent elements overlap at or near the surface of the display 4070. This is shown in figure 4004, with the ranges 4084 and 4086 overlapping at the surface of the display 4070. The ranges of adjacent elements need not overlap at the surface of the display 4070 but can overlap at a predetermined distance from the surface, responsive to an expected distance of typical user viewpoints from the surface.
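The number of copies needed for full angular coverage can be estimated with a small calculation like the one below; the overlap parameter and function name are illustrative assumptions.

#include <cmath>

// Enough copies are placed around the display that the viewing-angle
// ranges of adjacent elements overlap at (or near) the surface.
// Assumes overlapDegrees < viewingRangeDegrees.
int copiesNeeded(double viewingRangeDegrees, double overlapDegrees) {
    double effective = viewingRangeDegrees - overlapDegrees;
    return static_cast<int>(std::ceil(360.0 / effective));
}
// For example, a 65-degree viewing range with 5 degrees of overlap
// gives ceil(360 / 60) = 6 copies, consistent with the six elements
// shown in figure 4004.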
[00199] Omni-viewable widgets can be created by forming widgets with multiple faces or multiple surfaces, as depicted in figures 4005A - 4005D. The faces are typically exact duplicates showing the same information in the same way. Figure 4005A depicts a cubical widget 4100 with six faces, each face containing a duplicate of the contents to be displayed by the widget. Figure 4005B depicts an octagonal solid widget 4102 with eight faces, each face displaying the same contents. Figure 4005C depicts a tent type widget 4104 with two faces, each facing the opposite direction and each displaying the same contents. This type of widget can also be rotated back and forth, as indicated by the arrows, to allow the viewing range of the two displays to intersect all user viewpoint positions. Figure 4005D depicts a globular or ball shaped widget 4106 with identical faces arranged on the surface of the globe. Other shapes of multiple face widgets are possible, such as pyramidal and cylindrical. Note that each face does not have to be an exact duplicate; a face may be specialized for a particular viewpoint. For example, if a widget is a direction widget (showing the compass directions), each face is not a literal duplicate; in this case, each face shows the directions appropriate to its viewpoint.
[00200] To rotate or move a widget to face a user, the position of each user must be determined. The detection or determination of the position of a user or users can be accomplished in a number of different ways. For example, each user can be provided with a position indicator as part of an input device, such as that provided by a conventional 3D input glove. Input devices located around the display can also be used by the users to register their locations or viewpoints. A camera and an object detection system could also be used. Another alternative, as depicted in figure 4006, is to provide an array of conventional infrared detectors 4120 arranged in a circumferential band below or at the bottom of a volumetric display enclosure 4122. In this approach those detectors that are active indicate the presence of a user. A conventional interface between the computer 4052 (see figure 4003) and the detectors allows the computer to conventionally detect the number and positions of users positioned around the display 4122. Another variation is to use audio microphones to detect the position of users based on where the sound is coming from.
[00201] When the number and positions of users are known, the computer can create an omni-viewable widget that includes an interface element for each user, as depicted in figures 4007A - 4007C. When the system detects two users A and B on opposite sides of the display enclosure 4140, as depicted in figure 4007A, two widget elements 4142 and 4144 are created and positioned (as shown by the dashed lines) to face the users. Figure 4007B shows two users A and B in different positions than in figure 4007A and widget elements 4146 and 4148 positioned to face these different positions. Figure 4007C shows three users A, B and C and three corresponding user-facing widget elements 4150, 4152 and 4154. When the widget elements cannot be placed facing in opposite directions as depicted in figure 4007A, such as in figures 4007B and 4007C, the system may need to prevent the widgets from overlapping each other and thereby obscuring display contents in the overlapped areas. This is discussed below with respect to figure 4008.

[00202] Initially, for an omni-viewable widget with an element for each user, the system determines 4170 (see figure 4008) the number of users and their positions or viewpoints using a position determination system such as previously discussed. For each viewpoint an identical widget element is created and oriented 4172 in an orientation that is tangential to the surface of the display enclosure at the position of the corresponding viewpoint around the circumference of the display enclosure and perpendicular to the normal at that point. Next, the centroid of the oriented widget elements is determined 4174. Widget elements that have surfaces that overlap or intersect are incrementally moved 4176 away from the centroid, radially along the normal, until no intersections exist. The sequence of figures 4009A - 4009C shows these operations in more detail.
[00203] Initially (see the top view of figure 4009A) a widget with three widget elements 4190, 4192 and 4194 is created in the center of the display 4196 for three user viewpoints 4198, 4200 and 4202. The widgets are moved along their respective normals 4206, 4208 and 4210 until widget 4192 no longer intersects with widget 4194, as shown in figure 4009B. At this point widget 4194 stops moving. However, widgets 4190 and 4192 still intersect. As a result, widgets 4190 and 4192 are moved along their respective normals 4206 and 4208 until they no longer intersect, as shown in figure 4009C. Rather than moving only those widgets that intersect, it is possible to move all the widgets by the same incremental amount until no intersections exist. Other placement algorithms may be used to achieve a similar non-overlapping result.
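The push-out step can be sketched as below, reusing the Vec3 helper from above. A bounding-sphere intersection test is used purely as an illustrative stand-in for whatever test the widget geometry requires, and the step size is an assumption.

#include <vector>

struct WidgetElement {
    Vec3 pos;
    Vec3 normal;    // unit radial direction away from the centroid
    double radius;  // bounding-sphere approximation of the element
};

bool intersects(const WidgetElement& a, const WidgetElement& b) {
    double dx = a.pos.x - b.pos.x;
    double dy = a.pos.y - b.pos.y;
    double dz = a.pos.z - b.pos.z;
    double r = a.radius + b.radius;
    return dx * dx + dy * dy + dz * dz < r * r;
}

// Incrementally move intersecting elements outward along their
// normals until no intersections remain.
void resolveOverlaps(std::vector<WidgetElement>& elements, double step) {
    bool any = true;
    while (any) {
        any = false;
        for (std::size_t i = 0; i < elements.size(); ++i)
            for (std::size_t j = i + 1; j < elements.size(); ++j)
                if (intersects(elements[i], elements[j])) {
                    for (WidgetElement* w : {&elements[i], &elements[j]}) {
                        w->pos.x += step * w->normal.x;
                        w->pos.y += step * w->normal.y;
                        w->pos.z += step * w->normal.z;
                    }
                    any = true;
                }
    }
}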
[00204] As an alternative to moving the widgets radially along their normals to eliminate intersections, it is possible to group viewpoints into viewpoint clusters and create a display for each of the clusters. This is depicted in figure 4010 where five user viewpoints V4001 - V4005 are shown. In determining whether a viewpoint can be included in a cluster, the system measures the angle between the viewpoints and compares it to the view range of the particular widget element being created. If the angle is less than the range, the viewpoints can be included within the same cluster. For example, the angles between viewpoints V4001 and V4002 and between viewpoints V4002 and V4005 are greater than the viewing range of the widget element being used. The angle between V4003 and V4004 is also too great. However, the angles between V4002 and V4003 and between V4004 and V4005 are less than the range, so these viewpoints can be grouped into two clusters C4001 and C4002 while V4001 is allocated to its own cluster C4003. Once the clusters are determined, the average of the positions or angles of the viewpoints in each cluster is used to determine the angular positions W4001, W4002 and W4003 of the widget elements.
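The clustering test described above can be sketched as a greedy angular grouping over sorted viewpoint angles; the angle values, viewing range and function name below are invented for illustration only.

```python
def cluster_viewpoints(angles_deg, view_range_deg):
    """Greedy angular clustering per figure 4010: sorted viewpoints are
    grouped while the gap to the previous viewpoint is less than the
    widget element's viewing range; each cluster is then served by one
    element placed at the cluster's average angle."""
    angles = sorted(a % 360.0 for a in angles_deg)
    clusters = [[angles[0]]]
    for a in angles[1:]:
        if a - clusters[-1][-1] < view_range_deg:
            clusters[-1].append(a)
        else:
            clusters.append([a])
    # Merge the first and last clusters if they meet across the 0/360 seam.
    if len(clusters) > 1 and (clusters[0][0] + 360.0) - clusters[-1][-1] < view_range_deg:
        clusters[-1].extend(a + 360.0 for a in clusters.pop(0))
    return [(sum(c) / len(c)) % 360.0 for c in clusters]

# Five viewpoints; a 60-degree viewing range yields three widget elements:
# one lone viewpoint plus two clustered pairs, as in figure 4010.
print(cluster_viewpoints([10, 100, 140, 250, 290], view_range_deg=60))
```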
[00205] As noted above, it is also possible to continuously rotate the widgets. The rotation can be a revolution through 360 degrees, or the rotation can rock back and forth through a fixed number of degrees to favor certain viewpoints. The rocking is depicted in figures 4011A - 4011C. In this sequence a widget 4220 in the display enclosure 4224 is rotated back and forth between the viewpoints of users A and B. In figure 4011A the widget 4220 is oriented toward user A, in figure 4011B the widget is oriented between users A and B, and in figure 4011C the widget is oriented toward user B. This rotation between viewpoints is performed by determining the user viewpoints and redrawing the widget in sequential and incremental angular positions between the viewpoints. The rotation in either mode can "jump ahead", resulting in certain viewpoints being ignored or covered at different rates. Furthermore, the widget can be rotated about an arbitrary axis or multiple axes.
[00206] When a rotating widget is selected by a user for an operation, such as by positioning a pointer at the moving widget and performing a selection operation (for example, pressing a selection button on the pointer), the widget needs to stop rotating so the user can review the widget for a desired period of time or otherwise interact with it. The operations involved in stopping the rotation of the widget are depicted in figure 4012 and discussed below.
[00207] The rotation of a widget, in either a back and forth motion or in a full circular motion, includes a display 4240 of the widget at a particular rotary position (see figure 4012). The system then determines 4242 whether the widget has been selected. Selection can occur in a number of different ways, including the positioning of a 3D cursor on the widget and the activation of a selection button on a device, such as a 3D "mouse". If the widget has not been selected, the position of the widget is updated 4244 and it is displayed 4240 in its new rotary position. If the widget has been selected, the widget is flagged 4246 as selected so that other users cannot select it. The selected widget can also be displayed in a way that indicates it has been selected, such as by highlighting it. The widget can then optionally be oriented 4248 toward the user, using the input from (or input vector of) the selecting device to determine which user has selected the widget and the user location as indicated by the location detectors. The correlation between location and selection device can be created by having the users register their location and their input device, or by using input devices whose positions around the volumetric display are known. The system then performs the interactions 4250 with the widget as commanded by the user. If the user is only reviewing the contents of the widget, there is no positive interaction. The user can use the selecting device to move a cursor to select a control of the widget, thereby positively interacting with the widget. Once the interaction is finished or a time-out period expires, the widget is deselected 4252 and the rotation of the widget continues.
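The selection loop of figure 4012 can be summarized in Python; the widget class, the callbacks, and the step and frame values below are stand-ins invented for the sketch, not an implementation of the disclosed system.

```python
class RotatingWidget:
    """Minimal stand-in for a rotating volumetric widget (assumed API)."""
    def draw(self, angle):         print(f"draw at {angle:.0f} deg")
    def flag_selected(self, user): print(f"flagged: selected by {user}")
    def orient_toward(self, user): print(f"oriented toward {user}")
    def deselect(self):            print("deselected; rotation resumes")

def run_rotating_widget(widget, poll_selection, interact,
                        step_deg=10.0, frames=8):
    """Event-loop sketch of figure 4012: draw the widget at its rotary
    position, advance the rotation while unselected, and freeze and
    flag the widget while a user interacts with it."""
    angle = 0.0
    for _ in range(frames):              # one iteration per refresh
        widget.draw(angle)               # 4240: display at rotary position
        user = poll_selection()          # 4242: selected?
        if user is None:
            angle = (angle + step_deg) % 360.0   # 4244: update position
            continue
        widget.flag_selected(user)       # 4246: lock out other users
        widget.orient_toward(user)       # 4248: optionally face the user
        while interact(user):            # 4250: commanded interactions
            pass                         # (reviewing only = no action)
        widget.deselect()                # 4252: finished or timed out

# Example run: user "A" selects on the third refresh and interacts once.
polls = iter([None, None, "A", None, None, None, None, None])
run_rotating_widget(RotatingWidget(), lambda: next(polls),
                    interact=lambda user, it=iter([True, False]): next(it))
```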
[00208] As discussed above, it is possible to provide a rotating widget whose contents have controls making it appropriate to orient the widget toward the user when the user desires to interact with the widget. It is also possible to provide a widget that includes a control with which the user can interact while the widget is rotating without having the widget stop rotating and orient toward the user. Such a widget 4270 is depicted in figure 4013. This widget 4270 includes a rotating portion 4272 that can indicate the function of the widget via a label and a stationary portion 4274 that includes the control 4276. In this case the control 4276 is a slider bead that the user slides up and down to perform some function, such as an object scale slider used to change the scale of a selected object in the display. Another example is a push-button implemented by a deformable sphere.
[00209] The rotating widget has been described as rotating into view of the users while remaining at the same position within the display. It is also possible for the widget to move within the display by traveling around the circumference of the display or the interior of the display enclosure, much like a point on a rotating globe. In general, any rotation pivot point or any path can be used to move a widget.
[00210] These paths are computed such that they route the widgets around objects that would occlude the widgets when viewed from certain viewpoints. One method for defining a path within a volumetric display along which to move a widget is to define two concentric rings along the base of the volumetric display: an inner ring and an outer ring. The widget traverses the inner ring until its position plus a fixed delta amount intersects an object. With a collision imminent, the widget transitions to the outer ring until it is able to return to the inner ring, having passed the object. Additional "outer" rings can be defined if traversal along the current ring is not valid (e.g., it intersects an object). This same concept can work for other path shapes, such as a rectangular path or a spiral path that has a width and height. Using this approach, widget movement is "content-dependent". In a similar manner, widget placement may be content-dependent, such that a widget or widgets are not only oriented or duplicated but placed in the scene to minimize their occlusion by other content in the scene.
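One movement step of the inner/outer-ring traversal could look like the following sketch, in which the ring radii, the step delta and the intersects() collision callback are all assumptions made for illustration.

```python
def advance_along_rings(angle_deg, rings, intersects, delta_deg=5.0):
    """One step of the two-ring path scheme: the widget advances along
    the innermost ring on which its position plus a fixed delta does
    not intersect scene content, transitioning outward before a
    collision and back inward once past the obstacle.

    rings      -- ring radii, innermost first (assumed geometry)
    intersects -- assumed callback intersects(radius, angle_deg) -> bool
    """
    next_angle = (angle_deg + delta_deg) % 360.0
    for radius in rings:                 # prefer the innermost valid ring
        if not intersects(radius, next_angle):
            return radius, next_angle
    raise RuntimeError("no collision-free ring at this angle")

# Example: an object blocks the inner ring between 90 and 120 degrees,
# so the widget detours to the outer ring and then returns.
blocked = lambda r, a: r == 1.0 and 90.0 <= a <= 120.0
radius, angle = 1.0, 80.0
for _ in range(12):
    radius, angle = advance_along_rings(angle, [1.0, 1.5], blocked)
    print(radius, angle)
```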
[00211] Omni-viewable widgets can also be of value in other virtual environments and display systems. For example, virtual environments that use head-mounted displays could benefit from omni-viewable widgets placed in a virtual world. In general, in any application where a 3D scene is presented from different viewpoints, omni-viewable widgets can be viewed and operated from multiple viewpoints.
[00212] The present invention can be generalized to 2D displays (e.g., a display where a widget is viewable and operable from the top or bottom of the display). For example, consider two people sitting face-to-face at a table with an LCD display in the table surface between them. Each widget is replicated and oriented toward each person. If four people gather around the display, four versions of the widget are produced.
[00213] The present invention has been described with respect to volumetric widgets that are display widgets, displaying data contents to the user. It is also possible to provide widgets that not only output contents but also allow input to the graphics system, using input data fields, cursor activatable controls, etc.
[00214] The present invention is directed to a system that allows a user to physically rotate an enclosure for a three-dimensional (3D) volumetric display and thereby rotate the scene displayed within the enclosure, so the user can view different parts of the scene without being required to physically move around the display (or to navigate a virtual camera type view of the scene). A combination of physical rotation of the display and manipulation of the virtual camera, either through turning the display or through some other input device, is also useful. For example, the user can turn the physical display to rotate its contents while at the same time moving a mouse to magnify the contents. Figure 5001 illustrates the user 5010 spinning the enclosure 5012 about a center or vertical axis 5014 by physically pushing the enclosure with hands in contact with the enclosure 5012. As the enclosure moves, so does the scene displayed therein. This motion allows one degree of freedom 5016 of rotation in the movement of the scene displayed within the enclosure 5012. The enclosure rotation is much like that of a "lazy-susan" type platform. Figure 5002 depicts an enclosure 5030 that can be rotated with two degrees of freedom 5032 and 5034. This enclosure 5030 can also be rotated like a ball, thus permitting three degrees of rotation. In both of these embodiments the control interface, that is, the ability of the user to control the rotation of the display contents, is accessible from any direction from which the user approaches the enclosure or from any viewpoint around the enclosure. That is, the interface is omni-viewpoint directionally accessible and controllable.
[00215] The rotation or movement of the displayed scene with the rotation or movement of the enclosure can be accomplished in a number of different ways. The display mechanism, such as a holographic projector, can be physically rotated with the enclosure. Such an embodiment where the display apparatus is inside the enclosure requires that the display interface, particularly, the power and display inputs, be coupled through a conventional rotating connection. The power and display inputs can also be supplied electromagnetically similar to the way that smart cards are supplied with power and I/O data. By rotating the entire display mechanism, the contents need not be redrawn each time the display is moved. In an alternate embodiment the displayed scene can be virtually rotated responsive to the rotation of the enclosure. That is, as the enclosure is rotated the scene can be redrawn to correspond to the enclosure rotation. The virtual rotation of the scene within the enclosure does not require that the display apparatus be rotated and a rotational or electromagnetic coupling is not needed. Also, some combination of virtual rotation and physical rotation can be used.
[00216] In both of the embodiments discussed above, the enclosure mechanism can be made as depicted in figures 5003 and 5004, where figure 5003 shows a side view and figure 5004 shows a top view. The enclosure mechanism 5050 includes a transparent plastic enclosure 5052 mounted on a rotating base 5054. In the embodiment where the display apparatus rotates with the enclosure, the display apparatus, such as the holographic projector, is housed within the rotating base 5054. In the virtual rotation embodiment discussed above, the rotating base 5054 would be transparent and the displayed scene would be projected through the rotating base 5054 from a fixed base 5056 in which the display apparatus would be housed. A bearing 5058 (roller or ball) couples the rotating base 5054 to the fixed base 5056 and allows the base 5054 to rotate about the vertical axis of the enclosure mechanism 5050. In both of the embodiments noted above, the mechanism 5050 also includes a conventional rotary encoder 5060 that outputs the rotational position of the enclosure 5052 as it rotates. For embodiments where the display apparatus is in the fixed base 5056, the transparent base 5054 need not exist if a support bearing is on the periphery of the enclosure rather than in the center as shown. For a two-dimensional rotation freedom device such as depicted in figure 5002, rotational sensors like those used for conventional track balls can be used to sense rotation in the two dimensions. For three-dimensional rotation, a similar scheme can be used but with an additional sensor to sense twisting of the ball (rotation in the third dimension).
[00217] The rotary encoder 5060/5070 (see figure 5005) is coupled to a sensor A/D (analog to digital) converter 5072 that supplies the rotational position to a computer 5074. The computer 5074 supplies the scene to be displayed to the display apparatus 5076. The scene processing system executed by the computer 5074 is a conventional system that can rotate the scene or objects within the scene to different positions or viewpoints responsive to scene rotational position inputs. For example, the MAYA system available from Alias|Wavefront, Inc. can rotate computer generated graphical scenes or objects responsive to rotational inputs. When the computer detects that the enclosure has been rotated, the scene is virtually rotated within the enclosure, with the computer 5074 determining new positions of objects within the scene and rendering the scene with the new positions. Scene displays may also contain virtual widgets, such as three-dimensionally positioned cursors, virtual keyboards, 3D manipulators, dialog boxes, etc., that are displayed within the enclosure. If such widgets exist within the scene, it may be necessary to adjust their positions within the scene as the scene rotates. For example, a virtual keyboard should always face the user, even when the scene is rotated, so that the keyboard remains available for use. However, a 3D cursor typically needs to remain at its last position within the scene as the scene rotates. That is, if the cursor is pointing at a door of a building, it should remain pointing at the door as the building moves within a rotating scene. As a result, the computer 5074 also adjusts the positions of interface widgets, as necessary, responsive to the rotation, as will be discussed in more detail later herein. When the display apparatus is rotated with the enclosure, the computer 5074 need not determine scene positions but still determines and makes widget position adjustments as needed. In this embodiment, by limiting the redrawing to only those widgets that need to remain stationary or rotate at a different rate than the scene, computational resources are used more effectively.
[00218] The rotation of the scene within the enclosure is typically a one-for-one rotation. That is, if the enclosure is rotated 10 degrees, the displayed scene is rotated 10 degrees. It is also possible to rotate the displayed scene at a rate that is faster or slower than the rotation rate of the enclosure. Negative scene rotation with respect to enclosure rotation is also possible. Preferably, the user can set this rotational gain.
[00219] The operations 5090 (see figure 5006) performed by the computer 5074 start with initialization operations 5092 and 5094 where the physical and virtual rotational positions are set to zero. The system then determines 5096, from the output of the encoder, whether the enclosure has been rotated. If not, the system loops back and waits for a rotational movement of the enclosure. If the enclosure has been rotated, the amount and direction of the physical rotation, rp, is determined 5098, followed by applying 5100 a rotational gain g to obtain the virtual rotation rv = (g - 1) * rp. Typically the gain g is set to one, in which case rv = (1 - 1) * rp = 0 and the displayed scene simply follows the physical rotation. If the gain is set to a value greater than 1, the virtual rotation augments the physical rotation. The gain can also be set to less than one or to negative numbers. For example, when the gain equals 0.5 the display rotates slower than the enclosure. If the gain is negative the display rotates in the opposite direction of the enclosure. Lastly, when the gain is zero, rv = -rp and the display appears to remain fixed when the enclosure is rotated. Once the gain has been applied to the physical rotation, the virtual display position is offset 5102 by the corresponding rotational amount. Then, the scene is rendered and provided to the display apparatus for display. The accelerated rotation of scene objects is discussed in more detail below with respect to figures 5007A - 5007C and 5008A - 5008C.
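The gain relation above can be captured in a few lines. This sketch (with invented function names) reproduces the behaviors listed: unity gain follows the enclosure, gain 4.5 accelerates it, fractional gain slows it, negative gain reverses it, and zero gain holds the scene fixed.

```python
def virtual_rotation(physical_deg, gain=1.0):
    """Virtual rotation offset rv applied on top of the physical
    rotation rp, per the relation rv = (g - 1) * rp given above."""
    return (gain - 1.0) * physical_deg

def apparent_scene_rotation(physical_deg, gain=1.0):
    """Total scene rotation seen by the user: the enclosure's physical
    rotation plus the virtual offset, i.e. g * rp."""
    return physical_deg + virtual_rotation(physical_deg, gain)

for g in (1.0, 4.5, 0.5, -1.0, 0.0):
    print(g, apparent_scene_rotation(10.0, g))
# g=1.0 -> 10 (scene follows enclosure), g=4.5 -> 45 (figure 5008B),
# g=0.5 -> 5 (slower), g=-1 -> -10 (opposite), g=0 -> 0 (appears fixed)
```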
[00220] Figures 5007A - 5007C and figures 5008A - 5008C depict rotation based on a gain of 1.0 and 4.5, respectively. Figures 5007A - 5007C depict a unity gain where the display follows the physical rotation of the enclosure 5154. As can be seen when figures 5007A - 5007C are compared to figures 5008A - 5008C, when the enclosure 5172 is rotated by 10 degrees from figure 5008A to figure 5008B, the object 5174 is rotated by 45 degrees. Similarly, when the enclosure is rotated to a 20-degree rotational position, the object is rotated by 90 degrees as depicted in figure 5008C.
[00221] The virtual scene can also be constantly rotated, with the physical rotation adjusting the rate at which the scene is rotated. For example, consider a virtual scene consisting of a swirling water current. Physically rotating the enclosure in the same direction as the water current speeds up the water flow. Rotating in the opposite direction slows the water down.
[00222] When widgets are scene objects, or when parts of the scene need to be rotated at different rates, the rotation of each scene object is determined as depicted in figure 5009. Once again the virtual and physical positions are initialized 5192, followed by rotation sensing 5194. Then, after the physical rotation amount is determined 5196, for each scene object 5198 (or segment that needs a different rotational gain), the type of the object is determined 5200. If the object is not a widget, the system makes 5202 no additional change in the rotation of the object. That is, the object is rotated in accordance with the discussion with respect to figure 5006. If the object is a widget (or a segment of the display that needs to be rotated differently than the scene), the object is rotated 5204 by the inverse of the physical rotation. If a gain has been applied, the object (widget or display segment) is rotated by the inverse of the gain-adjusted physical rotation. The relative rotation of scene objects and widgets allows scene objects to be rotated in front of the user while widgets remain stationary with respect to the user or a world coordinate system. This is discussed in more detail below with respect to figures 5010A - 5010C.
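A sketch of this per-object pass follows; the scene representation (a list of dicts) is an assumption made only for the illustration, and the rotate-then-counter-rotate steps mirror operations 5200 - 5204.

```python
def rotate_scene_objects(objects, physical_deg, gain=1.0):
    """Per-object rotation pass of figure 5009: ordinary scene objects
    follow the gain-adjusted enclosure rotation, while widgets receive
    the inverse rotation so they stay fixed relative to the user/world.

    objects -- list of dicts with 'angle' and 'is_widget' keys
               (assumed stand-in for the scene graph).
    """
    scene_rotation = gain * physical_deg
    for obj in objects:
        obj["angle"] = (obj["angle"] + scene_rotation) % 360.0  # follow scene
        if obj["is_widget"]:
            # 5204: inverse of the gain-adjusted physical rotation,
            # leaving the widget stationary in world coordinates.
            obj["angle"] = (obj["angle"] - scene_rotation) % 360.0
    return objects

scene = [{"name": "F",  "angle": 0.0, "is_widget": False},
         {"name": "w1", "angle": 0.0, "is_widget": True}]
print(rotate_scene_objects(scene, physical_deg=45.0))
# F moves to 45 degrees; w1 stays at 0 degrees, as in figure 5010B.
```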
[00223] Figures 5010A, 5010B and 5010C depict the inverse rotation of widgets relative to enclosure and scene rotation. Figure 5010A depicts a scene object F; three widgets w1, w2 and w3 that need to be maintained at the same or constant relative position with respect to the user; the enclosure 5222; and an enclosure orientation indicator 5224. In figure 5010A the base orientation, widget orientation and scene object orientations are all at zero degrees. In figure 5010B the enclosure 5222 has been rotated by 45 degrees so that the scene object F has also been rotated by 45 degrees relative to the center of rotation, which is at the center of the display. The widget positions have remained at zero degrees (that is, the widgets have been rotated -45 degrees with respect to the enclosure). In figure 5010C the enclosure 5222 has been rotated by 90 degrees so that the object F has also been rotated by 90 degrees. Again, the widget positions have remained at zero degrees. That is, as the object and enclosure rotate with respect to the user, the widgets remain in the same relative position with respect to the user and with respect to a world coordinate system.
[00224] The widgets are discussed above as being adjusted continuously as the contents are rotated. It is also possible to adjust the widgets based on a rotational threshold. For example, a keyboard UI widget need not always face the user but can be oriented with respect to the user with some tilt. When the tilt gets too large, such as above 10 degrees, the widget is rotated back to face the user. Note that this need not happen all at once in the next display refresh cycle. Instead, the rotation can happen in increments until the final rotation amount is reached. This prevents a visually jarring discontinuity by smoothly animating the widget to its final rotational position.
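The threshold-and-ease behavior could be sketched as follows; the 10-degree threshold matches the example above, while the per-frame step and class name are arbitrary assumptions.

```python
class WidgetOrienter:
    """Threshold-based re-orientation: a widget may tilt away from the
    user up to a threshold; past it, the widget is animated back to
    face the user in per-frame increments instead of snapping back in
    a single display refresh."""
    def __init__(self, threshold_deg=10.0, step_deg=2.0):
        self.threshold = threshold_deg
        self.step = step_deg
        self.animating = False

    def next_tilt(self, tilt_deg):
        if abs(tilt_deg) > self.threshold:
            self.animating = True            # too far: start easing back
        if not self.animating:
            return tilt_deg                  # within tolerance: leave as-is
        if abs(tilt_deg) <= self.step:
            self.animating = False
            return 0.0                       # final rotation reached
        return tilt_deg - self.step if tilt_deg > 0 else tilt_deg + self.step

orienter = WidgetOrienter()
tilt = 14.0
for frame in range(9):
    print(frame, round(tilt, 1))
    tilt = orienter.next_tilt(tilt)
# 14 -> 12 -> ... -> 0: the widget smoothly returns to face the user.
```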
[00225] The relative positioning of widgets with respect to scene objects as the enclosure and scene objects rotate can be extended to designated portions of the display contents. For example, one or more three-dimensional sections/parts/segments/sub-volumes of a volumetric display can be designated to remain stationary with respect to the user as the scene/enclosure rotates. For example, a scene of a landscape may be partitioned such that the landscape being modified rotates with the enclosure while a segment holding landscape elements, such as trees, bushes and rocks, remains stationary. This would facilitate the user selecting landscape elements and placing them in desired positions. Other effects can be created with different gains set for different segments of the display. Different rotational gains can also be set for different objects.
[00226] It is also possible for different rotational effects to be applied to different segments of the enclosure such that one segment rotates at a gain of one while another segment rotates with a gain of two.
[00227] It is also possible to have a secondary input stream control different sub-portions of the display while the display enclosure is being rotated. For example, suppose the user uses a mouse input device to click on and hold a window in place while, with the other hand, rotating the display. In this case, the window would not rotate with the display. This can be accomplished by assigning each object in the display a rotational gain of one and adjusting the gain of the selected window to negative one.
[00228] It is sometimes the case that the user may want a content rotation, once initiated, to continue for a specified period of time or, if the rotation is time indexed, until a specified time index is reached. Figure 5011 depicts an embodiment where the rotation is time sensitive. Once the enclosure rotation sensing 5232 operation has been completed, the system performs a mode test. If the mode is set to a non-time-based rotation mode, the system virtually rotates 5236 the display contents in the direction of the physical rotation corresponding to the rotation of the enclosure, as previously discussed. If the mode is set to a time-based rotation, the system rotates the display contents and redraws 5238 the display continuously until the time value set for the time-based rotation expires or is reached. When in time mode, rotating the base affects only the temporal position of the time-based media (e.g., an animation or a movie). For example, rotating the enclosure clockwise by some unit amount may "scrub" or advance the 2D/3D movie by one frame (i.e., the next time increment). Rotating the enclosure counter-clockwise by a unit will decrement the 2D/3D movie by one frame.
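The time-mode mapping can be sketched as a pure function from enclosure rotation to frame index; the degrees-per-frame unit amount is an assumed value, as are the function and parameter names.

```python
def scrub_frame(current_frame, enclosure_delta_deg, total_frames,
                deg_per_frame=10.0):
    """Time-mode mapping of figure 5011: enclosure rotation changes only
    the temporal position of time-based media, so a unit of clockwise
    rotation advances the movie one frame and counter-clockwise
    rotation decrements it, clamped to the media's frame range."""
    delta_frames = int(enclosure_delta_deg / deg_per_frame)
    return max(0, min(total_frames - 1, current_frame + delta_frames))

print(scrub_frame(100, +10.0, 240))  # clockwise one unit  -> frame 101
print(scrub_frame(100, -30.0, 240))  # counter-clockwise   -> frame 97
```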
[00229] The present invention has been described with the rotation of the enclosure causing the contents of the display to rotate. It is also possible for the rotation of the enclosure to be applied to a specific object (or objects) designated by the user in the scene, such that rotation of the enclosure causes the object to rotate about the object's center of mass. Additionally, the center of rotation could be varied. For example, the center of rotation is normally the central axis of the display; however, an object could be selected so that rotating the enclosure rotates the scene about the object's center axis. Note that rotation of the display contents around any arbitrary axis could be controlled by rotating the display enclosure.
[00230] The rotational offset of the enclosure could be used to control the rate at which the display contents are rotated. For example, rotating the enclosure 10 degrees to the right makes the display contents rotate at a rate of 10 degrees per second, and rotating the enclosure an additional 10 degrees increases the rotational rate to 20 degrees per second. This is accomplished using the rate-control functions found in joysticks that control the rate of movement of virtual objects. Hence, the design variations of joysticks, such as "spring-loaded" operation, isotonic versus isometric sensing, and different rate control mappings, apply.
[00231] Position relative to the enclosure can be used to control rotation of the display. For example, suppose a virtual head is displayed in the volumetric display. Suppose the user approaches the display from the "back of the head" viewpoint. Touching the enclosure on this side causes the display content to rotate 180 degrees so the face of the head faces the user. This can be accomplished by designating a reference for an object and, when a touch occurs, rotating the contents to align the reference with the touch. Rather than using touch to signal position, voice or thermal sensing or any other position sensors could also be used.
[00232] Typical volumetric displays being manufactured today have mechanical components with inertia that could cause those components to distort when the enclosure is rotated quickly. Distortion of the mechanical components can in turn distort the display contents. These types of distortion can be measured, and the display can be compensated for them.
[00233] The present invention has been described with respect to actually rotating the enclosure when rotating the scene, using a shaft encoder with the enclosure rotating about a centrally positioned shaft. It is possible for the rotational bearing for the enclosure to be on the periphery of the enclosure and for the encoder to be mounted to sense the rotation of the periphery. It is also possible to provide a ring around the bottom of the display enclosure that can be rotated to rotate the display contents, thereby avoiding the need to rotate the entire enclosure. It is further possible to sense a rotational force applied to the enclosure or a circumferential ring via a rate controller sensor and virtually rotate the displayed scene accordingly. That is, the rotational force is sensed, the enclosure or ring does not actually rotate, and the display rotates proportionally to the force sensed.
[00234] The present invention has been described using rotary type encoders to sense rotational motion of the display. It is possible to use other types of sensors such as yaw, pitch and roll sensors to sense rotational force. It is also possible to mount roller wheels/balls around the periphery of the enclosure, sense the rotation of the roller wheels/balls and rotate the display contents accordingly without rotating the enclosure.
[00235] The volumetric display has been described as a dome or ball, however, other shapes, such as cubes, pyramids, etc., can be used for such displays. The volumetric display also need not have a complete enclosure and the display contents can be projected into an open volume or a partially enclosed volume.
[00236] It is possible to have a proxy object serve as a representation of the dome. This proxy object can be manipulated (i.e., rotated) to cause the corresponding actions to occur on the volumetric display. Floor sensors can serve to indicate a rotational amount or a user position. The size of the display need not be desktop scale but could be smaller (e.g., wristwatch or PDA scale) or much larger (e.g., room scale, as in viewing a car, or even amusement park ride scale).
[00237] Another variation is a "volumetric chameleon". In this variation a volumetric display is mounted on a mechanical armature or trolley that can sense its position and orientation in space (i.e., a "spatially-aware" display like the Chameleon). For example, imagine using a volumetric display for viewing the internals of a human body. At position A, the volumetric display shows a human liver. The operator then physically moves the volumetric display 16 inches up and 5 inches to the left. Along the way, internal structures such as the stomach and lungs are displayed until the operator finally stops moving the display when the heart appears. This can be accomplished by sampling the position of the trolley. Using this 3D trolley position, the system finds a corresponding point in a 3D display map. The contents of the display map corresponding to the volume of the display at the corresponding point are transferred to the volumetric display.
[00238] Volumetric displays allow a user to have a true three-dimensional (3D) view of a scene 6012 and are typically provided in the form of a dome 6014, as depicted in figure 6001. The user 6016, as can be surmised from figure 6001, can move about the dome 6014 to view different parts of the scene 6012. From a particular arbitrary viewpoint, a user may want to select an object 6018 within the scene of the volumetric display, and this may be difficult to do with traditional interface tools.
[00239] There are a number of different solutions to this problem. These solutions involve creating a volumetric pointer.
[00240] A first solution (see figures 6002A and 6002B) is to restrict movement of a cursor type volumetric pointer to a designated plane 6030 within the volumetric display and use a two-dimensional input device 6033, such as a stylus pad or mouse, to input motion of the cursor on the plane. When a stylus and digitizer pad form the input device, the orientation of the plane 6030 in the display can be controlled by the pitch and direction of the pad, using sensors for sensing pitch and direction.
[00241] Another solution (see figures 6002C and 6002D) uses a digitizer tablet 6033 that has designated regions mapping to regions of the volumetric display 6034. For example, a tablet 6035 may have a cross-section marked on it, such as "Front" 6036 and "Back" 6037. Placing the stylus in one of these regions maps the cursor to the corresponding position on the outer shell of the volumetric display 6034. Alternatively, having a "Top" 6038 and a "Front" 6039 region delineated on the tablet 6040 can position the cursor in 3-space by selecting two points (one in the "Top" region and one in the "Front" region) with the stylus.
[00242] Another solution is to restrict the cursor 6041 to moving along the outer surface 6042 of the display as depicted in figure 6003. The cursor 6041 travels along the surface 6042 at a point that is the closest point on the surface 6042 to a stylus 6044 even when the stylus 6044 is lifted from the surface 6042. A surface moving cursor can also be controlled using a touch sensitive display enclosure as well as the arrow keys of a keyboard, a mouse and other 2D input devices. With a surface traveling or restricted cursor, a convention is used to designate what is selected. The convention limits the designation to objects on the surface of the enclosure, to objects vertically under the point of touch, to a closest object, to objects orthogonal to the surface at the cursor, etc. Objects within the range of influence of the cursor would typically be shown as being within that influence by, for example, being highlighted.
[00243] The surface moving cursor can also be used to tumble the contents of the display. For example, as the cursor moves over the top of the display as depicted in figure 6003, the contents of the display are locked to the cursor and thus the contents "tumble" within the display enclosure.
[00244] Figures 6004A and 6004B show a user 6050 touching the display 6052 at two points; the pointing convention is the creation of vertical virtual planes that the user can move, by moving the points of touch, to, for example, push aside objects that the virtual planes encounter.
[00245] A further solution is to allow a user to manipulate a cursor (a flying or floating volumetric pointer) within the three-dimensional (3D) space of the volumetric display using a three-dimensional input device, such as the tilt mouse set forth in U.S. Patent 6,115,028, the Flock of Birds system from Ascension Technology Corporation, etc. Figures 6005A and 6005B sequentially depict a user moving a 3D input device 6072 in space adjacent to the display 6074 and a cursor 6076 in the display moving in correspondence thereto.
[00246] Another solution, as depicted in figure 6006, is to allow the user to point at an object 6090 to be selected using a three-dimensional pointing device 6092, such as a beam pointer, thereby pointing to the object 6090 to be selected using a visible volumetric pointer ray 6094.
[00247] An alternative solution is to partition the volumetric space into a 3D grid and use pushbuttons to advance or retard a cursor in each dimension (e.g., using the arrow keys on the keyboard with or without modifier keys moves the cursor to the next cell in the 3D grid). Additionally, selecting an object can be done by determining a traversal sequence through the volume using a heuristic algorithm. For example, consider a volume space that is partitioned into a stack of thin slices or "slabs". A scan algorithm could search for objects starting at the top left of the slab space, scanning across from left to right, row-by-row until the bottom of the slab is reached. This same scan is performed for each progressively deep slice of the volumetric space. Again, the net effect of this algorithm is to make each object in the volume addressable by defining a sequence of objects and having the user jump to the next or previous object using a "next" key.
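The slab-scan ordering described above reduces to a lexicographic sort; in the sketch below the axis conventions and function names are assumptions, and jump() plays the role of the "next"/"previous" key.

```python
def object_scan_order(objects):
    """Traversal sequence of paragraph [00247]: partition the volume
    into depth slices ("slabs") and scan each slab top-left to
    bottom-right, row by row, for each progressively deeper slice;
    objects become addressable by next/previous keys in this order.

    objects -- list of (x, y, z) object centers; x grows rightward,
               y downward within a slab, z into the volume (assumed axes).
    """
    return sorted(objects, key=lambda p: (p[2], p[1], p[0]))

def jump(objects, current, direction=+1):
    """Move the selection to the next (+1) or previous (-1) object."""
    order = object_scan_order(objects)
    return order[(order.index(current) + direction) % len(order)]

pts = [(2, 0, 1), (0, 0, 0), (1, 2, 0), (1, 0, 0)]
print(object_scan_order(pts))   # slab z=0 scanned first, row by row
print(jump(pts, (1, 2, 0)))     # "next" key -> first object in slab z=1
```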
[00248] These volumetric pointer solutions will be discussed in more detail below.
[00249] The cursor being moved can perform a number of different functions including designating/selecting objects, changing the display of objects in the display, such as by applying paint or dye, moving objects within the display and other functions typical of cursors used in 2D displays.
[00250] The cursor being moved using the approaches noted above can be of a number of different varieties as discussed below.
[00251] The cursor can be a volumetric point cursor, such as a small object in the scene like the 3D arrow 6110 depicted in figure 6007A. While this has the advantage of an easily understood metaphor, because 2D arrows are used with conventional 2D displays, such cursors can suffer from being obscured by other objects in the line of sight in conventional displays. It is often difficult in conventional displays to perceive where in the depth dimension the cursor resides. This problem is alleviated in volumetric displays due to the enhanced depth perception and the user's wider field of view. Further, since volumetric displays allow easy scene rotation, this in turn increases the efficiency of pointing in 3D with a point volumetric cursor.
[00252] The cursor can also be a 3D volume cursor 6112 as depicted in figure 6007B. When used with conventional displays, volume cursors can enhance depth perception. The volume cursor shape could be cubic, spherical, cylindrical, a cross, an arrow, or an arbitrary shape like a 3D shovel, tire tube or irregularly shaped object. While depth perception is not a problem with volumetric displays, volume cursors nonetheless afford certain advantageous properties when used with volumetric displays. First, if the volume cursor is made semitransparent, objects behind the cursor can still be seen. Second, the volumetric nature of the cursor can enable volume operations such as selecting multiple objects at once.
[00253] The cursor can also be a depth controllable type cursor, such as a bead cursor 6114 as depicted in figure 6007C. A bead type depth cursor allows the user to control the bead of the cursor using two different modes of interaction. The cursor is positioned by pointing a beam 6116 at an object, and the position of the cursor 6114 along the beam is adjusted with a position control, such as a slider, a thumbwheel, a pressure sensor, etc. When the bead 6114 contacts or is within an interface control, such as a button, the control can be activated. The depth type cursor could also have a stick or wand shape rather than the bead shape shown in figure 6007C. The stick or wand could be divided into segments with a different cursor function allocated to each segment, and thus be a smart cursor. For example, assume that the cursor has two segments: a delete segment and a modify segment. During operations, when the "delete" segment contacts an object, the delete function is performed when the control is activated, while when the "modify" segment contacts the object, the object is modified according to a predetermined function when the control is activated.
[00254] A cursor used for entry of text (2D or 3D) into the volumetric display would preferably have an I-beam shape. A convention sets the lay of a line of text within the display.
[00255] The present invention is typically embodied in a system as depicted in figure 6008, where physical interface elements 6130, such as a rotary dome position encoder, infrared user position detectors, a keyboard, a touch sensitive dome enclosure surface, a mouse, a beam pointer, a beam pointer with thumbwheel, a stylus and digitizer pad or stylus and stylus-sensitive dome enclosure surface, a stylus with pressure sensor, etc., are coupled to a computer 6132, such as a server class machine. The computer 6132 uses a graphical creation process, such as the animation package MAYA available from Alias|Wavefront, Inc., to create three-dimensional (3D) scene elements. This process, using position inputs as discussed in more detail later herein, also creates the virtual interface elements, such as the 3D point cursor, 3D volume cursor, beam, bead, etc., discussed herein. The display output, including the scene and interface elements, is provided to a conventional volumetric display apparatus 6134, such as one that will produce a 3D holographic display.
[00256] Pointing to objects within a volumetric display can be effectuated using a number of different volumetric systems as depicted in figures 6009A - 6009D. These systems operate using the technology included in conventional stylus and digitizing tablet or pad input devices. This technology includes transparent and flexible digitizers capable of sensing and outputting not only the position of the stylus but also the angle (vector) of the stylus with respect to the digitizer surface and the distance the stylus is located from the surface. These styli and digitizers are also capable of inputting a control action, such as is required for activating a control, via switches included within the stylus and sensed by the digitizer/tablet, via pressure transducers, and via multiple coils.
[00257] As depicted in figure 6009A, a transparent digitizer 6150 (for example, a transparent plastic with an embedded wire sensing grid) is included on an outside surface of a dome shaped enclosure 6152. The digitizer 6150 senses a stylus 6154 and provides the position of the stylus to a computer 6156. The computer 6156 produces a volumetric scene, determines the position of a cursor within the display, and outputs the scene with the cursor therein to the display system 6158, which produces the scene, including the cursor, within the display enclosure 6152. In figure 6009B, the digitizer 6160 is spaced from the enclosure 6162 and can take the shape of a box or a cylinder. In figure 6009C, the digitizer 6164 and enclosure 6166 can be box or cylindrically shaped (see also figures 6002A and 6002B). In figure 6009D, the transparent digitizer 6168 is spaced from the enclosure 6170 and takes the shape of a familiar rectangular tablet. [00258] As previously discussed, cursor position can be based on a three-dimensional input, such as that provided by a digitizing glove, or based on a pointing device such as a beam pointer. In most applications, the beam can be considered a preview of what will be selected once a control, such as a button, is used to select the object. Beam pointing can be divided into a number of categories: vector based, planar based, tangent based, object pointing, or snap-to-grid.
[00259] In vector based pointing, as depicted in figure 6010, an orientation input vector 6190 for a stylus 6192 with respect to the display enclosure 6194 is determined. This vector 6190 is used to cast a ray or beam 6196, where the ray can be coincident with the vector or at some offset with respect to the vector. The ray 6196 can be invisible or, preferably, made visible within the volumetric display to aid in the pointing process. The cast ray or vector is used to determine which voxels within the display to highlight to make the ray visible. Once the path of the ray 6196 is known, a determination can be made as to any objects that the ray encounters or intersects. An object, such as virtual object 6198, hit by a ray can, if desired, be selected when a control, such as a button on the stylus, is activated. Note that in certain applications the ray 6196 may change properties (such as direction or shape) when hitting or passing through an object. For example, a ray passing through a container of water may simulate the bending effect of a light ray in water.
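Vector based pointing reduces to a standard ray cast. The sketch below approximates scene objects as spheres (an assumption made only to keep the intersection test short) and reports the first object hit along the stylus vector; all names are illustrative.

```python
import math

def first_hit(origin, direction, spheres):
    """Cast a ray from the stylus contact point along its orientation
    vector and report the first object it intersects."""
    ox, oy, oz = origin
    dl = math.sqrt(sum(c * c for c in direction))
    dx, dy, dz = (c / dl for c in direction)           # normalize direction
    best = None
    for center, radius, name in spheres:
        cx, cy, cz = center
        # Solve |o + t*d - c|^2 = r^2 for the nearest t >= 0.
        lx, ly, lz = ox - cx, oy - cy, oz - cz
        b = 2 * (dx * lx + dy * ly + dz * lz)
        c0 = lx * lx + ly * ly + lz * lz - radius * radius
        disc = b * b - 4 * c0
        if disc < 0:
            continue                                   # ray misses sphere
        t = (-b - math.sqrt(disc)) / 2
        if t >= 0 and (best is None or t < best[0]):
            best = (t, name)
    return best                                        # (distance, object)

# Stylus at the origin of the enclosure surface, pointing along +z.
print(first_hit((0, 0, 0), (0, 0, 1),
                [((0, 0, 5), 1.0, "object 6198"), ((0, 0, 9), 1.0, "far")]))
```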
[00260] In planar pointing, a ray is cast orthogonal to a designated reference plane from a contact point of a stylus with a tablet surface. Figure 6011A illustrates a ray 6220 cast from a stylus contact point 6222 toward a bottom plane 6224 of the display enclosure. Figure 6011B shows a ray 6226 cast from a contact point 6228 toward an arbitrary user defined plane 6230. The reference plane can be specified by the input of planar coordinates by the user or with a plane designation device (see figures 6019A and 6019B). In figure 6011C a cast ray 6232 can be used to select the first virtual object 6234 that the ray encounters.
[00261] In tangent pointing, a ray 6250 (see figure 6012) is cast orthogonal to a plane 6252 that is tangent to a digitizer display enclosure 6253 at a point of contact 6254 of a stylus 6256 with the digitizer. Once again any object encountered by the cast ray 6250 can be selected.
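For a spherical enclosure, the tangent-plane construction reduces to casting along the inward surface normal at the contact point, as in this sketch; the spherical-dome assumption and the names are illustrative only. The returned origin and direction can be fed to the same first-hit test sketched above for vector based pointing.

```python
import math

def tangent_ray(contact_point, dome_center=(0.0, 0.0, 0.0)):
    """Tangent-based pointing of figure 6012: the ray is orthogonal to
    the plane tangent to the dome at the stylus contact point, i.e. it
    runs along the inward surface normal of the (assumed spherical)
    enclosure, into the display volume."""
    px, py, pz = contact_point
    cx, cy, cz = dome_center
    nx, ny, nz = px - cx, py - cy, pz - cz      # outward normal direction
    n = math.sqrt(nx * nx + ny * ny + nz * nz)
    inward = (-nx / n, -ny / n, -nz / n)         # cast into the volume
    return contact_point, inward                 # ray origin and direction

origin, direction = tangent_ray((0.0, 0.0, 10.0))  # stylus at top of dome
print(origin, direction)   # ray points straight down toward the center
```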
[00262] In figure 6012 the point of contact from which the ray is cast or projected is determined by the position of the stylus. This point from which a ray is cast orthogonal to the surface of the display can be designated using other devices, such as a mouse or the arrow keys on a keyboard. For example, moving a mouse on a mouse pad adjacent to the display 6253 can move a ray projection point cursor in "two dimensions" on the surface of the display. That is, the ray projection point cursor is a surface moving cursor. Assume for the purpose of this discussion that the mouse pad has a front side, a back side, a right side and a left side and the display 6253 has corresponding sides. When the mouse is moved from front to back, the ray projection point cursor is moved along the surface of the display 6253 from front to back in a proportional movement. This is accomplished by sampling the 2D inputs from the mouse and moving the cursor along the surface in the same direction and the same distance, unless a scale factor is used to adjust the distance moved on the display surface. In this embodiment, the ray is projected from the cursor into the display orthogonal to the surface at the point of the cursor.
[00263] With object pointing, the ray is cast from the contact point toward the closest point on a designated object in the scene.
[00264] Other operating modes can be engaged, such as "snap-to-grid", which constrain the ray to travel along specified paths in the volume space.
[00265] As previously discussed, selection using a beam can be performed in a number of different ways. The beam can automatically select an object that it encounters, or a particular position along the beam can be selected using a cursor, such as a bead or stick as previously mentioned. As also previously mentioned, the position of the bead can be controlled by a position device, such as a thumbwheel or a secondary device. It is also possible to fix the bead a predetermined distance along the beam and allow the position of the stylus to indicate the position of the bead, as shown in figures 6013A and 6013B. Figure 6013A shows the stylus 6270 in contact with the enclosure 6272 and the bead 6274 positioned within the display along the ray 6276. Figure 6013B shows the stylus 6270 at a distance from the enclosure 6272 and the bead 6274 in the display at the same constant distance from the stylus 6270 along the ray 6276.
[00266] Another bead based selection mechanism is shown in figure 6014. In this approach, a bead 6290 is created at an intersection of a primary beam 6292 and a secondary beam 6294 cast by separate styli 6296 and 6298. The secondary beam 6294 specifies the position along the primary beam 6292 where the cursor is created, based on a region of influence or angular tracking and intersection designation. [00267] When a bead is used as a volume cursor, such as the type that can select objects when the object is in the volume of the cursor, the present invention allows the size of the bead to be changed as depicted in figures 6015A and 6015B. Initially (see figure 6015A), before or after a bead 6310 is in a desired position along a cast ray 6312, a user changes or initiates a change in the size of the bead 6310 using an input device, such as a thumbwheel on a stylus 6314. The size can be continuously varied until it is of a size desired by the user, as depicted in figure 6015B. When the bead cursor has reached the desired size it can be positioned surrounding or contacting an object or objects 6316 that the user desires to select (and excluding undesired objects 6318), as depicted in figure 6015C. The enlarged bead cursor can be shown with the original size bead 6310 as an opaque object therein, to allow the user to see the position of the center of the cursor, with a surrounding semitransparent volume cursor 6320 enclosing the embedded objects 6316 which have been selected.
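Because two cast rays rarely intersect exactly in 3-space, one reasonable reading of the two-beam mechanism is to create the bead at the point on the primary beam closest to the secondary beam; this closest-point substitution is an interpretation for the sketch, not something the text above specifies, and the names are invented.

```python
def bead_between_beams(p1, d1, p2, d2):
    """Two-beam bead placement in the spirit of figure 6014: place the
    bead on the primary beam (origin p1, direction d1) at the point
    closest to the secondary beam (origin p2, direction d2)."""
    def dot(u, v): return sum(a * b for a, b in zip(u, v))
    r = tuple(a - b for a, b in zip(p1, p2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        raise ValueError("beams are parallel; no unique closest point")
    t = (b * e - c * d) / denom          # parameter along the primary beam
    return tuple(p + t * v for p, v in zip(p1, d1))

# Primary beam along +x; secondary beam crosses it near (3, 0, 0).
print(bead_between_beams((0, 0, 0), (1, 0, 0), (3, -1, 0.2), (0, 1, 0)))
```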
[00268] A cursor associated with a ray can take other volume geometric shapes in addition to the bead or stick shapes previously mentioned. As depicted in figure 6016, the cursor can take the shape of a ring 6340 allowing the cursor to select a swept volume 6342 when the stylus is moved from an initial position 6344 to a final position 6346. The ring 6340 (and volume 6342) can be made semitransparent or opaque as needed for a particular operation. Objects inside the volume can be selected for a functional operation or the swept volume could itself be acted on when a function is initiated.
[00269] The cursor can also take the shape of a cast cone 6360 as depicted in figure 6017, where the cone can be semitransparent and objects within or contacting the cone can be selected. The cone can have its apex 6362 at the surface of the enclosure as shown, or at some user desired position along the orientation vector of the input device as specified by an input device, such as a stylus thumbwheel.
[00270] As another alternative, the volume cursor associated with a cast ray can take the shape of a semitransparent voxel cylinder or shaft 6380 centered on the cast ray 6382 and optionally terminated by a bead 6384, as depicted in figure 6018. Figure 6018 also depicts a situation where the objects within the shaft 6380 are rendered transparent so the user can see inside or through objects 6386 and 6387 within the display. Essentially, a window into an object is created. The transparent hole created by the shaft 6380 stops at the bead 6384. The position of the bead 6384 along the ray 6382 is adjustable, and the bottom of the shaft can have a planar or some other shape.
[00271] The cursor used for selecting or designating within a volumetric display can also take other shapes, such as the shape of a display spanning plane as depicted in figures 6019A and 6019B. Such an input plane 6400 can be specified by a rule or convention and an input device 6402 that can be "parked" at a location on the enclosure and that includes a mechanism for specifying location and orientation, such as the mechanism found within styluses that can be used to designate a contact point and a vector. The rule could, for example, specify that the plane must be orthogonal to a bottom 6404 of the enclosure 6406, pass through the point of contact and be parallel with the vector. The plane, in addition to acting as a cursor, can be used in combination with a ray to form a cursor, where the cursor would be formed at the intersection of the plane and the ray.
[00272] The selection mechanism with respect to cast rays can also include a region of influence that automatically selects objects within a specified and variable distance of the cast ray, as shown in figures 6020A and 6020B. In figure 6020A four objects 6420 - 6426 are within the selection region 6427 of the ray 6428 while one object 6430 is not. In this figure a "spread" function is also used, which is a spatial nearest-neighbor heuristic: starting with the currently selected object, its nearest neighbor is determined, and so on. Figure 6020B shows the same objects but with only object 6420 being within the region of influence.
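The region-of-influence test amounts to a perpendicular point-to-ray distance check, sketched below with an assumed object representation (name, position) and an assumed influence radius.

```python
import math

def objects_in_influence(origin, direction, objects, radius):
    """Region-of-influence selection per figures 6020A/6020B: every
    object whose distance to the cast ray is within the (variable)
    influence radius is selected automatically."""
    dl = math.sqrt(sum(c * c for c in direction))
    d = tuple(c / dl for c in direction)
    selected = []
    for name, p in objects:
        v = tuple(pc - oc for pc, oc in zip(p, origin))
        t = max(0.0, sum(vc * dc for vc, dc in zip(v, d)))  # along the ray
        closest = tuple(oc + t * dc for oc, dc in zip(origin, d))
        if math.dist(p, closest) <= radius:
            selected.append(name)
    return selected

objs = [("6420", (0.2, 0.1, 3.0)), ("6430", (2.5, 0.0, 3.0))]
print(objects_in_influence((0, 0, 0), (0, 0, 1), objs, radius=1.0))
# only object 6420 lies within the influence region of the ray
```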
[00273] For point or volume cursors that move within the volume of the volumetric display 6440 it may be helpful to provide visual aids that help to show where in the 3D volume the cursor 6442 is located, using axial based guide lines 6444 as shown in figures 6021A and 6021B. The guide lines 6444 are preferably semitransparent voxels that allow objects behind the guide lines in the line of sight of the user to be seen. The guide lines are particularly useful when the cursor 6442 is obscured by an object.
[00274] The cursor and its location designation apparatus, such as a stylus, can be used to control objects. For example, an object selected by a ray/bead can be moved by activating a move function and moving the location of the bead with the stylus. Another possible motion is a rotational motion where an object 6460 rotates as the stylus 6462 selecting the object rotates, as depicted in figure 6022. Note that the object can rotate along any arbitrary axis; however, most applications will preferably rotate the object along the axis defined by the input ray.
[00275] For situations where the cursor is restricted to movement along the surface of a display in an enclosure, it is possible to position virtual controllers/tools on the surface of the display with which the cursor interacts. Figure 6023 depicts virtual track pads 6480 and 6482 on the display surface that can be used with a surface cursor or a ray. A track pad could also be used to set positions along a ray. Using a motion tracking system, the track pads can move with a user as the user moves around the display.
[00276] The different types of pointing discussed above require similar but somewhat different operations as discussed in more detail below.
[00277] The pointing operations (see figure 6024) involve obtaining 6500 input values from the input device where the input values are the raw output values of the input device (for example, stylus/pad or glove).
[00278] The system then combines 6502 the input values with enclosure shape and/or position. This allows the system to take into account the shape of the enclosure to use in deriving a positional coordinate. For example, when a 2D input tablet is essentially stretched over a dome shaped enclosure, the tablet only reports a 2D position. However, this position value is combined with the knowledge of the shape of the dome to derive or map to a 3D position (i.e., a point in three space which is on the dome). This shape and/or position information allows the correct mapping between input and output spaces to occur. Note that not all of the different embodiments make use of the shape of the enclosure. For example, when the input device senses its 3D location, the shape of the enclosure does not need to be known. However, the position of the display relative to the sensing volume of the 3D input device needs to be known. Hence, this operation also factors in display and enclosure position.
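As one concrete example of combining an input value with the enclosure shape, the following sketch maps a 2D digitizer reading to a 3D point on a hemispherical dome. Treating the 2D reading as the point's projection onto the dome's base plane is an assumption made for the sketch; the actual mapping depends on how the digitizer film is laid out over the enclosure.

```python
import math

def tablet_to_dome_point(x, y, dome_radius):
    """Combine a 2D digitizer reading with the known dome shape to
    recover the 3D point on the enclosure surface (the 2D reading is
    assumed to be the point's projection onto the dome's base plane)."""
    rr = x * x + y * y
    if rr > dome_radius * dome_radius:
        raise ValueError("reported position lies outside the dome footprint")
    z = math.sqrt(dome_radius * dome_radius - rr)   # height on the dome
    return (x, y, z)

print(tablet_to_dome_point(3.0, 4.0, dome_radius=13.0))  # -> (3.0, 4.0, 12.0)
```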
[00279] Once the positional coordinate is known, a cursor position metaphor for the input and output spaces is applied 6504. This is used because the cursor control techniques can involve much more than simply a 3D position in space, including metaphors such as "ray-casting" that use additional information. For example, if a stylus based ray with a depth controllable bead is used, the ray is projected from the positional coordinate of the contact point of the stylus with the enclosure along the orientation vector of the stylus. The depth of the bead set by the depth control device (slider, thumbwheel, etc.) is used to determine a point along the ray from the contact point at which to create the bead cursor. As another example, for a surface cursor, the applied metaphor involves transforming or mapping the input device coordinates (such as the coordinates of the stylus above or on the surface of the enclosure) into volumetric display surface coordinates and finding the closest point (voxel) on the display surface to the input coordinates as the position. For a floating cursor, the input device coordinates (such as the 3D position of a glove in a space adjacent to the display enclosure) are mapped to a corresponding 3D position within the display. A floating cursor is typically used with a dome shaped display surrounded by a volume sensing field. For the sake of this floating cursor discussion, the display has a back and a front, and a cursor is at some position in the display. The input device, such as a non-vector flock-of-birds sensor, has a button that activates the "move" function of the device. If the user is standing in front of the display and the input device is activated and moved toward the display, the cursor moves from front to back. If the user turns off the activation, moves to the rear of the display, activates the device and moves the device toward the display, the cursor moves from the back to the front of the display. That is, movement of the input device away from the user always moves the cursor away from the user. The metaphor in this situation requires that the movements of the cursor be matched in orientation and distance to the movement of the glove, unless a scale factor is involved where, for example, the movement distance of the cursor is scaled to 1/2 of the movement of the glove. The metaphor may also involve separate input surfaces being used depending on the shape of the display. For example, a cylinder can comprise two separate input surfaces: one for the top of the cylinder and one for the side. In general, a mapping function between the input coordinate space for the volumetric display and the addressable voxel space within the volumetric display can be defined for a desired metaphor.
[00280] The cursor is then positioned 6506 (see figure 6024) at the appropriate position within the volumetric display.
[00281] Once the cursor is positioned within the display, a determination is made 6508 as to whether the cursor (volume, point, influence or shaped) is "contacting" any object, such as a model object or a virtual interface. This involves mapping from the cursor position within the display to an appropriate virtual space. A point cursor is just a point in 3-space and has no orientation or volume information; if the point is coincident with an object, contact exists. Volume cursors, influence cursors, ring cursors, etc., can require orientation information as well as volume information. The points comprising volume objects in the display space are compared to the points comprising the oriented volume cursor to determine whether contact exists.
[00282] Once the mapping has occurred, a determination 6510 is made as to whether this is a smart cursor or a control having a particular function. If so, the system determines 6512 whether the function has been activated, such as by the depression of a button on the stylus. If the function has been activated, it is performed 6514 and the system continues inputting input device coordinates.
[00283] For ray-based pointing, once the input device coordinates are input 6630, such as the position of a stylus on the enclosure surface, a determination 6632 is made concerning an origin of the ray that is to be cast, as depicted in figure 6025. In a stylus contact type mode (see figures 6010, 6011A and 6013B), a transformation similar to that performed for the surface restricted cursor is performed. For a vector mode, a closest point on the surface of the display along the vector is determined. The system then casts 6634 a ray. For the vector mode, the ray corresponds to the vector. For the planar mode, a search is performed for a point on a reference plane at which an orthogonal will intersect the ray point of origin. If the ray has a displaced origin, such as for a displaced cone, the origin of the ray is displaced accordingly. Next, the system determines whether an object has been contacted 6636. When the ray is the selection mechanism (see figure 6011C), conventional ray-casting object detection along the ray is performed and the first object encountered, if any, is flagged. When a bead is used, the bead is treated like a volume cursor as discussed above. Once a contact determination is made, the system performs the operations 6506-6510 discussed previously.
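For illustration, a simple ray march implementing the "first object encountered is flagged" behavior might look like the following sketch; the Sphere stand-in object, the step size, and the maximum ray length are assumptions made for the example.

from dataclasses import dataclass
import numpy as np

@dataclass
class Sphere:
    # Stand-in scene object providing a point-membership test.
    center: np.ndarray
    radius: float
    def contains(self, p):
        return np.linalg.norm(p - self.center) <= self.radius

def cast_ray_first_hit(origin, direction, objects, step=0.25, max_len=100.0):
    # March along the ray from its (possibly displaced) origin and flag the
    # first object encountered, if any.
    direction = direction / np.linalg.norm(direction)
    t = 0.0
    while t <= max_len:
        p = origin + t * direction
        for obj in objects:
            if obj.contains(p):
                return obj
        t += step
    return None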
[00284] When a plane, such as depicted in figure 6019, is used as a cursor it defines a virtual plane. This virtual plane can have an orientation and position in object space. When the virtual plane is activated, objects that intersect or come in contact with the virtual plane are also activated. When the virtual plane moves in position or orientation, the activated objects move a corresponding distance and direction proportional to the motion of the virtual plane. Releasing the virtual plane also deactivates the objects currently in contact with it. If the resulting virtual plane motion causes activated objects to move beyond the imaging chamber of the display, their data structures are still affected even though they are not visible in the display. Alternative strategies for plane operation (a sketch of the basic contact-and-drag behavior follows this list) include:
(1) manipulation of volumes of space, not just objects into which the plane bumps;
(2) not moving volumes but instead compressing space;
(3) different behavior when the virtual plane cuts across the entire volume or, alternatively, partially intersects the volume;
(4) if the virtual plane is manipulating volumes of space, different behaviors/actions depending on whether the space is in front of or behind the virtual plane. For example, space in front of the virtual plane may compress, while space behind the virtual plane can either be enlarged or defined as empty space.
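The following sketch illustrates only the basic contact-and-drag behavior of the virtual plane described above; the voxel representation of objects and the contact tolerance are assumptions for illustration.

import numpy as np

def plane_contact(plane_point, plane_normal, object_voxels, tol=0.5):
    # An object is activated when any of its voxels lies on the virtual plane.
    n = plane_normal / np.linalg.norm(plane_normal)
    return bool(np.any(np.abs((object_voxels - plane_point) @ n) <= tol))

def drag_with_plane(activated_objects, delta):
    # Activated objects move a distance and direction matching the plane's
    # motion; objects pushed beyond the imaging chamber still have their
    # data structures updated even though they are no longer visible.
    for obj in activated_objects:
        obj.voxels = obj.voxels + delta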
[00285] The system also includes permanent or removable storage, such as magnetic and optical discs, RAM, ROM, etc. on which the process and data structures of the present invention can be stored and distributed. The processes can also be distributed via, for example, downloading over a network such as the Internet.
[00286] The present invention has been described using typical devices, such as a stylus, to designate objects. However, it is possible to substitute other types of devices that can include pointing mechanisms, such as specialized surgical tools, which would allow the simulation and practice of a surgical operation.
[00287] The rays of the present invention have been shown as typical beam- or pencil-type rays. The rays can also take other shapes, even fanciful ones like the corkscrew 6520 of figure 6026A and the lightning bolt 6522 of figure 6026B.
[00288] Combinations of the embodiments are also possible. For example, a surface restricted cursor could produce a target ray.
[00289] The many features and advantages of the invention are apparent from the detailed specification and, thus, it is intended by the appended claims to cover all such features and advantages of the invention that fall within the true spirit and scope of the invention. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the invention to the exact construction and operation illustrated and described, and accordingly all suitable modifications and equivalents may be resorted to, falling within the scope of the invention.

Claims

1. A system, comprising: a three-dimensional (3D) volumetric display output configuration having a display content; and an input configuration coupled to the volumetric display output configuration and allowing a user to affect the display content.
2. A system as recited in claim 1, wherein the output configuration comprises one of a dome, a cylinder, a cubical box and an arbitrary shape.
3. A system as recited in claim 1, wherein the input configuration comprises one of a 3D volumetric input space mapped to the 3D volumetric display, a planar 2D input space mapped to the 3D volumetric display, a planar 2D input space mapped to a planar 2D space within the 3D volumetric display, and a non-planar 2D input space mapped to the 3D volumetric display.
4. A system as recited in claim 3, wherein the user produces inputs comprising one of directly with a hand, with a surface touching device and with an intermediary device.
5. A system as recited in claim 3, wherein the input configuration further comprises one of an input volume adjacent to the display, an input volume surrounding the display, a digitizing surface covering a surface of the display, a digitizing surface offset from the surface of the display, and an intermediary device used with the display.
6. A system as recited in claim 5, wherein the intermediary device comprises one of a stylus, a surface fitting mouse, a parkable mouse, a multi-dimensional mouse, a movable input device positioned on a bottom periphery of the display and a set of identical input devices positioned spaced around a bottom periphery of the display.
7. A system as recited in claim 1, wherein the input configuration comprises a non-planar 2D input space mapped to the 3D volumetric display.
8. A system as recited in claim 7, wherein the non-planar 2D input space comprises a digitizing surface covering a surface of the display and a digitizing stylus interacting with the surface.
9. A system as recited in claim 8, wherein the stylus has a tip, the stylus and digitizing surface produce a pointing vector, and a cursor is created in the display at a fixed distance from the tip along the vector.
10. A system as recited in claim 1, wherein the input configuration comprises a tracking system tracking a user.
11. A system as recited in claim 1, wherein the input configuration is non-spatial.
12. A system as recited in claim 11, wherein the input configuration comprises a voice recognition system allowing the user to affect the display content using voice commands.
13. A system as recited in claim 1, wherein the input configuration and output configuration define a spatial correspondence between an input space and an output space.
14. A system as recited in claim 13, wherein the spatial correspondence comprises one of 3D to 3D, 2D planar to 3D, 2D planar to 2D planar and non-planar 2D to 3D.
15. A system as recited in claim 12, wherein the input configuration, output configuration and the user define a dynamically updatable spatial correspondence.
16. A system, comprising: a dome shaped three-dimensional (3D) volumetric display having an enclosure surface; an input configuration comprising: a digitizing surface covering the enclosure surface; a digitizing stylus having a tip and interacting with the digitizing surface under the control of a user; and a digitizing system coupled to the digitizing surface and outputting non-planar position coordinates and a pointing vector responsive to the interaction between the stylus and the digitizing surface; and a computer coupled between the display and the digitizing system, producing 3D content displayed in the display, mapping the non-planar position coordinates to a 3D coordinate position in the display by offsetting along the vector by an offset distance from the tip and affecting the content at the 3D coordinate position.
17. A method, comprising: interacting, by a user, with a surface of a three-dimensional (3D) volumetric display; and affecting the 3D content of the display responsive to the interaction.
18. A method as recited in claim 17, wherein the display comprises a digitizing grid formed on the surface, the interacting comprises the user manipulating a stylus in a sensing region of the grid and the affecting comprises mapping a stylus position in the sensing region to a 3D display position and creating a cursor at the 3D display position.
19. A method of managing a volumetric display, comprising: creating one or more volume windows in the volumetric display; and providing application data to the windows corresponding to one or more applications assigned to each of the windows.
20. A method as recited in claim 19, wherein the windows are delimited by visible boundaries.
21. A method as recited in claim 19, wherein the volume windows have a shape comprising one of cubic, cylindrical, pie wedges, and arbitrary shapes.
22. A method as recited in claim 19, wherein input events are assigned to one of the windows responsive to input focus.
23. A method as recited in claim 22, wherein input focus comprises the window enclosing the cursor for spatial input events and the active window for non-spatial input events.
24. A method as recited in claim 19, wherein the windows have window functions comprising one of open, close, resize, maximize, minimize, move, hilite, and hide.
25. A method as recited in claim 24, wherein the open function allocates a three-dimensional region having an origin within the display for an application and highlights a three-dimensional boundary of the region, the method further comprising displaying outputs of the application in the region, and sending all cursor events to the application when a cursor is within the region.
26. A method as recited in claim 24, wherein the minimize function substitutes a three-dimensional icon for a designated volume window and places the icon at a designated position within the display.
27. A method as recited in claim 24, wherein the maximize function expands a designated volume window until the window contacts a boundary of the display and scales contents of the volume window in proportion to the change in volume.
28. A method as recited in claim 24, wherein the move function moves a three-dimensional volume window in accordance with a three-dimensional motion vector.
29. A method as recited in claim 24, wherein the resize function determines whether an active window encounters an inactive window in three dimensions.
30. A method as recited in claim 19, wherein application requests are mapped to corresponding windows responsive to the window assigned to the application.
31. A method as recited in claim 19, wherein an active window having a size can have the size changed and the method comprising resizing a window that abuts the active window responsive to the change in size of the active window.
32. A method as recited in claim 19, wherein the user can designate volumes or sub-volumes using gestures.
33. A method of managing a volumetric display, comprising: creating volume windows within a volumetric display; and associating a process/thread with each of the volume windows.
34. A method as recited in claim 33, wherein the associating uses a data structure comprising a root node and a volume window node linked to the root node, the volume window node comprising pointers defining a boundary of the volume window and information identifying an application supplying output to the window and receiving input from the window.
35. A method as recited in claim 34, wherein each process operates with data of the associated volume window.
36. A method as recited in claim 34, wherein input events are assigned to one of the volume windows responsive to a display input focus.
37. A computer readable storage controlling a computer by creating one or more volume windows in a volumetric display, and providing application data to the windows corresponding to one or more applications assigned to each of the windows.
38. An apparatus, comprising: a volumetric display apparatus having a volumetric display; an input system producing input events; and a computer creating volume windows within the display, assigning application data to the windows responsive to applications assigned to the windows, and assigning input events to the windows responsive to input focus.
39. A display, comprising: a volumetric display space; and a volume window positioned in the display space and having a three- dimensional boundary.
40. A display as recited in claim 39, wherein the volume window has a title bar indicating a window orientation and a front of the volume window.
41. A volumetric display data structure readable by a computer and controlling production of a volumetric display by the computer, comprising: a root node defining a shape of a volumetric display space and three-dimensional boundaries of the shape; and volume window nodes linked to the root node, each node identifying an application associated with the window, specifying a position of the volume window in the display space and specifying a boundary of the volume window.
42. A process, comprising: generating and displaying a volumetric display; and producing a two-dimensional graphical user interface having three dimensions within the volumetric display.
43. A process as recited in claim 42, wherein the producing comprises positioning the user interface on an outside surface of the display.
44. A process as recited in claim 42, wherein the producing comprises positioning the interface in a ring.
45. A process as recited in claim 42, wherein the producing comprises positioning the user interface in a plane in the display.
46. A process as recited in claim 45, wherein the plane is one of a vertical plane, a horizontal floor plane and an arbitrary angle plane.
47. A process as recited in claim 42, wherein the producing comprises positioning the interface responsive to a user's eye gaze.
48. A process as recited in claim 47, wherein the producing comprises positioning the interface responsive to a user's focus of attention.
49. A process as recited in claim 42, wherein the producing comprises mapping a 2D interface representation into a 3D interface in the display.
50. A process as recited in claim 42, wherein the producing comprises mapping a two-dimensional interface representation to voxels in the volumetric display.
51. A process as recited in claim 50, wherein the mapping comprises assigning 2D texture of the interface to each voxel.
52. A process as recited in claim 51, wherein, when the widget has a thickness of more than one voxel, the assigning comprises: determining whether a voxel intersects a 3D interface surface; mapping the 2D texture when an intersection occurs comprising: mapping the intersecting voxel to a user interface texture map local surface position; sampling texture of the texture map at the local surface position; and assigning the texture of the sample to the voxel.
53. A process as recited in claim 50, further comprising mapping control inputs of the volumetric display to controls of the representation.
54. A process as recited in claim 42, wherein the producing comprises drawing the interface in three dimensions using three-dimensional drawing primitives.
55. A process, comprising: producing a volumetric display comprising a two-dimensional graphical user interface having three dimensions; and converting the two-dimensional graphical user interface having three dimensions into a two-dimensional representation.
56. A process, comprising: producing a volumetric display; and producing a two-dimensional graphical user interface within the volumetric display and mapping control inputs of the volumetric display to controls of the interface.
57. A process, comprising: producing a volumetric display; and producing a two-dimensional graphical user interface within the volumetric display and positioning the user interface on an outside surface of the display responsive to a user's focus of attention, comprising: mapping a two-dimensional interface representation to voxels in the volumetric display where the widget has a thickness of more than one voxel; assigning 2D texture of the interface to each voxel, the assigning comprising: determining whether a voxel intersects a 3D interface surface; and mapping the 2D texture when an intersection occurs comprising: mapping the intersecting voxel to a user interface texture map local surface position; sampling texture of the texture map at the local surface position; and assigning the texture of the sample to the voxel; and mapping control inputs of the volumetric display to controls of the representation.
58. A process, comprising: producing a volumetric display; and converting an entire two-dimensional desktop workspace into a three-dimensional plane within the volumetric display.
59. An apparatus, comprising: a volumetric display system for displaying a three-dimensional scene; an input system inputting a three-dimensional control input for a user interface; and a computer system receiving the control input for the user interface, mapping a two-dimensional representation of the interface to voxels within the display system and mapping the three-dimensional control input to the two-dimensional representation.
60. An apparatus comprising: a volumetric display system for displaying a three dimensional scene; a two-dimensional display system having a two-dimensional display positioned as a floor at the scene and comprising a two-dimensional graphical user interface; an input system inputting a control input for the user interface; and a computer system receiving the control input, activating a control of the interface and performing a display function within the volumetric display system.
61. A display comprising: a three-dimensional volume displaying a three dimensional scene; and a two-dimensional display having three dimensions and being displayed within the three dimensional volume.
62. A computer readable storage for controlling a computer by a process of generating and displaying a volumetric display and producing a two-dimensional graphical user interface having three dimensions within the volumetric display.
63. A method, comprising: producing and displaying a three-dimensional scene in a volumetric display; and producing and displaying a volumetric interface element in the display.
64. A method as recited in claim 63, wherein the interface element comprises display faces arranged in separate non-adjoining locations within the volumetric display, each face having a viewing range, where the viewing ranges of the faces cover the viewpoints of users around the volumetric display.
65. A method as recited in claim 64, wherein viewpoints are allocated into viewing clusters and a face is oriented along an average angle between the viewpoints in a cluster.
66. A method as recited in claim 63, wherein the volumetric element comprises display faces corresponding one-to-one with users and oriented to face corresponding users.
67. A method as recited in claim 66, further comprising positioning the faces to eliminate occlusion by neighboring faces.
68. A method as recited in claim 67, wherein each face has a normal to a surface of the volumetric display and the positioning comprises: determining a normal to a surface of the volumetric display corresponding to each user location; placing the faces in a center of the volumetric display along their corresponding normals; determining a centroid of the faces; orienting the faces orthogonal to their corresponding normals along radials from the centroid to the corresponding user locations; and moving occluding faces along their corresponding normals until occlusion is eliminated.
69. A method as recited in claim 63, wherein the volumetric element comprises an omni-directional element.
70. A method as recited in claim 69, wherein an omni-directional element comprises a multisided element with contents replicated on each side.
71. A method as recited in claim 70, wherein the multisided element comprises one of a cube, a globe and a hexagonal solid, the multisided element having faces each with a viewing range and the viewing ranges of the faces covering the viewpoints of users around the volumetric display.
72. A method as recited in claim 63, wherein the volumetric element comprises a rotating face rotating into view of the users.
73. A method as recited in claim 72, wherein the rotating face comprises one of a face revolving continuously through 360 degrees and a face rocking back and forth through a predetermined number of degrees.
74. A method as recited in claim 73, further comprising orienting the rotating face toward a user selecting the volumetric element.
75. A method as recited in claim 73, wherein the volumetric element further comprises a stationary control associated with the rotating face.
76. A method, comprising: producing and displaying a three-dimensional scene in a volumetric display; producing and displaying an omni-viewable widget in the display, where the omni-viewable widget comprises display faces arranged in separate non-adjoining locations within the volumetric display, corresponding one-to-one with users and oriented to face corresponding users, each face having a viewing range where the viewing ranges of the faces cover the viewpoints of users around the volumetric display; and positioning the faces to eliminate occlusion by neighboring faces, where each face has a normal to a surface of the volumetric display and the positioning comprises: determining a normal to a surface of the volumetric display corresponding to each user location; placing the faces in a center of the volumetric display along their corresponding normals; determining a centroid of the faces; orienting the faces orthogonal to their corresponding normals along radials from the centroid to the corresponding user locations; and moving occluding faces along their corresponding normals until occlusion is eliminated.
77. A computer readable storage controlling a computer by producing a three-dimensional scene in a volumetric display, and producing a volumetric interface element in the display.
78. An apparatus, comprising: a volumetric display system displaying a three-dimensional scene; and a computer creating the scene and providing a volumetric widget displayed by said display system.
79. A method, comprising: producing and displaying a scene in a display; and producing and displaying an omni-viewable widget in the display.
80. A method, comprising: producing and displaying a scene in a display; and modifying a widget to increase a number of viewpoints for the widget with the widget being viewable and operable omni-directionally and displaying a modified widget in the display.
81. A display, comprising: scene content; and an omni-viewable widget.
82. A method, comprising: rotating a volume rotational controller for a three dimensional display responsive to user rotational inputs; and rotating the display contents responsive to rotation of the controller.
83. A method as recited in claim 82, wherein the rotational inputs comprise a rotational force.
84. A method as recited in claim 82, wherein the rotational inputs comprise inputs indicating actual rotation of an enclosure.
85. A method as recited in claim 82, wherein the rotation of the display contents is responsive to a time value.
86. A method as recited in claim 82, wherein a display apparatus rotates with an enclosure.
87. A method as recited in claim 82, wherein the display contents are virtually rotated.
88. A method as recited in claim 87, wherein the rotation has at least two degrees of freedom.
89. A method as recited in claim 87, wherein a gain is applied to the virtual rotation.
90. A method as recited in claim 89, wherein the gain is one of positive and negative.
91. A method as recited in claim 89, wherein the gain is one of less than one and greater than one.
92. A method as recited in claim 82, wherein the display is divided into two or more segments and the segments each rotate with a different gain responsive to the rotational inputs.
93. A method as recited in claim 82, wherein the display includes scene objects and widgets and further comprising rotating the widgets differently than the scene objects.
94. A method as recited in claim 93, wherein the widgets remain stationary with respect to the user as the scene objects rotate.
95. A computer readable storage controlling a computer with a process of determining rotation of a rotatable enclosure and rotating volumetric display contents responsive to the rotation.
96. An apparatus, comprising: a rotatable three-dimensional display; and a mechanism rotating display contents of the display as the enclosure is rotated.
97. An apparatus, comprising: a display having rotatable display contents; and an interface controlling the rotation of the display contents with the interface being omni-directionally accessible.
98. An apparatus as recited in claim 97, wherein the interface is omni-directionally controllable.
99. A method, comprising: determining movement of a volume movement controller; and moving display contents of a volumetric display responsive to the movement.
100. A method, comprising: rotating a volumetric display, and rotating contents of the display in correspondence to the display.
101. A volumetric display method, comprising: producing a three-dimensional scene in a volumetric display; and producing a three-dimensional pointer for an object in the display.
102. A method as recited in claim 101, wherein the pointer comprises a ray.
103. A method as recited in claim 102, further comprising designating the object for selection when the ray intersects the object.
104. A method as recited in claim 103, wherein the object designated for selection is one of a first object the ray intersects, a last object the ray intersects and one of several objects intersected by the ray as designated by the user.
105. A method as recited in claim 102, further comprising indicating the object for selection when the object is within a region of influence of the ray.
106. A method as recited in claim 105, further comprising allowing the user to sweep out a volume in the display using the region of influence.
107. A method as recited in claim 106, further comprising changing the display in the swept volume.
108. A method as recited in claim 105, further comprising making the region of influence semitransparent.
109. A method as recited in claim 105, wherein the region of influence is defined by a segmented wand having segments each having a different cursor function.
110. A method as recited in claim 105, wherein the region of influence is defined by one of a ray area, a ball, a bead, a wand, a size adjustable bead, a ring, a cone, a cylinder, a rectangle, and a volume geometry.
111. A method as recited in claim 102, wherein the ray corresponds to a vector associated with an input apparatus.
112. A method as recited in claim 102, wherein the ray points coincident to a vector associated with an input apparatus.
113. A method as recited in claim 102, wherein the ray corresponds to an orthogonal to a plane tangent to a display surface at a user designated surface point.
114. A method as recited in claim 102, wherein the ray corresponds to an orthogonal to a reference plane intersecting a user designated display surface point.
115. A method as recited in claim 114, further comprising indicating the display surface point with a stylus.
116. A method as recited in claim 102, further comprising specifying a three-dimensional point along the ray as a cursor that can be used to activate a function.
117. A method as recited in claim 116, wherein the point is specified by one of a user adjustable point along the ray, a fixed position relative to a pointing apparatus, an intersection with another ray and an intersection with a plane.
118. A method as recited in claim 102, further comprising rotating an object intersected by the ray about the ray as an input device controlling the ray is rotated.
119. A method as recited in claim 101, wherein the pointer comprises a point cursor.
120. A method as recited in claim 119, wherein the point cursor is always produced in the display.
121. A method as recited in claim 119, wherein the point cursor is semitransparent.
122. A method as recited in claim 119, further comprising producing orthogonal guidelines intersecting the point cursor in the display.
123. A method as recited in claim 101, wherein the pointer comprises a volume cursor.
124. A method as recited in claim 123, wherein the volume cursor has a semitransparent surface.
125. A method as recited in claim 101, wherein the pointer comprises a display surface cursor.
126. A method as recited in claim 125, further comprising positioning the cursor at a point on the surface of the display closest to an input device held in association with the display.
127. A method as recited in claim 101, wherein the pointer comprises a display spanning plane.
128. A method as recited in claim 127, further comprising performing a function with respect to each object intersected by the plane.
129. A volumetric display method, comprising: producing a three-dimensional scene in a volumetric display; and producing a three-dimensional pointer in the display, comprising: determining a device position of an input device moving in two dimensions in association with the display; determining a position of a surface cursor on a surface of the display responsive to the device position; and producing a ray in the display projected from the position of the surface cursor and orthogonal to the surface at the position; and selecting a first object intersected by the ray.
130. A volumetric display method, comprising: producing a three-dimensional scene in a volumetric display; and producing a three-dimensional pointer in the display, comprising: determining a device position of an input device moving in two dimensions in association with the display; and determining a position of a surface cursor on a surface of the display responsive to the device position; and tumbling the scene responsive to the position.
131. A volumetric display method, comprising: producing a three-dimensional scene in a volumetric display; and producing a three dimensional pointer in the display, comprising: determining a device position and an orientation vector of an input device positioned in association with the display; determining an intersection point of the vector with the display; determining a length of the pointer from a length setting; and producing a ray in the display projected from the intersection point along the vector to an end point with the end point being located along the vector from the device responsive to the length.
132. A computer readable storage controlling a computer by producing a three-dimensional scene in a volumetric display and producing a three-dimensional pointer for an object in the display.
133. A display apparatus, comprising: a volumetric display system; an input device inputting position coordinates; and a computer producing a three-dimensional scene displayed by the display system and producing a three-dimensional volumetric pointer positioned in the scene responsive to the position coordinates.
134. A display apparatus as recited in claim 133, wherein the input device comprises one of a stylus/digitizing surface adjacent a display enclosure, a stylus/digitizing surface on the display enclosure and a three-dimensional input device, and the three-dimensional pointer comprises one of a display surface restricted cursor, a floating point cursor, a floating volume cursor, a ray cursor, a ray cursor having a region of influence and a plane cursor.
135. A display, comprising: a volumetric display scene; and a volumetric pointer within the scene.
136. A method, comprising: simulating a line segment in three dimensional space; moving the simulated line segment in three dimensional space by manipulating an input device; and displaying in a volumetric display an intersection of the simulated line segment with the volumetric display.
PCT/US2003/002341 2002-01-25 2003-01-27 Three dimensional volumetric display input and output configurations WO2003083822A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2003214910A AU2003214910A1 (en) 2002-01-25 2003-01-27 Three dimensional volumetric display input and output configurations

Applications Claiming Priority (14)

Application Number Priority Date Filing Date Title
US35095202P 2002-01-25 2002-01-25
US60/350,952 2002-01-25
US10/183,945 US7554541B2 (en) 2002-06-28 2002-06-28 Widgets displayed and operable on a surface of a volumetric display enclosure
US10/183,970 2002-06-28
US10/183,944 US7324085B2 (en) 2002-01-25 2002-06-28 Techniques for pointing to locations within a volumetric display
US10/183,944 2002-06-28
US10/183,968 US7205991B2 (en) 2002-01-25 2002-06-28 Graphical user interface widgets viewable and readable from multiple viewpoints in a volumetric display
US10/183,966 US7839400B2 (en) 2002-01-25 2002-06-28 Volume management system for volumetric displays
US10/183,945 2002-06-28
US10/188,765 US7138997B2 (en) 2002-06-28 2002-06-28 System for physical rotation of volumetric display enclosures to facilitate viewing
US10/183,970 US6753847B2 (en) 2002-01-25 2002-06-28 Three dimensional volumetric display input and output configurations
US10/188,765 2002-06-28
US10/183,968 2002-06-28
US10/183,966 2002-06-28

Publications (1)

Publication Number Publication Date
WO2003083822A1 true WO2003083822A1 (en) 2003-10-09

Family

ID=28679225

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2003/002341 WO2003083822A1 (en) 2002-01-25 2003-01-27 Three dimensional volumetric display input and output configurations

Country Status (2)

Country Link
AU (1) AU2003214910A1 (en)
WO (1) WO2003083822A1 (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4134104A (en) * 1976-08-25 1979-01-09 Ernest Karras Device for displaying data in three dimensions
US4160973A (en) * 1977-10-11 1979-07-10 Massachusetts Institute Of Technology Three-dimensional display
US5805137A (en) * 1991-11-26 1998-09-08 Itu Research, Inc. Touch sensitive input control device
US6208318B1 (en) * 1993-06-24 2001-03-27 Raytheon Company System and method for high resolution volume display using a planar array
US5717415A (en) * 1994-02-01 1998-02-10 Sanyo Electric Co., Ltd. Display system with 2D/3D image conversion where left and right eye images have a delay and luminance difference base upon a horizontal component of a motion vector
US6064423A (en) * 1998-02-12 2000-05-16 Geng; Zheng Jason Method and apparatus for high resolution three dimensional display
US20020008676A1 (en) * 2000-06-01 2002-01-24 Minolta Co., Ltd. Three-dimensional image display apparatus, three-dimensional image display method and data file format

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006022912A1 (en) * 2004-08-02 2006-03-02 Actuality Systems, Inc. Method for pointing and selection of regions in 3-d image displays
EP1821182A1 (en) * 2004-10-12 2007-08-22 Nippon Telegraph and Telephone Corporation 3d pointing method, 3d display control method, 3d pointing device, 3d display control device, 3d pointing program, and 3d display control program
EP1821182A4 (en) * 2004-10-12 2011-02-23 Nippon Telegraph & Telephone 3d pointing method, 3d display control method, 3d pointing device, 3d display control device, 3d pointing program, and 3d display control program
US9418470B2 (en) 2007-10-26 2016-08-16 Koninklijke Philips N.V. Method and system for selecting the viewing configuration of a rendered figure
US8913009B2 (en) 2010-02-03 2014-12-16 Nintendo Co., Ltd. Spatially-correlated multi-display human-machine interface
US9776083B2 (en) 2010-02-03 2017-10-03 Nintendo Co., Ltd. Spatially-correlated multi-display human-machine interface
US9358457B2 (en) 2010-02-03 2016-06-07 Nintendo Co., Ltd. Game system, controller device, and game method
US8961305B2 (en) 2010-02-03 2015-02-24 Nintendo Co., Ltd. Game system, controller device and game method
US8896534B2 (en) 2010-02-03 2014-11-25 Nintendo Co., Ltd. Spatially-correlated multi-display human-machine interface
US9199168B2 (en) 2010-08-06 2015-12-01 Nintendo Co., Ltd. Game system, game apparatus, storage medium having game program stored therein, and game process method
US8337308B2 (en) 2010-08-20 2012-12-25 Nintendo Co., Ltd. Game system, game device, storage medium storing game program, and game process method
US8690675B2 (en) 2010-08-20 2014-04-08 Nintendo Co., Ltd. Game system, game device, storage medium storing game program, and game process method
US10150033B2 (en) 2010-08-20 2018-12-11 Nintendo Co., Ltd. Position calculation system, position calculation device, storage medium storing position calculation program, and position calculation method
EP2422854A3 (en) * 2010-08-20 2012-08-22 Nintendo Co., Ltd. Game system, game device, storage medium storing game program, and game process method
US8956209B2 (en) 2010-08-30 2015-02-17 Nintendo Co., Ltd. Game system, game apparatus, storage medium having game program stored therein, and game process method
US9132347B2 (en) 2010-08-30 2015-09-15 Nintendo Co., Ltd. Game system, game apparatus, storage medium having game program stored therein, and game process method
US9272207B2 (en) 2010-11-01 2016-03-01 Nintendo Co., Ltd. Controller device and controller system
US9889384B2 (en) 2010-11-01 2018-02-13 Nintendo Co., Ltd. Controller device and controller system
US8845426B2 (en) 2011-04-07 2014-09-30 Nintendo Co., Ltd. Input system, information processing device, storage medium storing information processing program, and three-dimensional position calculation method
US8923686B2 (en) 2011-05-20 2014-12-30 Echostar Technologies L.L.C. Dynamically configurable 3D display
DE102012010799B4 (en) * 2012-06-01 2015-03-05 Jürgen Kersting Method for the spatial visualization of virtual objects
WO2013178358A1 (en) * 2012-06-01 2013-12-05 Kersting Juergen Method for spatially visualising virtual objects
EP2940995A1 (en) * 2014-04-29 2015-11-04 Satavision OY Virtual vitrine
US9781411B2 (en) 2015-09-15 2017-10-03 Looking Glass Factory, Inc. Laser-etched 3D volumetric display
US10104369B2 (en) 2015-09-15 2018-10-16 Looking Glass Factory, Inc. Printed plane 3D volumetric display
US10110884B2 (en) 2015-09-15 2018-10-23 Looking Glass Factory, Inc. Enhanced 3D volumetric display
WO2017048891A1 (en) * 2015-09-15 2017-03-23 Looking Glass Factory, Inc. Laser-etched 3d volumetric display
EP3826741A4 (en) * 2018-07-25 2022-08-10 Light Field Lab, Inc. Light field display system based amusement park attraction
US11452945B2 (en) 2018-07-25 2022-09-27 Light Field Lab, Inc. Light field display system based amusement park attraction
US11938410B2 (en) 2018-07-25 2024-03-26 Light Field Lab, Inc. Light field display system based amusement park attraction
US10943388B1 (en) 2019-09-06 2021-03-09 Zspace, Inc. Intelligent stylus beam and assisted probabilistic input to element mapping in 2D and 3D graphical user interfaces
US11645809B2 (en) 2019-09-06 2023-05-09 Zspace, Inc. Intelligent stylus beam and assisted probabilistic input to element mapping in 2D and 3D graphical user interfaces
CN117611781A (en) * 2024-01-23 2024-02-27 埃洛克航空科技(北京)有限公司 Flattening method and device for live-action three-dimensional model
CN117611781B (en) * 2024-01-23 2024-04-26 埃洛克航空科技(北京)有限公司 Flattening method and device for live-action three-dimensional model

Also Published As

Publication number Publication date
AU2003214910A1 (en) 2003-10-13

Similar Documents

Publication Publication Date Title
US7528823B2 (en) Techniques for pointing to locations within a volumetric display
WO2003083822A1 (en) Three dimensional volumetric display input and output configurations
US20220084279A1 (en) Methods for manipulating objects in an environment
US7986318B2 (en) Volume management system for volumetric displays
US6753847B2 (en) Three dimensional volumetric display input and output configurations
US10852913B2 (en) Remote hover touch system and method
Grossman et al. Multi-finger gestural interaction with 3d volumetric displays
US7554541B2 (en) Widgets displayed and operable on a surface of a volumetric display enclosure
Deering HoloSketch: a virtual reality sketching/animation tool
US5583977A (en) Object-oriented curve manipulation system
US8836646B1 (en) Methods and apparatus for simultaneous user inputs for three-dimensional animation
US8643569B2 (en) Tools for use within a three dimensional scene
Balakrishnan et al. User interfaces for volumetric displays
US20200004403A1 (en) Interaction strength using virtual objects for machine control
WO2017054004A1 (en) Systems and methods for data visualization using tree-dimensional displays
Telkenaroglu et al. Dual-finger 3d interaction techniques for mobile devices
Stevenson et al. An inflatable hemispherical multi-touch display
WO1995011482A1 (en) Object-oriented surface manipulation system
Olwal et al. Unit-A Modular Framework for Interaction Technique Design, Development and Implementation
US12099660B2 (en) User-defined virtual interaction space and manipulation of virtual cameras in the interaction space
WO1995011480A1 (en) Object-oriented graphic manipulation system
JP2007140186A (en) Three-dimensional pointing method, three-dimensional pointing device, and three-dimensional pointing program
George User Interfaces for Volumetric Displays
vanDam Three-Dimensional User Interfaces for Immersive Virtual Reality
Albuquerque et al. 3D modelling with vision based interaction in semi-immersive enviroments

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SK SL TJ TM TN TR TT TZ UA UG UZ VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP