EP1325460A1 - Virtual showcases - Google Patents
Virtual showcases
- Publication number
- EP1325460A1 (application EP01963916A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- producing
- image space
- space
- image
- virtual
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
Definitions
- the present application contains the discussion of the use of mirrors with virtual tables from PCT/US99/28930 and the discussion of the optical properties of the extended virtual table from PCT/US01/18327.
- the new material in the present application begins with the section titled Using virtual reality systems and mirrors to build virtual showcases.
- the invention relates generally to virtual and augmented environments and more specifically to the application of mirror beam-splitters as optical combiners in combination with projection systems that are used to produce virtual environments.
- PGC Pepper's Ghost Configuration
- PGCs are still used in entertainment and theme parks, such as the Haunted Mansion at Disney World, to present special effects to the audience. Some of those systems reflect large projection screens that display prerecorded 2D videos or still images instead of real off-stage areas.
- a major limitation of PGCs is that they force the audience to observe the scene from predefined viewing areas, and consequently, the viewers' parallax motion is very restricted. Another limitation is that PGCs make no provision for viewing the scene from different perspectives.
- RIS Reach-In Systems
- Reach-In Systems [7,11,12,16] are desktop configurations that normally consist of an upside-down CRT screen which is reflected by a small horizontal mirror.
- these systems present stereoscopic 3D graphics to a single user who is able to reach into the presented visual space by directly interacting below the mirror.
- Such systems are used to overlay the visual space over the interaction space, whereby the interaction space can contain haptic information rendered by a force-feedback device.
- RISs apply full mirrors [11,16].
- RISs have only one correct perspective.
- Real Image Displays [3,5,8,9,10,13,14,17] are display systems that consist of single or multiple concave mirrors. Two types of images exist in nature: real and virtual. A real image is one in which light rays actually come from the image. In a virtual image, they appear to come from the reflected image - but do not. In the case of planar or convex mirrors, the virtual image of an object is behind the mirror surface, but light rays do not emanate from there. In contrast, concave mirrors can form reflections in front of the mirror surface where emerging light rays cross - so-called "real images".
- RIDs are commercially available (e.g. [4]) and are mainly employed by the advertising and entertainment industries.
- a projection screen (such as a CRT screen, etc.) can be reflected instead, resulting in a free-floating two-dimensional image, in front of the mirror optics, of what is displayed on the screen (some refer to these systems as "pseudo 3D displays", since the free-floating 2D image has an enhanced 3D quality).
- a RID is used to display prerecorded video images.
- a limitation of RIDs is that if a real object is located within the same space as the real image formed by a RID (i.e. in front of the mirror surface), the object occludes the mirror optics and consequently the reflected image.
- RIDs suffer from occlusion problems like those encountered with regular projection screens. Additionally, RIDs are not able to dynamically display different view-dependent perspectives of the presented scene.
- Varifocal Mirror Systems [6,8,9] apply flexible mirrors.
- the mirror optics is set in vibration by a rear-assembled loudspeaker [6].
- Other approaches utilize a vacuum source to manually deform the mirror optics on demand to change its focal length [8,9].
- Vibrating devices for instance, are synchronized with the refresh-rate of a display system that is reflected by the mirror.
- the spatial appearance of a reflected pixel can be exactly controlled - yielding images of pixels that are displayed approximately at their correct depth (i.e. no stereo-separation is required).
- Due to the flexibility of VMS their mirrors can dynamically deform to a concave, planar, or convex shape (generating real or virtual images).
- VMS are, however, not suitable for optical see-through tasks, since the space behind the mirrors is occupied by the deformation hardware (i.e. loudspeakers or vacuum pumps).
- concavely-shaped VMS face the same problems as RIDs. Therefore, only full mirrors are applied in combination with such systems.
- a transformation of the graphics is required before they are displayed.
- the transformation ensures that the graphics are not perceived by the viewer as being mirrored or distorted.
- the transformation is trivial (e.g. a simple mirror-transformation of the frame-buffer content [7] or of the world-coordinate-axes [11,12,16]).
- PCT/US99/28930 and PCT/US01/18327 describe several systems that utilize single planar mirrors as optical combiners in combination with rear-projection systems [1,2].
- scene transformations are disclosed that support non-static mirror-screen alignments and view-dependent rendering for single users.
- the work disclosed herein extends the techniques used in these systems to multiple planar or curved mirror surfaces and presents real-time rendering methods and image deformation techniques for such surfaces.
- the invention is apparatus for producing an image space.
- the apparatus comprises apparatus for producing an object space, a convex reflective surface that has a position relative to the object space such that there is a reflection of the object space in the reflective surface, and a tracker that tracks the position of the head of a person who is looking into the convex reflective surface.
- the apparatus for producing the object space receives the position information from the tracker, uses the position information to determine the person's field of view in the reflective surface, and produces the object space such that the image space appears in the field of view.
- the image space does not appear distorted to the person who is viewing the convex reflective surface. The reflective surface may be either curved or made up of a number of planar reflective surfaces.
- the curved surface may be a cone and the planar reflective surfaces may form a pyramid.
- the object space may be either above or below the reflective surface.
- the image space seen in a plurality of the mirrors may be the same, or the image space seen in each mirror may be different.
- the mirrors may further transmit light as well as reflect it and a real object that is part of the image space may be viewed through the mirrors.
- the object space may then be used to produce images that augment the real object in the image space.
- An important use of the apparatus is to make virtual showcases, in which real objects positioned inside the convex reflective surface may be augmented by material produced in the object space.
- aspects of the invention include a method for producing the object space that compensates for distortion caused by the apparatus for producing the object space and by refraction in the mirrors, and a method of transforming an image space to produce a planar image space such that, when a reflection of the object space in a curved reflective surface is seen from a given point of view, the reflection contains the image space.
- Fig. 1 Conceptual sketch and photograph of the xVT prototype
- Fig. 2 A large coherent virtual content viewed in the mirror, or on the projection plane
- Fig. 3 Real objects behind the mirror are illuminated and augmented with virtual objects.
- Fig. 4 The developed Virtual Showcase Prototypes. A Virtual Showcase built from planar sections (right), and a curved Virtual Showcase (left).
- Fig. 5 Reflections of individual images rendered within the object space for each front-facing mirror plane merge into a single consistent image space.
- Fig. 6 The truncated pyramid-like Virtual Showcase.
- Fig. 7 Transformations with curved mirrors.
- Fig. 8 Sampled distorted grid and pre-distorted grid after projection and re-sampling
- Fig. 9 Bilinear interpolation within an undistorted/pre-distorted grid cell
- Fig. 10 Precise refraction method and refraction approximation.
- Fig. 11 Overview of rendering (no fill) and image deformation (gray fill) steps, expressed as pipeline.
- Fig. 12 Example of steps 1102, 1104, and 1106 of Fig. 11
- Fig. 13 Overview of an implementation of the invention in a virtual table
- Fig. 14 Optics of the foil used in the reflective pad
- Fig. 15 The angle of the transparent pad relative to the virtual table surface determines whether it is transparent or reflective;
- Fig. 16 The transparent pad can be used in reflective mode to examine a portion of the virtual environment that is otherwise not visible to the user;
- Fig. 17 How the portion of the virtual environment that is reflected in a mirror is determined;
- Fig. 18 How ray pointing devices may be used with a mirror to manipulate a virtual environment reflected in the mirror;
- Fig. 19 Overview of virtual reality system program 109;
- Fig. 20 A portion of the technique used to determine whether the pad is operating in transparent or reflective mode;
- Fig. 21 A transflective panel may be used with a virtual environment to produce reflections of virtual objects that appear to belong to a physical space;
- Fig. 22 How the transflective panel may be used to prevent a virtual object from being occluded by a physical object;
- Fig. 23 How the transflective panel may be used to augment a physical object with a virtual object.
- Fig. 24 The truncated cone-like Virtual Showcase.
- Fig. 25 Compensating for projection distortion.
- Fig. 26 Compensating for refraction.
- Fig. 27 Virtual Showcase configuration: upside-down.
- Fig. 28 Virtual Showcase configuration: individual screens.
- Reference numbers in the drawing have three or more digits: the two right-hand digits are reference numbers in the drawing indicated by the remaining digits. Thus, an item with the reference number 203 first appears as item 203 in FIG. 2.
- the virtual environment used in a virtual showcase may be provided using a system of the type shown in FIG. 13.
- Processor 1303 is executing a virtual reality system program 1309 that creates stereoscopic images of a virtual environment.
- the stereoscopic images are back-projected onto virtual table 1311.
- a user of virtual table 1311 views the images through LCD shutter glasses 1317. When so viewed, the images appear to the user as a three-dimensional virtual environment.
- Shutter glasses 1321 have a magnetic tracker attached to them which tracks the position and orientation of the shutter glasses, and by that means, the position and orientation of the user's eyes. Any other kind of 6DOF tracker could be used as well.
- the position and orientation are input (1315) to processing unit 1305 and virtual reality system program 1309 uses the position and orientation information to determine the point of view and viewing direction from which the user is viewing the virtual environment. It then uses the point of view and viewing direction to produce stereoscopic images of the virtual reality that show the virtual reality as it would be seen from the point of view and viewing direction indicated by the position and orientation information.
- a preferred embodiment of system 1301 uses the Baron Virtual Table produced by the Barco Group as its display device.
- This device offers a 53"x40" display screen built into a table surface.
- the display is produced by an Indigo2™ Maximum Impact workstation manufactured by Silicon Graphics, Incorporated.
- the shutter glasses in the preferred embodiment are equipped with 6DOF (six degrees of freedom) Flock of Birds® trackers made by Ascension Technology Corporation for position and orientation tracking.
- virtual reality system program 1309 is based on the Studierstube software framework described in D. Schmalstieg, A. Fuhrmann, Z. Szalavari, M. Gervautz: "Studierstube" - An Environment for Collaboration in Augmented Reality. Extended abstract appeared in Proc. of Collaborative Virtual Environments '96, Nottingham, UK, Sep. 19-20, 1996. Full paper in: Virtual Reality - Systems, Development and Applications, Vol. 3, No. 1, pp. 37-49, 1998. Studierstube is realized as a collection of C++ classes that extend the Open Inventor toolkit, described in P. Strauss and R. Carey: An Object Oriented 3D Graphics Toolkit. Proceedings of SIGGRAPH '92, (2):341-347, 1992.
- Open Inventor's rich graphical environment approach allows rapid prototyping of new interaction styles, typically in the form of Open Inventor node kits.
- Tracker data is delivered to the application via an engine class, which forks a lightweight thread to decouple graphics and I/O.
- Off-axis stereo rendering on the VT is performed by a special custom viewer class.
- Studierstube extends Open Inventor's event system to process 3D (i.e., true 6DOF) events, which is necessary for choreographing complex 3D interactions like the ones described in this paper.
- the .iv file format, which includes our custom classes, allows convenient scripting of most of an application's properties, in particular the scene's geometry. Consequently, very little application-specific C++ code, mostly in the form of event callbacks, was necessary.
- Window tools: The rendering of window tools generally follows the method proposed in J. Viega, M. Conway, G. Williams, and R. Pausch: 3D Magic Lenses. In Proceedings of ACM UIST'96, pages 51-58. ACM, 1996, except that it uses hardware stencil planes. After a preparation step, rendering of the world "behind the window" is performed inside the stencil mask created in the previous step, with a clipping plane coincident with the window polygon. Before rendering of the remaining scene proceeds, the window polygon is rendered again, but only the Z-buffer is modified. This step prevents geometric primitives of the remaining scene from protruding into the window. For a more detailed explanation, see D. Schmalstieg, G. Schaufler: Sewing Virtual Worlds Together With SEAMS: A Mechanism to Construct Large Scale Virtual Environments. Technical Report TR-186-2-87-11, Vienna University of Technology, 1998.
- the mirror tool is a special application of a general technique for using real mirrors to view portions of a virtual environment that would otherwise not be visible to the user from the user's current viewpoint and to permit more than one user to view a portion of a virtual environment simultaneously.
- the general technique will be explained in detail later on.
- When transparent pad 1323 is being used as a mirror tool, it is made reflective instead of transparent. One way of doing this is to use a material which can change from a transparent mode to a reflective mode and vice-versa. Another, simpler way is to apply a special foil that is normally utilized as view protection for windows (such as Scotchtint P-18, manufactured by Minnesota Mining and Manufacturing Company) to one side of transparent pad 1323. These foils either reflect or transmit light, depending on which side of the foil the light source is on, as shown in FIG. 14.
- At 1401 is shown how foil 1409 is transparent when light source 1405 is behind foil 1409 relative to the position 1407 of the viewer's eye, so that the viewer sees object 1411 behind foil 1409.
- At 1406 is shown how foil 1409 is reflective when light source 1405 is on the same side of foil 1409 as position 1407 of the viewer's eye, so that the viewer sees the reflection 1415 of object 1413 in foil 1409, but does not see object 1411.
- When a transparent pad 1323 with foil 1409 applied to one side is used to view a virtual environment, the light from the virtual environment is the light source. Whether transparent pad 1323 is reflective or transparent depends on the angle at which the user holds transparent pad 1323 relative to the virtual environment. How this works is shown in FIG. 15. The transparent mode is shown at 1501. There, transparent pad 1323 is held at an angle relative to the surface 1311 of the virtual table which defines plane 1505. Light from table surface 1311 which originates to the left of plane 1505 will be transmitted by pad 1323; light which originates to the right of plane 1505 will be reflected by pad 1323.
- The relationship between plane 1505, the user's physical eye 1407, and surface 1311 of the virtual table (the light source) is such that only light which is transmitted by pad 1323 can reach physical eye 1407; any light reflected by pad 1323 will not reach physical eye 1407. What the user sees through pad 1323 is thus the area of surface 1311 behind pad 1323.
- the reflective mode is shown at 1503; here, pad 1323 defines plane 1507.
- light from surface 1311 which originates to the left of plane 1507 will be transmitted by pad 1323; light which originates to the right of plane 1507 will be reflected.
- the angle between plane 1507, the user's physical eye 1407, and surface 1311 is such that only light from surface 1311 which is reflected by pad 1323 will reach eye 1407.
- While pad 1323 is reflecting, physical eye 1407 will not be able to see anything behind pad 1323 in the virtual environment.
- When pad 1323 is held at an angle to surface 1311 such that it reflects the light from the surface, it behaves relative to the virtual environment being produced on surface 1311 in exactly the same way as a mirror behaves relative to a real environment: if a mirror is held in the proper position relative to a real environment, one can look into the mirror to see things that are not otherwise visible from one's present point of view.
- This behavior 1601 relative to the virtual environment is shown in FIG. 16.
- virtual table 1607 is displaying a virtual environment 1605 showing the framing of a self-propelled barge.
- Pad 1323 is held at an angle such that it operates as a mirror and at a position such that what it would reflect in a real environment would be the back side of the barge shown in virtual environment 1605. As shown at 1603, what the user sees reflected by pad 1323 is the back side of the barge.
- virtual reality system program 1309 tracks the position and orientation of pad 1323 and the position and orientation of shutter glasses 1317. When those positions and orientations indicate that the user is looking at pad 1323 and is holding pad 1323 at an angle relative to table surface 1311 and user eye position 1407 such that pad 1323 is behaving as a mirror, virtual reality system program 1309 determines which portion of table surface 1311 is being reflected by pad 1323 to user eye position 1407 and what part of the virtual environment would be reflected by pad 1323 if the environment was real and displays that part of the virtual environment on the portion of table surface 1311 being reflected by pad 1323. Details of how that is done will be explained later.
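The mode determination described above reduces to a side-of-plane test. The following C++ fragment is an illustrative sketch only, not the patent's code; the names and the exact test are assumptions. Since reflection keeps light on its own side of the pad plane while transmission crosses it, the pad behaves as a mirror for a given point of table surface 1311 exactly when that point and eye position 1407 lie on the same side of the plane defined by the pad.

```cpp
struct Vec3 { double x, y, z; };

// Plane through pad 1323, stored as a*x + b*y + c*z + d = 0 with unit normal.
struct Plane { double a, b, c, d; };

double signedDistance(const Plane& pl, const Vec3& p) {
    return pl.a * p.x + pl.b * p.y + pl.c * p.z + pl.d;
}

// Light originating on the eye's side of the pad plane is reflected back
// toward the eye (mirror mode); light originating on the far side is
// transmitted through the pad (transparent mode).
bool padActsAsMirror(const Plane& padPlane, const Vec3& eye, const Vec3& screenPoint) {
    return signedDistance(padPlane, eye) * signedDistance(padPlane, screenPoint) > 0.0;
}
```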
- pad 1323 can function in both reflective and transparent modes as a magic lens, or, looked at somewhat differently, as a hand-held clipping plane that defines an area of the virtual environment which is viewed in a fashion that is different from the manner in which the rest of the virtual environment is viewed.
- As indicated in the discussion of the mirror tool above, the mirror tool is a special application of a general technique for using mirrors to view virtual environments (FIGs. 17 and 18).
- Head tracking, as achieved for example in the preferred embodiment of system 1301 by attaching a magnetic tracker to shutter glasses 1317, represents one of the most common and most intuitive methods for navigating within immersive or semi-immersive virtual environments.
- Back-screen-projection planes are widely employed in industry and the R&D community in the form of virtual tables or responsive workbenches, virtual walls or powerwalls, or even surround-screen projection systems or CAVEs. Applying head-tracking while working with such devices can, however, lead to an unnatural clipping of objects at the edges of projection plane 1311.
- Standard techniques for overcoming this problem include panning and scaling techniques (triggered by pinch gestures) that reduce the projected scene to a manageable size.
- these techniques do not work well when the viewpoint of the user of the virtual environment is continually changing.
- the method employs a planar mirror to reflect the virtual environment and can be used to increase the perceived viewing volume of the virtual environment and to permit multiple observers to simultaneously gain a perspectively correct impression of the virtual environment.
- the method is based on the fact that a planar mirror enables us to perceive the reflection of stereoscopically projected virtual scenes three-dimensionally.
- the stereo images that are projected onto the portion of surface 1311 that is reflected in the planar mirror must be computed on the basis of the positions of the reflection of the user's eyes in the reflection space (i.e. the space behind the mirror plane).
- the physical eyes perceive the same perspective by looking from the physical space through the mirror plane into the reflection space, as the reflected eyes do by looking from the reflection space through the mirror plane into the physical space.
- Mirror 1703 defines a plane 1705 which divides what a user's physical eye 1713 sees into two spaces: physical space 1709, to which physical eye 1713 and physical projection plane 1717 belong, and reflection space 1707, to which reflection 1711 of physical eye 1713 and reflection 1715 of physical projection plane 1717 appear to belong when reflected in mirror 1703. Because reflection space 1707 and physical space 1709 are symmetrical, the portion of the virtual environment that physical eye 1713 sees in mirror 1703 is the portion of the virtual environment that reflected eye 1711 would see if it were looking through mirror 1703.
- virtual reality system program 1309 need only know the position and orientation of physical eye 1713 and the size and position of mirror 1703. Using this information, virtual reality system program 1309 can determine the position and orientation of reflected eye 1711 in reflected space 1707 and, from that, the portion of physical projection plane 1717 that will be reflected and the point of view which determines the virtual environment to be produced on that portion of physical projection plane 1717.
- mirror plane 1705 is represented as the plane a·x + b·y + c·z + d = 0 with normalized normal vector n = (a, b, c); a physical point p is then mapped to its reflection p' by p' = p - 2(n·p + d)·n.
- the reflections of both eyes have to be determined.
- the positions of the reflected eyes, rather than those of the physical eyes, are used to compute the stereo images.
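A minimal C++ sketch of this reflection step, assuming the plane parameters (a, b, c, d) are normalized; the names are illustrative:

```cpp
struct Vec3 { double x, y, z; };
struct Plane { double a, b, c, d; };   // mirror plane 1705: a*x + b*y + c*z + d = 0

// Reflect a physical point p over the mirror plane (unit normal assumed),
// giving its counterpart p' in reflection space: p' = p - 2*(n.p + d)*n.
Vec3 reflectPoint(const Plane& m, const Vec3& p) {
    double dist = m.a * p.x + m.b * p.y + m.c * p.z + m.d;
    return { p.x - 2.0 * dist * m.a,
             p.y - 2.0 * dist * m.b,
             p.z - 2.0 * dist * m.c };
}
```

Both eye positions are reflected this way, and the reflected positions are handed to the stereo renderer in place of the physical ones.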
- Using transflective tools with virtual environments: FIGs. 21-23
- When the reflecting pad is made using a clear panel and a film such as Scotchtint P-18, it is able not only to alternately transmit and reflect light, but also to do both simultaneously, that is, to operate transflectively.
- a pad with this capability can be used to augment the image of a physical object seen through the clear panel by means of virtual objects produced on projection plane 1311 and reflected by the transflective pad. This will be described with regard to FIG. 21.
- the plane of transflective pad 2117 divides environment 2101 into two subspaces.
- subspace 2107, which contains the viewer's physical eyes 2115 and (at least a large portion of) projection plane 1311, is called 'the projection space' (or PRS).
- subspace 2103, which contains physical object 2119 and additional physical light-sources 2111, is called 'the physical space' (or PHS).
- Also defined in physical space, but not actually present there, is virtual graphical element 2121.
- PHS 2103 is exactly overlaid by reflection space 2104, which is the space that physical eye 2115 sees reflected in mirror 2117.
- the objects that physical eye 2115 sees reflected in mirror 2117 are virtual objects that the virtual environment system produces on projection plane 1311.
- the virtual environment system uses the definition of virtual graphical element 2121 to produce virtual graphical element 2127 at a location and orientation on projection plane 1311 such that when element 2127 is reflected in mirror 2117, the reflection 2122 of virtual graphical element 2127 appears in reflection space 2104 at the location of virtual graphical element 2121. Since mirror 2117 is transflective, physical eye 2115 can see both physical object 2119 through mirror 2117 and virtual graphical element 2127 reflected in mirror 2117 and consequently, reflected graphical element 2122 appears to physical eye 2115 to overlay physical object 2119.
- the virtual environment system computes the location and direction of view of reflected eye 2109 from the location and direction of view of physical eye 2115 and the location and orientation of mirror 2117 (as shown by arrow 2113).
- the virtual environment system computes the location of inverse reflected virtual graphical element 2127 in projection space 2107 from the location and point of view of reflected eye 2109, the location and orientation of mirror 2117, and the definition of virtual graphical element 2121, as shown by arrow 2123.
- the definition of virtual graphical element 2121 will be relative to the position and orientation of physical object 2119.
- the virtual environment system then produces inverse reflected virtual graphical element 2127 on projection plane 1311, which is then reflected to physical eye 2115 by mirror 2117.
- because reflection space 2104 exactly overlays physical space 2103, the reflection 2122 of virtual graphical element 2127 exactly overlays defined graphical element 2121.
- physical object 2119 has a tracking device and a spoken command is used to indicate to the virtual environment system that the current location and orientation of physical object 2119 are to be registered in the coordinate system of the virtual environment being projected onto projection plane 1311. Since graphical element 2121 is defined relative to physical object 2119, registration of physical object 2119 also defines the location and orientation of graphical element 2121. In other embodiments, of course, physical object 2119 may be continually tracked.
- Transflective mirror 2117 thus solves an important problem of back-projection environments, namely that the presence of physical objects in PRS 2107 occludes the virtual environment produced on projection plane 1311 and thereby destroys the stereoscopic illusion.
- the virtual elements will always overlay the physical objects.
- because reflection space 2104 exactly overlays PHS 2103, the reflected virtual element 2127 will appear at the same position (2122) within the reflection space as virtual element 2121 would occupy within PHS 2103 if virtual element 2121 were real and PHS 2103 were being viewed by physical eye 2115 without mirror 2117.
- FIG. 22 illustrates a simple first example at 2201.
- a virtual sphere 2205 is produced on projection plane 1311. If hand 2203 is held between the viewer's eyes and projection plane 1311, hand 2203 occludes sphere 2205. If transflective mirror 2207 is placed between hand 2203 and the viewer's eyes in the proper position, the virtual environment system will use the position of transflective mirror 2207, the original position of sphere 2205 on projection plane 1311, and the position of the viewer's eyes to produce a new virtual sphere at a position on projection plane 1311 such that when the viewer looks at transflective mirror 2207 the reflection of the new virtual sphere in mirror 2207 appears to the viewer to occupy the same position as the original virtual sphere 2205; however, since mirror 2207 is in front of hand 2203, hand 2203 cannot occlude virtual sphere 2205 and virtual sphere 2205 overlays hand 2203.
- the user can intuitively adjust the ratio between transparency and reflectivity by changing the angle between transflective mirror 2207 and projection plane 1311. While acute angles highlight the virtual augmentation, obtuse angles let the physical objects show through brighter. As for most augmented environments, proper illumination is decisive for good quality. The technique would of course also work with fixed transflective mirrors 2207.
- FIG. 23 shows an example of how a transflective mirror might be used to augment a transmitted image.
- physical object 2119 is a printer 2303.
- Printer 2303's physical cartridge has been removed.
- Graphical element 2123 is a virtual representation 2305 of the printer's cartridge which is produced on projection plane 1311 and reflected in transflective mirror 2207.
- Printer 2303 was registered in the coordinate system of the virtual environment and the virtual environment system computed reflection space 2104 as described above so that it exactly overlays physical space 2103.
- virtual representation 2305 appears to be inside printer 2303 when printer 2303 is viewed through transflective mirror 2207.
- virtual representation 2305 is generated on projection plane 1311 according to the positions of printer 2303, physical eye 2115, and mirror 2117, mirror 2117 can be moved by the user and the virtual cartridge will always appear inside printer 2303.
- Virtual arrow 2307, which shows the direction in which the printer's cartridge must be moved to remove it from printer 2303, is another example of augmentation. Like the virtual cartridge, it is produced on projection plane 1311. Of course, with this technique, anything which can be produced on projection plane 1311 can be used to augment a real object.
- the normal/inverse reflection must be applied to every aspect of graphical element 2127, including vertices, normals, clipping planes, textures, light sources, etc., as well as to the physical eye position and virtual head-lights. Since these elements are usually difficult to access, hidden below some internal data-structure (generation-functions, scene-graphs, etc.), and an iterative transformation would be too time-intensive, we can express the reflection as a 4x4 transformation matrix. Note that this complex transformation cannot be approximated with an accumulation of basic transformations (such as translation, rotation and scaling).
- every graphical element will be reflected with respect to the mirror-plane.
- a side-effect of this is that the order of polygons will also be reversed (e.g. from counterclockwise to clockwise), which, due to the wrong front-face determination, results in wrong rendering (e.g. lighting, culling, etc.). This can easily be solved by explicitly reversing the polygon order.
- Any complex graphical element (normals, material properties, textures, text, clipping planes, light sources, etc.) is reflected by applying the reflection matrix, as sketched below.
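The patent's accompanying pseudo-code is not reproduced in this text. As a stand-in, the following OpenGL/C++ fragment is a minimal sketch of the idea, not the patent's implementation: it builds the 4x4 reflection matrix for the mirror plane ax + by + cz + d = 0 (normal assumed normalized), multiplies it onto the model-view stack so that every subsequently rendered element is reflected, and explicitly reverses the front-face definition to compensate for the reversed polygon order. The function name is illustrative.

```cpp
#include <GL/gl.h>

// Apply the reflection over the mirror plane a*x + b*y + c*z + d = 0
// (unit-length normal assumed) to everything rendered afterwards.
void applyMirrorReflection(double a, double b, double c, double d) {
    // Column-major 4x4 reflection matrix: linear part I - 2*n*n^T,
    // translation part -2*d*n in the fourth column.
    GLdouble m[16] = {
        1 - 2*a*a, -2*a*b,    -2*a*c,    0,
        -2*a*b,    1 - 2*b*b, -2*b*c,    0,
        -2*a*c,    -2*b*c,    1 - 2*c*c, 0,
        -2*a*d,    -2*b*d,    -2*c*d,    1
    };
    glMatrixMode(GL_MODELVIEW);
    glMultMatrixd(m);
    // The reflection reverses polygon winding; flip the front-face
    // convention (assuming counterclockwise was the previous setting).
    glFrontFace(GL_CW);
}
```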
- Virtual reality system program 1309 in system 1301 is able to deal with inputs of the user's eye positions and orientations together with position and orientation inputs from transparent pad 1323 and the pen.
- FIG. 19 provides an overview of major components of program 1309 and their interaction with each other.
- the information needed to produce a virtual environment is contained in virtual environment description 1933 in memory 1307.
- virtual environment generator 1943 reads data from virtual environment description 1933 and makes stereoscopic images from it. Those images are output via 1313 for back projection on table surface 1311.
- Pad image 1325 and pen image 1327 are part of the virtual environment, as is the portion of the virtual environment reflected by the mirror, and consequently, virtual environment description 1933 contains a description of a reflection (1937), a description of the pad image (1939), and a description of the pen image (1941).
- Virtual environment description 1933 is maintained by virtual environment description manager 1923 in response to parameters 1913 indicating the current position and orientation of the user's eyes, parameters 1927, 1929, 1930, and 1931 from the interfaces for the mirror (1901), the transparent pad (1909), and the pen (1919), and the current mode of operation of the mirror and/or pad and pen, as indicated in mode specifier 1910.
- Mirror interface 1901 receives mirror position and orientation information 1903 from the mirror, eye position and orientation information 1805 for the mirror's viewer, and if a ray tool is being used, ray tool position and orientation information 1907.
- Mirror interface 1901 interprets this information to determine the parameters that virtual environment description manager 1923 requires to make the image to be reflected in the mirror appear at the proper point in the virtual environment and provides the parameters (1927) to manager 1923, which produces or modifies reflection description 1937 as required by the parameters and the current value of mode 1910. Changes in mirror position and orientation 1903 may of course also cause mirror interface 1901 to provide a parameter to which manager 1923 responds by changing the value of mode 1910.
- The extended virtual table disclosed in PCT/US01/18327 has a large half-silvered mirror attached to one end of a virtual workbench. The mirror can be used in two ways: to extend the virtual reality created by the workbench's projector or to augment an object behind the mirror with the virtual reality created by the workbench's projector.
- FIG. 1 Physical arrangement of the extended virtual table: FIG. 1
- the Extended Virtual Table (xVT) prototype 101 consists of a virtual workbench 110 and a real workbench 104 (cf. figure 1).
- a Barco BARON (2000a) 110 serves as the display device; it projects 54" x 40" stereoscopic images with a resolution of 1280 x 1024 (or optionally 1600 x 1200/2) pixels onto the backside of a horizontally arranged ground glass screen 110.
- Shutter glasses 112 such as StereoGraphics' CrystalEyes (StereoGraphics, Corp., 2000) or NuVision3D's 60GX (NuVision3D Technologies, Inc., 2000) are used to separate the stereo images for both eyes and make stereoscopic viewing possible.
- an electromagnetic tracking device 103/111, Ascension's Flock of Birds (Ascension Technologies, Corp., 2000), is used for position and orientation tracking.
- An Onyx InfiniteReality 2, which renders the graphics, is connected (via a TCP/IP intranet) to three additional PCs that perform speech-recognition, speech-synthesis via stereo speakers 109, gesture-recognition, and optical tracking.
- a 40" x 40" large, and 10 mm thick pane of glass 107 separates the virtual workbench (i.e. the Virtual Table) from the real workspace. It has been laminated with a half-silvered mirror foil 3M's Scotchtint P-18 (3M, Corp., 2000) on the side that faces the projection plane, making it behave like a front-surface mirror that reflects the displayed graphics.
- a thick plate glass material (10 mm) was selected to minimize the optical distortion caused by bending of the mirror or irregularities in the glass.
- the half-silvered mirror foil, which is normally applied to reduce window glare, reflects 38% and transmits 40% of the light. Note that this mirror extension costs less than $100. However, more expensive half-silvered mirrors with better optical characteristics could be used instead (see Edmund Industrial Optics (2000), for example).
- With its bottom leaning onto the projection plane, the mirror is held by two strings which are attached to the ceiling.
- the length of the strings can be adjusted to change the angle between the mirror and the projection plane, or to allow an adaptation to the Virtual Table's slope 115.
- a light-source 106 is adjusted in such a way that it illuminates the real workbench, but does not shine at the projection plane.
- the real workbench and the walls behind it have been covered with a black awning to absorb light that otherwise would be diffused by the wall covering and would cause visual conflicts when the mirror is used in a see-through mode.
- a camera 105, a Videum VO (Winnov, 2000), is applied to continuously capture a video stream of the real workspace, supporting optical tracking of paper markers that are placed on top of the real workbench.
- Users can either work with real objects above the real workbench, or with virtual objects above the virtual workbench.
- Elements of the virtual environment, which is displayed on the projection plane, are spatially defined within a single world-coordinate system that exceeds the boundaries of the projection plane, covering also the real workspace.
- the mirror plane 203 splits this virtual environment into two parts that cannot be simultaneously visible to the user. This is due to the fact that only one part can be displayed on the projection plane 204.
- the user's viewing direction 207 is approximated by computing the single line of sight that originates at her point of view and points in her viewing direction.
- the plane the user is looking at (i.e. projection plane or mirror plane) is then the one which is first intersected by this line of sight. If the user is looking at neither plane, no intersection can be determined and nothing needs to be rendered at all.
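A sketch of this intersection test in C++, assuming unbounded planes; a real implementation would also check that the hit point lies within the mirror's or screen's extents. All names are illustrative.

```cpp
#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };
struct Plane { double a, b, c, d; };   // a*x + b*y + c*z + d = 0

// Distance along the viewing ray (origin o, unit direction dir) to the
// plane, if the ray hits it in front of the viewer.
std::optional<double> hitDistance(const Plane& p, const Vec3& o, const Vec3& dir) {
    double denom = p.a * dir.x + p.b * dir.y + p.c * dir.z;
    if (std::fabs(denom) < 1e-9) return std::nullopt;       // ray parallel to plane
    double t = -(p.a * o.x + p.b * o.y + p.c * o.z + p.d) / denom;
    if (t <= 0.0) return std::nullopt;                      // plane behind viewer
    return t;
}

enum class Target { None, Mirror, Projection };

// The plane looked at is the one first intersected by the line of sight.
Target lookedAtPlane(const Plane& mirror, const Plane& projection,
                     const Vec3& eye, const Vec3& viewDir) {
    auto tm = hitDistance(mirror, eye, viewDir);
    auto tp = hitDistance(projection, eye, viewDir);
    if (!tm && !tp) return Target::None;    // nothing needs to be rendered
    if (tm && (!tp || *tm < *tp)) return Target::Mirror;
    return Target::Projection;
}
```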
- the projected virtual environment will not appear as a reflection in the mirror.
- the user rather sees the same scene that she would perceive without the mirror if the projection plane were large enough to visualize the entire environment. This is due to the neutralization of the computed inverse reflection by the physical reflection of the mirror.
- the transformation matrix can simply be added to a matrix stack or integrated into a scene graph without increasing the computational rendering cost, but since its application reverses also the polygon order (which might be important for correct front-face determination, lighting, culling, etc.), appropriate steps have to be taken in advance (e.g., explicitly reversing the polygon order before reflecting the scene).
- the plane parameters (a,b,c,d) can be determined within the world coordinate system in different ways:
- the electromagnetic tracking device can be used to support a three-point calibration of the mirror plane.
- the optical tracking system can be applied to recognize markers that are (temporarily or permanently) attached to the mirror.
- a clipping plane that exactly matches the mirror plane (i.e. with the same plane parameters a, b, c, d) is applied.
- Visual conflicts arise if virtual objects spatially intersect the side of the user's viewing frustum that is adjacent to the mirror, since in this case the object's projection optically merges into its reflection in the mirror.
- the clipping plane culls away the part of the virtual environment that the user is not looking at (i.e. we reverse the direction of the clipping plane, depending on the viewer's viewing direction while maintaining its position). The result is a small gap between the mirror and the outer edges of the viewing frustum in which no graphics is visualized.
- This gap helps to differentiate between projection and reflection and, consequently, avoids visual conflicts. Yet, it does not allow virtual objects which are located over the real workbench to reach through the mirror. We can optionally activate or deactivate the clipping plane for situations where no, or minor visual conflicts between reflection and projection occur to support a seamless transition between both spaces.
- Figure 2 shows a large coherent virtual scene whose parts can be separately observed by either looking at the mirror 203 or at the projection plane 204. In this case, what is seen is a life-size human body for medical training viewed in the mirror (left), or on the projection plane (right). The real workspace behind the mirror is not illuminated.
- Figure 3 shows a simple example in which the mirror beam-splitter is used as an optical combiner. If the real workspace is illuminated, both the real and the virtual environment are visible to the user, and real and virtual objects can be combined in AR manner 301. Left: real objects behind the mirror (the ball) are illuminated and augmented with virtual objects (the baby). The angle between mirror and projection plane is 60°.
- Optical distortion is caused by the elements of an optical system. It does not affect the sharpness of a perceived image, but rather its geometry and can be corrected optically (e.g., by applying additional optical elements that physically rescind the effect of other optical elements) or computationally (e.g., by pre-distorting generated images). While optical correction may result in heavy optics and non-ergonomic devices, computational correction methods might require high computational performance.
- optical distortion is critical, since it prevents precise registration of virtual and real environment.
- the purpose of the optics used in HMDs, for instance, is to project two equally magnified images in front of the user's eyes, in such a way that they fill out a wide field-of-view (FOV), and fall within the range of accommodation (focus).
- FOV field-of-view
- lenses are used in front of the miniature displays (or in front of mirrors that reflect the displays within see- through HMDs).
- the lenses, as well as the curved display surfaces of the miniature screens, may introduce optical distortion, which is normally corrected computationally to avoid the heavy optics that would result from optical approaches.
- the applied optics forms a centered (on-axis) optical system; consequently, pre-computation methods can be used to efficiently correct geometrical aberrations during rendering.
- Rolland and Hopkins (1993) describe a polygon wrapping technique as a possible correction method for HMDs. Since the optical distortion for HMDs is constant (because the applied optics is centered), a two-dimensional lookup table is pre-computed that maps projected vertices of the virtual objects' polygons to their pre-distorted location on the image plane. Note that this requires subdividing polygons that cover large areas on the image plane. Instead of pre-distorting the polygons of projected virtual objects, the projected image itself can be pre-distorted, as described by Watson and Hodges (1995), to achieve a higher rendering performance.
- Correcting optical distortion is more complex for the mirror beam-splitter extension, since in contrast to HMDs, the image plane that is reflected by the mirror is not centered with respect to the optical axes of the user, but is off-axis in most cases. In fact, the alignment of the reflected image plane dynamically changes with respect to the moving viewer while the image plane itself remains at a constant spatial position in the environment.
- the projector that is integrated into the Virtual Table can be calibrated in such a way that it projects distorted images onto the ground glass screen.
- Projector-specific parameters such as geometry, focus, and convergence can be calibrated manually or with camera-based calibration devices. While a precise manual calibration is very time consuming, an automatic calibration is normally imprecise, and most systems do not offer a geometry calibration (only calibration routines for convergence and focus).
- FIG. 8 shows the calibration technique.
- the distorted displayed grid is then sampled with a device 805 that is able to measure 2D points on the tabletop.
- U − D: the geometrical deviation between the undistorted grid U and the sampled distorted grid D.
- the Mimio is a hybrid (ultrasonic and infrared) tracking system for planar surfaces which is more precise and less susceptible to distortion than the applied electromagnetic tracking device.
- its receiver 805 has been attached to a corner of the Virtual Table (note the area where the Mimio cannot receive correct data from the sender, due to distortion - this area 806 has been specified by the manufacturer).
- since the maximal texture size supported by the rendering package used is 1024 x 1024 pixels, U is rendered within the area (of this size) that adjoins the mirror.
- 10 x 9 sample points for an area of 40" x 40" on the projection plane is an appropriate grid resolution; it avoids over-sampling but is sufficient to capture the distortion.
- Figure 8 illustrates the sampled distorted grid D 803 (gray) and the pre-distorted grid P 804 (black) after it has been rendered and re-sampled. Note that figure 8 shows real data from one of the calibration experiments (other experiments delivered similar results). The calibration procedure has to be done once (or once in a while, since the distortion behavior of the projector can change over time).
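The text does not spell out how P is obtained from U and D. A natural first-order choice, shown here purely as an assumption, is to displace each grid point against the measured deviation, P = U + (U − D):

```cpp
struct Point2 { double x, y; };

// First-order pre-distortion (an assumption, not stated verbatim in the
// text): if the projector maps the undistorted grid U onto the measured
// distorted grid D, rendering the grid at P = U + (U - D) cancels the
// deviation to first order.
void predistortGrid(const Point2* U, const Point2* D, Point2* P, int n) {
    for (int i = 0; i < n; ++i) {
        P[i].x = U[i].x + (U[i].x - D[i].x);
        P[i].y = U[i].y + (U[i].y - D[i].y);
    }
}
```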
- a thick plate glass material has been selected to keep optical distortion caused by bending small. Due to gravity, however, a slight flexion affects the 1st order imaging properties of our system (i.e. magnification and location of the image) and consequently causes a deformation of the reflected image that cannot be avoided.
- Figure 9-left illustrates the optical distortion caused by flexion.
- a bent mirror does not reflect the same projected pixel for a specific line of sight as a non-bent mirror.
- Correction of the resulting distortion can be realized by transforming the pixels from the position where they should be seen (reflected by an ideal non-bent mirror) to the position where they can be seen (reflected by the bent mirror) for the same line of sight.
- the correction of mirror flexion can be combined with the method described above. For every point U 903 of the undistorted grid U, the corresponding point of reflection R 911 on the bent mirror 907 has to be determined with respect to the current eye position of the viewer E 906. Note that this requires knowledge of the mirror's curved geometry.
- R 911 can simply be calculated by reflecting U 903 over the known (non-bent) mirror plane 907 (the reflection matrix described by Bimber, Encarnação & Schmalstieg, 2000b, PCT patent application PCT/US99/28930, can be used for this), and then finding the intersection between the bent mirror's surface and the straight line that is spanned by E 906 and the reflection of U 903.
- an appropriate pre-distortion offset can be interpolated from the measured (distorted) grid D (as illustrated in figure 9-right). This can be done by bilinearly interpolating between the corresponding points of the pre-distorted grid P that belong to the neighboring undistorted grid points of U which form the cell 915 that encloses U 913.
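A C++ sketch of the bilinear interpolation within one grid cell (cf. FIG. 9); the parameterization by normalized cell coordinates (s, t) is an assumed convention:

```cpp
struct Point2 { double x, y; };

// Bilinear interpolation within one grid cell: p00..p11 are the
// pre-distorted positions at the four corners of the undistorted cell that
// encloses the query point; (s, t) are the query point's normalized
// coordinates within that cell.
Point2 bilerp(const Point2& p00, const Point2& p10,
              const Point2& p01, const Point2& p11,
              double s, double t) {
    Point2 bottom { p00.x + s * (p10.x - p00.x), p00.y + s * (p10.y - p00.y) };
    Point2 top    { p01.x + s * (p11.x - p01.x), p01.y + s * (p11.y - p01.y) };
    return { bottom.x + t * (top.x - bottom.x),
             bottom.y + t * (top.y - bottom.y) };
}
```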
- a thick pane of glass stabilizes the mirror and consequently minimizes optical distortion caused by flexion.
- it causes another optical distortion which results from refraction. Since the transmitted light that is perceived through the half-silvered mirror is refracted, but the light that is reflected by the front-surface mirror foil is not, the transmitted image of the real environment cannot be precisely registered to the reflected virtual environment - even if their geometry and alignment match exactly within the world coordinate system. All optical systems that use any kind of see-through elements have to deal with similar problems. While for HMDs, aberrations caused by refraction of the lenses are mostly assumed to be static (as stated by Azuma (1997)), they can be corrected with paraxial analysis approaches.
- both lines of sight are simply shifted parallel along the plate's normal vector (N) 1009 by an amount (Δ) 1010 that depends on the entrance angle (αi) 1011 between the geometric line of sight and N 1009, the plate's thickness (T) 1012 (i.e. 10 mm), and the refraction index (η) (i.e. 1.5 for regular glass), a material-dependent ratio that expresses the refraction behavior compared to vacuum (as an approximation to air).
- the amount of translation (Δ) 1010 can be computed as follows:

  sin(αi) / sin(αt) = η

  Equation 1: Snell's law of refraction for planar plates of a higher density than air (compared to vacuum as an approximation to air), where αt is the exit angle.

  Δ = T · (1 − tan(αt) / tan(αi))

  Equation 2: refraction-dependent amount of displacement along the plate's normal vector.
- the refractor of a ray which is spanned by the two points (P1, P2) depends on the entrance angle (αi) 1011 and can be computed as follows (in parameter representation):

  X(λ) = (P1 + Δ·N) + λ·(P2 − P1)

  Equation 3: refractor of a ray that is spanned by two points.
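The following C++ sketch evaluates Equations 1-3 as reconstructed above for a single geometric ray. The function name and the normal-incidence special case are illustrative assumptions:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 sub(const Vec3& u, const Vec3& v) { return {u.x - v.x, u.y - v.y, u.z - v.z}; }
Vec3 add(const Vec3& u, const Vec3& v) { return {u.x + v.x, u.y + v.y, u.z + v.z}; }
Vec3 scale(const Vec3& v, double s)    { return {v.x * s, v.y * s, v.z * s}; }
double dot(const Vec3& u, const Vec3& v) { return u.x*v.x + u.y*v.y + u.z*v.z; }
double norm(const Vec3& v) { return std::sqrt(dot(v, v)); }

// Origin of the optical line of sight for the geometric ray P1 -> P2
// through a planar plate with unit normal N, thickness T and refraction
// index eta: the ray is shifted by delta along N (Equations 1-3).
Vec3 refractedRayOrigin(const Vec3& P1, const Vec3& P2,
                        const Vec3& N, double T, double eta) {
    Vec3 d = sub(P2, P1);
    Vec3 dir = scale(d, 1.0 / norm(d));
    double alphaI = std::acos(std::fabs(dot(dir, N)));  // entrance angle
    double alphaT = std::asin(std::sin(alphaI) / eta);  // Snell's law (Eq. 1)
    double delta = (alphaI > 1e-9)
        ? T * (1.0 - std::tan(alphaT) / std::tan(alphaI))  // Eq. 2
        : T * (1.0 - 1.0 / eta);        // limit for normal incidence
    // Optical line of sight (Eq. 3): X(lambda) = (P1 + delta*N) + lambda*(P2 - P1)
    return add(P1, scale(N, delta));
}
```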
- the normal vector of the mirror plane is not constant and the corresponding normals of the points on the mirror surface that are intersected by the actual lines of sight have to be applied.
- the optical line of sight 1008 is the refractor that results from the geometric line of sight 1005 which is spanned by the viewer's eye (E) 1003 and the point in space (P) 1007 she is looking at.
- refraction is a spatial distortion and cannot be corrected within the image plane. Since no analytical correction methods exist, we apply a numerical minimization to precisely refract virtual objects that are located behind the mirror beam-splitter by transforming their vertices within the world coordinate system. Note that similar to Rolland's approach (Rolland & Hopkins, 1993), our method also requires subdividing large polygons of virtual objects to sufficiently express the refraction's curvilinearity.
- the goal is to find the coordinate P' 1004 to which the virtual vertex P 1007 has to be translated in such a way that P' appears spatially at the same position at which P 1001 would appear as a real point observed through the half-silvered mirror - i.e. refracted.
- To find P' 1004, we first compute the geometric lines of sight 1005 from each eye (E1, E2) 1003 to P 1007. We then compute the two corresponding optical lines of sight 1008 using equation 3, and their intersection (P'') 1013.
- P' 1004 is the intersection of the geometric lines of sight 1005 after they have been rotated (by some α, β, γ) 1014 such that the distance between P and P'' is minimal (i.e. below some threshold ε). This final state is illustrated in figure 10.
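A hedged sketch of the numerical search follows. It reuses Vec3 and refractedRayOrigin() from the previous sketch, and it substitutes a simpler fixed-point iteration for the rotation-based search described in the text: the candidate vertex is moved by the residual between P and the point P'' where the candidate currently appears after refraction. The closest-point helper stands in for the intersection of the two optical lines of sight, which rarely meet exactly; degenerate parallel configurations are not handled.

```cpp
// Midpoint of the shortest segment between two lines a(s) = a0 + s*ad and
// b(t) = b0 + t*bd (the two optical lines of sight rarely intersect exactly).
Vec3 closestPointBetweenLines(const Vec3& a0, const Vec3& ad,
                              const Vec3& b0, const Vec3& bd) {
    Vec3 w = sub(a0, b0);
    double A = dot(ad, ad), B = dot(ad, bd), C = dot(bd, bd);
    double D = dot(ad, w), E = dot(bd, w);
    double den = A * C - B * B;                 // assumed non-zero (non-parallel)
    double s = (B * E - C * D) / den;
    double t = (A * E - B * D) / den;
    return scale(add(add(a0, scale(ad, s)), add(b0, scale(bd, t))), 0.5);
}

// Where a candidate vertex appears after refraction: intersect the two
// optical lines of sight (Equation 3) derived from the rays eye -> candidate.
Vec3 refractedAppearance(const Vec3& E1, const Vec3& E2, const Vec3& cand,
                         const Vec3& N, double T, double eta) {
    return closestPointBetweenLines(
        refractedRayOrigin(E1, cand, N, T, eta), sub(cand, E1),
        refractedRayOrigin(E2, cand, N, T, eta), sub(cand, E2));
}

// Fixed-point search for P' (a simpler scheme than the rotation-based
// search in the text): move the candidate by the residual P - P''.
Vec3 findRefractedVertex(const Vec3& P, const Vec3& E1, const Vec3& E2,
                         const Vec3& N, double T, double eta, double eps) {
    Vec3 Pp = P;
    for (int i = 0; i < 50; ++i) {              // bounded iteration count
        Vec3 Pdd = refractedAppearance(E1, E2, Pp, N, T, eta);
        Vec3 r = sub(P, Pdd);
        if (norm(r) < eps) break;
        Pp = add(Pp, r);
    }
    return Pp;
}
```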
- Table 1 Comparison between precise refraction and approximated refraction.
- end-to-end system delay: the time difference between the moment that the tracking system measures a position/orientation and the moment the system reflects this measurement in the displayed image.
- lag causes a "swimming effect" (virtual objects appear to float around real objects).
- Augmented reality (AR) technology has a lot of potential in this respect, since it allows the augmentation of real world environments with computer generated imagery.
- Augmented Reality systems use see-through head mounted displays. Such displays share most of the disadvantages of standard head mounted displays.
- the Virtual Showcase is a new Augmented Reality display device that has the same form factor as the real showcases traditionally used for museum exhibits. Real scientific and cultural artifacts are placed inside the Virtual Showcase, where they can be augmented using three-dimensional graphical techniques.
- inside the Virtual Showcase, virtual representations and real artifacts share the same space, thus providing new ways of merging and exploring real and virtual content.
- the virtual part of the Virtual Showcase can react in various ways to a visitor, enabling intuitive interaction with the displayed content.
- These interactive Virtual Showcases are an important step in the development of ambient intelligent landscapes, where the computer acts as an intelligent server in the background and visitors can focus on exploring the exhibited content rather than on operating computers.
- a Virtual Showcase consists of two main parts (cf. FIG. 4): a convex assembly of half-silvered mirrors 402 and a graphics display 403. So far, we have built Virtual Showcases with two different mirror configurations.
- Our first prototype 400 consists of four half-silvered mirrors assembled as a truncated pyramid.
- Our second prototype 401 uses a single mirror sheet to form a truncated cone. In other configurations, the mirrors may be fully silvered; further, other flat to convex assemblies of mirrors may be employed.
- the mirror assemblies are placed on top of a projection screen 403 which is driven by a system for creating a virtual environment.
- When a virtual reality includes a reflecting surface, the virtual reality system must of course deal with reflections of other objects in the virtual reality in the reflecting surface. What is reflected depends on the point of view from which the virtual reality is being viewed.
- a number of techniques [15] are used in virtual reality systems to generate reflections on reflecting surfaces in the virtual reality.
- the techniques include image-based methods [4], geometry-based approaches [7,11,26], and pixel-based techniques [13]. All of the techniques map a given description of a virtual object space (e.g. a computer-generated virtual scene) into a corresponding image space (i.e. a computer-generated reflection of the virtual scene on a virtual artificial mirror surface in the virtual scene).
- Our aim in rendering the image in the object space is to transform the image space geometry into the object space in such a way that the reflection of the displayed object space optically results in the expected image space.
- the transformation of the image space geometry is neutralized by the reflection of the mirror.
- a geometric description of the real object can be used to properly cull the virtual portion of the image space with regard to the real object. Because virtual and real objects coexist in conjunction within the image space, the appearance of the entire image space is known for every given viewpoint.
- the object space must of course be located on a portion of the projection plane where the object space's reflection is in the field of view of the person viewing the mirror.
- the rendering techniques used in the Virtual Showcase always involve the following steps, shown in overview in FIG. 11:
- scene transformation M (i.e. the accumulation of glTranslate, glRotate, glScale, etc.)
- viewpoint transformation V (e.g. gluLookAt)
- the common viewpoint transformation matrix V is applied with the reflected viewpoint e', instead of the actual viewpoint e 504.
- the reflection matrix for a mirror plane a·x + b·y + c·z + d = 0 (with normalized normal (a, b, c)) is given by:

  R = | 1−2a²  −2ab   −2ac   −2ad |
      | −2ab   1−2b²  −2bc   −2bd |
      | −2ac   −2bc   1−2c²  −2cd |
      |  0      0      0      1   |
- the different images in object space 503 optically merge into a single consistent image space 502 which is produced by reflecting projection plane 506 in the mirrors 205.
- the image space thus visually equals the image of the untransformed image space geometry.
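Putting the pieces together, a hypothetical per-frame loop for the four-sided pyramid prototype might look as follows. It applies V (here approximated by gluLookAt) at the reflected viewpoint e' and multiplies the reflection matrix into the scene transformation for each front-facing mirror. The off-axis stereo projection and the per-mirror viewport management needed to place each image in its region of the object space are omitted, and all names are illustrative:

```cpp
#include <GL/gl.h>
#include <GL/glu.h>

struct Vec3d { double x, y, z; };
struct PlaneD { double a, b, c, d; };   // a*x + b*y + c*z + d = 0, unit normal

// Reflect a point over the plane.
static Vec3d reflect(const PlaneD& m, const Vec3d& p) {
    double s = m.a * p.x + m.b * p.y + m.c * p.z + m.d;
    return { p.x - 2*s*m.a, p.y - 2*s*m.b, p.z - 2*s*m.c };
}

void renderShowcase(const PlaneD mirrors[4], const Vec3d& e,
                    const Vec3d& center, const Vec3d& up,
                    void (*drawImageSpace)()) {
    for (int i = 0; i < 4; ++i) {
        const PlaneD& m = mirrors[i];
        double facing = m.a * e.x + m.b * e.y + m.c * e.z + m.d;
        if (facing <= 0.0) continue;     // assumes normals point toward viewer side
        Vec3d ep = reflect(m, e);        // reflected viewpoint e'
        GLdouble R[16] = {               // reflection matrix, column-major
            1 - 2*m.a*m.a, -2*m.a*m.b,    -2*m.a*m.c,    0,
            -2*m.a*m.b,    1 - 2*m.b*m.b, -2*m.b*m.c,    0,
            -2*m.a*m.c,    -2*m.b*m.c,    1 - 2*m.c*m.c, 0,
            -2*m.a*m.d,    -2*m.b*m.d,    -2*m.c*m.d,    1 };
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        glLoadIdentity();
        gluLookAt(ep.x, ep.y, ep.z, center.x, center.y, center.z,
                  up.x, up.y, up.z);     // V applied at e'
        glMultMatrixd(R);                // image space -> object space
        glFrontFace(GL_CW);              // reflection reverses polygon order
        drawImageSpace();                // scene transformation M + geometry
        glFrontFace(GL_CCW);
        glPopMatrix();
    }
}
```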
- FIG. 6c shows a case where the user's point of view includes two mirrors on two sides of the truncated pyramid. The field of view thus spans these two sides, and the virtual reality system produces images in the object space such that a single image space 502 is visible in both mirrors.
- the photographs of FIG. 6 are not embellished. They are taken as seen from the viewer's perspective, but have been rendered in mono. However, the rendering algorithms normally produce stereo images.
- Figures 6a and 6b show two individual views onto the same image space (seen from different perspectives). For instance, these views can be seen by a single viewer while moving around the Virtual Showcase, or by two individual viewers while looking at different mirrors simultaneously. While FIG. 6a-6d show exclusively virtual exhibits, FIG. 6e-6h show an example of a mixed (real/virtual) exhibit, displayed within a Virtual Showcase. The surface of the real Buddha statue in FIG. 6e has been scanned three-dimensionally. This virtual model has then been partially projected back onto the real statue to demonstrate the precise superimposition of the two environments (cf. FIG. 6e-6g). Figure 6h illustrates the whole scenario with additional multi-media information.
- each viewer may see a different scene (i.e. a different image space is presented to each viewer).
- an individual viewpoint has to be applied within each rendering sub-pipe.
- a static mirror-viewer assignment is not required - even individual mirror sections can be dynamically assigned to moving viewers. In case multiple viewers look at the same mirror, an average viewpoint can be computed (this will result in slight perspective distortions).
- Compensation for refraction in the mirror and distortion caused by the projector can be done as described for the extended virtual table.
- the transformation of the image space geometry depends on the viewpoint 704 (i.e. the image space geometry transforms differently for different viewpoints).
- viewpoint transformations e → e' depend on the image space geometry (i.e. each vertex v within the image space yields an individual e').
- the general technique of FIG. 11 avoids direct access to the image space geometry, and consequently avoids transforming many scene vertices and the time cost associated with those transformations.
- the method applies a sequence of intermediate non-affine image deformations.
- the sequence of deformations is that which we currently consider most efficient for curved mirror displays.
- the sequence represents a mixture between the extended camera concept [19] and projective textures [32]. While projective textures utilize a perspective texture matrix to map projection-surface vertices into texture coordinates of those pixels that project onto these vertices, our method projects image vertices directly onto the projection surface, while the texture coordinate of each image vertex remains constant. This is necessary because curved mirrors yield a different projection (i.e. each pixel is generated from a modified primary ray).
- Our method deforms an existing image by projecting it individually for each pixel.
- the processing required with curved mirrors for the first and second rendering passes 1103 and 1106 will be explained in detail, as well as the processing required to deal with refraction at step 1103 and distortion correction at step 1105.
- the first rendering pass creates a picture of the image space and renders it into the texture buffer, rather than into the frame-buffer (step 1102 of FIG. 11).
- the processing in this pass is outlined by the generate image algorithm.
- an on-axis projection is carried out.
- the size of the projection's viewing frustum is determined from the image space's bounding sphere (lines 1-4).
- the image is finally rendered into the texture buffer (line 9). This is illustrated in FIG. 12a for a truncated-cone-like Virtual Showcase.
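- a hedged C/OpenGL sketch of such a first pass follows; TEX_SIZE, tex_id and render_scene are illustrative assumptions, the viewer is assumed to lie outside the bounding sphere, and the sketch is not the generate image algorithm verbatim:

```c
#include <math.h>
#include <GL/gl.h>
#include <GL/glu.h>

#define TEX_SIZE 512                /* assumed texture resolution */
extern void render_scene(void);     /* assumed application callback */

/* First rendering pass (sketch): render the image space into a texture
 * rather than into the frame-buffer. The viewing frustum of the on-axis
 * projection is sized to enclose the image space's bounding sphere. */
void generate_image(const double eye[3], const double center[3],
                    double radius, GLuint tex_id)
{
    const double dx = eye[0] - center[0], dy = eye[1] - center[1],
                 dz = eye[2] - center[2];
    const double dist = sqrt(dx*dx + dy*dy + dz*dz);
    /* Field of view that exactly encloses the bounding sphere. */
    const double fovy = 2.0 * asin(radius / dist) * 180.0 / M_PI;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(fovy, 1.0, dist - radius, dist + radius);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(eye[0], eye[1], eye[2],
              center[0], center[1], center[2], 0.0, 1.0, 0.0);

    glViewport(0, 0, TEX_SIZE, TEX_SIZE);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    render_scene();

    /* Copy the rendered image into the texture buffer. */
    glBindTexture(GL_TEXTURE_2D, tex_id);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, TEX_SIZE, TEX_SIZE);
}
```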
- FIG. 24 shows the effects of the use of different renderers.
- An ordinary geometric renderer was used to generate the images shown in FIG. 24a-24b and 24e-24f; a volumetric renderer [9] was used to generate the image shown at 24c; and a progressive point-based renderer [30] was used for the image displayed in FIG. 24d.
- the image that has been generated during the first rendering pass now has to be transformed in such a way that its reflection in the mirror is perceived as being undistorted. This is done in step 1104 of FIG. 11.
- a geometric representation of the image plane is pre-generated.
- This image geometry consists of a uniformly tessellated grid (represented by an indexed triangle mesh) which is transformed into the current viewing frustum inside the image space in such a way that, if the image is mapped onto the grid, each line-of-sight intersects its corresponding pixel (cf. FIG. 12b).
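- a minimal C sketch of how such a grid could be pre-generated as an indexed triangle mesh follows; GRID_N and the vertex layout are illustrative assumptions (the positions are filled in later by the reflection transformation):

```c
#define GRID_N 32   /* assumed tessellation; trades accuracy for speed */

typedef struct {
    double pos[3];  /* set later by the reflection transformation */
    float  u, v;    /* constant texture coordinates of the image  */
} ImageVertex;

/* Pre-generate a uniformly tessellated image grid as an indexed triangle
 * mesh. The normalized grid coordinates double as the (constant) texture
 * coordinates under which the first-pass image is mapped onto the grid. */
void build_image_grid(ImageVertex verts[(GRID_N + 1) * (GRID_N + 1)],
                      unsigned indices[GRID_N * GRID_N * 6])
{
    for (int j = 0; j <= GRID_N; j++)
        for (int i = 0; i <= GRID_N; i++) {
            ImageVertex *vx = &verts[j * (GRID_N + 1) + i];
            vx->u = (float)i / GRID_N;
            vx->v = (float)j / GRID_N;
        }
    unsigned *p = indices;
    for (int j = 0; j < GRID_N; j++)
        for (int i = 0; i < GRID_N; i++) {
            const unsigned v0 = j * (GRID_N + 1) + i, v1 = v0 + 1,
                           v2 = v0 + (GRID_N + 1), v3 = v2 + 1;
            *p++ = v0; *p++ = v1; *p++ = v2;  /* two triangles per cell */
            *p++ = v1; *p++ = v3; *p++ = v2;
        }
}
```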
- each grid point is transformed with respect to the mirror geometry, the current viewpoint, and the projection plane, and the grid is textured with the image that was generated during the first rendering pass (cf. FIG. 12c).
- the reflection transformation is outlined by the reflect image geometry algorithm. The reflection transformation is described in detail below.
- lines 11 and 13 of reflect image geometry set a marker flag for each vertex.
- the projection matrix is given by:
- the vertex is sent through the modified transformation pipeline that incorporates the model transformations M, the reflection transformation R, and the projection transformation P (line 8). Since P is a perspective projection, a perspective division has to be done accordingly to produce correct device coordinates (line 9).
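- as an illustration, a per-vertex version of this pipeline might look as follows in C (the column-major helper mat_vec4 is an assumption, shown for completeness):

```c
/* Multiply a column-major 4x4 matrix with a homogeneous 4-vector
 * (assumed helper, shown for completeness). */
static void mat_vec4(const double m[16], const double v[4], double out[4])
{
    for (int r = 0; r < 4; r++)
        out[r] = m[r]*v[0] + m[4 + r]*v[1] + m[8 + r]*v[2] + m[12 + r]*v[3];
}

/* Send a vertex through the modified pipeline P * R * M * v and apply
 * the perspective division to obtain device coordinates. */
void transform_vertex(const double M[16], const double R[16],
                      const double P[16], const double v[3], double dev[3])
{
    double in[4] = { v[0], v[1], v[2], 1.0 }, t1[4], t2[4], clip[4];
    mat_vec4(M, in, t1);         /* model transformations     */
    mat_vec4(R, t1, t2);         /* reflection transformation */
    mat_vec4(P, t2, clip);       /* perspective projection    */
    dev[0] = clip[0] / clip[3];  /* perspective division      */
    dev[1] = clip[1] / clip[3];
    dev[2] = clip[2] / clip[3];
}
```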
- a fast ray-triangle intersection method (such as [23]) that also delivers the barycentric coordinates of the intersection within a triangle is required.
- the barycentric coordinates can then be used to interpolate between the three vertex normals of a triangle and to approximate the normal vector at the intersection.
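- a small C sketch of this normal approximation, assuming the (1-u-v, u, v) barycentric convention used by intersection tests such as [23]:

```c
#include <math.h>

/* Approximate the surface normal at a ray-triangle intersection by
 * interpolating the three vertex normals with the barycentric
 * coordinates (u, v) delivered by the intersection test (w = 1-u-v). */
void interpolate_normal(const double n0[3], const double n1[3],
                        const double n2[3], double u, double v, double n[3])
{
    const double w = 1.0 - u - v;
    for (int i = 0; i < 3; i++)
        n[i] = w*n0[i] + u*n1[i] + v*n2[i];
    const double len = sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
    if (len > 0.0) {                       /* renormalize the result */
        n[0] /= len; n[1] /= len; n[2] /= len;
    }
}
```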
- a more efficient way of describing the Virtual Showcase's dimensions is to apply an explicit function.
- This function can be used to calculate the intersections and the normal vectors (using its 1st-order derivatives) with unlimited resolution.
- not all Virtual Showcase shapes can be expressed by explicit functions. Since cones are simple 2nd-order surfaces, we can use an explicit function and its 1st-order derivative to describe the extents of our curved Virtual Showcase: after a geometric line-of-sight has been transformed from the world coordinate system into the cone coordinate system, it can easily be intersected with the cone by solving the quadratic equation created by inserting a parametric ray representation into the cone equation. The normals are simply computed by inserting the intersection points into the 1st-order derivative.
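- the following C sketch illustrates such an intersection for the axis-aligned cone x² + y² − k²z² = 0 with k = tan(half-angle); this particular cone parameterization is an assumption (a truncated cone would additionally bound z), not the patent's exact formulation:

```c
#include <math.h>

/* Intersect a line-of-sight (already transformed into the cone coordinate
 * system) with the explicit cone x^2 + y^2 - k^2 z^2 = 0. Inserting the
 * parametric ray p + t*d into the cone equation yields a quadratic in t;
 * the normal is the gradient (1st-order derivative) of the cone function
 * at the intersection point. Returns 1 on a hit in front of the viewer. */
int intersect_cone(const double p[3], const double d[3], double k,
                   double *t_out, double n[3])
{
    const double k2 = k * k;
    const double A = d[0]*d[0] + d[1]*d[1] - k2*d[2]*d[2];
    const double B = 2.0 * (p[0]*d[0] + p[1]*d[1] - k2*p[2]*d[2]);
    const double C = p[0]*p[0] + p[1]*p[1] - k2*p[2]*p[2];
    if (fabs(A) < 1e-12) return 0;        /* ray parallel to the surface */
    const double disc = B*B - 4.0*A*C;
    if (disc < 0.0) return 0;             /* ray misses the cone */
    double t0 = (-B - sqrt(disc)) / (2.0*A);
    double t1 = (-B + sqrt(disc)) / (2.0*A);
    if (t0 > t1) { const double tmp = t0; t0 = t1; t1 = tmp; }
    const double t = (t0 >= 0.0) ? t0 : t1;  /* nearest hit in front */
    if (t < 0.0) return 0;
    const double x = p[0] + t*d[0], y = p[1] + t*d[1], z = p[2] + t*d[2];
    n[0] = 2.0*x; n[1] = 2.0*y; n[2] = -2.0*k2*z;   /* unnormalized */
    *t_out = t;
    return 1;
}
```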
- the transformed image geometry is finally displayed within the object space, mapping the outcome of the first rendering pass as a texture onto the object space's surface (cf. FIG. 12d). Note that only triangles with three visible vertices are rendered.
- since the vertices of the image geometry have already been transformed into device coordinates, neither a second projection transformation (e.g. glFrustum) nor the corresponding perspective divisions and viewpoint transformation (e.g. gluLookAt) are required for the second rendering pass.
- a simple scale transformation is sufficient to normalize the device coordinates (e.g. glScale(1/(device_width/2), 1/(device_height/2), 1)).
- a subsequent view-port transformation finally up-scales them into the window coordinate system (e.g. glViewport(0, 0, window_width, window_height)).
- Time-consuming rendering operations that are not required to display the two-dimensional image (such as illumination computations, back-face culling, depth buffering, etc.) should be disabled to increase the rendering performance.
- the polygon order does not have to be reversed before rendering.
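- a hedged C/OpenGL sketch of this second-pass setup (draw_image_grid and the parameter names are illustrative assumptions, not part of the original description):

```c
#include <GL/gl.h>

extern void draw_image_grid(void);   /* assumed: draws the textured mesh */

/* Second rendering pass (sketch): the image geometry is already in
 * normalized device coordinates, so no projection or viewpoint
 * transformation is set up; only the scale and view-port remain. */
void display_image_geometry(int window_width, int window_height,
                            double device_width, double device_height)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();                          /* no glFrustum needed */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();                          /* no gluLookAt needed */
    glScaled(1.0/(device_width/2.0), 1.0/(device_height/2.0), 1.0);
    glViewport(0, 0, window_width, window_height);

    /* Disable operations not required for a two-dimensional image. */
    glDisable(GL_LIGHTING);
    glDisable(GL_CULL_FACE);
    glDisable(GL_DEPTH_TEST);

    glEnable(GL_TEXTURE_2D);
    draw_image_grid();
}
```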
- FIG. 24a-24f show some results.
- FIG. 24a-24c show an exclusively virtual exhibit observed from different viewpoints.
- FIG. 24d-24f illustrate hybrid exhibits (a virtual lion on top of a real base (24d) and a virtual hand that places a virtual cartridge into a real printer (24e,f)).
- Optical distortion compensation with curved mirrors: FIGs. 25 and 26
- Optical distortion is caused by the elements of an optical system and affects the geometry of a perceived image.
- the elements that cause optical distortion in the case of Virtual Showcases are the projector(s) used to generate the picture within the object space and the mirror optics that reflect this picture into the image space.
- Optical distortion can be critical, since it prevents the precise overlaying of the reflected image of the virtual environment onto the transmitted image of the real environment and can thus lead to inconsistency of the image space. Note that optical distortion is more complex in our case than it is with fixed-optics devices (head-mounted displays for instance), since the distortion dynamically changes with a moving viewpoint.
- FIG. 25a shows measurements from one of our calibration experiments: while the undistorted black grid 2502 has been sent to the projector, it has been displayed in a deformed way (gray grid 2503), due to the geometric distortion of the projector. The gray grid has been measured by sampling the projected grid points with a precise 2D tracking device.
- the pre-distort algorithm (step 1105 in FIG. 11) shows how to use P to correct the transformed image vertices v' (after reflect image geometry has been applied).
- the algorithm differs from [40] in that the image transformation is dynamic, rather than static, and changes with a moving viewpoint.
- a pre-distorted vertex (v″) can be computed by linearly interpolating within the corresponding grid cell of P 2505, using the normalized cell coordinates (line 4). This is illustrated in FIG. 25b.
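- a minimal C sketch of this interpolation, assuming the calibration grid P[j][i] stores the pre-distorted target position of the ideal grid point (i,j) and that vertices are given in normalized [0,1]² image coordinates (both assumptions about the encoding of the measured data):

```c
#define DGRID_W 16   /* assumed resolution of the calibration grid */
#define DGRID_H 16

typedef struct { double x, y; } Point2;

/* Pre-distort a transformed image vertex v' by bilinear interpolation
 * within the measured grid P, using the normalized cell coordinates. */
Point2 predistort(Point2 v, const Point2 P[DGRID_H][DGRID_W])
{
    /* Locate the grid cell containing v' and compute the normalized
     * cell coordinates (fx, fy) within it. */
    const double gx = v.x * (DGRID_W - 1), gy = v.y * (DGRID_H - 1);
    int i = (int)gx, j = (int)gy;
    if (i > DGRID_W - 2) i = DGRID_W - 2;
    if (j > DGRID_H - 2) j = DGRID_H - 2;
    const double fx = gx - i, fy = gy - j;

    /* Bilinear interpolation between the four surrounding samples. */
    Point2 out;
    out.x = (1-fx)*(1-fy)*P[j][i].x + fx*(1-fy)*P[j][i+1].x
          + (1-fx)*fy*P[j+1][i].x   + fx*fy*P[j+1][i+1].x;
    out.y = (1-fx)*(1-fy)*P[j][i].y + fx*(1-fy)*P[j][i+1].y
          + (1-fx)*fy*P[j+1][i].y   + fx*fy*P[j+1][i+1].y;
    return out;
}
```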
- pre-distortion simply represents an additional image transformation.
- the projector pre-distortion transformation is applied after the reflection transformation (reflect image geometry) and before the second rendering pass is carried out. This transformation is optional and can be switched off to save rendering time, even though it does not slow down rendering performance significantly.
- as shown in FIG. 26, light rays that travel through materials with different densities 2602 are refracted. Therefore, the transmitted image of the real environment inside the Virtual Showcase is also refracted. However, the image within the object space (i.e. the projected graphics of the virtual environment) that is reflected by the Virtual Showcase's front-surface mirror is not refracted. Consequently, the two images do not overlay exactly, even if the spatial registration of both environments is precise. As is the case with projector miscalibration, refraction distortion is dynamic and changes with a moving viewpoint (i.e. compensation methods for static optical distortion, such as [39,40], cannot be applied).
- the refract algorithm (step 1103 in FIG. 11) demonstrates how to apply refraction to the image that has been generated during the first rendering pass. Note that the image is refracted before the reflection transformation (reflect image geometry) is applied to the image geometry. As in the other image transformation steps, per-vertex computations are carried out explicitly, since this transformation is not supported by standard rendering pipelines.
- for each image vertex, the corresponding geometric line-of-sight is computed. Using Snell's law of refraction, the corresponding optical line-of-sight can be determined by computing the in/out refractors at the associated surface intersections (lines 1-4). Note that the derivation of the optical lines-of-sight for planar mirrors is less complex, since in that case the optical lines-of-sight equal the parallel-shifted geometric counterparts. We can now determine the refraction of the image vertex (v') by computing the geometric intersection of the out-refractors with the image geometry.
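- for illustration, a standard Snell's-law refractor in C (unit direction d, unit normal n pointing against d, refraction indices n1/n2); this is the textbook formula, not the patent's refract algorithm verbatim:

```c
#include <math.h>

/* Refract a unit direction d at a surface with unit normal n, for
 * refraction indices n1 (incident side) and n2 (transmitting side),
 * using Snell's law. Returns 0 on total internal reflection. */
int refract_direction(const double d[3], const double n[3],
                      double n1, double n2, double t[3])
{
    const double eta = n1 / n2;
    const double cos_i = -(d[0]*n[0] + d[1]*n[1] + d[2]*n[2]);
    const double sin2_t = eta*eta * (1.0 - cos_i*cos_i);
    if (sin2_t > 1.0) return 0;          /* total internal reflection */
    const double cos_t = sqrt(1.0 - sin2_t);
    for (int i = 0; i < 3; i++)          /* refracted direction */
        t[i] = eta*d[i] + (eta*cos_i - cos_t)*n[i];
    return 1;
}
```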
- the composition of an appropriate texture matrix that computes new texture coordinates for each image vertex is outlined in lines 5-9. As illustrated in FIG. 26, an off-axis projection transformation is applied, where the center of projection is i'. Multiplying x by the resulting texture matrix projects x to the correct location within the normalized texture space of the image (line 10). Finally, the resulting texture coordinate (x') has to be assigned to v.
- a simple solution for these problems is to ensure that they do not occur for image portions which contain information:
- the image size depends on the radius of the scene's bounding sphere. We can simply enlarge the image by adding a constant amount to the bounding sphere's radius before carrying out the first rendering pass. An enlarged image does not affect the image content, but simply subjoins additional outer image space to the image. The subjoined space does not contain any information (i.e., it is just black pixels). In this way, we ensure that the problems occur only in the subjoined (black) regions. Because these regions are black, they will not be visible as reflections in the mirror.
- the refraction computations represent another transformation of the image generated during the first rendering pass.
- the refraction transformation transforms texture coordinates.
- all image transformations have to be applied before the final image is displayed during the second rendering pass.
- Shown in FIG. 27 is an upside-down configuration of mirror optics 2702 and projection display 2703. This important improvement eliminates disturbing reflections on the inside of the mirror optics and hides the projection display from the observer.
- In system 2701, optical tracking technology 2704 will be utilized instead of electromagnetic tracking technology, making head-tracking more precise and stable and eliminating impeding cables.
- System 2701 will also use passive stereo projection 2705 (with multiple polarized projectors), instead of a single time-multiplexed projector, allowing the observers to wear light-weight and inexpensive polarized glasses 2706. In addition, the cost of the projection technology can be reduced.
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Processing Or Creating Images (AREA)
Applications Claiming Priority (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US22467600P | 2000-08-11 | 2000-08-11 | |
US224676P | 2000-08-11 | ||
US25229600P | 2000-11-21 | 2000-11-21 | |
US252296P | 2000-11-21 | ||
WOPCT/US01/18327 | 2001-06-06 | ||
PCT/US2001/018327 WO2001095061A2 (en) | 1999-12-07 | 2001-06-06 | The extended virtual table: an optical extension for table-like projection systems |
PCT/US2001/025186 WO2002015110A1 (en) | 1999-12-07 | 2001-08-10 | Virtual showcases |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1325460A1 true EP1325460A1 (en) | 2003-07-09 |
EP1325460A4 EP1325460A4 (en) | 2009-03-25 |
Family
ID=56290180
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP01963916A Ceased EP1325460A4 (en) | 2000-08-11 | 2001-08-10 | Virtual showcases |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP1325460A4 (en) |
AU (1) | AU2001284827A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8676615B2 (en) | 2010-06-15 | 2014-03-18 | Ticketmaster Llc | Methods and systems for computer aided event and venue setup and modeling and interactive maps |
US9781170B2 (en) | 2010-06-15 | 2017-10-03 | Live Nation Entertainment, Inc. | Establishing communication links using routing protocols |
US10573084B2 (en) | 2010-06-15 | 2020-02-25 | Live Nation Entertainment, Inc. | Generating augmented reality images using sensor and location data |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6045229A (en) * | 1996-10-07 | 2000-04-04 | Minolta Co., Ltd. | Method and apparatus for displaying real space and virtual space images |
2001
- 2001-08-10 AU AU2001284827A patent/AU2001284827A1/en not_active Abandoned
- 2001-08-10 EP EP01963916A patent/EP1325460A4/en not_active Ceased
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6045229A (en) * | 1996-10-07 | 2000-04-04 | Minolta Co., Ltd. | Method and apparatus for displaying real space and virtual space images |
Non-Patent Citations (6)
Title |
---|
AGRAWALA M ET AL: "THE TWO-USER RESPONSIVE WORKBENCH: SUPPORT FOR COLLABORATION THROUGH INDIVIDUAL VIEWS OF A SHARED SPACE" COMPUTER GRAPHICS PROCEEDINGS. SIGGRAPH 97. LOS ANGELES, AUG. 3 - 8, 1997; [COMPUTER GRAPHICS PROCEEDINGS. SIGGRAPH], READING, ADDISON WESLEY, US, 3 August 1997 (1997-08-03), pages 327-332, XP000765832 ISBN: 978-0-201-32220-0 * |
BIMBER ET AL: "Extended VR" [Online] 3 April 2000 (2000-04-03), INI-GRAPHICSNET, XP002514333 Retrieved from the Internet: URL:http://www.inigraphics.net/press/topics/2000/issue2/2_00a04.pdf> [retrieved on 2009-02-09] * the whole document * & "Concerning publication date of Bimber et al: "Extended VR"" 10 February 2009 (2009-02-10), * |
RASKAR R ET AL: "Table-top spatially-augmented realty: bringing physical models to life with projected imagery" AUGMENTED REALITY, 1999. (IWAR '99). PROCEEDINGS. 2ND IEEE AND ACM INT ERNATIONAL WORKSHOP ON SAN FRANCISCO, CA, USA 20-21 OCT. 1999, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, 20 October 1999 (1999-10-20), pages 64-71, XP010358746 ISBN: 978-0-7695-0359-2 * |
RASKAR R ET AL: "The Office of the Future: A Unified Approach to Image-Based Modeling and Spatially Immersive Displays" COMPUTER GRAPHICS. SIGGRAPH 98 CONFERENCE PROCEEDINGS. ORLANDO, FL, JULY 19- - 24, 1998; [COMPUTER GRAPHICS PROCEEDINGS. SIGGRAPH], NEW YORK, NY : ACM, US, 19 July 1998 (1998-07-19), pages 1-10, XP002278293 ISBN: 978-0-89791-999-9 * |
See also references of WO0215110A1 * |
WELCH G ET AL: "Projected imagery in your office of the future" IEEE COMPUTER GRAPHICS AND APPLICATIONS, IEEE SERVICE CENTER, NEW YORK, NY, US, vol. 20, no. 4, 1 July 2000 (2000-07-01), pages 62-67, XP011201215 ISSN: 0272-1716 * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8676615B2 (en) | 2010-06-15 | 2014-03-18 | Ticketmaster Llc | Methods and systems for computer aided event and venue setup and modeling and interactive maps |
US9781170B2 (en) | 2010-06-15 | 2017-10-03 | Live Nation Entertainment, Inc. | Establishing communication links using routing protocols |
US9954907B2 (en) | 2010-06-15 | 2018-04-24 | Live Nation Entertainment, Inc. | Establishing communication links using routing protocols |
US10051018B2 (en) | 2010-06-15 | 2018-08-14 | Live Nation Entertainment, Inc. | Establishing communication links using routing protocols |
US10573084B2 (en) | 2010-06-15 | 2020-02-25 | Live Nation Entertainment, Inc. | Generating augmented reality images using sensor and location data |
US10778730B2 (en) | 2010-06-15 | 2020-09-15 | Live Nation Entertainment, Inc. | Establishing communication links using routing protocols |
US11223660B2 (en) | 2010-06-15 | 2022-01-11 | Live Nation Entertainment, Inc. | Establishing communication links using routing protocols |
US11532131B2 (en) | 2010-06-15 | 2022-12-20 | Live Nation Entertainment, Inc. | Generating augmented reality images using sensor and location data |
Also Published As
Publication number | Publication date |
---|---|
EP1325460A4 (en) | 2009-03-25 |
AU2001284827A1 (en) | 2002-02-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20040135744A1 (en) | Virtual showcases | |
Bimber et al. | Modern approaches to augmented reality | |
US6803928B2 (en) | Extended virtual table: an optical extension for table-like projection systems | |
Bimber et al. | The virtual showcase | |
Bimber et al. | Spatial augmented reality: merging real and virtual worlds | |
Bimber et al. | Occlusion shadows: Using projected light to generate realistic occlusion effects for view-dependent optical see-through displays | |
Raskar et al. | The office of the future: A unified approach to image-based modeling and spatially immersive displays | |
US8937592B2 (en) | Rendition of 3D content on a handheld device | |
JP3575622B2 (en) | Apparatus and method for generating accurate stereoscopic three-dimensional images | |
Wen et al. | Toward a Compelling Sensation of Telepresence: Demonstrating a portal to a distant (static) office | |
JP4100531B2 (en) | Information presentation method and apparatus | |
JP2012079291A (en) | Program, information storage medium and image generation system | |
Martinez Plasencia et al. | Through the combining glass | |
Lee et al. | A new projection-based exhibition system for a museum | |
JP4744536B2 (en) | Information presentation device | |
WO2002015110A1 (en) | Virtual showcases | |
Bimber et al. | Augmented Reality with Back‐Projection Systems using Transflective Surfaces | |
Osato et al. | Compact optical system displaying mid-air images movable in depth by rotating light source and mirror | |
Bimber et al. | The extended virtual table: An optical extension for table-like projection systems | |
Hübner et al. | Multi-view point splatting | |
WO2021081442A1 (en) | Non-uniform stereo rendering | |
EP1325460A1 (en) | Virtual showcases | |
Raskar | Projector-based three dimensional graphics | |
Balázs et al. | Towards mixed reality applications on light-field displays | |
Ichikawa et al. | Multimedia ambiance communication |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20030307 |
|
AK | Designated contracting states |
Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: FRAUNHOFER-GESELLSCHAFT ZUR FOERDERUNG DERANGEWAND |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: BIMBER, OLIVER, FRAUNHOFER INST. COMPUTER GRAPHICS |
Inventor name: FROEHLICH, BERND |
Inventor name: ENCARNACAO, L. MIGUEL, FRAUNHOFER INST. F. RESEARCH |
Inventor name: SCHMALSTIEG, DIETER |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: FRAUNHOFER-GESELLSCHAFT ZUR FOERDERUNG DER ANGEWAN |
|
A4 | Supplementary search report drawn up and despatched |
Effective date: 20090220 |
|
17Q | First examination report despatched |
Effective date: 20090630 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED |
|
18R | Application refused |
Effective date: 20100922 |