WO2013111119A1 - Simulating interaction with a three-dimensional environment - Google Patents

Simulating interaction with a three-dimensional environment

Info

Publication number
WO2013111119A1
WO2013111119A1 (application PCT/IB2013/050702)
Authority
WO
WIPO (PCT)
Prior art keywords
user
location
display
computer
responsively
Application number
PCT/IB2013/050702
Other languages
French (fr)
Inventor
Saar Wilf
Original Assignee
Saar Wilf
Application filed by Saar Wilf
Publication of WO2013111119A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/213Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/52Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525Changing parameters of virtual cameras
    • A63F13/5255Changing parameters of virtual cameras according to dedicated instructions from a player, e.g. using a secondary joystick to rotate the camera around a player's character
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80Special adaptations for executing a specific game genre or game mode
    • A63F13/837Shooting of targets
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/25Output arrangements for video game devices
    • A63F13/26Output arrangements for video game devices having at least one additional display device, e.g. on the game controller or outside a game booth
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/25Output arrangements for video game devices
    • A63F13/28Output arrangements for video game devices responding to control signals received from the game device for affecting ambient conditions, e.g. for vibrating players' seats, activating scent dispensers or affecting temperature or light
    • A63F13/285Generating tactile feedback signals via the game input device, e.g. force feedback
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04805Virtual magnifying lens, i.e. window or frame movable on top of displayed information to enlarge it for better reading or selection

Definitions

  • In further embodiments, a computer software product is provided whose program instructions cause the computer to track a 3D eye location of an eye of a user of the computer and a 3D controller location of a controller held by the user, and, responsively to a proximity between the 3D eye location and the 3D controller location, to enlarge at least a part of an image viewable on the display.
  • Additionally or alternatively, the instructions may cause the computer to track a 3D display location of an auxiliary display device held by the user and to project, responsively to the 3D display location and the user location, an additional view of the 3D environment for display on the auxiliary display device.
  • Further additionally or alternatively, the instructions may cause the computer to detect, responsively to movement of the 3D user location, a collision between the user and one of the 3D object locations in the 3D environment, and to provide an indication to the user of the detected collision.
  • Fig. 1A is a schematic, pictorial illustration of an interactive gaming system, in accordance with an embodiment of the present invention
  • Fig. 1B is a schematic top view showing geometrical relations between certain elements of the system of Fig. 1A, in accordance with an embodiment of the present invention
  • Fig. 2 is a schematic side view of a game controller, in accordance with an embodiment of the present invention.
  • Fig. 3 is a schematic, pictorial illustration of an interactive gaming system, showing a zoom functionality of the system in accordance with an embodiment of the present invention
  • Fig. 4 is a schematic, pictorial illustration of an interactive gaming system with an auxiliary, movable display, in accordance with an embodiment of the present invention
  • Fig. 5 is a schematic, pictorial illustration of an interactive gaming system, showing user feedback functionality of the system in accordance with an embodiment of the present invention.
  • Fig. 6 is a flow chart that schematically illustrates a method of operating an interactive gaming system, in accordance with an embodiment of the present invention.
  • Embodiments of the present invention that are described hereinbelow provide methods, apparatus and software for enhancing the realism and entertainment value of simulations of "virtual worlds" by combining view-dependent rendering with object-tracking techniques.
  • A computer calculates the correct perspective projection, according to the tracked location of the user, and displays it on the screen.
  • The computer also tracks the location and orientation of a controller that is held by the user, and is thus able to simulate a trajectory extending from the controller into the virtual world, such as when shooting projectiles or casting beams into the virtual world. This arrangement gives the user the ability to aim correctly at objects in the virtual world as though it were an extension of the physical world of the user.
  • In the disclosed embodiments, a 3D environment is defined in a computer.
  • The 3D environment includes one or more graphical objects having respective 3D object locations in this environment.
  • The computer tracks a 3D user location of a user of the computer, as explained above, and based on the user location, projects a view of the 3D environment onto a 2D display viewed by the user. To create this view, the 3D object locations are projected onto corresponding 2D object locations in a plane of the display.
  • The computer also tracks 3D location and orientation coordinates of a controller held by the user, such as a toy gun or other device that the user can point in order to define a trajectory, such as a beam of virtual projectiles or radiation, into the 3D environment.
  • Based on these location and orientation coordinates of the controller, the computer is able to identify the user-defined trajectory. It can then detect and indicate to the user that the trajectory has intercepted the actual 3D location of a graphical object in the 3D environment. Depending on the user's view angle and the location and orientation of the controller - which may be different from the user's eye location and view angle - this trajectory may not intercept the corresponding 2D location of this graphical object on the display.
  • The computer thus integrates the 3D environment of the graphical objects with the actual physical environment of the user, so that the user aims realistically at the 3D locations of the graphical objects, rather than just their projections on the screen. In this manner, the user may even aim at objects that he or she cannot currently see, and the trajectory from the controller to the target object may not even intercept the display itself.
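As an illustration of this distinction, the following minimal Python sketch (not part of the original disclosure; the function names and the simple spherical hit test are illustrative assumptions) tests whether the controller's ray intercepts an object's 3D location in room/virtual coordinates. The test never consults the object's 2D projection, so a hit can be scored even when the ray never crosses the plane of the display.

```python
import numpy as np

def ray_hits_object(controller_pos, controller_dir, object_pos, hit_radius=0.25):
    """Return True if the ray from the controller intercepts the object's
    3D location to within hit_radius (all coordinates in the same 3D frame)."""
    d = np.asarray(controller_dir, dtype=float)
    d /= np.linalg.norm(d)                       # normalize the aiming direction
    to_obj = np.asarray(object_pos, dtype=float) - np.asarray(controller_pos, dtype=float)
    t = float(np.dot(to_obj, d))                 # distance along the ray to the closest point
    if t < 0.0:                                  # object lies behind the controller
        return False
    closest = np.asarray(controller_pos, dtype=float) + t * d
    miss_distance = float(np.linalg.norm(np.asarray(object_pos, dtype=float) - closest))
    return miss_distance <= hit_radius
```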
  • In some embodiments, the computer tracks the position of one or both of the user's eyes, and generates a view of the 3D environment that properly corresponds to the position of the eye or eyes.
  • The computer may also project multiple different, respective views of the 3D environment onto multiple different displays, depending on the positions of the displays relative to the user. If the positions of both the left and right eyes of the user are detected, and a stereoscopic vision device is used, then a stereoscopic pair of views may be projected.
  • Additionally or alternatively, the computer may sense the height of the user, and may then rotate the angle of the view based on the height, so as to enhance the visibility of the graphical objects on the display for the particular user.
  • The computer may also detect the proximity of the position of the eye to the controller (for example, sensing when the user holds a "gun sight" of a toy weapon up to his or her eye), and may then enlarge the image of a target object on the display when the eye is in the appropriate location near the controller. This enlarged image may occupy the entire display or just a part of the display, to simulate a telescopic sight, for example.
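A minimal sketch of how this eye-to-sight proximity test might be implemented follows; the distance thresholds and the use of hysteresis are illustrative assumptions, not values taken from the disclosure.

```python
import numpy as np

def sight_zoom_active(eye_pos, sight_pos, currently_active=False,
                      engage_dist=0.10, release_dist=0.15):
    """Decide whether the simulated telescopic sight should be engaged.

    Two thresholds (hysteresis) keep small tracking jitter around a single
    cutoff from making the magnified view flicker on and off. Distances in metres.
    """
    d = float(np.linalg.norm(np.asarray(eye_pos, dtype=float) -
                             np.asarray(sight_pos, dtype=float)))
    if currently_active:
        return d < release_dist     # stay zoomed until the eye moves clearly away
    return d < engage_dist          # engage only when the eye is brought close
```

When the zoom is active, the renderer can narrow the field of view (or render a magnified inset) around the point at which the controller is aimed, in keeping with the orientation-based selection described above.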
  • In some embodiments, the user may hold an auxiliary display device, such as a smart phone or tablet computer.
  • The main computer may track the location of this auxiliary display device, and may then render an additional projected view of the 3D environment, based on the location of the device, for display on the device. In this way, the user can use and move the auxiliary display device in order to get an additional view into the virtual world of the 3D graphical objects.
  • The graphical objects in the 3D environment may be located in front of the display plane as well as behind it, and in some cases may continue through the display plane.
  • The computer may track movement of the user's 3D location, and on this basis may detect a "collision" between the user and one of the 3D object locations in the 3D environment. When such a collision occurs, the computer typically gives the user an indication, such as an audible or visible indication or haptic feedback, for example via the controller that the user is holding.
  • Thus, the embodiments of the present invention that are described herein provide the user with a more realistic and lifelike interaction with the virtual world of a 3D environment.
  • Virtual World - A computer simulation which includes, at the minimum, objects with shape existing in a "virtual" 3D coordinate space, known as the “virtual world space” or “3D environment.”
  • The simulation is usually done through software code (such as a game), and can be as simple as a single schematic room or as complex as an entire online virtual world.
  • Coordinate Space - In strict mathematical terms, an "n-dimensional vector space over a field." In the present context, it means a 2D or 3D vector space used to describe either location in a 3D space or position on a projection panel (such as a screen, TV or head-mounted display).
  • The present embodiments deal with three central coordinate spaces - virtual world space (the 3D environment), room space (the physical environment, also 3D) and display space (the view of the virtual world that is projected onto a 2D display panel). Locations within these coordinate spaces are defined as sets of numbers - X, Y, Z for position in a three-dimensional space, and X and Y for a two-dimensional one - wherein the coordinates are measured relative to a predefined origin in the physical or virtual environment.
  • Transformation - The act of converting coordinates from one coordinate space to another. In the present context, the conversion is between the three aforementioned coordinate spaces - virtual, room and display - so that there are three important transformations: room space to virtual space, virtual space to room space, and virtual space to display space. When such a transformation involves the "loss" of a dimension - going from three-dimensional space to two-dimensional space - it is also called "projection." Accordingly, embodiments of the present invention involve transforming coordinates from virtual to room and from room to virtual, and projecting from virtual to display.
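To make these transformations concrete, here is a short Python sketch (illustrative only; the scale, rotation and offset would in practice come from calibration) of a homogeneous 4x4 transform between room space and virtual world space, with the matrix inverse providing the opposite conversion.

```python
import numpy as np

def make_room_to_virtual(scale=1.0, yaw_rad=0.0, offset=(0.0, 0.0, 0.0)):
    """Build a 4x4 homogeneous transform from room space to virtual world space,
    assuming the virtual world is a scaled, yaw-rotated, translated copy of the room frame."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    rot = np.array([[ c, 0.0,   s],
                    [0.0, 1.0, 0.0],
                    [-s, 0.0,   c]])             # rotation about the vertical (Y) axis
    m = np.eye(4)
    m[:3, :3] = scale * rot
    m[:3, 3] = offset
    return m

def transform_point(matrix4, point3):
    """Apply a 4x4 homogeneous transform to a 3D point."""
    p = np.append(np.asarray(point3, dtype=float), 1.0)
    q = matrix4 @ p
    return q[:3] / q[3]

room_to_virtual = make_room_to_virtual(yaw_rad=0.1, offset=(0.0, 0.0, 2.0))
virtual_to_room = np.linalg.inv(room_to_virtual)   # the reverse transformation
```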
  • Projection - Mapping of 3D points onto a 2D plane. This technique is used in computer graphics to display virtual environments or objects on a 2D viewing plane (monitor, screen, TV). More specifically, we refer here to a "perspective projection," which is a projection of a 3D world onto a 2D plane that simulates the way the world is seen from a certain viewpoint.
  • Perspective Projection - When the human eye views objects at a distance, they appear to be smaller than they are. This phenomenon is known as "perspective," and it is crucial to our perception of depth. Therefore, simulations of virtual worlds, especially those of the "first person" type (in which the virtual world is perceived through the eyes of an "avatar"), use this type of projection to simulate what the player would see.
  • The center of projection typically corresponds to the position of the "avatar" in virtual world space, so that the perspective projection presents the world as it would appear to a person standing precisely in that location (although other centers of projection, such as a point behind the avatar, are sometimes used).
  • The projection is usually implemented using a 4 x 4 matrix known as a projection matrix, which is constructed using the desired optical characteristics. Multiplying a 3D vector in virtual world space by the projection matrix gives the projected vector (in homogeneous coordinates).
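In the standard homogeneous-coordinate formulation (shown here for clarity rather than quoted from the patent), a virtual-world point (x_w, y_w, z_w) is multiplied by the projection matrix P and then divided by the resulting w component (the perspective divide) to obtain its position on the viewing plane:

```latex
\begin{pmatrix} x_c \\ y_c \\ z_c \\ w_c \end{pmatrix}
  = P \begin{pmatrix} x_w \\ y_w \\ z_w \\ 1 \end{pmatrix},
\qquad
(X, Y) = \left( \frac{x_c}{w_c}, \; \frac{y_c}{w_c} \right).
```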
  • Frustum - A term used in computer graphics to describe the 3D region viewable on screen.
  • A perspective projection definition is interchangeable with a frustum definition (i.e., a specific projection defines a specific frustum and vice versa), and the frustum may be symmetrical or asymmetrical. Further details of such projections, in the specific context of an embodiment of the present invention, are described hereinbelow with reference to Fig. 1B.
  • VDR (View-Dependent Rendering) - A rendering approach in which the projection is computed and rendered according to the location of the viewer in relation to the display panel (i.e., not necessarily front and center).
  • Tracking Sensor - A device capable of tracking locations and/or orientations of real-world objects in room space (or in another coordinate system transformable to room space).
  • A tracking sensor can track one or more objects, and may have one or more "degrees of freedom," or sensing coordinates. Sensors used in embodiments of the present invention typically track either three degrees of freedom (x, y, z, also referred to as location) or six degrees of freedom (location and orientation). There are many types of tracking sensors and tracking technologies, such as optical, magnetic and inertial types.
  • The sensor may be located in or on the physical object that it is tracking, or it may alternatively comprise a remote device, which captures images or otherwise receives signals indicative of the object coordinates.
  • Controller - An object that the user can hold in the real world, and which can be tracked by a suitable tracking sensor.
  • The embodiments described below use a game controller, i.e., a controller having a form and functionality suitable for use in a computer game, specifically a toy gun in these embodiments.
  • The controller can be passive (simply a representation of an object, such as a mock gun) or active (the mock gun can also include the ability to communicate commands to a computer, for example via buttons, joysticks or other controls).
  • The tracking sensor can be embedded within the controller (for example, an accelerometer), external to it (for example, a camera), or a combination of both (a camera and an accelerometer working together, for example).
  • It is desirable that the game controller have an obvious "pointer," since it is to be used to simulate shooting of projectiles or beams from the room into the virtual world.
  • Examples of such game controllers may include not only guns, but also a bow, a crossbow, or a magician's wand.
  • Calibration - The process of determining the relative position and orientation of sensor devices and their internal coordinate frames, as well as of display panels.
  • This calibration may be carried out either in relation to fixed room coordinates or in terms of relative coordinates, regardless of the absolute physical locations.
  • For example, the view presented on a display may be calibrated interactively by detecting the 3D location and orientation coordinates of a controller while the user points the controller toward one or more predefined target locations on the display.
  • For this purpose, the computer may present targets on the display and prompt the user to aim the controller at the targets.
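As one hedged illustration of such a calibration step, the sketch below estimates a constant yaw offset between the orientation reported by the controller's sensor and the true aiming direction, assuming the controller and target positions are already known in room coordinates and that both yaw values use the same angular convention; all names and conventions are assumptions, not details taken from the patent.

```python
import numpy as np

def estimate_yaw_offset(samples):
    """Estimate a constant yaw offset of the controller's orientation sensor.

    samples: list of (controller_pos, target_pos, reported_yaw_rad) tuples
    collected while the user aims at known target locations. Positions are
    3-vectors in room coordinates (x right, y up, z toward the user); yaw is
    measured in the horizontal x-z plane, with 0 pointing toward -z.
    """
    offsets = []
    for controller_pos, target_pos, reported_yaw in samples:
        v = np.asarray(target_pos, dtype=float) - np.asarray(controller_pos, dtype=float)
        true_yaw = np.arctan2(v[0], -v[2])              # yaw of the true aiming direction
        diff = (true_yaw - reported_yaw + np.pi) % (2.0 * np.pi) - np.pi   # wrap to (-pi, pi]
        offsets.append(diff)
    return float(np.mean(offsets))                       # assumes offsets are small and clustered
```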
  • Fig. 1A is a schematic, pictorial illustration of an interactive gaming system 20, in accordance with an embodiment of the present invention.
  • A user 22 views a game scenario that is presented by a computer 24 on a display screen 26.
  • A sensor 28 tracks the location of the user - more specifically, an eye 30 of the user - and passes this information to computer 24.
  • The computer then renders the 2D view of the virtual 3D environment of the game accordingly on screen 26, using view-dependent rendering, as explained above.
  • A 3D graphical object 36, representing an opponent in the game, is projected by computer 24 onto a corresponding 2D location 32 on screen 26, based on the location of the object in the 3D environment and the location of eye 30.
  • User 22 holds a controller 34, having the form of a gun, and aims the gun to fire projectiles, such as simulated bullets, along a trajectory 38 toward object 36.
  • In other words, the user aims at the simulated 3D location of the object in the 3D environment of the game, rather than at 2D location 32 of the projection of the object on screen 26.
  • As a result, trajectory 38 may not intercept the 2D on-screen location at all.
  • The user sees screen 26 as a window into the virtual world of the game, and shoots into the virtual world as though it were simply an extension of the real, physical world in which the user is located.
  • Fig. 1B is a schematic top view of system 20 showing geometrical relations between certain elements of the system, in accordance with an embodiment of the present invention. Specifically, this figure illustrates the asymmetrical perspective projection that is used in rendering the image on display 26 and the relation between this projection and trajectory 38 that is defined by controller 34. For the sake of simplicity, Fig. 1B shows only the X-Z plane (wherein X is the horizontal coordinate along screen 26 and Z is the distance coordinate perpendicular to the screen), but the projections in all other planes follow similar principles.
  • The location of eye 30 relative to screen 26 defines an asymmetric frustum 37.
  • The left and right edges of the screen are at points (l,n) and (r,n), respectively.
  • The corresponding (X,Y) location on the screen can be found, in homogeneous coordinates, by multiplying the 3D point coordinates by the projection matrix. In the matrix, f and n are the distances from the projection plane to the "far" and "near" planes of the frustum, respectively, and l, r, b and t are the left, right, bottom and top coordinates of the viewing plane, respectively.
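The matrix itself is not reproduced in this text extract. For reference, a standard asymmetric (off-axis) perspective projection matrix expressed in terms of these parameters - the OpenGL glFrustum convention, in which l, r, b and t are measured on the near plane and the viewer looks along -Z - is:

```latex
P =
\begin{pmatrix}
\frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\
0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\
0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \\
0 & 0 & -1 & 0
\end{pmatrix}
```

The exact matrix used in the patent's figure may differ in sign or axis conventions, but any matrix of this family implements the asymmetric frustum described above.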
  • As shown in Fig. 1B, controller 34 is not aligned with eye 30, but is rather held by user 22 at some distance from the eye. Consequently, trajectory 38 from controller 34 to object 36 does not intercept location 32 (although in some cases the trajectory may intercept this location, depending on the specific 3D geometry at any given moment).
  • If trajectory 38 intercepts the 3D location of object 36, computer 24 will indicate that the user has "hit" object 36 (by causing the on-screen opponent to fall over or explode, for example).
  • Otherwise, the computer will register a miss. If object 36 is near the edge of frustum 37, trajectory 38 may not intercept screen 26 at all.
  • In some cases, computer 24 may determine that trajectory 38 has intercepted a 3D object that does not appear on screen 26 at all, because it is hidden in the user's current view (for example, if the user hides behind a virtual "wall" and shoots around the edge).
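A minimal Python sketch of how the off-axis frustum extents could be derived from the tracked eye position follows; it assumes the display lies in the room-space plane Z = 0 with known edge coordinates, and all names are illustrative.

```python
import numpy as np

def off_axis_frustum(eye_pos, screen_left, screen_right, screen_bottom, screen_top,
                     near=0.1, far=100.0):
    """Compute (l, r, b, t, n, f) for an off-axis (asymmetric) frustum.

    The display is assumed to lie in the plane Z = 0 of room space, with edges at
    X = screen_left/right and Y = screen_bottom/top (metres); the viewer's eye is
    at eye_pos = (ex, ey, ez) with ez > 0, looking toward the screen. The screen
    extents are rescaled onto the near plane, as required by a glFrustum-style matrix.
    """
    ex, ey, ez = (float(v) for v in eye_pos)
    scale = near / ez                  # similar triangles: screen plane -> near plane
    l = (screen_left - ex) * scale
    r = (screen_right - ex) * scale
    b = (screen_bottom - ey) * scale
    t = (screen_top - ey) * scale
    return l, r, b, t, near, far
```

Feeding these values into a projection matrix of the form shown above yields the view-dependent rendering of frustum 37 as the eye moves.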
  • Computer 24 in this embodiment comprises a general-purpose processor, with input/output (I/O) connections to display 26, sensor 28, and other elements of system 20, as appropriate.
  • Computer 24 operates under the control of suitable software code, which typically includes a game application, which generates the interactive game scenario, as well as causing the computer to perform the sensing, view-dependent rendering, and user interaction functions that are described herein.
  • This software may be downloaded to computer 24 in electronic form, over a network, for example.
  • Alternatively or additionally, the software may be stored on tangible, generally non-transitory, computer-readable media, such as optical, magnetic or electronic memory media.
  • The term "computer" is used broadly in the context of the present description and in the claims to refer to any sort of processing device with the above capabilities, including both game consoles (such as Xbox® or PlayStation®) and general-purpose personal computers, whether stationary or mobile.
  • Sensor 28 may comprise any device that is capable of tracking the location of user 22, and specifically of eye 30.
  • Sensor 28 may also track the position and orientation of controller 34 and possibly other system elements (such as the mobile display device that is shown in Fig. 4).
  • For example, sensor 28 may comprise one or more cameras, such as a video camera that may be used with face-tracking capabilities of computer 24, a depth camera with body-tracking capabilities, or a camera with marker-tracking capabilities (such as infrared LEDs worn by user 22).
  • Alternatively or additionally, system 20 may comprise sensors worn by user 22 and/or fixed to or embedded in controller 34, such as accelerometers, gyroscopes and/or magnetometers, with suitable links (such as wireless links) to convey location, orientation and/or movement information to computer 24.
  • Fig. 2 is a schematic side view of controller 34, showing functional details of the controller in accordance with an embodiment of the present invention.
  • Controller 34 has the form of a toy gun, with a barrel that can be pointed to generate trajectory 38 and with a user control 40 in the form of a trigger.
  • User 22 may use a sight 42 on controller 34 to align the trajectory, with possible effects on the image projection on display 26, as shown below in Fig. 3.
  • The controller may alternatively or additionally comprise any other suitable sort of user interface device, such as a joystick, buttons or a touch panel.
  • One or more sensors 44, such as inertial sensors, embedded in controller 34 measure the location and orientation of the gun barrel and convey this information via a communication interface 46 to computer 24.
  • Alternatively or additionally, controller 34 may comprise one or more emitters, such as infrared or radio-frequency (RF) emitters, whose locations are detected by suitable sensors in the room, such as sensor 28.
  • Controller 34 may comprise any one of a variety of commercially available game controllers with appropriate sensing capabilities, such as the Sixense™ or PlayStation Move controller.
  • Controller 34 may also comprise a feedback device 48, which may be actuated remotely via communication interface 46.
  • Device 48 may provide an audible or visible indication to user 22, or may provide haptic feedback by vibrating, for example. This capability may be used in notifying the user of particular events, such as collision with a virtual 3D object (as illustrated in Fig. 5).
  • Screen 26 may comprise a stereoscopic vision device, which shows a different image to each of the player's eyes (for example, the Samsung 3D TV model UN55C7000WF).
  • In this case, each eye will see a different 2D projection of the 3D environment, wherein each projection is adjusted for that eye's position relative to the display.
  • Alternatively, computer 24 may choose one of the user's eyes to be the "dominant eye," and use the position of this eye in view-dependent rendering. The choice may be made by asking the user for his or her preference, or it may be made automatically by the computer.
  • One method of automatically identifying the dominant eye is by tracking which eye the user uses to aim controller 34, which should be the eye that is closer to the weapon when he or she shoots.
  • Computer 24 may modify the view presented on screen 26 continually, depending on changes in the location of user 22.
  • The computer can also sense that the user has moved his or her head to hide behind objects in the 3D environment (including both real and simulated, virtual objects) and thus avoid being seen by the simulated enemies.
  • The user may also move his or her head to avoid "dangerous" objects or projectiles (such as an enemy's weapon or falling ceiling tiles), as well as to get a different view of the environment, such as by peeking around a corner or window, or by moving closer to the screen to examine objects up close.
  • In some cases, the user may not be positioned in an ideal location (relative to screen 26) to best view the most relevant content in the 3D environment. For example, if the viewer is tall and views the display from above, his view would be mostly of the ground in the 3D environment. To avoid this sort of situation, computer 24 may allow the user to rotate the 3D environment to compensate for his position. In the example above, the user would rotate the 3D environment downwards, so that his view will be similar to that of a person standing directly in front of the display. As another example, the computer may automatically detect the user's height and adjust the view accordingly. The height may be estimated, for example, as the maximum height reported by sensor 28 for a few seconds without significant change (to filter out any jumps by the user).
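A short sketch of this height-estimation heuristic is given below; the window length and tolerance are illustrative guesses rather than values from the disclosure.

```python
def estimate_user_height(samples, window_s=3.0, tolerance_m=0.05):
    """Estimate standing height from time-ordered (timestamp_s, head_height_m) samples.

    Returns the maximum height that remained roughly constant for at least
    window_s seconds, which filters out brief jumps; None if no stable stretch exists.
    """
    best = None
    for i, (t_i, h_i) in enumerate(samples):
        j = i
        # Walk back while earlier samples stay within tolerance of the current height.
        while j > 0 and abs(samples[j - 1][1] - h_i) <= tolerance_m:
            j -= 1
        if t_i - samples[j][0] >= window_s:
            best = h_i if best is None else max(best, h_i)
    return best
```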
  • Fig. 3 is a schematic, pictorial illustration of system 20, showing a zoom functionality of the system in accordance with an embodiment of the present invention.
  • User 22 has brought his eye 30 into proximity with controller 34, and specifically with sight 42.
  • Computer 24 detects this proximity, and in response presents an enlarged image 50 of object 36 on screen 26, as though the user were looking through an actual telescopic sight.
  • System 20 may similarly simulate other types of specialized gun sights, as well as other optical devices, which may or may not have an actual physical counterpart installed on controller 34.
  • For example, computer 24 may present images in "night-vision mode."
  • Fig. 4 is a schematic, pictorial illustration of system 20 with an auxiliary, movable display 54, in accordance with an embodiment of the present invention.
  • Display 54 in this case is pictured as a tablet computer, but the principles of this embodiment may be applied using substantially any sort of movable display device, such as a smart phone or a special-purpose display screen mounted on controller 34.
  • Computer 24 may also drive two or more stationary displays, mounted side by side or even on different walls, to allow the viewer to see the virtual 3D environment through multiple, different "windows."
  • Computer 24 tracks the position of display 54 in the room space, either by analyzing images provided by sensor 28, or receiving position signals from display 54 itself, or by any other suitable means.
  • For example, user 22 can hold a mobile phone that is also tracked by the same camera that tracks the user's head.
  • Display 54 thus provides an extra "window" into the virtual world, which can be moved dynamically by user 22.
  • Computer 24 renders a view on display 54 of another part 56 of the virtual 3D environment in the correct perspective projection and transmits an image of the view to display 54 for presentation to user 22.
  • Alternatively, display 54 may contain a suitable processor and software code that does this rendering on its own, according to information transmitted or shared by computer 24.
  • Mobile display 54 can also serve to simulate various real-life optical devices such as binoculars or a magnifying glass (with a suitable zoom function) or night vision goggles.
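One possible way to drive such a simulation is to map the eye-to-device distance to a magnification factor, as in the sketch below; the distance thresholds and maximum zoom are illustrative assumptions rather than values from the disclosure.

```python
import numpy as np

def handheld_zoom_factor(eye_pos, device_pos, min_dist=0.15, max_dist=0.6, max_zoom=8.0):
    """Map the eye-to-device distance to a magnification factor for display 54.

    Bringing the handheld display close to the eye (as with binoculars) increases
    magnification; holding it at arm's length gives a 1:1 view. Distances in metres.
    """
    d = float(np.linalg.norm(np.asarray(device_pos, dtype=float) -
                             np.asarray(eye_pos, dtype=float)))
    if d >= max_dist:
        return 1.0
    if d <= min_dist:
        return max_zoom
    frac = (max_dist - d) / (max_dist - min_dist)   # 0 at arm's length, 1 near the eye
    return 1.0 + frac * (max_zoom - 1.0)
```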
  • Fig. 5 is a schematic, pictorial illustration of system 20, showing user feedback functionality of the system in accordance with an embodiment of the present invention. This functionality makes use of feedback device 48 in controller 34, as described above. Computer 24 actuates this feedback capability to provide the user with immediate feedback concerning the results of bodily and/or controller movements in the real world and their effect upon the virtual world.
  • The simulated 3D environment in this example includes a wall 60, which extends from virtual space behind display screen 26 into the actual room space where user 22 is standing.
  • Movement of user 22 in the room may therefore cause his or her virtual counterpart (the "avatar") to collide with wall 60.
  • When such a collision occurs, computer 24 may immediately provide feedback, such as haptic feedback conveyed by vibrating feedback device 48 in controller 34.
  • Additionally or alternatively, the computer may generate audio feedback (such as a collision noise) or visual feedback (such as a brief whitening of screen 26 or other visual effects to simulate a hit).
  • The computer may also shift the virtual world so that the user's avatar does not actually walk through the virtual wall.
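A hedged sketch of the collision test and feedback dispatch follows; the wall is modelled as a plane, and the controller and screen objects with their vibrate/flash methods are placeholders for whatever feedback interfaces the system actually exposes, not an API defined by the patent.

```python
import numpy as np

def avatar_hits_wall(avatar_pos, wall_point, wall_normal, radius=0.3):
    """Return True if the avatar is within `radius` of (or has crossed) a virtual
    wall, modelled as a plane through wall_point with wall_normal pointing into
    the free space, in virtual-world coordinates."""
    n = np.asarray(wall_normal, dtype=float)
    n /= np.linalg.norm(n)
    signed_dist = float(np.dot(np.asarray(avatar_pos, dtype=float) -
                               np.asarray(wall_point, dtype=float), n))
    return signed_dist < radius

def on_collision(controller, screen):
    """Dispatch the kinds of feedback described in the text (illustrative API)."""
    controller.vibrate(duration_s=0.2)    # haptic feedback, e.g. via feedback device 48
    screen.flash_white(duration_s=0.1)    # brief visual cue of the impact
```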
  • Fig. 6 is a flow chart that schematically illustrates a method of operating system 20, in accordance with an embodiment of the present invention. Although the method is described, for the sake of clarity and convenience, with reference to the elements of the system described above, the elements of the method may alternatively be implemented in substantially any system having the appropriate display, sensing and control functions, as outlined above.
  • Upon initiation of the method, computer 24 detects the starting positions of user 22 (and specifically of eye 30) and controller 34, at an initialization step 70. The computer transforms these real-world positions into corresponding coordinates in the virtual 3D environment. The method then proceeds iteratively, repeatedly tracking and updating the user's eye location, at an eye update step 72, and the location and orientation of controller 34, at a controller update step 74. The computer transforms these positions and orientations into virtual world coordinates and moves the virtual counterpart of the user in the virtual world accordingly, at a coordinate conversion step 76.
  • Computer 24 checks whether any contradiction has occurred between movement in the real world and movement in the virtual world, at a feedback checking step 78. For example, the computer may check whether the virtual counterpart of the user has collided with a virtual 3D object or fallen into a pit. If so, computer 24 invokes the appropriate action to provide feedback to the user, at a feedback invocation step 80.
  • Computer 24 then updates the image appearing on screen 26, at an image update step 82.
  • The new user position in room space, in conjunction with the counterpart (avatar) position in virtual space, is used to generate an image on display screen 26 using view-dependent rendering, so that the virtual world appears to the user at the correct viewing angle and distance, as though the screen were a window into the virtual world, as explained above.
  • At this step, the computer may also evaluate the relative proximity between the user's eye and controller 34 to determine whether the user is attempting to access a virtual optical device (such as a simulated telescopic sight or night-vision device mounted on the controller). If so, the computer will modify the display accordingly. If user 22 is holding a mobile display 54, computer 24 also evaluates its position and updates the image presented on display 54 accordingly.
  • Computer 24 also evaluates whether user 22 has taken any action with respect to the virtual world, such as firing a projectile along trajectory 38, at an action checking step 84. If so, the computer calculates the result, for example to determine where the projectile has actually struck in the virtual world, at a result computation step 86.
  • In addition, the computer runs suitable simulation logic to simulate the interaction between the user, the controller and other components of the virtual world, based on their newly calculated locations, as well as on user actions, and computes the results at a status update step 88. The results of the interactions are presented to the user on screen 26, and possibly via feedback device 48 in controller 34.
  • After updating the game status, computer 24 checks whether the game has ended, at a completion step 90. If not, the method returns to step 72, continuing until the game is over.
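The flow of Fig. 6 can be summarized in a skeleton loop such as the sketch below; the sensor, controller, renderer and world objects and their methods are placeholders for the tracking, display and simulation components described above, not an API defined by the patent.

```python
def room_to_virtual(room_point):
    # Placeholder for the room-space to virtual-space transform discussed earlier.
    return room_point

def run_game_loop(sensor, controller, renderer, world):
    """Skeleton of the interaction loop of Fig. 6 (steps 70-90)."""
    eye_pos = sensor.detect_eye_position()             # step 70: initialization
    world.place_avatar(room_to_virtual(eye_pos))

    while not world.game_over():                       # step 90: check for completion
        eye_pos = sensor.detect_eye_position()         # step 72: update eye location
        ctrl_pose = controller.read_pose()             # step 74: update controller pose
        world.move_avatar(room_to_virtual(eye_pos))    # step 76: convert coordinates

        if world.avatar_collides():                    # step 78: real/virtual contradiction?
            controller.give_feedback()                 # step 80: haptic/audio/visual cue

        renderer.render_view(eye_pos, world)           # step 82: view-dependent rendering

        if controller.trigger_pulled():                # step 84: user action?
            world.fire_projectile(ctrl_pose)           # step 86: compute where it strikes

        world.update_simulation()                      # step 88: run the simulation logic
```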

Abstract

A method for computer interaction includes defining in a computer (24) a three-dimensional (3D) environment containing one or more graphical objects (36) having respective 3D object locations. A 3D user location of a user (22) of the computer is tracked. Responsively to the 3D user location, a view of the 3D environment is projected onto a two-dimensional (2D) display (26) viewed by the user. Location and orientation coordinates of a controller (34) held by the user are also tracked, so as to define a trajectory (38) directed by the user from the controller into the 3D environment. The computer detects and indicates to the user that the trajectory has intercepted a 3D location (32) of a graphical object in the 3D environment while the trajectory does not intercept a corresponding 2D location of the graphical object on the display.

Description

SIMULATING INTERACTION WITH A THREE-DIMENSIONAL ENVIRONMENT
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of U.S. Provisional Patent Application 61/591,821, filed January 27, 2012, which is incorporated herein by reference.
FIELD OF THE INVENTION
The present invention relates generally to interactive computer display systems, and particularly to methods and apparatus for interactive simulation of three-dimensional (3D) environments.
BACKGROUND
Simulation of three-dimensional (3D) environments is a significant domain in computer graphics, used for a variety of purposes, such as simulation and training, architectural imaging and interactive games. It is usually desirable to immerse the user as deeply as possible within these simulations, through enhanced realism. In other words, the images displayed should be as close as possible to those viewed in real life, as should the reaction of the virtual world. A wide variety of technologies have been developed (and continue to be improved) in order to achieve these goals.
Within the field of computer gaming, one particular genre of games has been a driving force in the advancement of these technologies (which can also be applied to other uses): the genre known as "First Person Shooter." In these fast-paced simulative games, the player views an interactive 3D "virtual world" through a 2D display panel, and controls the actions of his or her virtual equivalent by means of various human interface devices, such as a keyboard, mouse or joystick. In recent years, some games have begun to allow players to use motion-sensing controllers to control some aspects of their avatar's behavior. For example, a controller shaped like a gun, fitted with a motion sensing unit, allows the user to aim at objects appearing on screen by manipulating the controller's orientation and position. Devices of this sort are described, for instance, in U.S. Patent Application Publications 2010/0267454 and 2011/0092290, whose disclosures are incorporated herein by reference.
One of the challenges of realistically representing the 3D virtual world on the 2D display is the need to generate the correct perspective projection from 3D space to 2D. Some interactive systems use View-Dependent Rendering (VDR), in which the projection is computed and rendered according to the location of the viewer in relation to the display panel (i.e., not necessarily front and center). Some of the problems of VDR and their solutions are surveyed by Slotsbo, in "3D Interactive and View Dependent Stereo Rendering" (Technical University of Denmark, IMM-THESIS:ISSN 1601-233X, 2004), which is incorporated herein by reference. Slotsbo describes an implementation that can produce a view-dependent 2D stereo projection of 3D data onto a screen, which enables moving parallax using viewer tracking. It is possible to interact with the 3D data using a tracked pointing device.
SUMMARY
Embodiments of the present invention provide improved methods, systems and software for interactive display and manipulation of 3D image data.
There is therefore provided, in accordance with an embodiment of the present invention, a method for computer interaction, which includes defining in a computer a three-dimensional (3D) environment containing one or more graphical objects having respective 3D object locations. A 3D user location of a user of the computer is tracked. Responsively to the 3D user location, a view of the 3D environment is projected onto a two-dimensional (2D) display viewed by the user, wherein the 3D object locations are projected onto corresponding 2D object locations in a plane of the display. 3D location and orientation coordinates of a controller held by the user are tracked, so as to define a trajectory directed by the user from the controller into the 3D environment. A processor detects and indicates to the user that the trajectory has intercepted a 3D location of a graphical object in the 3D environment while the trajectory does not intercept a corresponding 2D location of the graphical object on the display.
The trajectory may correspond to a path of a simulated projectile fired by the user by operating the controller and need not intercept the display.
In some embodiments, tracking the 3D user location includes identifying a position of an eye of the user, and projecting the view includes generating the view corresponding to the position of the eye. Generating the view may include detecting a proximity of the position of the eye to the 3D location coordinate of the controller, and enlarging at least a part of an image viewable on the display responsively to the proximity. Additionally or alternatively, identifying the position of the eye may include identifying respective eye positions of left and right eyes of the user, and wherein projecting the view includes projecting a stereoscopic pair of views responsively to the eye positions. In one embodiment, tracking the 3D user location includes sensing a height of the user, and projecting the view includes rotating an angle of the view responsively to the height so as to enhance a visibility of the graphical objects on the display.
In some embodiments, projecting the view includes projecting multiple different, respective views of the 3D environment onto multiple different displays, responsively to respective positions of the displays. Projecting the multiple different, respective views may include tracking a display location of an auxiliary display device held by the user, and projecting an additional view of the 3D environment onto the auxiliary display device responsively to the display location.
In a disclosed embodiment, projecting the view includes calibrating the view interactively by detecting the 3D location and orientation coordinates of the controller while the user points the controller toward one or more predefined target locations.
In another embodiment, the method includes detecting, responsively to movement of the 3D user location, a collision between the user and one of the 3D object locations in the 3D environment, and providing an indication to the user of the detected collision.
There is also provided, in accordance with an embodiment of the present invention, a method for computer interaction, which includes defining in a computer a three-dimensional (3D) environment containing one or more graphical objects having respective 3D object locations. A 3D eye location of an eye of a user of the computer is tracked. Responsively to the 3D eye location, a view of the 3D environment is projected onto a two-dimensional (2D) display viewed by the user, wherein the 3D object locations are projected onto corresponding 2D object locations in a plane of the display. A 3D controller location of a controller held by a user is tracked, and responsively to a proximity between the 3D eye location and the 3D controller location, at least a part of an image viewable on the display is enlarged.
In one embodiment, tracking the 3D controller location includes finding an orientation of the controller, and selecting the part of the image for enlargement responsively to the orientation.
In another embodiment, the controller includes an auxiliary display, and enlarging the image includes presenting the enlarged part of the image on the auxiliary display. Presenting the enlarged part of the image may include projecting an additional view of the one of the objects onto the auxiliary display responsively to the 3D controller location.
There is additionally provided, in accordance with an embodiment of the present invention, a method for computer interaction, which includes defining in a computer a three-dimensional (3D) environment containing one or more graphical objects having respective 3D object locations. A 3D user location of a user of the computer is tracked. Responsively to the 3D user location, a view of the 3D environment is projected onto a two-dimensional (2D) display viewed by the user, wherein the 3D object locations are projected onto corresponding 2D object locations in a plane of the display. A 3D display location of an auxiliary display device held by the user is tracked, and responsively to the 3D display location and the user location, an additional view of the 3D environment is projected onto the auxiliary display device.
In a disclosed embodiment, tracking the 3D user location includes identifying a position of an eye of the user, and projecting the additional view includes zooming a size of at least a part of an image appearing in the additional view responsively to a distance between the position of the eye and the 3D display location.
There is further provided, in accordance with an embodiment of the present invention, a method for computer interaction, which includes defining in a computer a three-dimensional (3D) environment containing one or more graphical objects having respective 3D object locations. A 3D user location of a user of the computer is tracked. Responsively to the 3D user location, a view of the 3D environment is projected onto a two-dimensional (2D) display viewed by the user, wherein the 3D object locations are projected onto corresponding 2D object locations in a plane of the display. Responsively to movement of the 3D user location, a collision between the user and one of the 3D object locations in the 3D environment is detected, and an indication is provided to the user of the detected collision.
The indication may include an audible or visible indication. Additionally or alternatively, the indication may include haptic feedback provided by a controller held by the user.
There is moreover provided, in accordance with an embodiment of the present invention, apparatus for computer interaction, including a display, a tracking device, which is configured to detect a 3D user location of a user of the apparatus, and a controller, which is configured to be held by the user. A processor is configured to accept a definition of a three-dimensional (3D) environment containing one or more graphical objects having respective 3D object locations, and to project, responsively to the 3D user location, a view of the 3D environment onto a two-dimensional (2D) display viewed by the user, wherein the 3D object locations are projected onto corresponding 2D object locations in a plane of the display. The processor is coupled to track 3D location and orientation coordinates of the controller held by the user, so as to define a trajectory directed by the user from the controller into the 3D environment, and to detect and indicate to the user that the trajectory has intercepted a 3D location of a graphical object in the 3D environment while the trajectory does not intercept a corresponding 2D location of the graphical object on the display.
There is furthermore provided, in accordance with an embodiment of the present invention, apparatus for computer interaction, in which the tracking device is configured to detect a 3D eye location of an eye of a user of the apparatus. The processor is coupled to track a 3D controller location of the controller held by the user, and responsively to a proximity between the 3D eye location and the 3D controller location, to enlarge at least a part of an image viewable on the display.
There is also provided, in accordance with an embodiment of the present invention, apparatus for computer interaction, including an auxiliary display device configured to be held by a user, wherein the processor is configured to track a 3D display location of the auxiliary display device held by the user, and to project, responsively to the 3D display location and the user location, an additional view of the 3D environment for display on the auxiliary display device.
There is additionally provided, in accordance with an embodiment of the present invention, apparatus for computer interaction, wherein the processor is configured to detect, responsively to movement of the 3D user location, a collision between the user and one of the 3D object locations in the 3D environment, and to provide an indication to the user of the detected collision.
There is further provided, in accordance with an embodiment of the present invention, a computer software product, including a computer-readable medium in which program instructions are stored, which instructions, when read by a computer, cause the computer to accept a definition of a three-dimensional (3D) environment containing one or more graphical objects having respective 3D object locations, to track a 3D user location of a user of the computer, and to project, responsively to the 3D user location, a view of the 3D environment onto a two-dimensional (2D) display viewed by the user, wherein the 3D object locations are projected onto corresponding 2D object locations in a plane of the display. The instructions cause the computer to track 3D location and orientation coordinates of a controller held by the user, so as to define a trajectory directed by the user from the controller into the 3D environment, and to detect and indicate to the user that the trajectory has intercepted a 3D location of a graphical object in the 3D environment while the trajectory does not intercept a corresponding 2D location of the graphical object on the display.
There is moreover provided, in accordance with an embodiment of the present invention, a computer software product, wherein the instructions cause the computer to track a 3D eye location of an eye of a user of the computer, and to track a 3D controller location of the controller held by the user, and responsively to a proximity between the 3D eye location and the 3D controller location, to enlarge at least a part of an image viewable on the display.
There is furthermore provided, in accordance with an embodiment of the present invention, a computer software product, wherein the instructions cause the computer to track a 3D display location of an auxiliary display device held by the user, and to project, responsively to the 3D display location and the user location, an additional view of the 3D environment for display on the auxiliary display device.
There is also provided, in accordance with an embodiment of the present invention, a computer software product, wherein the instructions cause the computer to detect, responsively to movement of the 3D user location, a collision between the user and one of the 3D object locations in the 3D environment, and to provide an indication to the user of the detected collision.
The present invention will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings in which:
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1A is a schematic, pictorial illustration of an interactive gaming system, in accordance with an embodiment of the present invention;
Fig. 1B is a schematic top view showing geometrical relations between certain elements of the system of Fig. 1A, in accordance with an embodiment of the present invention;
Fig. 2 is a schematic side view of a game controller, in accordance with an embodiment of the present invention;
Fig. 3 is a schematic, pictorial illustration of an interactive gaming system, showing a zoom functionality of the system in accordance with an embodiment of the present invention;
Fig. 4 is a schematic, pictorial illustration of an interactive gaming system with an auxiliary, movable display, in accordance with an embodiment of the present invention;
Fig. 5 is a schematic, pictorial illustration of an interactive gaming system, showing user feedback functionality of the system in accordance with an embodiment of the present invention; and
Fig. 6 is a flow chart that schematically illustrates a method of operating an interactive gaming system, in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION OF EMBODIMENTS
OVERVIEW
Embodiments of the present invention that are described hereinbelow provide methods, apparatus and software for enhancing the realism and entertainment value of simulations of "virtual worlds" by combining view-dependent rendering with object-tracking techniques. By tracking the location of the user's eyes (either directly or by tracking of the head or some implement worn on the head) relative to the known location of a display screen, a computer calculates the correct perspective projection and displays it on the screen. The computer also tracks the location and orientation of a controller that is held by the user, and is thus able to simulate a trajectory extending from the controller into the virtual world, such as when shooting projectiles or casting beams into the virtual world. This arrangement gives the user the ability to aim correctly at objects in the virtual world as though it were an extension of the physical world of the user.
In the embodiments that are described hereinbelow, a 3D environment is defined in a computer. The 3D environment includes one or more graphical objects having respective 3D object locations in this environment. The computer tracks a 3D user location of a user of the computer as explained above, and based on the user location, projects a view of the 3D environment onto a 2D display viewed by the user. To create this view, the 3D object locations are projected onto corresponding 2D object locations in a plane of the display. The computer also tracks 3D location and orientation coordinates of a controller held by the user, such as a toy gun or other device that the user can point in order to define a trajectory, such as a beam of virtual projectiles or radiation, into the 3D environment.
Based on these location and orientation coordinates of the controller, the computer is able to identify the user-defined trajectory. It can then detect and indicate to the user that the trajectory has intercepted the actual 3D location of a graphical object in the 3D environment. Depending on the user's view angle and the location and orientation of the controller - which may be different from the user's eye location and view angle - this trajectory may not intercept the corresponding 2D location of this graphical object on the display. In other words, the computer integrates the 3D environment of the graphical objects with the actual physical environment of the user, so that the user aims realistically at 3D locations of the graphical objects, rather than just their projections on the screen. In this manner, the user may even aim at objects that he or she cannot currently see, and the trajectory from the controller to the target object may not even intercept the display itself.
Typically, as noted earlier, the computer tracks the position of one or both of the user's eyes, and generates the view of the 3D environment that properly reflects the position of one or both of the eyes. The computer may project multiple different, respective views of the 3D environment onto multiple different displays, depending on the positions of the displays relative to the user. If the positions of both the left and right eyes of the user are detected, and a stereoscopic vision device is used, then a stereoscopic pair of views may be projected. As a further possibility, the computer may sense the height of the user, and may then rotate an angle of the view based on the height so as to enhance the visibility of the graphical objects on the display for the particular user.
As an optional additional feature, the computer may detect the proximity of the position of the eye to the controller (for example, sensing when the user holds a "gun sight" of a toy weapon up to his or her eye), and may then enlarge the image of a target object on the display when the eye is in the appropriate location near the controller. This enlarged image may occupy the entire display or just a part of the display to simulate a telescopic sight, for example.
In another embodiment, the user may hold an auxiliary display device, such as a smart phone or tablet computer. The main computer may track the location of this auxiliary display device, and may then render an additional projected view of the 3D environment, based on the location of the device, for display on the device. In this way, the user can use and move the auxiliary display device in order to get an additional view into the virtual world of the 3D graphical objects.
The graphical objects in the 3D environment may be located in front of the display plane as well as behind it, and in some cases may continue through the display plane. To enhance the realism of user interaction with the 3D environment, the computer may track movement of the user's 3D location, and on this basis may detect a "collision" between the user and one of the 3D object locations in the 3D environment. When such a collision occurs, the computer typically gives the user an indication, such as an audible or visible indication or haptic feedback, for example via the controller that the user is holding.
Thus, the embodiments of the present invention that are described herein provide the user with a more realistic and lifelike interaction with the virtual world of a 3D environment. These embodiments may be realized, however, using off-the-shelf equipment, without requiring special-purpose hardware, such as custom-built displays, controllers, or augmented reality devices (although such hardware may be used in alternative embodiments). The embodiments that are shown in the figures, as described below, relate particularly to the field of computer games, but the principles of the present invention may similarly be applied in various fields of interactive display, simulation and training, as will be apparent to those skilled in the art.
DEFINITIONS AND BASIC TECHNIQUES
Before describing specific embodiments, it will be useful to set forth some key terms and mathematical techniques that are used in defining and implementing these embodiments:
Virtual World - A computer simulation which includes, at the minimum, objects with shape existing in a "virtual" 3D coordinate space, known as the "virtual world space" or "3D environment." The simulation is usually done through software code (such as a game), and can be as simple as a single schematic room or as complex as an entire online virtual world.
Coordinate Space - In strict mathematical terms, an "n-dimensional vector space over a field." In the present context it means a 2D or 3D vector space used to describe either location in a 3D space, or position on a projection panel (such as a screen, TV, or head-mounted display). The present embodiments deal with three central coordinate spaces - virtual world space (the 3D environment), room space (the physical environment, also 3D) and display space (the view of the virtual world that is projected onto a 2D display panel). Locations within these coordinate spaces are defined by sets of numbers - X, Y, Z for position in a three-dimensional space, X and Y for a two-dimensional one, wherein the coordinates are measured relative to a predefined origin in the physical or virtual environment. Therefore, we can say of a player in the game that he or she is "located at coordinates [x1, y1, z1] in the room," while his or her "avatar" is "located at coordinates [x2, y2, z2] in the virtual world," and the object currently viewed by the player, which is "located at coordinates [x3, y3, z3] in the virtual world," is "projected to coordinates [x4, y4] on the display panel." Orientations can similarly be represented, typically using mathematical tools such as Euler angles, quaternions, rotation matrices, or direction vectors.
Coordinate Transformation - A function which converts a vector from one coordinate system to another. In the present embodiments, the conversion is between the three aforementioned coordinate spaces - virtual, room and display - so that there are three important transformations: room space to virtual space, virtual space to room space, and virtual space to display space. The act of converting coordinates is often called "transformation." When this transformation involves the "loss" of a dimension - going from three-dimensional space to two-dimensional space - it is also called "projection." Thus, embodiments of the present invention involve transforming coordinates from virtual to room and from room to virtual, as well as projecting from virtual to display.
Projection - Mapping of 3D points to a 2D plane. This technique is used in computer graphics to display virtual environments or objects on a 2D viewing plane (monitor, screen, TV). More specifically, we refer to a "perspective projection," which is a projection of a 3D world onto a 2D plane that simulates the way the world is seen from a certain viewpoint.
Perspective Projection - When the human eye views objects at a distance, they appear to be smaller than they are. This phenomenon is known as "perspective," and it is crucial to our perception of depth. Therefore, simulations of virtual worlds, especially those of the "first person" type (in which the virtual world is perceived through the eyes of an "avatar"), use this type of projection to simulate what the player would see. In computer graphics systems, the center of projection typically corresponds to the position of the "avatar" in virtual world space, so that the perspective projection presents the world as it would appear to a person standing precisely in that location (although other centers of projection, such as a point behind the avatar, are sometimes used). The projection is usually implemented using a 4 x 4 matrix known as a projection matrix, which is constructed using the desired optical characteristics. Multiplying a 3D vector in virtual world space by the projection matrix gives the projected vector (in homogeneous coordinates).
Frustum - A term used in computer graphics to describe the 3D region viewable on screen. A perspective projection definition is interchangeable with a frustum definition (i.e. a specific projection defines a specific frustum and vice versa) and may be symmetrical or asymmetrical. Further details of such projections, in the specific context of an embodiment of the present invention, are described hereinbelow with reference to Fig. 1B.
Graphical objects - 3D geometrical shapes in the 3D environment, which typically represent living beings or inanimate things that are present in the virtual world. The 3D locations of these objects are projected onto corresponding 2D locations on the display.
Rendering - In computer graphics, this is the procedure by which a virtual world is represented on a display screen. Specifically, embodiments of the present invention use rendering to present a 3D environment (virtual world) on a display screen by perspective projection.
View-Dependent Rendering (VDR) - A special case of rendering, in which the position of a viewer in room space (or "real world") coordinates in relation to the display screen affects the rendering. The use of VDR creates a realistic interaction between the viewer and the "virtual world," as its representation on the display panel changes according to the viewer's movements, creating an effect similar to that of viewing scenery through an actual window.
Tracking Sensor - A device capable of tracking locations and/or orientations of real-world objects in room space (or in another coordinate system transformable to room space). A tracking sensor can track one or more objects, and may have one or more "degrees of freedom," or sensing coordinates. Sensors used in embodiments of the present invention typically track either three degrees of freedom (x, y, z, also referred to as location), or six degrees of freedom (location and orientation). Many types of tracking technologies exist, such as optical, magnetic, and inertial sensing. The sensor may be located in or on the physical object that it is tracking, or it may alternatively comprise a remote device, which captures images or otherwise receives signals indicative of the object coordinates.
Controller - In the context of the present description and in the claims, a controller is an object the user can hold in the real world, and which can be tracked by a suitable tracking sensor. The embodiments described below use a game controller, i.e., a controller having a form and functionality suitable for use in a computer game, specifically a toy gun in these embodiments. The controller can be passive (simply a representation of an object, such as a mock gun), or active (the mock gun can also include the ability to communicate commands to a computer, for example - via buttons, joysticks or other controls). The tracking sensor can be embedded within the controller (for example, an accelerometer) or external to it (for example, a camera), or a combination of both (a camera and an accelerometer working together, for example). In the disclosed embodiments, it is desirable that the game controller have an obvious "pointer," since it is to be used to simulate shooting of projectiles or beams from the room into the virtual world. Examples of such game controllers may include not only guns, but also a bow, a crossbow, or a magician's wand.
Calibration - In embodiments of the present invention, calibration is the process of determining the relative position and orientation of sensor devices and their internal coordinate frames, as well as display panels. In order for a sensor or display panel to be useful in the embodiments described herein, it should be calibrated, but this calibration may be either in relation to fixed room coordinates or in terms of relative coordinates, regardless of the absolute physical locations. For example, the view presented on a display may be calibrated interactively by detecting the 3D location and orientation coordinates of a controller while the user points the controller toward one or more predefined target locations on the display. For this purpose, the computer may present targets on the display and prompt the user to aim the controller at the targets.
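To make the foregoing definitions concrete, the following Python sketch (an illustration only, not taken from the disclosure) represents a room-to-virtual coordinate transformation as a 4 x 4 homogeneous matrix. The function names, the NumPy dependency and the numerical values are all assumptions introduced for the example.

```python
# Illustrative sketch of the coordinate spaces and transformations defined above.
import numpy as np

def make_rigid_transform(rotation, translation):
    """Build a 4 x 4 homogeneous transform (e.g. room space -> virtual world space)
    from a 3 x 3 rotation matrix and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def transform_point(T, p):
    """Apply a 4 x 4 homogeneous transform to a 3D point [x, y, z]."""
    ph = np.append(np.asarray(p, dtype=float), 1.0)   # homogeneous coordinates
    q = T @ ph
    return q[:3] / q[3]

# Example (assumed values): a player at [x1, y1, z1] in room space is mapped to
# the avatar's location in virtual world space.
room_to_virtual = make_rigid_transform(np.eye(3), np.array([0.0, 0.0, 5.0]))
avatar_position = transform_point(room_to_virtual, [1.0, 1.7, 2.0])
```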
SYSTEM CONFIGURATION AND OPERATION
Fig. 1A is a schematic, pictorial illustration of an interactive gaming system 20, in accordance with an embodiment of the present invention. In the pictured game, a user 22 views a game scenario that is presented by a computer 24 on a display screen 26. A sensor 28 tracks the location of the user - more specifically an eye 30 of the user - and passes this information to computer 24. The computer then renders the 2D view of the virtual 3D environment of the game accordingly on screen 26, using view-dependent rendering, as explained above. In the present example, a 3D graphical object 36 representing an opponent in the game is projected by computer 24 onto a corresponding 2D location 32 on screen 26 based on the location of the object in the 3D environment and the location of eye 30.
User 22 holds a controller 34, having the form of a gun, and aims the gun to fire projectiles, such as simulated bullets, along a trajectory 38 toward object 36. The user, in other words, aims at the simulated 3D location of the object in the 3D environment of the game, rather than at 2D location 32 of the projection of the object on screen 26. In fact, trajectory 38 may not intercept the 2D on-screen location at all. The user sees screen 26 as a window into the virtual world of the game, and shoots into the virtual world as though it were simply an extension of the real, physical world in which the user is located.
Fig. 1B is a schematic top view of system 20 showing geometrical relations between certain elements of the system, in accordance with an embodiment of the present invention. Specifically, this figure illustrates the asymmetrical perspective projection that is used in rendering the image on display 26 and the relation between this projection and trajectory 38 that is defined by controller 34. For the sake of simplicity, Fig. 1B shows only the X-Z plane (wherein X is the horizontal coordinate along screen 26 and Z is the distance coordinate perpendicular to the screen), but the projections in all other planes follow similar principles.
The location of eye 30 relative to screen 26 defines an asymmetric frustum 37. Within this frustum, the left and right edges of the screen are at points (l, n) and (r, n), respectively. For points in the virtual world, the corresponding (X,Y) location on the screen can be found, in homogeneous coordinates, by multiplying the 3D point coordinates by the projection matrix:
$$
P =
\begin{bmatrix}
\dfrac{2n}{r-l} & 0 & \dfrac{r+l}{r-l} & 0\\[4pt]
0 & \dfrac{2n}{t-b} & \dfrac{t+b}{t-b} & 0\\[4pt]
0 & 0 & -\dfrac{f+n}{f-n} & -\dfrac{2fn}{f-n}\\[4pt]
0 & 0 & -1 & 0
\end{bmatrix}
$$
where f and n are the distances from the projection plane to the "far" and "near" planes of the frustum, respectively, and l, r, b, and t are the left, right, bottom and top coordinates of the viewing plane, respectively. In this manner, the 3D location of object 36 is projected onto a corresponding 2D location 32 on screen 26. Location 32 will shift either when user 22 moves in the room or when object 36 moves in the course of the game scenario.
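As an illustrative sketch only (not part of the original disclosure), the following Python code builds a projection matrix of the asymmetric-frustum form given above and applies it to a point. The helper view_dependent_frustum shows one common way to derive l, r, b and t from the tracked eye position, assuming the screen is centered at the origin of the z = 0 plane with the viewer looking along -z; the function names and conventions are assumptions for the example.

```python
# Illustrative sketch: asymmetric perspective projection for view-dependent rendering.
import numpy as np

def frustum_matrix(l, r, b, t, n, f):
    """4 x 4 projection matrix for an asymmetric frustum, in the standard form
    shown above (l, r, b, t on the near plane; n and f the near/far distances)."""
    return np.array([
        [2*n/(r - l), 0.0,         (r + l)/(r - l),  0.0],
        [0.0,         2*n/(t - b), (t + b)/(t - b),  0.0],
        [0.0,         0.0,        -(f + n)/(f - n), -2*f*n/(f - n)],
        [0.0,         0.0,        -1.0,              0.0],
    ])

def view_dependent_frustum(eye, screen_w, screen_h, n, f):
    """Derive the frustum for an eye at (ex, ey, ez) relative to the center of a
    screen lying in the z = 0 plane, viewer looking along -z (an assumed setup)."""
    ex, ey, ez = eye
    l = (-screen_w / 2.0 - ex) * n / ez
    r = ( screen_w / 2.0 - ex) * n / ez
    b = (-screen_h / 2.0 - ey) * n / ez
    t = ( screen_h / 2.0 - ey) * n / ez
    return frustum_matrix(l, r, b, t, n, f)

def project_to_screen(P, point):
    """Project a 3D point given in eye coordinates; returns normalized screen
    coordinates in [-1, 1] after the perspective divide."""
    v = P @ np.append(np.asarray(point, dtype=float), 1.0)
    return v[:2] / v[3]
```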
As can be seen in Figs. 1A and 1B, controller 34 is not aligned with eye 30, but is rather held by user 22 some distance from the eye. Consequently, trajectory 38 from controller 34 to object 36 does not intercept location 32 (although in some cases the trajectory may intercept this location, depending on the specific 3D geometry at any given moment). When user 22 aims controller 34 as shown in Fig. 1B, computer 24 will indicate that the user has "hit" object 36 (by causing the on-screen opponent to fall over or explode, for example). On the other hand, in the pictured geometry, if the user aims trajectory 38 toward location 32, the computer will register a miss. If object 36 is near the edge of frustum 37, trajectory 38 may not intercept screen 26 at all. Furthermore, in some cases, computer 24 may determine that trajectory 38 has intercepted a 3D object that does not appear on screen 26 at all, because it is hidden in the user's current view (for example, if the user hides behind a virtual "wall" and shoots around the edge).
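The hit test implied by this geometry can be sketched as follows (illustration only; the sphere-shaped hit volume, the variable names and the numerical values are assumptions): the trajectory from controller 34 is tested against the object's 3D location in room coordinates, independently of whether the ray ever crosses 2D location 32 on the screen.

```python
# Illustrative sketch: test the controller trajectory against the object's 3D
# location, rather than against its 2D projection on the screen.
import numpy as np

def ray_hits_sphere(origin, direction, center, radius):
    """True if the ray from `origin` along `direction` passes within `radius`
    of `center` (a sphere stands in for the object's hit volume)."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    oc = np.asarray(center, dtype=float) - np.asarray(origin, dtype=float)
    t = max(float(np.dot(oc, d)), 0.0)                  # closest approach along the ray
    closest = np.asarray(origin, dtype=float) + t * d
    return float(np.linalg.norm(np.asarray(center, dtype=float) - closest)) <= radius

# Assumed room-space values: the controller's tracked pose defines trajectory 38,
# and the hit is decided in 3D even if the ray never crosses location 32 on screen 26.
controller_pos = np.array([0.3, 1.2, 2.5])
controller_dir = np.array([-0.1, 0.05, -1.0])
object_pos_room = np.array([0.05, 1.3, -1.0])           # object 36 mapped into room space
hit = ray_hits_sphere(controller_pos, controller_dir, object_pos_room, 0.3)
```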
Returning now to Fig. 1A, computer 24 in this embodiment comprises a general-purpose processor, with input/output (I/O) connections to display 26, sensor 28, and other elements of system 20 as appropriate. Computer 24 operates under the control of suitable software code, which typically includes a game application, which generates the interactive game scenario, as well as causing the computer to perform the sensing, view-dependent rendering, and user interaction functions that are described herein. This software may be downloaded to computer 24 in electronic form, over a network, for example. Alternatively or additionally, the software may be stored on tangible, generally non-transitory, computer-readable media, such as optical, magnetic or electronic memory media. The term "computer" is used broadly in the context of the present description and in the claims to refer to any sort of processing device with the above capabilities, including both game consoles (such as Xbox® or Playstation®) and general-purpose personal computers, whether stationary or mobile.
Sensor 28 may comprise any device that is capable of tracking the location of user 22, and specifically of eye 30. In addition, sensor 28 may track the position and orientation of controller 34 and possibly other system elements (such as the mobile display device that is shown in Fig. 4). For example, sensor 28 may comprise one or more cameras, such as a video camera that may be used with face-tracking capabilities of computer 24, a depth-camera with body-tracking capabilities, or a camera with marker-tracking capabilities (such as infrared LEDs worn by user 22). Additionally or alternatively, system 20 may comprise sensors worn by user 22 and/or fixed to or embedded in controller 34, such as accelerometers, gyroscopes and/or magnetometers, with suitable links (such as wireless links) to convey location, orientation and/or movement information to computer 24. Substantially any sort of sensor or combination of sensors capable of outputting the desired tracking information may be used in this context.
Fig. 2 is a schematic side view of controller 34, showing functional details of the controller in accordance with an embodiment of the present invention. In this example, controller 34 has the form of a toy gun, with a barrel that can be pointed to generate trajectory 38, with a user control 40 in the form of a trigger. User 22 may use a sight 42 on controller 34 to align the trajectory, with possible effects on the image projection on display 26, as shown below in Fig. 3. Alternatively or additionally, the controller may comprise any other suitable sort of user interface device, such as a joystick, buttons or touch panel. One or more sensors 44, such as inertial sensors, embedded in controller 34 measure the location and orientation of the gun barrel and convey this information via a communication interface 46 to computer 24. Alternatively or additionally, controller 34 may comprise one or more emitters, such as infrared or radio-frequency (RF) emitters, whose locations are detected by suitable sensors in the room, such as sensor 28. Controller 34 may comprise any one of a variety of commercially-available game controllers with appropriate sensing capabilities, such as the Sixense™ or Playstation Move controller.
Optionally, controller 34 may also comprise a feedback device 48, which may be actuated remotely via communication interface 46. For example, device 48 may provide an audible or visible indication to user 22 or may provide haptic feedback, by vibrating, for example. This capability may be used in notifying the user of particular events, such as collision with a virtual 3D object (as illustrated in Fig. 5).
In order for system 20 to operate properly, it is desirable that all relevant components, including sensors 28 and 44 and display screen 26, be calibrated, i.e., their locations in relation to some common coordinate space should be known so that computer 24 can accurately calculate the distances and angles between them. Such calibration may be performed, for example, by asking user 22 to direct controller 34 at one or more known target points on screen 26.
Optionally, screen 26 may comprise a stereoscopic vision device, which shows a different image to each of the player's eyes (for example, the Samsung 3D TV model UN55C7000WF). In this case, each eye will see a different 2D projection of the 3D environment, wherein each projection is adjusted for that eye's position relative to the display.
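By way of illustration only, a stereoscopic pair of view-dependent projections could be produced as in the short sketch below, which reuses the view_dependent_frustum helper from the earlier sketch; the eye offsets and screen dimensions are assumed values.

```python
# Illustrative sketch: one view-dependent projection per eye for a stereoscopic display
# (reuses view_dependent_frustum from the earlier sketch; all values are assumed).
import numpy as np

left_eye = np.array([-0.032, 0.0, 0.6])    # eye positions relative to the screen
right_eye = np.array([0.032, 0.0, 0.6])    # center, in meters

P_left = view_dependent_frustum(left_eye, screen_w=1.2, screen_h=0.7, n=0.1, f=100.0)
P_right = view_dependent_frustum(right_eye, screen_w=1.2, screen_h=0.7, n=0.1, f=100.0)
```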
If no stereoscopic vision device is used, computer 24 may choose one of the user's eyes to be the "dominant eye," and use the position of this eye in view-dependent rendering. The choice may be made by asking the user for his or her preference or may be made automatically by the computer. One method of automatically identifying the dominant eye is by tracking which eye the user uses to aim controller 34, which should be the eye that is closer to the weapon when he or she shoots.
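One possible sketch of such an automatic choice (the function name and inputs are assumptions) simply compares the tracked distance from each eye to the controller's sight at the moment of shooting:

```python
# Illustrative sketch: pick the dominant eye as the one closer to the controller's
# sight when the user shoots.
import numpy as np

def dominant_eye(left_eye_pos, right_eye_pos, sight_pos):
    dl = np.linalg.norm(np.asarray(left_eye_pos) - np.asarray(sight_pos))
    dr = np.linalg.norm(np.asarray(right_eye_pos) - np.asarray(sight_pos))
    return "left" if dl < dr else "right"
```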
As noted earlier, computer 24 may modify the view presented on screen 26 continually depending on changes in the location of user 22. The computer can also sense that the user has moved his or her head to hide behind objects in the 3D environment (including both real and simulated, virtual objects) and thus avoid being seen by the simulated enemies. The user may also move his or her head to avoid "dangerous" objects or projectiles (such as an enemy's weapon or falling ceiling tiles), as well as to get a different view of the environment, such as peeking around a corner or window, or moving closer to the screen to examine objects up close.
In some cases, the user may not be positioned in an ideal location (relative to screen 26) to best view the most relevant content in the 3D environment. For example, if the viewer is tall and views the display from above, his view would be mostly of the ground in the 3D environment. To avoid this sort of situation, computer 24 may allow the user to rotate the 3D environment to compensate for his position. In the example above, the user would rotate the 3D environment downwards, so that his view will be similar to that of a person standing directly in front of the display. As another example, the computer may automatically detect the user's height and adjust the view accordingly. The height may be estimated, for example, as being the maximum height reported by sensor 28 for a few seconds without significant change (to filter out any jumps by the user).
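A possible implementation of this height heuristic, shown only as a sketch with assumed window and tolerance values, keeps a short history of head heights and accepts the maximum once the readings have been stable for a few seconds:

```python
# Illustrative sketch: estimate the user's height as the maximum head height that
# stays stable for a few seconds (window and tolerance values are assumptions).
from collections import deque

class HeightEstimator:
    def __init__(self, window_frames=180, tolerance=0.05):
        self.samples = deque(maxlen=window_frames)   # ~3 s of samples at 60 Hz
        self.tolerance = tolerance                   # allowed variation, in meters
        self.height = None

    def update(self, head_y):
        self.samples.append(head_y)
        if len(self.samples) == self.samples.maxlen:
            if max(self.samples) - min(self.samples) < self.tolerance:
                self.height = max(self.samples)      # stable -> accept as user height
        return self.height
```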
Fig. 3 is a schematic, pictorial illustration of system 20, showing a zoom functionality of the system in accordance with an embodiment of the present invention. User 22 has brought his eye 30 into proximity with controller 34, and specifically with sight 42. Computer 24 detects this proximity, and in response presents an enlarged image 50 of object 36 on screen 26, as though the user were looking through an actual telescopic sight. System 20 may similarly simulate other types of specialized gun sights, as well as other optical devices, which may or may not have an actual physical counterpart installed on controller 34. For example, instead of or in addition to enlarged image 50, computer 24 may present images in "night-vision mode."
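A minimal sketch of this zoom trigger (the distance threshold and magnification factor are assumptions, not taken from the disclosure) switches to an enlarged rendering whenever the tracked eye comes within a small distance of the sight:

```python
# Illustrative sketch: enlarge the rendered image when the tracked eye is close to
# the controller's sight.
import numpy as np

ZOOM_DISTANCE = 0.12   # meters; assumed threshold

def zoom_factor(eye_pos, sight_pos, normal=1.0, magnified=4.0):
    close = np.linalg.norm(np.asarray(eye_pos) - np.asarray(sight_pos)) < ZOOM_DISTANCE
    return magnified if close else normal
```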
Fig. 4 is a schematic, pictorial illustration of system 20 with an auxiliary, movable display 54, in accordance with an embodiment of the present invention. Display 54 in this case is pictured as a tablet computer, but the principles of this embodiment may be applied using substantially any sort of movable display device, such as a smart phone or a special-purpose display screen mounted on controller 34. By the same token, in an alternative embodiment, computer 24 may drive two or more stationary displays, mounted side by side or even on different walls, to allow the viewer to see the virtual 3D environment through multiple, different "windows."
Computer 24 tracks the position of display 54 in the room space, either by analyzing images provided by sensor 28, or receiving position signals from display 54 itself, or by any other suitable means. For example, user 22 can hold a mobile phone that is also tracked by the same camera that tracks the user's head. Display 54 thus provides an extra "window" into the virtual world, which can be moved dynamically by the user 22. Computer 24 renders a view on display 54 of another part 56 of the virtual 3D environment in the correct perspective projection and transmits an image of the view to display 54 for presentation to user 22. Alternatively, display 54 may contain a suitable processor and software code that does this rendering on its own, according to information transmitted or shared by computer 24. Mobile display 54 can also serve to simulate various real-life optical devices such as binoculars or a magnifying glass (with a suitable zoom function) or night vision goggles.
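The following sketch suggests, under simplifying assumptions (the device orientation is ignored, and the view_dependent_frustum helper from the earlier sketch is reused), how an additional view for display 54 might be derived from the tracked poses, with the eye-to-device distance optionally driving a magnifier-style zoom:

```python
# Illustrative sketch: an additional view for auxiliary display 54, driven by the
# tracked device and eye positions (device orientation handling is omitted).
import numpy as np

def render_auxiliary_view(eye_pos, device_pos, device_w, device_h, n, f):
    eye_rel = np.asarray(eye_pos, dtype=float) - np.asarray(device_pos, dtype=float)
    P = view_dependent_frustum(eye_rel, device_w, device_h, n, f)
    zoom = 1.0 / max(float(np.linalg.norm(eye_rel)), 1e-6)  # optional magnifier effect
    return P, zoom
```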
Fig. 5 is a schematic, pictorial illustration of system 20, showing user feedback functionality of the system in accordance with an embodiment of the present invention. This functionality makes use of feedback device 48 in controller 34, as described above. Computer 24 actuates this feedback capability to provide the user with immediate feedback concerning the results of bodily and/or controller movements in the real world and their effect upon the virtual world.
This capability is especially important when there is a conflict between a user action in the real world and the result in the simulated world, as illustrated in Fig. 5. For example, in the pictured scenario, the simulated 3D environment includes a wall 60, which extends from virtual space behind display screen 26 into the actual room space where user 22 is standing. When the user moves right in the room space, the movement causes his or her virtual counterpart (the "avatar") to collide with wall 60. In such a case, computer 24 may immediately provide feedback, such as haptic feedback conveyed by vibrating feedback device 48 in controller 34. Alternatively or additionally, the computer may generate audio feedback (such as a collision noise) or visual feedback (such as a brief whitening of screen 26 or other visual effects to simulate a hit). The computer may also shift the virtual world so that the user's avatar does not actually walk through the virtual wall.
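A simple sketch of this collision handling (axis-aligned boxes stand in for obstacles such as wall 60, and controller.vibrate() is a hypothetical placeholder for whatever feedback interface is available) might look as follows:

```python
# Illustrative sketch: block the avatar at a virtual obstacle and trigger feedback.
import numpy as np

def step_avatar(avatar_pos, move, obstacles, controller):
    """`obstacles` is a list of axis-aligned boxes (min_xyz, max_xyz) standing in
    for objects such as wall 60; `controller.vibrate()` is a hypothetical stand-in
    for the available haptic, audio or visual feedback call."""
    new_pos = np.asarray(avatar_pos, dtype=float) + np.asarray(move, dtype=float)
    for lo, hi in obstacles:
        if np.all(new_pos >= lo) and np.all(new_pos <= hi):
            controller.vibrate()                         # indicate the collision
            return np.asarray(avatar_pos, dtype=float)   # do not pass through the wall
    return new_pos
```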
METHOD OF OPERATION
Fig. 6 is a flow chart that schematically illustrates a method of operating system 20, in accordance with an embodiment of the present invention. Although the method is described, for the sake of clarity and convenience, with reference to the elements of the system described above, the elements of the method may alternatively be implemented in substantially any system having the appropriate display, sensing and control functions, as outlined above.
Upon initiation of the method, computer 24 detects the starting positions of user 22 (and specifically of eye 30) and controller 34, at an initialization step 70. The computer transforms these real-world positions into corresponding coordinates in the virtual 3D environment. The method then proceeds iteratively, repeatedly tracking and updating the user's eye location, at an eye update step 72, and the location and orientation of controller 34, at a controller update step 74. The computer transforms these positions and orientations into virtual world coordinates and moves the virtual counterpart of the user in the virtual world accordingly, at a coordinate conversion step 76.
Computer 24 checks whether any contradiction has occurred between movement in the real world and movement in the virtual world, at a feedback checking step 78. For example, the computer may check whether the virtual counterpart of the user has collided with a virtual 3D object or fallen into a pit. If so, computer 24 invokes the appropriate action to provide feedback to the user, at a feedback invocation step 80.
Based on the current location of eye 30 and any other relevant factors, computer 24 updates the image appearing on screen 26, at an image update step 82. The new user position in room space, in conjunction with the counterpart (avatar) position in virtual space, is used to generate an image on display screen 26 using view-dependent rendering, so that the virtual world appears to the user at the correct viewing angle and distance as though the screen were a window into the virtual world, as explained above. The computer may also evaluate the relative proximity between the user's eye and controller 34 to determine whether the user is attempting to access a virtual optical device (such as a simulated telescopic sight or night vision device mounted on the controller). If so, the computer will modify the display accordingly. If user 22 is holding a mobile display 54, computer 24 also evaluates its position and updates the image presented on display 54 accordingly.
Computer 24 also evaluates whether user 22 has taken any action with respect to the virtual world, such as firing a projectile along trajectory 38, at an action checking step 84. If so, the computer calculates the result, for example to determine where the projectile has actually struck in the virtual world, at a result computation step 86. The computer runs suitable simulation logic to simulate the interaction between the user, the controller and other components of the virtual world, based on their newly calculated locations as well as user actions, and computes the results at a status update step 88. The results of the interactions are presented to the user on screen 26, and possibly via feedback device 48 in controller 34.
After updating the game status, computer 24 checks whether the game has ended, at a completion step 90. If not, the method returns to step 72, continuing until the game is over.
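The loop of Fig. 6 can be summarized in the following sketch, in which the tracker, renderer and game objects and all of their method names are assumptions introduced only to mirror the numbered steps above:

```python
# Illustrative sketch of the loop of Fig. 6; all interfaces here are assumed.
def game_loop(tracker, renderer, game):
    game.initialize(tracker.eye(), tracker.controller_pose())      # step 70
    while not game.over():                                         # step 90
        eye = tracker.eye()                                        # step 72
        ctrl_pos, ctrl_dir = tracker.controller_pose()             # step 74
        game.update_avatar(eye, ctrl_pos, ctrl_dir)                # step 76
        if game.check_collisions():                                # step 78
            game.give_feedback()                                   # step 80
        renderer.draw(eye, game.world())                           # step 82
        if game.trigger_pulled():                                  # step 84
            game.resolve_shot(ctrl_pos, ctrl_dir)                  # steps 86-88
```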
As noted earlier, the above use scenario is presented solely for the sake of clarity and concretization of certain features of the invention, and other scenarios that make use of the sorts of methods and systems described above are likewise considered to be within the scope of the present invention. It will thus be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and subcombinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.

Claims

1. A method for computer interaction, comprising:
defining in a computer a three-dimensional (3D) environment containing one or more graphical objects having respective 3D object locations;
tracking a 3D user location of a user of the computer;
responsively to the 3D user location, projecting a view of the 3D environment onto a two-dimensional (2D) display viewed by the user, wherein the 3D object locations are projected onto corresponding 2D object locations in a plane of the display;
tracking 3D location and orientation coordinates of a controller held by the user, so as to define a trajectory directed by the user from the controller into the 3D environment; and
detecting and indicating to the user that the trajectory has intercepted a 3D location of a graphical object in the 3D environment while the trajectory does not intercept a corresponding 2D location of the graphical object on the display.
2. The method according to claim 1, wherein the trajectory corresponds to a path of a simulated projectile fired by the user by operating the controller.
3. The method according to claim 1 or 2, wherein the trajectory does not intercept the display.
4. The method according to any of claims 1-3, wherein tracking the 3D user location comprises identifying a position of an eye of the user, and wherein projecting the view comprises generating the view corresponding to the position of the eye.
5. The method according to claim 4, wherein generating the view comprises detecting a proximity of the position of the eye to the 3D location coordinate of the controller, and enlarging at least a part of an image viewable on the display responsively to the proximity.
6. The method according to claim 4, wherein identifying the position of the eye comprises identifying respective eye positions of left and right eyes of the user, and wherein projecting the view comprises projecting a stereoscopic pair of views responsively to the eye positions.
7. The method according to any of claims 1-6, wherein tracking the 3D user location comprises sensing a height of the user, and wherein projecting the view comprises rotating an angle of the view responsively to the height so as to enhance a visibility of the graphical objects on the display.
8. The method according to any of claims 1-7, wherein projecting the view comprises projecting multiple different, respective views of the 3D environment onto multiple different displays, responsively to respective positions of the displays.
9. The method according to claim 8, wherein projecting the multiple different, respective views comprises tracking a display location of an auxiliary display device held by the user, and projecting an additional view of the 3D environment onto the auxiliary display device responsively to the display location.
10. The method according to any of claims 1-9, wherein projecting the view comprises calibrating the view interactively by detecting the 3D location and orientation coordinates of the controller while the user points the controller toward one or more predefined target locations.
11. The method according to any of claims 1-10, and comprising detecting, responsively to movement of the 3D user location, a collision between the user and one of the 3D object locations in the 3D environment, and providing an indication to the user of the detected collision.
12. A method for computer interaction, comprising:
defining in a computer a three-dimensional (3D) environment containing one or more graphical objects having respective 3D object locations;
tracking a 3D eye location of an eye of a user of the computer;
responsively to the 3D eye location, projecting a view of the 3D environment onto a two-dimensional (2D) display viewed by the user, wherein the 3D object locations are projected onto corresponding 2D object locations in a plane of the display;
tracking a 3D controller location of a controller held by a user; and
responsively to a proximity between the 3D eye location and the 3D controller location, enlarging at least a part of an image viewable on the display.
13. The method according to claim 12, wherein tracking the 3D controller location comprises finding an orientation of the controller, and selecting the part of the image for enlargement responsively to the orientation.
14. The method according to claim 12 or 13, wherein the controller comprises an auxiliary display, and wherein enlarging the image comprises presenting the enlarged part of the image on the auxiliary display.
15. The method according to claim 14, wherein presenting the enlarged part of the image comprises projecting an additional view of the one of the objects onto the auxiliary display responsively to the 3D controller location.
16. A method for computer interaction, comprising:
defining in a computer a three-dimensional (3D) environment containing one or more graphical objects having respective 3D object locations;
tracking a 3D user location of a user of the computer;
responsively to the 3D user location, projecting a view of the 3D environment onto a two-dimensional (2D) display viewed by the user, wherein the 3D object locations are projected onto corresponding 2D object locations in a plane of the display;
tracking a 3D display location of an auxiliary display device held by the user; and
responsively to the 3D display location and the user location, projecting an additional view of the 3D environment onto the auxiliary display device.
17. The method according to claim 16, wherein tracking the 3D user location comprises identifying a position of an eye of the user, and wherein projecting the additional view comprises zooming a size of at least a part of an image appearing in the additional view responsively to a distance between the position of the eye and the 3D display location.
18. A method for computer interaction, comprising:
defining in a computer a three-dimensional (3D) environment containing one or more graphical objects having respective 3D object locations;
tracking a 3D user location of a user of the computer;
responsively to the 3D user location, projecting a view of the 3D environment onto a two-dimensional (2D) display viewed by the user, wherein the 3D object locations are projected onto corresponding 2D object locations in a plane of the display;
detecting, responsively to movement of the 3D user location, a collision between the user and one of the 3D object locations in the 3D environment; and
providing an indication to the user of the detected collision.
19. The method according to claim 18, wherein the indication comprises an audible or visible indication.
20. The method according to claim 18 or 19, wherein the indication comprises haptic feedback provided by a controller held by the user.
21. Apparatus for computer interaction, comprising:
a display;
a tracking device, which is configured to detect a 3D user location of a user of the apparatus;
a controller, which is configured to be held by the user; and
a processor, which is configured to accept a definition of a three-dimensional (3D) environment containing one or more graphical objects having respective 3D object locations, and to project, responsively to the 3D user location, a view of the 3D environment onto a two-dimensional (2D) display viewed by the user, wherein the 3D object locations are projected onto corresponding 2D object locations in a plane of the display,
wherein the processor is coupled to track 3D location and orientation coordinates of the controller held by the user, so as to define a trajectory directed by the user from the controller into the 3D environment, and to detect and indicate to the user that the trajectory has intercepted a 3D location of a graphical object in the 3D environment while the trajectory does not intercept a corresponding 2D location of the graphical object on the display.
22. The apparatus according to claim 21, wherein the trajectory corresponds to a path of a simulated projectile fired by the user by operating the controller.
23. The apparatus according to claim 21 or 22, wherein the trajectory does not intercept the display.
24. The apparatus according to any of claims 21-23, wherein the tracking device is configured to provide an input to the processor identifying a position of an eye of the user, and wherein the processor is configured to generate the view corresponding to the position of the eye.
25. The apparatus according to claim 24, wherein the processor is configured to detect a proximity of the position of the eye to the 3D location coordinate of the controller, and to enlarge at least a part of an image viewable on the display responsively to the proximity.
26. The apparatus according to claim 24, wherein the processor is configured to identify respective eye positions of left and right eyes of the user, and to project a stereoscopic pair of views responsively to the eye positions.
27. The apparatus according to any of claims 21-26, wherein the processor is configured to receive an input indicative of a height of the user, and to rotate an angle of the view responsively to the height so as to enhance a visibility of the graphical objects on the display.
28. The apparatus according to any of claims 21-27, wherein the processor is configured to generate multiple different, respective views of the 3D environment onto multiple different displays, responsively to respective positions of the displays.
29. The apparatus according to claim 28, wherein the processor is configured to track a display location of an auxiliary display device held by the user, and to provide an additional view of the 3D environment to the auxiliary display device responsively to the display location.
30. The apparatus according to any of claims 21-29, wherein the processor is configured to calibrate the view interactively by detecting the 3D location and orientation coordinates of the controller while the user points the controller toward one or more predefined target locations.
31. The apparatus according to any of claims 21-30, wherein the processor is configured to detect, responsively to movement of the 3D user location, a collision between the user and one of the 3D object locations in the 3D environment, and to provide an indication to the user of the detected collision.
32. Apparatus for computer interaction, comprising:
a display;
a tracking device, which is configured to detect a 3D eye location of an eye of a user of the apparatus;
a controller, which is configured to be held by the user; and
a processor, which is configured to accept a definition of a three-dimensional (3D) environment containing one or more graphical objects having respective 3D object locations, and to project, responsively to the 3D eye location, a view of the 3D environment onto a two-dimensional (2D) display viewed by the user, wherein the 3D object locations are projected onto corresponding 2D object locations in a plane of the display,
wherein the processor is coupled to track a 3D controller location of the controller held by the user, and responsively to a proximity between the 3D eye location and the 3D controller location, to enlarge at least a part of an image viewable on the display.
33. The apparatus according to claim 32, wherein the processor is configured to track an orientation of the controller, and to select the part of the image for enlargement responsively to the orientation.
34. The apparatus according to claim 32 or 33, wherein the controller comprises an auxiliary display, and wherein the processor is configured to cause the enlarged part of the image to appear on the auxiliary display.
35. The apparatus according to claim 34, wherein the enlarged part of the image comprises an additional view of one of the objects, which is projected onto the auxiliary display responsively to the 3D controller location.
36. Apparatus for computer interaction, comprising:
a display;
a tracking device, which is configured to detect a 3D user location of a user of the apparatus;
an auxiliary display device, which is configured to be held by the user; and
a processor, which is configured to accept a definition of a three-dimensional (3D) environment containing one or more graphical objects having respective 3D object locations, and to project, responsively to the 3D user location, a view of the 3D environment onto a two-dimensional (2D) display viewed by the user, wherein the 3D object locations are projected onto corresponding 2D object locations in a plane of the display,
wherein the processor is configured to track a 3D display location of the auxiliary display device held by the user, and to project, responsively to the 3D display location and the user location, an additional view of the 3D environment for display on the auxiliary display device.
37. The apparatus according to claim 36, wherein the tracking device is configured to provide an input to the processor identifying a position of an eye of the user, and wherein the processor is configured to zoom a size of at least a part of an image appearing in the additional view responsively to a distance between the position of the eye and the 3D display location.
38. Apparatus for computer interaction, comprising:
a display;
a tracking device, which is configured to detect a 3D user location of a user of the apparatus; and
a processor, which is configured to accept a definition of a three-dimensional (3D) environment containing one or more graphical objects having respective 3D object locations, and to project, responsively to the 3D user location, a view of the 3D environment onto a two-dimensional (2D) display viewed by the user, wherein the 3D object locations are projected onto corresponding 2D object locations in a plane of the display,
wherein the processor is configured to detect, responsively to movement of the 3D user location, a collision between the user and one of the 3D object locations in the 3D environment, and to provide an indication to the user of the detected collision.
39. The apparatus according to claim 38, wherein the indication comprises an audible or visible indication.
40. The apparatus according to claim 38 or 39, wherein the indication comprises haptic feedback provided by a controller held by the user.
41. A computer software product, comprising a computer-readable medium in which program instructions are stored, which instructions, when read by a computer, cause the computer to accept a definition of a three-dimensional (3D) environment containing one or more graphical objects having respective 3D object locations, to track a 3D user location of a user of the computer, and to project, responsively to the 3D user location, a view of the 3D environment onto a two-dimensional (2D) display viewed by the user, wherein the 3D object locations are projected onto corresponding 2D object locations in a plane of the display,
wherein the instructions cause the computer to track 3D location and orientation coordinates of a controller held by the user, so as to define a trajectory directed by the user from the controller into the 3D environment, and to detect and indicate to the user that the trajectory has intercepted a 3D location of a graphical object in the 3D environment while the trajectory does not intercept a corresponding 2D location of the graphical object on the display.
42. The product according to claim 41, wherein the trajectory corresponds to a path of a simulated projectile fired by the user by operating the controller.
43. The product according to claim 41 or 42, wherein the trajectory does not intercept the display.
44. The product according to any of claims 41-43, wherein the instructions cause the computer to identify a position of an eye of the user, and wherein the instructions cause the computer to generate the view corresponding to the position of the eye.
45. The product according to claim 44, wherein the instructions cause the computer to detect a proximity of the position of the eye to the 3D location coordinate of the controller, and to enlarge at least a part of an image viewable on the display responsively to the proximity.
46. The product according to claim 44, wherein the instructions cause the computer to identify respective eye positions of left and right eyes of the user, and to project a stereoscopic pair of views responsively to the eye positions.
47. The product according to any of claims 41-46, wherein the instructions cause the computer to receive an input indicative of a height of the user, and to rotate an angle of the view responsively to the height so as to enhance a visibility of the graphical objects on the display.
48. The product according to any of claims 41-47, wherein the instructions cause the computer to generate multiple different, respective views of the 3D environment onto multiple different displays, responsively to respective positions of the displays.
49. The product according to claim 48, wherein the instructions cause the computer to track a display location of an auxiliary display device held by the user, and to provide an additional view of the 3D environment to the auxiliary display device responsively to the display location.
50. The product according to any of claims 41-49, wherein the instructions cause the computer to calibrate the view interactively by detecting the 3D location and orientation coordinates of the controller while the user points the controller toward one or more predefined target locations.
51. The product according to any of claims 41-50, wherein the instructions cause the computer to detect, responsively to movement of the 3D user location, a collision between the user and one of the 3D object locations in the 3D environment, and to provide an indication to the user of the detected collision.
52. A computer software product, comprising a computer-readable medium in which program instructions are stored, which instructions, when read by a computer, cause the computer to accept a definition of a three-dimensional (3D) environment containing one or more graphical objects having respective 3D object locations, to track a 3D eye location of an eye of a user of the computer, and to project, responsively to the 3D eye location, a view of the 3D environment onto a two-dimensional (2D) display viewed by the user, wherein the 3D object locations are projected onto corresponding 2D object locations in a plane of the display,
wherein the instructions cause the computer to track a 3D controller location of a controller held by the user, and responsively to a proximity between the 3D eye location and the 3D controller location, to enlarge at least a part of an image viewable on the display.
53. The product according to claim 52, wherein the instructions cause the computer to track an orientation of the controller, and to select the part of the image for enlargement responsively to the orientation.
54. The product according to claim 52 or 53, wherein the controller comprises an auxiliary display, and wherein the instructions cause the computer to cause the enlarged part of the image to appear on the auxiliary display.
55. The product according to claim 54, wherein the enlarged part of the image comprises an additional view of one of the objects, which is projected onto the auxiliary display responsively to the 3D controller location.
56. A computer software product, comprising a computer-readable medium in which program instructions are stored, which instructions, when read by a computer, cause the computer to accept a definition of a three-dimensional (3D) environment containing one or more graphical objects having respective 3D object locations, to track a 3D user location of a user of the computer, and to project, responsively to the 3D user location, a view of the 3D environment onto a two-dimensional (2D) display viewed by the user, wherein the 3D object locations are projected onto corresponding 2D object locations in a plane of the display,
wherein the instructions cause the computer to track a 3D display location of an auxiliary display device held by the user, and to project, responsively to the 3D display location and the user location, an additional view of the 3D environment for display on the auxiliary display device.
57. The product according to claim 56, wherein the instructions cause the computer to identify a position of an eye of the user, and to zoom a size of at least a part of an image appearing in the additional view responsively to a distance between the position of the eye and the 3D display location.
58. A computer software product, comprising a computer-readable medium in which program instructions are stored, which instructions, when read by a computer, cause the computer to accept a definition of a three-dimensional (3D) environment containing one or more graphical objects having respective 3D object locations, to track a 3D user location of a user of the computer, and to project, responsively to the 3D user location, a view of the 3D environment onto a two-dimensional (2D) display viewed by the user, wherein the 3D object locations are projected onto corresponding 2D object locations in a plane of the display,
wherein the instructions cause the computer to detect, responsively to movement of the 3D user location, a collision between the user and one of the 3D object locations in the 3D environment, and to provide an indication to the user of the detected collision.
59. The product according to claim 58, wherein the indication comprises an audible or visible indication.
60. The product according to claim 58 or 59, wherein the indication comprises haptic feedback provided by a controller held by the user.
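The following is a minimal, illustrative sketch (not the implementation disclosed in this application) of the two geometric operations recited in the claims above: projecting a 3D object location onto its corresponding 2D location in the plane of the display responsively to a tracked eye location, and testing whether a trajectory directed from a hand-held controller intercepts an object's 3D location even though it does not intercept the object's 2D location on the display. The function names, the coordinate convention (display plane at z = 0, units in metres), and the spherical object bound are assumptions introduced only for this example.

```python
# Illustrative sketch only -- not the method disclosed in the application.
# All names, units and parameter values are hypothetical.
import numpy as np

def project_to_display(point_3d, eye_3d, display_z=0.0):
    """Project a 3D point onto the display plane (assumed here to be z = display_z)
    along the ray from the user's eye through the point, as a simple model of
    rendering a view responsively to the tracked 3D eye location."""
    p = np.asarray(point_3d, dtype=float)
    e = np.asarray(eye_3d, dtype=float)
    t = (display_z - e[2]) / (p[2] - e[2])   # ray parameter at the display plane
    hit = e + t * (p - e)
    return hit[:2]                            # 2D object location in the display plane

def ray_hits_sphere(origin, direction, center, radius):
    """Return True if the ray origin + s*direction (s >= 0) passes within
    'radius' of 'center' -- a simple stand-in for 'the trajectory intercepts
    the 3D location of a graphical object'."""
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    c = np.asarray(center, dtype=float)
    s = max(np.dot(c - o, d), 0.0)            # closest approach along the ray
    closest = o + s * d
    return np.linalg.norm(closest - c) <= radius

if __name__ == "__main__":
    eye = (0.0, 1.6, 2.0)           # user's eye, 2 m in front of the display plane z = 0
    obj_center = (0.5, 1.0, -3.0)   # graphical object located "behind" the display
    obj_radius = 0.3

    # Where the object appears on the display for this eye position.
    print("object appears on display at", project_to_display(obj_center, eye))

    # A controller aimed toward the object's 3D location: the trajectory can
    # intercept the object in 3D even if it never crosses the object's 2D
    # location on the screen (or the screen at all).
    controller_pos = (1.5, 1.0, 1.0)
    aim_dir = np.subtract(obj_center, controller_pos)
    print("trajectory hits 3D object:",
          ray_hits_sphere(controller_pos, aim_dir, obj_center, obj_radius))
```

A stereoscopic variant in the spirit of the eye-tracking claims would simply evaluate project_to_display once per eye position and render the resulting pair of views.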
PCT/IB2013/050702 2012-01-27 2013-01-27 Simulating interaction with a three-dimensional environment WO2013111119A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261591821P 2012-01-27 2012-01-27
US61/591,821 2012-01-27

Publications (1)

Publication Number Publication Date
WO2013111119A1 true WO2013111119A1 (en) 2013-08-01

Family

ID=48872940

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2013/050702 WO2013111119A1 (en) 2012-01-27 2013-01-27 Simulating interaction with a three-dimensional environment

Country Status (1)

Country Link
WO (1) WO2013111119A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1585063A1 (en) * 2004-03-31 2005-10-12 Sega Corporation Image generation device, image display method and program product
US20080096657A1 (en) * 2006-10-20 2008-04-24 Sony Computer Entertainment America Inc. Method for aiming and shooting using motion sensing controller
US20110109628A1 (en) * 2008-06-24 2011-05-12 Rurin Oleg Stanislavovich Method for producing an effect on virtual objects
US20110285704A1 (en) * 2010-02-03 2011-11-24 Genyo Takeda Spatially-correlated multi-display human-machine interface

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110488972A (en) * 2013-11-08 2019-11-22 高通股份有限公司 Feature tracking for the additional mode in spatial interaction
CN110488972B (en) * 2013-11-08 2023-06-09 高通股份有限公司 Face tracking for additional modalities in spatial interaction
WO2016167664A3 (en) * 2015-04-17 2017-01-05 Lagotronics Projects B.V. Game controller
CN108664231A (en) * 2018-05-11 2018-10-16 腾讯科技(深圳)有限公司 Display methods, device, equipment and the storage medium of 2.5 dimension virtual environments
CN108664231B (en) * 2018-05-11 2021-02-09 腾讯科技(深圳)有限公司 Display method, device, equipment and storage medium of 2.5-dimensional virtual environment
CN114339194A (en) * 2021-03-16 2022-04-12 深圳市火乐科技发展有限公司 Projection display method and device, projection equipment and computer readable storage medium
CN114339194B (en) * 2021-03-16 2023-12-08 深圳市火乐科技发展有限公司 Projection display method, apparatus, projection device, and computer-readable storage medium

Similar Documents

Publication Title
JP5300777B2 (en) Program and image generation system
KR101926178B1 (en) Virtual reality system enabling compatibility of sense of immersion in virtual space and movement in real space, and battle training system using same
US9504920B2 (en) Method and system to create three-dimensional mapping in a two-dimensional game
EP1585063B1 (en) Image generation device, image display method and program product
US8142277B2 (en) Program, game system, and movement control method for assisting a user to position a game object
JP4179162B2 (en) Information processing device, game device, image generation method, and game image generation method
KR20170062533A (en) Driving a projector to generate a shared spatial augmented reality experience
US20100069152A1 (en) Method of generating image using virtual camera, storage medium, and computer device
JP2018081644A (en) Simulation system and program
US20230214005A1 (en) Information processing apparatus, method, program, and information processing system
JP2013158456A (en) Game device, game system, control method of game device and program
JP7071823B2 (en) Simulation system and program
JP5443129B2 (en) Program and network system
WO2013111119A1 (en) Simulating interaction with a three-dimensional environment
US20130109475A1 (en) Game system, control method therefor, and a storage medium storing a computer program
JP4363595B2 (en) Image generating apparatus and information storage medium
JP2018171320A (en) Simulation system and program
JP2018171319A (en) Simulation system and program
JP4114825B2 (en) Image generating apparatus and information storage medium
US20130109451A1 (en) Game system, control method therefor, and a storage medium storing a computer program
JP2011255114A (en) Program, information storage medium, and image generation system
CN110036359B (en) First-person role-playing interactive augmented reality
JP4420729B2 (en) Program, information storage medium, and image generation system
JP2011096017A (en) Program, information storage medium and terminal
JP2020201980A (en) Simulation system and program

Legal Events

Code Description
121  Ep: the epo has been informed by wipo that ep was designated in this application
     Ref document number: 13741612; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
     Ref country code: DE
122  Ep: pct application non-entry in european phase
     Ref document number: 13741612; Country of ref document: EP; Kind code of ref document: A1