US20100045666A1 - Anchored Navigation In A Three Dimensional Environment On A Mobile Device
- Publication number
- US20100045666A1 (U.S. application Ser. No. 12/546,274)
- Authority
- US
- United States
- Prior art keywords
- virtual camera
- mobile device
- finger
- user input
- touch screen
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1626—Constructional details or arrangements for portable computers with a single-body enclosure integrating a flat display, e.g. Personal Digital Assistants [PDAs]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/0485—Scrolling or panning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/003—Navigation within 3D models or images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2200/00—Indexing scheme relating to G06F1/04 - G06F1/32
- G06F2200/16—Indexing scheme relating to G06F1/16 - G06F1/18
- G06F2200/163—Indexing scheme relating to constructional details of the computer
- G06F2200/1637—Sensing arrangement for detection of housing movement or orientation, e.g. for controlling scrolling or cursor movement on the display of an handheld computer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/24—Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
Definitions
- This invention generally relates to navigation in a three dimensional environment.
- the three dimensional environment includes a virtual camera that defines what three dimensional data to display.
- the virtual camera has a perspective according to its position and orientation. By changing the perspective of the virtual camera, a user can navigate through the three dimensional environment.
- Mobile devices such as cell phones, personal digital assistants (PDAs), portable navigation devices (PNDs) and handheld game consoles, are being made with improved computing capabilities. Many mobile devices can access one or more networks, such as the Internet. Also, some mobile devices, such as an IPHONE device available from Apple Inc., accept input from GPS sensors, accelerometers and touch screens. Improved computing capabilities make it possible to run a wide variety of software applications on mobile devices. Despite improved computing capabilities, many handheld mobile devices have a small display—generally less than 4 inches across. The small display may make it difficult for a user to navigate through a three dimensional environment on a mobile device.
- Methods and systems are needed that improve navigation in a three dimensional environment on a mobile device.
- This invention relates to anchored navigation in a three dimensional environment on a mobile device.
- a computer-implemented method navigates a virtual camera in a three dimensional environment on a mobile device having a touch screen.
- a first user input is received indicating that a first object is approximately stationary on a touch screen of the mobile device.
- a second user input is received indicating that a second object has moved on the touch screen.
- An orientation of the virtual camera is changed according to the second user input.
- a system navigates a virtual camera in a three dimensional environment on a mobile device.
- the system includes a touch receiver that receives a first user input indicating that a first object is approximately stationary on a touch screen of the mobile device and receives a second user input indicating that a second object has moved on the touch screen.
- the system also includes a look around module that changes an orientation of the virtual camera according to the second user input.
- a computer-implemented method navigates a virtual camera in a three dimensional environment on a mobile device having a touch screen.
- a first user input is received indicating that a first object is approximately stationary on a touch screen of the mobile device.
- a second user input is received indicating that a second object has moved on the touch screen.
- a target location is determined in the three dimensional environment.
- a position of the virtual camera is changed according to the second user input. A distance between the target location and the position of the virtual camera stays approximately constant.
- a system navigates a virtual camera in a three dimensional environment on a mobile device.
- the system includes a touch receiver that receives a first user input indicating that a first object is approximately stationary on a touch screen of the mobile device and receives a second user input indicating that a second object has moved on the touch screen.
- the system also includes a target module that determines a target location in the three dimensional environment.
- the system includes a helicopter module that changes a position of the virtual camera according to the second user input. A distance between the target location and the position of the virtual camera stays approximately constant.
- a computer-implemented method navigates a virtual camera in a three dimensional environment on a mobile device having a touch screen.
- a first user input is received indicating that a first object is approximately stationary on a touch screen of the mobile device.
- a second user input is received indicating that a second object has moved on the touch screen.
- a target location is determined in the three dimensional environment.
- a tilt value of the virtual camera relative to a vector directed upwards from the target location is changed.
- An azimuth value of the virtual camera relative to the vector is also changed.
- a computer-implemented method navigates a virtual camera in a three dimensional environment on a mobile device having a touch screen.
- a first user input is received indicating that a first object has touched a first point on a touch screen of a mobile device.
- a second user input is received indicating that a second object has touched a second point on the touch screen after the first object touched the first point on the touch screen.
- a navigation mode is determined from a plurality of navigation modes based on the position of the first point relative to the second point.
- a computer-implemented method navigates a virtual camera in a three dimensional environment on a mobile device having a touch screen.
- a user input is received indicating that two objects have touched a touch screen of a mobile device and the two objects have moved on the touch screen approximately the same distance in approximately the same direction.
- Motion data representing motion of the two objects on the touch screen is determined.
- An orientation of the virtual camera is changed according to the motion data.
- FIG. 1 is a diagram illustrating a mobile device that navigates through a three dimensional environment.
- FIG. 2 is a diagram illustrating a virtual camera navigating through a three dimensional environment.
- FIG. 3 is a diagram illustrating a system that accepts user interface gestures to navigate through a three dimensional environment.
- FIG. 4 is a flowchart illustrating a method for angular jump navigation.
- FIG. 5 is a diagram illustrating angular jump navigation on a mobile device.
- FIGS. 6A-B are diagrams illustrating determining a target location according to a position selected on a view.
- FIG. 7 is a diagram illustrating an angular jump trajectory.
- FIG. 8 is a flowchart illustrating a method for anchored look-around navigation.
- FIGS. 9A-B are diagrams illustrating anchored look-around navigation on a mobile device.
- FIG. 10 is a flowchart illustrating a method for anchored helicopter navigation.
- FIGS. 11A-B are diagrams illustrating anchored helicopter navigation on a mobile device.
- FIG. 12 is a diagram illustrating a two finger gesture for looking around in a three dimensional environment on a mobile device.
- FIG. 13 is a flowchart illustrating a method for navigating a virtual camera based on an orientation of a mobile device.
- FIGS. 14A-C are diagrams illustrating navigating a virtual camera based on an orientation of a mobile device.
- FIG. 15 is a flowchart illustrating a method for navigating a virtual camera using a pinch momentum.
- FIGS. 16A-C are diagrams illustrating navigating a virtual camera through a three dimensional environment on a mobile device using a pinch momentum.
- FIG. 17 is a flowchart illustrating a method for panning on a mobile device.
- FIGS. 18A-B are diagrams illustrating panning through a three dimensional environment on a mobile device.
- FIGS. 19A-C are diagrams illustrating different panning modes which may be used in navigation on a mobile device.
- Embodiments of the present invention provide for navigation in a three dimensional environment on a mobile device.
- references to “one embodiment”, “an embodiment”, “an example embodiment”, etc. indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
- the first section provides an introduction to navigation through a three dimensional environment on a mobile device.
- the second section describes a system that accepts user interface gestures to navigate in a three dimensional environment on a mobile device.
- the next several sections describe the user interface gestures in greater detail.
- the third section describes an angular jump user interface gesture.
- the fourth section describes two anchored navigation gestures.
- the fifth section describes a dual finger look-around gesture.
- the sixth section describes accelerometer navigation.
- the seventh section describes pinch momentum and a two-finger touch and rotate gesture.
- the eighth section describes panning in a three dimensional environment on a mobile device.
- FIG. 1 is a diagram illustrating a mobile device 100 that can navigate through a three dimensional environment.
- mobile device 100 may be a PDA, cell phone, handheld game console or other handheld mobile device as known to those of skill in the art.
- mobile device 100 may be an IPHONE device, available from Apple Inc.
- mobile device 100 may be a device running an ANDROID platform, available from Google Inc.
- mobile device 100 may be a tablet computer, laptop computer, or other mobile device larger than a handheld mobile device but still easily carried by a user. These examples are illustrative and are not meant to limit the present invention.
- Mobile device 100 may have a touch screen that accepts touch input from the user. The user may touch the screen with his fingers, stylus, or other means known to those skilled in the art. Mobile device 100 also may have an accelerometer that detects when the mobile device accelerates or detects mobile device 100 's orientation relative to gravity. It should be noted that other devices may be used to determine mobile device 100 's orientation, and this invention is not meant to be limited to an accelerometer. Further, one or more accelerometers may be used. Further, mobile device 100 may have a location receiver, such as a GPS receiver, and may be connected to one or more networks such as the Internet.
- Mobile device 100 has a view 102 .
- mobile device 100 may accept touch input when a user touches view 102 . Further, view 102 may output images to the user.
- mobile device 100 may render a three dimensional environment and may display the three dimensional environment to the user in view 102 from the perspective of a virtual camera.
- Mobile device 100 enables the user to navigate a virtual camera through a three dimensional environment.
- the three dimensional environment may include a three dimensional model, such as a three dimensional model of the Earth.
- a three dimensional model of the Earth may include satellite imagery texture mapped to three dimensional terrain.
- the three dimensional model of the Earth may also include models of buildings and other points of interest. This example is merely illustrative and is not meant to limit the present invention.
- mobile device 100 may change a perspective of the virtual camera. Based on the virtual camera's new perspective, mobile device 100 may render a new image into view 102 .
- Various user interface gestures that change the virtual camera's perspective and result in a new image are described in detail below.
- FIG. 2 shows a diagram 200 illustrating a virtual camera in a three dimensional environment.
- Diagram 200 includes a virtual camera 202 .
- Virtual camera 202 is directed to view a three dimensional terrain 210 .
- Three dimensional terrain 210 may be a portion of a larger three dimensional model, such as a three dimensional model of the Earth.
- user input may cause a mobile device, such as mobile device 100 in FIG. 1 , to move virtual camera 202 to a new location. Further, user input may cause virtual camera 202 to change orientation, such as pitch, yaw, or roll.
- user interface gestures on a mobile device cause a virtual camera to navigate through a three dimensional environment on a mobile device.
- the various system components and details of the user interface gestures are described below.
- FIG. 3 is a diagram illustrating a system 300 that accepts user interface gestures for navigation in a three dimensional environment on a mobile device.
- System 300 includes a client 302 having a user interaction module 310 and a renderer module 322 .
- User interaction module 310 includes a motion model 314 .
- client 302 operates as follows.
- User interaction module 310 receives user input regarding a location that a user desires to view and, through motion model 314 , constructs a view specification defining the virtual camera.
- Renderer module 322 uses the view specification to decide what data is to be drawn and draws the data. If renderer module 322 needs to draw data that system 300 does not have, system 300 sends a request to a server for the additional data across one or more networks, such as the Internet, using a network interface 350 .
- Motion model 314 constructs a view specification.
- the view specification defines the virtual camera's viewable volume within a three dimensional space, known as a frustum, and the position and orientation of the frustum in the three dimensional environment.
- the frustum is in the shape of a truncated pyramid.
- the frustum has minimum and maximum view distances that can change depending on the viewing circumstances.
- changing the view specification changes the geographic data culled to the virtual camera's viewable volume.
- the culled geographic data is drawn by renderer module 322 .
- the view specification may specify three main parameter sets for the virtual camera: the camera tripod, the camera lens, and the camera focus capability.
- the camera tripod parameter set specifies the following: the virtual camera position (X, Y, Z coordinates); which way the virtual camera is oriented relative to a default orientation, such as heading angle (e.g., north?, south?, in-between?); pitch (e.g., level?, down?, up?, in-between?); yaw and roll (e.g., level?, clockwise?, anti-clockwise?, in-between?).
- the lens parameter set specifies the following: horizontal field of view (e.g., telephoto?, normal human eye—about 55 degrees?, or wide-angle?); and vertical field of view (e.g., telephoto?, normal human eye—about 55 degrees?, or wide-angle?).
- the focus parameter set specifies the following: distance to the near-clip plane (e.g., how close to the “lens” can the virtual camera see, where objects closer are not drawn); and distance to the far-clip plane (e.g., how far from the lens can the virtual camera see, where objects further are not drawn).
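- As a rough illustration only, the sketch below (in Python) collects the three parameter sets described above into a single structure. The class and field names, default values, and units are assumptions for illustration and are not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class ViewSpecification:
    """Hypothetical container for the camera tripod, lens, and focus parameters."""
    # Camera tripod: position and orientation relative to a default orientation.
    x: float
    y: float
    z: float
    heading: float        # degrees from north
    pitch: float          # degrees up or down from level
    roll: float           # degrees clockwise or anti-clockwise from level
    # Camera lens: fields of view.
    horizontal_fov: float = 55.0   # degrees, roughly a normal human eye
    vertical_fov: float = 55.0
    # Camera focus: clip planes bounding what is drawn.
    near_clip: float = 1.0         # objects closer than this are not drawn
    far_clip: float = 10000.0      # objects farther than this are not drawn
```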
- moving the virtual camera includes zooming the virtual camera as well as translating the virtual camera.
- user interaction module 310 receives user input.
- Client 302 has various mechanisms for receiving input.
- client 302 may receive input using sensors including a touch receiver 340 , an accelerometer 342 , and a location module 344 . Each of the sensors will now be described in turn.
- Touch receiver 340 may be any type of touch receiver that accepts input from a touch screen. Touch receiver 340 may receive touch input on a view such as the view 102 in FIG. 1 . The touch input received may include a position that the user touched as defined by an X and Y coordinate on the screen. The user may touch the screen with a finger, stylus, or other object. Touch receiver 340 may be able to receive multiple touches simultaneously if, for example, the user selects multiple locations on the screen. The screen may detect touches using any technology known in the art including, but not limited to, resistive, capacitive, infrared, surface acoustic wave, strain gauge, optical imaging, acoustic pulse recognition, frustrated total internal reflection, and diffused laser imaging technologies.
- Accelerometer 342 may be any type of accelerometer as known to those skilled in the art. Accelerometer 342 may be able to detect when the mobile device moves. Accelerometer 342 also may be able to detect the orientation of a mobile device relative to gravity.
- Location receiver 344 detects the location of the mobile device.
- Location receiver 344 may detect a location of a mobile device from, for example, a GPS receiver.
- a GPS receiver determines a location of the mobile device using signals from GPS satellites.
- location receiver 344 may detect the location of the mobile device by, for example, collecting information from nearby cell towers and wi-fi hotspots. Location receiver 344 may use information from cell towers, wi-fi hotspots, and GPS satellites together to determine the location of the mobile device quickly and accurately.
- user interaction module 310 includes various modules that change the perspective of the virtual camera as defined by the view specification.
- User interaction module 310 includes a momentum module 316 , an angular jump module 312 , a navigation module 318 , an anchor module 320 , a pan module 348 , and a target module 346 . Each of these modules is described below.
- the modules in user interaction module 310 may change a virtual camera's perspective according to a target location.
- a target location may be determined by a target module 346 .
- target module 346 may extend a ray from a focal point of the virtual camera.
- the target location may be an intersection of the ray with a three dimensional model, such as a three dimensional model of the Earth.
- the ray may be extended according to a position on the view selected by a user. Alternatively, the ray may be extended through a center of the view frustum of the virtual camera.
- the operation of target module 346 is described in more detail with respect to FIGS. 6A-B .
- angular jump module 312 In response to a user selecting a feature in the three dimensional environment, angular jump module 312 moves the virtual camera toward the feature.
- touch receiver 340 receives a user input indicating that a user has selected a position of a view. In an example, a user may select a position on the view and initiate an angular jump by double tapping on the position.
- target module 346 determines a target location. Using the target location, angular jump module 312 moves the virtual camera. Angular jump module 312 may move the virtual camera toward the target location and may rotate the virtual camera toward the target location.
- angular jump module 312 may change the virtual camera's roll to simulate an airplane banking.
- Angular jump module 312 may orient the virtual camera such that the target location appears approximately at the center of the view.
- angular jump module 312 may change pitch or yaw values of the virtual camera. In this way, a user can double tap on a screen with one hand and easily navigate the virtual camera towards the target. Further, the smooth transition of the virtual camera to its new location may create a pleasing effect to a user.
- Anchor module 320 moves the virtual camera in response to other user interface gestures.
- anchor module 320 is called when touch receiver 340 receives a two finger touch with one finger stationary and the other finger in motion.
- the relative initial positions of the stationary and moving fingers may activate one of two navigation modes—an anchored look-around mode or an anchored helicopter mode.
- the anchored look-around mode is activated when the initial position of the first stationary finger is below the initial position of the second finger.
- the anchored helicopter mode is activated when the initial position of the first stationary finger is above the initial position of the second finger.
- the anchored look-around mode may be executed by a look-around module 326 , and the anchored helicopter mode may be executed by a helicopter module 324 .
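- The following minimal sketch shows one way the relative initial finger positions could select between the two anchored modes. The screen coordinate convention (y grows downward) and the function name are assumptions for illustration.

```python
def select_anchored_mode(stationary_y: float, moving_y: float) -> str:
    """Pick an anchored navigation mode from the initial touch positions.

    Screen y is assumed to grow downward, so a larger y value is lower on
    the screen.
    """
    if stationary_y > moving_y:
        # Stationary finger below the moving finger: look around in place.
        return "anchored_look_around"
    # Stationary finger above the moving finger: helicopter around a target.
    return "anchored_helicopter"
```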
- Look-around module 326 changes an orientation of the virtual camera according to movement of the second finger.
- Touch receiver 340 may receive the direction of the second finger's movement and send the direction to look-around module 326 . Based on the direction, look-around module 326 may rotate the virtual camera along different axes.
- Look-around module 326 may change a yaw of the virtual camera when the finger moves toward the left or right of the mobile device.
- look-around module 326 may change a pitch of the virtual camera when the finger moves toward the top or bottom of the mobile device.
- look-around module 326 also may change an orientation of the virtual camera in response to movement of two fingers. This embodiment is described with respect to FIG. 12 .
- Helicopter module 324 moves the virtual camera when the position of the stationary finger is initially above the moving finger.
- target module 346 may determine a target location.
- the target location may be determined by extending a ray based on the position of the stationary finger.
- the target location may be determined by extending a ray through a center of the virtual camera's view frustum. Determining a target location is described in more detail later with respect to FIGS. 6A-B .
- Touch receiver 340 may send a direction of the moving finger to helicopter module 324 . Based on the direction of the moving finger, helicopter module 324 may move the virtual camera in different directions, keeping a distance between the target location and the position of the virtual camera approximately constant. Helicopter module 324 may allow for small changes in the distance. For example, new terrain may be streamed into the client that causes the distance to change.
- Helicopter module 324 may extend a ray upwards from the target location determined by target module 346 .
- helicopter module 324 may change a tilt angle relative to the ray. Changing the tilt angle may move the virtual camera up or down.
- helicopter module 324 may change an azimuth angle relative to the ray. Changing an azimuth angle may move the virtual camera around the target location while maintaining a constant elevation.
- helicopter module 324 may change both the tilt and azimuth angles. In this way, helicopter module 324 enables a user to navigate easily around a target location.
- Helicopter module 324 may also move the virtual camera when two fingers rotate on a screen of a mobile device, as described for FIG. 16C .
- helicopter module 324 also may change a distance between the target location and the virtual camera.
- the virtual camera may move into or away from the target location.
- movement of the initially stationary finger may result in translating the virtual camera toward or away from the target.
- helicopter module 324 may change an azimuth angle while allowing navigation module 318 to change a tilt angle based on an orientation of the mobile device relative to gravity. The operation of helicopter module 324 is described in more detail with respect to FIG. 10 and FIGS. 11A-B .
- Navigation module 318 orients and positions the virtual camera in the three dimensional environment according to orientation and position information received from accelerometer 342 and location receiver 344 .
- Navigation module 318 includes an accelerometer navigation module 330 .
- accelerometer 342 receives an orientation of the mobile device relative to gravity. Based on the orientation of the mobile device, accelerometer navigation module 330 changes a position or orientation of the virtual camera. Based on the orientation of the mobile device, accelerometer navigation module 330 may change a pitch of the virtual camera, causing the virtual camera to look up and down. Alternatively, accelerometer navigation module 330 may change a tilt of the virtual camera relative to a target location, causing the virtual camera to move up or down.
- Location receiver 344 may receive a heading value of the mobile device. For example, location receiver 344 may receive the cardinal direction (north, east, south, west) that the mobile device faces. Based on the heading value, navigation module 318 may orient the virtual camera in the direction of the mobile device. Also, location receiver 344 may receive a location value of the mobile device. For example, location receiver 344 may receive a latitude, longitude and altitude of the mobile device. Based on the location of the mobile device, navigation module 318 may position a virtual camera in the three dimensional environment. The three dimensional environment may include a three dimensional model of the Earth. In this way, navigation module 318 may position and orient the virtual camera in the virtual Earth to correspond to the position and orientation of the mobile device in the real Earth. Navigation module 318 may continually update the position and orientation of the virtual camera to track the mobile device. The operation of navigation module 318 is described in more detail with respect to FIG. 13 and FIGS. 14A-B .
- Each of angular jump module 312 , momentum module 316 , accelerometer navigation module 330 , look-around module 326 , and helicopter module 324 accept user interface gestures to move the virtual camera.
- Each of those modules may coordinate with momentum module 316 to continue the motion of the virtual camera after the user interface gesture is complete.
- Momentum module 316 may gradually decelerate the motion after the gesture is complete. In this way, momentum module 316 simulates the virtual camera having a momentum and simulates the virtual camera being subjected to friction, such as air resistance.
- anchor module 320 navigates a virtual camera when touch receiver 340 receives a two finger touch with one finger stationary and the other in motion.
- momentum module 316 also may navigate the virtual camera.
- a two finger touch with both fingers in motion is sometimes described herein as a pinch gesture with the fingers either moving away from each other or towards each other.
- Momentum module 316 may determine a speed of the fingers relative to each other. Based on the finger speed, momentum module 316 may determine a speed of the virtual camera and may move the virtual camera at the determined speed. Moving the fingers away from each other may cause the virtual camera to move forward, whereas moving the fingers towards each other may cause the virtual camera to move backwards.
- Momentum module 316 may simulate air resistance and consequently may reduce the speed of the virtual camera gradually.
- the virtual camera may remain stationary and a three dimensional model, such as a three dimensional model of the Earth, may move according to the finger speed.
- Momentum module 316 may rotate a model of the Earth at an angular velocity determined according to a finger speed. The operation of momentum module 316 is described in more detail with respect to FIG. 15 and FIG. 16A-B .
- a three dimensional model such as a three dimensional model of the Earth, may also be rotated by pan module 348 .
- touch receiver 340 may receive a user input indicating that a user has touched a first position on a view of the mobile device and moved his finger to a second position on the view (a touch-and-drag gesture).
- target module 346 may determine first and second points in the three dimensional environment.
- pan module 348 may move the three dimensional model relative to the virtual camera. This movement may be referred to herein as “panning.”
- pan module 348 may move the three dimensional model by determining a rotation axis on the three dimensional model and rotating the three dimensional model around the rotation axis.
- the operation of pan module 348 may change according to the orientation of the virtual camera.
- the orientation of the virtual camera may be determined according to an orientation of the mobile device relative to gravity.
- the user may pan in any direction.
- when the virtual camera faces the horizon, the user may pan only forward and backward. Finger movements to the left and right may instead result in the virtual camera looking to the left or right.
- the operation of pan module 348 is described in greater detail with respect to FIG. 17 , FIGS. 18A-B , and FIGS. 19A-C .
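- The sketch below illustrates one way a pan could be turned into a rotation of a globe-like model, under the assumption that the model is centered at the origin; the function name and approach are illustrative and not the patent's implementation.

```python
import numpy as np

def pan_rotation(first_point: np.ndarray, second_point: np.ndarray):
    """Compute a rotation axis and angle for panning a globe-like model.

    first_point and second_point are the points on the three dimensional
    model hit by screen rays at the start and end of the drag, expressed
    as vectors from the model's center.
    """
    a = first_point / np.linalg.norm(first_point)
    b = second_point / np.linalg.norm(second_point)
    if np.allclose(a, b):
        return None, 0.0                          # no movement, no rotation
    axis = np.cross(a, b)                         # rotation axis on the model
    axis = axis / np.linalg.norm(axis)
    angle = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    # Rotating the model by -angle about this axis keeps the originally
    # touched point under the user's finger.
    return axis, angle
```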
- Each of the components of system 300 may be implemented in hardware, software, firmware, or any combination thereof.
- Each of angular jump module 312 , target module 346 , anchor module 320 , momentum module 316 , navigation module 318 and pan module 348 is described in greater detail below.
- Angular jump navigation enables a user to navigate easily and intuitively in a three dimensional environment on a mobile device.
- the method navigates a virtual camera toward a location and angles the virtual camera toward the location.
- FIG. 4 is a flowchart illustrating a method 400 for angular jump navigation.
- Method 400 begins with receiving a user input indicating that a user has double tapped on a location of a view at step 402 .
- Step 402 is illustrated in FIG. 5 .
- FIG. 5 shows a diagram 500 illustrating angular jump navigation on a mobile device.
- Diagram 500 shows mobile device 100 with view 102 .
- a user double taps at a location 504 .
- Angular jump navigation navigates along a trajectory 502 as is described in the remaining steps of method 400 .
- FIG. 6A shows a diagram 600 illustrating extending a screen ray to determine a target location.
- Diagram 600 shows a virtual camera with a focal point 602 .
- the virtual camera has a focal length 606 and a viewport 604 .
- point 610 corresponds to a point selected by a user on a view of the mobile device.
- a ray 612 is extended through point 610 .
- Ray 612 intersects with a three dimensional model 616 to determine a target location 614 . In this way, target location 614 is determined based on the point selected (e.g., double tapped) by the user.
- FIG. 6B shows a diagram 650 with a virtual camera having focal point 602 , focal length 606 and viewport 604 .
- a user selects a point on the view close to a horizon. The point selected by the user corresponds to a point 652 on viewport 604 .
- a ray 654 extends from focal point 602 through point 652 on viewport 604 . Ray 654 intersects with a concave virtual surface 658 at a point 656 . Point 656 may be projected onto three dimensional model 660 to determine a target location.
- Diagram 650 shows one method for damping a user selection, but other methods may be used as are known to those of skill in the art.
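- As a simplified example of determining a target location, the sketch below intersects a screen ray with a sphere standing in for the three dimensional model of the Earth; it does not reproduce the concave-surface damping of FIG. 6B, and the names and geometry are assumptions for illustration.

```python
import numpy as np

def target_location(focal_point, ray_direction, sphere_center, sphere_radius):
    """Intersect a ray from the virtual camera's focal point with a sphere.

    The ray is extended through the point the user selected on the viewport.
    Returns the nearest intersection with the sphere, or None if the ray
    misses the model (for example, the user tapped the sky).
    """
    d = np.asarray(ray_direction, dtype=float)
    d = d / np.linalg.norm(d)
    oc = np.asarray(focal_point, dtype=float) - np.asarray(sphere_center, dtype=float)
    b = 2.0 * np.dot(d, oc)
    c = np.dot(oc, oc) - sphere_radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                      # ray misses the model
    t = (-b - np.sqrt(disc)) / 2.0       # nearest of the two intersections
    if t < 0:
        return None                      # model is behind the camera
    return np.asarray(focal_point, dtype=float) + t * d
```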
- the virtual camera moves toward the target location at step 406 .
- the virtual camera rotates toward the target location at step 408 . Steps 406 and 408 are illustrated in FIG. 7 .
- FIG. 7 shows a diagram 700 illustrating an angular jump trajectory.
- Diagram 700 shows a virtual camera at an initial position 702 .
- the virtual camera moves along a trajectory 706 .
- the virtual camera may start with an initial forward velocity vector.
- the virtual camera rotates towards a target 708 .
- Rotating towards a target 708 may include changing a pitch or yaw of the virtual camera.
- the virtual camera may slow down, coming to rest at a position 704 facing the target 708 .
- the target 708 may appear at approximately the center of the view.
- the approximate center of the view may not be the exact center as small offsets from the center are allowed.
- the virtual camera may roll.
- the roll may simulate an aircraft-like turn toward the destination.
- the virtual camera may start trajectory 706 with no roll.
- the virtual camera's roll may increase as it moves along trajectory 706 and may attain the largest amount of roll midway through trajectory 706 . Then, the virtual camera's roll may decrease returning to zero roll when the virtual camera reaches its final position 704 .
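- One way to produce a roll that starts at zero, peaks midway along trajectory 706, and returns to zero is a half-sine profile, sketched below; the sine shape and the maximum roll value are assumptions, not taken from the patent.

```python
import math

def banking_roll(progress: float, max_roll_degrees: float = 15.0) -> float:
    """Roll applied along the angular jump trajectory.

    progress runs from 0.0 (initial position 702) to 1.0 (final position 704).
    The roll rises to max_roll_degrees at the midpoint and falls back to zero,
    simulating an aircraft-like banking turn.
    """
    progress = min(max(progress, 0.0), 1.0)
    return max_roll_degrees * math.sin(math.pi * progress)
```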
- angular jump navigation enables a user to easily navigate towards a target location in a three dimensional environment. Additionally, by determining the target location based on a double touch gesture, the user can navigate towards the location with only one hand. This is useful because often users have one hand holding the mobile device, leaving only one hand free to navigate in the three dimensional environment.
- Each user interface gesture has one finger initially stationary with the other moving.
- the stationary finger may touch the screen before the moving finger.
- the initial relative position of the stationary and moving fingers may determine whether the user enters an anchored look-around navigation mode or an anchored helicopter navigation mode.
- FIG. 8 is a flowchart illustrating a method 800 for anchored look-around navigation.
- Method 800 begins by receiving a user input for a two finger touch on a view of the mobile device at step 802 .
- One of the two fingers is in motion and the direction of motion (e.g. a motion vector) of the second finger is received at step 804 .
- the two finger touch is illustrated in FIG. 9A .
- FIG. 9A shows a diagram 900 .
- Diagram 900 shows mobile device 100 with view 102 .
- a user has touched view 102 with a finger 902 and a finger 904 .
- Finger 902 is initially stationary and finger 904 is initially in motion.
- Finger 902 may touch the screen at least a certain amount of time before finger 904 .
- the user enters an anchored look-around navigation mode.
- the user enters the anchored look-around navigation mode when the finger initially moving (finger 904 ) is above the finger that is initially stationary (finger 902 ).
- an orientation of the virtual camera is changed according to the movement of the second finger. How the virtual camera's orientation is changed is illustrated in FIG. 9B .
- FIG. 9B shows a diagram 950 illustrating a virtual camera looking around in a three dimensional environment.
- Diagram 950 shows three dimensional terrain 210 and virtual camera 202 .
- camera 202 may look up and down as shown by an arrow 952 .
- camera 202 may look left and right as shown by an arrow 954 .
- the virtual camera may look to the left and right based on the user input, while looking up and down based on an orientation of a mobile device.
- An orientation of the mobile device relative to gravity may be received from an accelerometer of the mobile device.
- a pitch of the virtual camera may be changed according to the orientation of the mobile device. In this way, the user can look up and down by angling the mobile device up and down.
- an axis of the virtual camera may be determined based on the position of the first, stationary finger.
- a target location may be determined based on the position of the stationary finger.
- the axis is the line connecting the virtual camera and the target location.
- movement of the second finger causes the virtual camera to rotate about the axis.
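- A minimal sketch of anchored look-around follows, assuming horizontal finger motion maps to yaw and vertical motion maps to pitch; the sensitivity constant and the clamping range are assumptions for illustration.

```python
def look_around(yaw_deg: float, pitch_deg: float, dx: float, dy: float,
                degrees_per_pixel: float = 0.2):
    """Update the virtual camera orientation for anchored look-around.

    dx and dy are the moving finger's motion in screen pixels. Motion toward
    the left or right of the device changes yaw; motion toward the top or
    bottom changes pitch.
    """
    yaw_deg = (yaw_deg + dx * degrees_per_pixel) % 360.0
    pitch_deg = max(-90.0, min(90.0, pitch_deg - dy * degrees_per_pixel))
    return yaw_deg, pitch_deg
```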
- a user enters an anchored look-around mode when the stationary finger is below the moving finger.
- when the stationary finger is above the moving finger, the user may enter an anchored helicopter mode.
- Anchored helicopter mode is described with respect to FIGS. 10 and 11 A-B.
- FIG. 10 is a flowchart illustrating a method 1000 for anchored helicopter navigation.
- Method 1000 begins by receiving a user input for a two finger touch on a view of the mobile device at step 1002 .
- One of the two fingers is in motion and the direction of motion (e.g. a motion vector) of the second finger is received at step 1004 .
- the two finger touch is illustrated in FIG. 11A .
- FIG. 11A shows a diagram 1100 .
- Diagram 1100 shows mobile device 100 with view 102 .
- a user has touched view 102 with a finger 1102 and a finger 1104 .
- Finger 1102 is initially stationary and finger 1104 is initially in motion.
- a user may touch view 102 with a finger 1102 a certain time prior to touching the view with a finger 1104 .
- the user enters an anchored helicopter mode.
- the user may enter anchored helicopter mode when the finger initially moving (finger 1104 ) is below the finger that is initially stationary (finger 1102 ).
- a target location is determined at step 1006 .
- the target location may be determined based on the position of the first, stationary finger.
- the target location may be determined by extending a screen ray as described in FIG. 6A . Further, the screen ray may be damped as described with respect to FIG. 6B .
- the target location may be determined by extending a ray through a center of the virtual camera's view frustum. The ray may intersect with a three dimensional model at a target location.
- Step 1008 is illustrated in FIG. 11B .
- FIG. 11B shows a diagram 1150 illustrating anchored helicopter navigation.
- Diagram 1150 shows virtual camera 202 directed towards three dimensional terrain 210 .
- a ray 1160 is extended to determine a target 1158 as described for step 1006 .
- a vector 1162 directed upwards is determined.
- virtual camera 202 has a tilt angle 1156 and an azimuth angle 1154 .
- Changing tilt angle 1156 causes virtual camera 202 to move up or down, and changing azimuth angle 1154 causes virtual camera 202 to orbit around target 1158 at a constant elevation.
- changing tilt angle 1156 and azimuth angle 1154 does not change the distance between virtual camera 202 and target 1158 . In this way, changing tilt angle 1156 and azimuth angle 1154 navigates the virtual camera around target 1158 while staying equidistant to target 1158 .
- In FIG. 11A , when a user moves finger 1104 left or right, as shown by arrows 1108 and 1106 , an azimuth angle changes, causing virtual camera 202 to orbit around target 1158 at a constant elevation.
- when the user moves finger 1104 up or down, a tilt angle may change, moving the virtual camera up or down relative to target 1158 .
- both the tilt and azimuth angles may change.
- the tilt and azimuth values may change according to the components of the motion vector along the axes of the mobile device. In this way, by moving a finger, a user can cause the virtual camera to move around a target location, viewing a target location from different perspectives.
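- The sketch below places the virtual camera on a sphere of constant radius around the target from a tilt and azimuth, keeping the camera equidistant from target 1158 as those angles change; the axis conventions and function name are assumptions for illustration.

```python
import math
import numpy as np

def helicopter_position(target, up, tilt_deg, azimuth_deg, distance):
    """Position the camera at a constant distance from the target location.

    up is the vector directed upward from the target (vector 1162). Tilt is
    measured from that vector and azimuth around it, so changing tilt moves
    the camera up or down and changing azimuth orbits it at a constant
    elevation.
    """
    up = np.asarray(up, dtype=float)
    up = up / np.linalg.norm(up)
    # Build two axes perpendicular to the up vector.
    ref = np.array([1.0, 0.0, 0.0]) if abs(up[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    east = np.cross(up, ref)
    east = east / np.linalg.norm(east)
    north = np.cross(east, up)
    t, a = math.radians(tilt_deg), math.radians(azimuth_deg)
    offset = (math.cos(t) * up +
              math.sin(t) * (math.cos(a) * north + math.sin(a) * east))
    return np.asarray(target, dtype=float) + distance * offset
```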
- An orientation of the virtual camera may also change such that the virtual camera continues to face the target.
- a user may move finger 1104 down and to the right.
- both a tilt and azimuth value relative to a target location may increase in response to the finger movement.
- as the tilt value increases, the virtual camera moves down towards the elevation of the target location.
- the increasing azimuth value causes the virtual camera to rotate around the target location. While the virtual camera is moving, the virtual camera may remain oriented toward the target location. In this way, a user can easily view a feature in the three dimensional environment from different perspectives.
- the distance between the virtual camera and the target location may also change.
- the virtual camera may swoop into the target by moving the virtual camera into a target while changing a tilt or azimuth value.
- the virtual camera can move away from the target while changing a tilt or azimuth value.
- moving finger 1104 left or right may change an azimuth angle, while a tilt angle is determined according to an orientation of the mobile device.
- An orientation of the mobile device relative to gravity may be received from an accelerometer of the mobile device. Based on the orientation of the mobile device, the tilt angle is determined. In this way, the user may move the virtual camera up and down by moving the mobile device up and down.
- a user holding the mobile device and viewing a display may move the device relative to the ground.
- the virtual camera may move above the target and face down toward the target.
- the virtual camera may move to the target's elevation and view the target from a ground-level view.
- a user may cause a virtual camera to look around by moving one finger and keeping another stationary.
- This section describes another gesture that may cause a virtual camera to look around.
- the gesture described in this section includes two fingers touching the display. In general, two fingers move in approximately the same direction by approximately the same distance and the virtual camera moves according to the finger movement.
- FIG. 12 shows a diagram 1200 illustrating a two finger gesture for looking around in a three dimensional environment on a mobile device.
- Diagram 1200 shows mobile device 100 with view 102 . Touching view 102 are fingers 1202 and 1204 . With the user touching view 102 , user moves fingers 1202 and 1204 on view 102 as shown by vectors 1206 and 1208 .
- Vectors 1206 and 1208 represent the direction and distance that a user moves fingers 1202 and 1204 .
- Vectors 1206 and 1208 may be approximately in the same direction. Vectors 1206 and 1208 need not be exactly parallel. A small angle between vectors 1206 and 1208 may be allowed up to a threshold. Similarly, vectors 1206 and 1208 may have approximately the same length. A small difference in the length of vectors 1206 and 1208 may be allowed up to a threshold.
- In response, a virtual camera's orientation changes. If fingers 1202 and 1204 have moved in slightly different directions and distances, then the direction and distance values may be combined to determine an aggregate vector. In an example, the direction and distance values of vectors 1206 and 1208 may be averaged to determine the aggregate vector.
- a vector is described but any type of motion data may be used.
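- As a hedged example, the sketch below combines the two motion vectors by averaging them and rejects the gesture if their directions differ by more than an assumed threshold; the threshold value and function name are illustrative.

```python
import math

def aggregate_motion(vector_1206, vector_1208, max_angle_deg: float = 20.0):
    """Combine two finger motion vectors into one aggregate vector.

    Returns the component-wise average of the two vectors, or None if the
    angle between them exceeds the threshold and the input should not be
    treated as a two finger look-around gesture.
    """
    (x1, y1), (x2, y2) = vector_1206, vector_1208
    norms = math.hypot(x1, y1) * math.hypot(x2, y2)
    if norms == 0:
        return None
    angle = math.degrees(math.acos(max(-1.0, min(1.0, (x1 * x2 + y1 * y2) / norms))))
    if angle > max_angle_deg:
        return None
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
```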
- FIG. 9B shows a diagram 950 with three dimensional terrain 210 and virtual camera 202 .
- Diagram 950 shows three dimensional terrain 210 and virtual camera 202 .
- the virtual camera's yaw may change. Changing the virtual camera's yaw causes the camera to look left or right as shown by arrow 954 .
- the virtual camera's pitch may change. Changing the virtual camera's pitch causes the camera to look up or down as shown by arrow 956 .
- both a pitch and a yaw of the virtual camera may change.
- the pitch and yaw may change according to the components of the vector of the finger movement along the axes of the mobile device. In this way, by moving two fingers, a user can cause the virtual camera to look-around, viewing the three dimensional environment from different perspectives.
- the virtual camera may look to the left and right based on the user input, while looking up and down based on an orientation of a mobile device.
- An orientation of the mobile device relative to gravity may be received from an accelerometer of the mobile device.
- a pitch of the virtual camera may be changed according to the orientation of the mobile device. In this way, the user can look up and down by angling the mobile device up and down.
- the orientation of the mobile device may be determined by an accelerometer. The next section describes accelerometer navigation in greater detail.
- FIG. 13 is a flowchart illustrating a method 1300 for navigating a virtual camera based on an orientation of a mobile device.
- Method 1300 begins with enabling accelerometer navigation at step 1302 .
- Accelerometer navigation may be enabled, for example, when a user makes a setting change to turn it on or at startup if a default setting is set for accelerometer navigation. In another example, entering a navigation mode such as anchored navigation or look-around navigation may enable accelerometer navigation.
- accelerometer navigation may be enabled when a change in orientation of the mobile device exceeds a threshold. This way, minor changes in orientation do not unintentionally change the perspective of the virtual camera.
- the accelerometer navigation may be enabled when an orientation of the mobile device relative to gravity exceeds a threshold. If an orientation of the mobile device relative to gravity is below a threshold, the orientation may be in a “dead zone”.
- an orientation of the mobile device is determined at step 1304 .
- an accelerometer determines the direction of gravity and an orientation of the mobile device relative to gravity. Based on the orientation of the mobile device, the virtual camera's position or orientation is changed at step 1306 . Steps 1304 and 1306 are illustrated in FIGS. 14A-C . Further, the accelerometer readings may be damped.
- FIGS. 14A-C each show a mobile device with a different orientation.
- FIG. 14A shows a profile of a mobile device 1402 facing the ground.
- An orientation of mobile device 1402 is defined relative to a vector normal to the plane of the screen of mobile device 1402 .
- An accelerometer of the mobile device detects that gravity is facing straight down. In other words, gravity is parallel to the orientation of mobile device 1402 .
- the virtual camera is oriented straight down at a three dimensional model, such as a three dimensional model of the Earth. With the virtual camera facing the ground, the virtual camera may capture an image 1404 of the ground.
- FIG. 14B shows a profile of a mobile device 1422 at an angle relative to the ground.
- An accelerometer of the mobile device detects that gravity has an angle 1426 relative to the orientation of the mobile device.
- the virtual camera's pitch may be set to angle 1426 .
- an image captured by the virtual camera and displayed to the user may appear as an image 1424 .
- the virtual camera's pitch may be determined based on angle 1426 .
- a range of angles of the mobile device may interpolate smoothly to a range of angles of the virtual camera.
- the interpolation may be a linear interpolation.
- the range of angles of the mobile device is 30 degrees to 90 degrees. That range interpolates to a range of angles of the virtual camera of 0 degrees to 90 degrees.
- when the mobile device is angled at 60 degrees, an angle of the virtual camera may be set to 45 degrees. This example is merely illustrative.
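- A sketch of the linear interpolation described above follows; the clamping behavior outside the device's angle range is an assumption for illustration.

```python
def camera_pitch_from_device_angle(device_angle_deg: float,
                                   device_range=(30.0, 90.0),
                                   camera_range=(0.0, 90.0)) -> float:
    """Linearly interpolate the mobile device's angle to a virtual camera pitch.

    With the example ranges, a device angle of 30 degrees maps to a camera
    angle of 0 degrees, 60 degrees maps to 45 degrees, and 90 degrees maps
    to 90 degrees.
    """
    d0, d1 = device_range
    c0, c1 = camera_range
    t = (device_angle_deg - d0) / (d1 - d0)
    t = max(0.0, min(1.0, t))        # clamp outside the device range
    return c0 + t * (c1 - c0)
```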
- FIG. 14C shows a profile of a mobile device 1432 normal to the ground.
- An accelerometer of the mobile device detects that gravity has an angle 1436 relative to the mobile device.
- the virtual camera's pitch may be set to angle 1436 .
- an image captured by the virtual camera and displayed to the user may appear as an image 1434 facing the horizon.
- an orientation of the virtual camera changes.
- the virtual camera looks toward the horizon.
- the virtual camera looks toward the sky.
- the virtual camera looks toward the ground.
- a position of the virtual camera may also change according to an orientation of a mobile device.
- a target location and a tilt angle may be determined as described with respect to FIGS. 11A-B .
- a tilt angle of the virtual camera relative to a target location may change.
- a user can navigate through a three dimensional environment by changing an orientation of a mobile device.
- the anchored navigation section discussed a two finger gesture with one finger initially stationary and the other initially in motion.
- This section describes a two finger gesture with both fingers initially in motion.
- the two finger gesture may be referred to as a pinch and is described with respect to FIG. 15 and FIGS. 16A-B .
- a pinch also may be distinguished from anchored navigation by the timing of the first and second finger touches. For example, when a time between the first and second finger touches is above a threshold, an anchored navigation mode may be activated. When the time between the first and second finger touches is below a threshold, the virtual camera may be moved with a pinch momentum. In an alternative embodiment, the anchored navigation mode may be activated when the time is below a threshold, and the virtual camera may be moved with a pinch momentum when the time is above a threshold.
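- A minimal sketch of distinguishing the two gestures by touch timing follows; the threshold value is an assumption, and as noted above the assignment may also be reversed.

```python
def classify_two_finger_gesture(first_touch_time: float,
                                second_touch_time: float,
                                threshold_s: float = 0.3) -> str:
    """Distinguish anchored navigation from a pinch by touch timing.

    If the second finger lands well after the first, treat the gesture as
    anchored navigation; if the touches are nearly simultaneous, treat it
    as a pinch.
    """
    delay = second_touch_time - first_touch_time
    return "anchored_navigation" if delay > threshold_s else "pinch"
```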
- FIG. 15 is a flowchart illustrating a method 1500 for navigating a virtual camera using a pinch.
- Method 1500 begins by receiving an input for a user pinch on the view at 1502 .
- a user pinch is illustrated in FIG. 16A .
- FIG. 16A shows a diagram 1600 illustrating a pinch gesture on a mobile device.
- Diagram 1600 shows mobile device 100 with view 102 .
- a user has touched the view with fingers 1604 and 1602 . Both fingers are in motion, and their relative motion defines a pinch speed, which is determined at step 1504 .
- Moving fingers 1604 and 1602 apart as shown with arrows 1612 and 1614 may result in a positive pinch speed, whereas moving fingers 1604 and 1602 together as shown with arrows 1624 and 1622 may result in a negative pinch speed.
- a virtual camera speed is determined at step 1506 .
- the virtual camera speed may be positive (forward) if the pinch speed is positive, and the virtual camera speed may be negative (reverse) if the pinch speed is negative.
- the virtual camera speed may be linearly interpolated from the pinch speed. This is just an illustrative example and is not meant to limit the present invention.
- the virtual camera accelerates to the speed determined at step 1506 .
- the virtual camera may decelerate gradually. To decelerate the virtual camera, a momentum of the virtual camera may be simulated, and the virtual camera may be exposed to a simulated air resistance. Steps 1508 and 1510 are illustrated in FIG. 16B .
- FIG. 16B shows a diagram 1650 illustrating a virtual camera subjected to a pinch momentum.
- Diagram 1650 shows a virtual camera starting at a position 1652 and ending at a position 1654 .
- Diagram 1650 shows the virtual camera at several points in time t0, t1, t2, t3, t4, and t5. As time passes, the virtual camera decelerates.
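- The following sketch turns a pinch speed into a virtual camera speed and then decays that speed frame by frame, simulating momentum and air resistance; the scale factor, drag coefficient, and frame interval are assumptions for illustration.

```python
def pinch_momentum(pinch_speed: float, speed_scale: float = 1.0,
                   drag_per_second: float = 2.0, dt: float = 1.0 / 30.0):
    """Yield a decaying virtual camera speed after the pinch gesture ends.

    A positive pinch speed produces forward motion and a negative pinch
    speed produces reverse motion; the speed then decreases gradually,
    as at times t0 through t5 in FIG. 16B.
    """
    speed = speed_scale * pinch_speed            # e.g., linear interpolation
    while abs(speed) > 1e-3:
        yield speed
        speed *= (1.0 - drag_per_second * dt)    # simulated air resistance
    yield 0.0
```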
- In other embodiments, both fingers need not be initially in motion; one or both fingers could be initially stationary. Further, a pinch may translate the virtual camera or cause the virtual camera to zoom without any momentum. In that case, the virtual camera zooms or translates according to a distance or speed of the pinch. When the pinch gesture is completed, the virtual camera may stop zooming or translating.
- In an embodiment, the virtual camera may be translated in a straight line. Alternatively, the virtual camera may stay stationary and the three dimensional model may move. For example, the three dimensional model may rotate. This motion of the three dimensional model relative to the virtual camera may be referred to as “panning”. In a further embodiment, the virtual camera is both zoomed (or translated) and rotated, with the rotation of the camera based on the angle between the two fingers and the zoom based on the distance between the two fingers.
- In this gesture, finger 1 and finger 2 are in contact with the surface at the same time. Further, finger 1 and finger 2 may be in motion at the same time. Rotating finger 1 and finger 2 as illustrated by arrows 1671 and 1673 may result in rotating the camera around a target point.
- The target point may be determined by extending a screen ray as described for FIGS. 6A-B. In examples, the screen ray may be determined based on the location of one of the fingers, such as the first finger to touch the screen. Alternatively, the screen ray may be determined based on a midpoint between the fingers. In this way, the target point is not covered by one of the user's fingers on the display.
- Once the target point is determined, the camera may rotate around it. In an embodiment, the camera may rotate around the target point by changing an azimuth value as described for FIG. 11B. In this way, the camera may helicopter around the target point, viewing the target from different perspectives.
- In an embodiment, an “invisible” line connecting finger 1 and finger 2 may be determined. As the fingers rotate, an angle between the invisible line and the display of the mobile device changes. As that angle changes, an azimuth angle relative to a target point may change as well. The azimuth angle may change by the same amount, or approximately the same amount, as the angle between the invisible line and the display of the mobile device. In this way, when a user rotates two fingers on the display of the mobile device by 360 degrees, the virtual camera helicopters 360 degrees around the target point.
- Changing a distance between finger 1 and finger 2 may change a range of the virtual camera, e.g., by zooming or translating the virtual camera. In an embodiment, an invisible line connecting finger 1 and finger 2 is determined as described above. When the invisible line decreases in length, the camera may move away from a target point; similarly, when the invisible line increases in length, the camera may move toward the target point, or vice versa. Changing the range is described above with respect to FIGS. 16A-B. Further, a momentum may be applied to continue the gesture as discussed above: a speed of either the rotation, the zoom, or both may diminish gradually after the fingers are removed, based on the speed at the end of the gesture.
- In an example, the user may rotate fingers 1 and 2 counter-clockwise by 90 degrees and may move them apart. In response, the virtual camera may helicopter around the target point by 90 degrees counter-clockwise and may translate closer to the target point. In another example, the user may rotate fingers 1 and 2 clockwise by 45 degrees and may move them closer together. In response, the virtual camera may helicopter around the target point by 45 degrees clockwise and may translate away from the target point. In this way, embodiments enable a user to navigate easily around a target point and to view a target from different perspectives.
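- A compact sketch of this rotate-and-zoom behavior is given below: the change in the angle of the line between the two fingers is applied to the azimuth, and the change in their separation scales the range. The proportional range rule and the example coordinates are assumptions for illustration only.

```python
# Sketch of the rotate-and-zoom gesture described above: the change in the
# angle of the "invisible" line between the two fingers drives the azimuth
# around the target, and the change in finger separation drives the range.

import math

def update_from_two_fingers(prev_f1, prev_f2, cur_f1, cur_f2, azimuth, range_m):
    """Fingers are (x, y) screen points; returns updated (azimuth, range)."""
    def line_angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    def line_length(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    # Rotating the fingers rotates the camera around the target by the same angle.
    azimuth += line_angle(cur_f1, cur_f2) - line_angle(prev_f1, prev_f2)

    # Spreading the fingers moves the camera toward the target; pinching moves it away.
    old_len, new_len = line_length(prev_f1, prev_f2), line_length(cur_f1, cur_f2)
    if new_len > 0 and old_len > 0:
        range_m *= old_len / new_len
    return azimuth, range_m

# Example: fingers rotate 90 degrees counter-clockwise and move apart.
az, rng = update_from_two_fingers((0, 0), (100, 0), (0, 0), (0, 150),
                                  azimuth=0.0, range_m=1000.0)
print(math.degrees(az), rng)  # ~90 degrees of helicopter rotation, reduced range
```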
- This section describes panning a virtual camera through a three dimensional environment on a mobile device.
- a user pans by selecting a position on the view of the mobile device with a finger. Based on the selected position, a target location is determined. As the user drags his finger, the position of the three dimensional model relative to the virtual camera moves to follow the target location. This may be referred to as a touch-and-drag gesture.
- the three dimensional model rotates to follow the user's finger in response to the touch-and-drag gesture.
- FIG. 17 is a flowchart illustrating a method 1700 for panning on a mobile device.
- Method 1700 begins at step 1702 with receiving a first and second position selected by a user of a mobile device. Selecting the first and second positions is illustrated in FIG. 18A. Each of the first and second positions may be defined by X and Y coordinates on the view.
- FIG. 18A shows a diagram 1800 illustrating panning on a mobile device.
- Diagram 1800 shows mobile device 100 with view 102 .
- In diagram 1800, a user touches a position 1802 with his finger and drags his finger to a new position 1804. Based on the first and second positions, first and second target points are determined at step 1704.
- In an embodiment, the first and second target points may be determined with rays as described with respect to FIGS. 6A-B. If the ray is nearly tangential to the three dimensional model, the target point may need to be damped as described with respect to FIG. 6B.
- Each target point may be defined by, for example, a latitude, longitude, and altitude. Altitude (as the term is meant here) may be the distance from the target point to a center of the three dimensional model.
- In another embodiment, the first target point is determined by intersecting a ray with the three dimensional model, and the second target point is determined by intersecting a ray with a virtual sphere surface. Determining the target points is illustrated in FIG. 18B.
- FIG. 18B shows a diagram 1800 with virtual camera 202 facing three dimensional terrain 210 .
- Three dimensional terrain 210 may be a portion of a three dimensional model.
- The first target point (target point 1854) may be determined by extending a ray 1852 to intersect with the three dimensional model at three dimensional terrain 210.
- To determine the second target point, a virtual sphere surface 1862 is determined. Virtual sphere surface 1862 may have a center at the center of the three dimensional model and may be tangent to target point 1854. By extending a second ray to intersect virtual sphere surface 1862, a target point 1856 is determined. In an alternative embodiment, a virtual surface may not be used, and the second target point may be determined by intersecting a ray with the three dimensional model. These two points, target point 1854 and target point 1856, correspond to geocentric vectors relative to the center of the three dimensional model.
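- The following sketch illustrates the geometry under simplified assumptions (the model is a unit sphere and the numbers are arbitrary): the first target point comes from intersecting a ray with the model, and the second from intersecting a second ray with a virtual sphere surface passing through the first target point.

```python
# Minimal geometric sketch, not the disclosed implementation. Scales are
# arbitrary and the model is approximated by a sphere centred at the origin.

import numpy as np

def intersect_sphere(origin, direction, radius):
    """Nearest intersection of a ray from `origin` along `direction` with a
    sphere of the given radius centred at the model's centre."""
    d = direction / np.linalg.norm(direction)
    b = 2.0 * np.dot(d, origin)
    c = np.dot(origin, origin) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return None if t < 0 else origin + t * d

camera = np.array([0.0, 0.0, 3.0])      # virtual camera position
ray1 = np.array([0.05, 0.0, -1.0])      # ray through the first touch point
ray2 = np.array([0.25, 0.0, -1.0])      # ray through the second touch point

target1 = intersect_sphere(camera, ray1, radius=1.0)             # on the model
surface_radius = np.linalg.norm(target1)                         # tangent to target 1
target2 = intersect_sphere(camera, ray2, radius=surface_radius)  # on the virtual surface

print(target1, target2)  # the two geocentric vectors used for the rotation
```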
- Based on the two target points, a rotation axis is determined at step 1706. To determine the rotation axis, a cross product of the two target points may be computed. In an embodiment, the two target points are defined by two vectors V1′ and V1, and the rotation axis is computed by taking the cross product of V1′ and V1 (V1′ × V1).
- Using the rotation axis, the three dimensional model is rotated at step 1708. In an embodiment, a rotation matrix is computed based on the rotation axis and the angle between the two target point vectors, and the three dimensional model is rotated based on the rotation matrix.
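- A minimal sketch of steps 1706 and 1708 is shown below, assuming unit geocentric vectors and using Rodrigues' formula to build the rotation matrix; it illustrates the stated cross-product approach rather than the disclosed code.

```python
# Sketch: given geocentric unit vectors V1 (first target point) and V1'
# (second target point), compute the rotation axis as their cross product,
# the angle between them, and a rotation matrix (Rodrigues' formula) used to
# rotate the three dimensional model.

import numpy as np

def pan_rotation_matrix(v1, v1_prime):
    v1 = np.asarray(v1, float) / np.linalg.norm(v1)
    v1p = np.asarray(v1_prime, float) / np.linalg.norm(v1_prime)

    axis = np.cross(v1p, v1)                      # rotation axis (V1' x V1)
    sin_a = np.linalg.norm(axis)
    cos_a = np.clip(np.dot(v1p, v1), -1.0, 1.0)   # cosine of the rotation angle
    if sin_a < 1e-12:
        return np.eye(3)                          # no rotation needed
    k = axis / sin_a
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + sin_a * K + (1.0 - cos_a) * (K @ K)

# Example: a drag that moves the grabbed point 10 degrees of longitude.
R = pan_rotation_matrix([1, 0, 0],
                        [np.cos(np.radians(10)), np.sin(np.radians(10)), 0])
print(np.round(R, 3))
```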
- In an embodiment, the last screen space position of the finger may be recorded, and the panning motion may continue after the user gesture is completed. This gives the user the feeling that he is spinning a globe. The speed of rotation may decrease gradually to simulate friction.
- In this way, a target grabbed by a user with his finger follows the user's finger movements. To the user, it may feel as if he is touching the planet and manipulating it. Due to the size of the view, the first and second positions of the finger cannot be too far apart. This limits the speed at which a user can pan and improves stability of the pan gesture.
- In an embodiment, a touch-and-drag gesture may have a different behavior depending on an orientation of the mobile device. For example, a touch-and-drag gesture in the vertical direction may cause panning as described above with respect to FIG. 17, while a touch-and-drag gesture in the horizontal direction may cause the virtual camera to look around. This is illustrated in FIGS. 19A-C.
- FIG. 19A shows a diagram 1900 illustrating a mobile device 1904 .
- Mobile device 1904 has an accelerometer that detects its angle θ relative to gravity. When angle θ of the mobile device is above a threshold, a user can pan in all directions, as illustrated in diagram 1930 in FIG. 19B. When angle θ of the mobile device is below the threshold, a touch-and-drag gesture to the left and right does not pan, but causes the virtual camera to look left and right, as illustrated in diagram 1960 in FIG. 19C. The virtual camera may look to the left and right by changing a yaw value of the virtual camera.
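- The sketch below illustrates this mode switch with an assumed 45-degree threshold and a stubbed camera object; the threshold, gain, and method names are not taken from the disclosure.

```python
# Sketch of the mode switch in FIGS. 19A-C: the device's angle relative to
# gravity decides whether a horizontal drag pans or turns (yaws) the camera.

class CameraStub:
    """Minimal stand-in with the two operations the sketch needs."""
    def pan(self, dx, dy):
        print(f"pan by ({dx}, {dy})")
    def change_yaw(self, degrees):
        print(f"yaw by {degrees:.1f} degrees")

TILT_THRESHOLD_DEG = 45.0  # assumed threshold value

def handle_drag(angle_to_gravity_deg, drag_dx, drag_dy, camera):
    """Route a touch-and-drag gesture based on the device's angle to gravity."""
    if angle_to_gravity_deg > TILT_THRESHOLD_DEG:
        camera.pan(drag_dx, drag_dy)              # pan in all directions (FIG. 19B)
    else:
        if drag_dy:
            camera.pan(0, drag_dy)                # vertical drags still pan
        if drag_dx:
            camera.change_yaw(drag_dx * 0.1)      # horizontal drags look around (FIG. 19C)

handle_drag(70.0, 30, -15, CameraStub())
handle_drag(20.0, 30, -15, CameraStub())
```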
Abstract
This invention relates to anchored navigation in a three dimensional environment on a mobile device. In an embodiment, a computer-implemented method navigates a virtual camera in a three dimensional environment on a mobile device having a touch screen. A first user input is received indicating that a first object is approximately stationary on a touch screen of the mobile device. A second user input is received indicating that a second object has moved on the touch screen. An orientation of the virtual camera is changed according to the second user input.
Description
- This application claims the benefit of U.S. Provisional Pat. Appl. No. 61/091,234, filed Aug. 22, 2008, which is incorporated by reference herein in its entirety.
- 1. Field of the Invention
- This invention generally relates to navigation in a three dimensional environment.
- 2. Background Art
- Systems exist for navigating through a three dimensional environment to display three dimensional data. The three dimensional environment includes a virtual camera that defines what three dimensional data to display. The virtual camera has a perspective according to its position and orientation. By changing the perspective of the virtual camera, a user can navigate through the three dimensional environment.
- Mobile devices, such as cell phones, personal digital assistants (PDAs), portable navigation devices (PNDs) and handheld game consoles, are being made with improved computing capabilities. Many mobile devices can access one or more networks, such as the Internet. Also, some mobile devices, such as an IPHONE device available from Apple Inc., accept input from GPS sensors, accelerometers and touch screens. Improved computing capabilities make it possible to run a wide variety of software applications on mobile devices. Despite improved computing capabilities, many handheld mobile devices have a small display—generally less than 4 inches across. The small display may make it difficult for a user to navigate through a three dimensional environment on a mobile device.
- Methods and systems are needed that improve navigation in a three dimensional environment on a mobile device.
- This invention relates to anchored navigation in a three dimensional environment on a mobile device. In an embodiment, a computer-implemented method navigates a virtual camera in a three dimensional environment on a mobile device having a touch screen. A first user input is received indicating that a first object is approximately stationary on a touch screen of the mobile device. A second user input is received indicating that a second object has moved on the touch screen. An orientation of the virtual camera is changed according to the second user input.
- In a second embodiment, a system navigates a virtual camera in a three dimensional environment on a mobile device. The system includes a touch receiver that receives a first user input indicating that a first object is approximately stationary on a touch screen of the mobile device and receives a second user input indicating that a second object has moved on the touch screen. The system also includes a look around module that changes an orientation of the virtual camera according to the second user input.
- In a third embodiment, a computer-implemented method navigates a virtual camera in a three dimensional environment on a mobile device having a touch screen. A first user input is received indicating that a first object is approximately stationary on a touch screen of the mobile device. A second user input is received indicating that a second object has moved on the touch screen. A target location is determined in the three dimensional environment. A position of the virtual camera is changed according to the second user input. A distance between the target location and the position of the virtual camera stays approximately constant.
- In a fourth embodiment, a system navigates a virtual camera in a three dimensional environment on a mobile device. The system includes a touch receiver that receives a first user input indicating that a first object is approximately stationary on a touch screen of the mobile device and receives a second user input indicating that a second object has moved on the touch screen. The system also includes a target module that determines a target location in the three dimensional environment. Finally, the system includes a helicopter module that changes a position of the virtual camera according to the second user input. A distance between the target location and the position of the virtual camera stays approximately constant.
- In a fifth embodiment, a computer-implemented method navigates a virtual camera in a three dimensional environment on a mobile device having a touch screen. A first user input is received indicating that a first object is approximately stationary on a touch screen of the mobile device. A second user input is received indicating that a second object has moved on the touch screen. A target location is determined in the three dimensional environment. A tilt value of the virtual camera relative to a vector directed upwards from the target location is changed. An azimuth value of the virtual camera relative to the vector is also changed.
- In a sixth embodiment, a computer-implemented method navigates a virtual camera in a three dimensional environment on a mobile device having a touch screen. A first user input is received indicating that a first object has touched a first point on a touch screen of a mobile device. A second user input is received indicating that a second object has touched a second point on the touch screen after the first object touched the first point on the touch screen. A navigation mode is determined from a plurality of navigation modes based on the position of the first point relative to the second point.
- In a seventh embodiment, a computer-implemented method navigates a virtual camera in a three dimensional environment on a mobile device having a touch screen. A user input is received indicating that two objects have touched a touch screen of a mobile device and the two objects have moved on the touch screen approximately the same distance in approximately the same direction. A motion data representing motion of the two objects on the touch screen is determined. An orientation of the virtual camera is changed according to the motion data.
- Further embodiments, features, and advantages of the invention, as well as the structure and operation of the various embodiments of the invention are described in detail below with reference to accompanying drawings.
- The accompanying drawings, which are incorporated herein and form a part of the specification, illustrate the present invention and, together with the description, further serve to explain the principles of the invention and to enable a person skilled in the pertinent art to make and use the invention.
- FIG. 1 is a diagram illustrating a mobile device that navigates through a three dimensional environment.
- FIG. 2 is a diagram illustrating a virtual camera navigating through a three dimensional environment.
- FIG. 3 is a diagram illustrating a system that accepts user interface gestures to navigate through a three dimensional environment.
- FIG. 4 is a flowchart illustrating a method for angular jump navigation.
- FIG. 5 is a diagram illustrating angular jump navigation on a mobile device.
- FIGS. 6A-B are diagrams illustrating determining a target location according to a position selected on a view.
- FIG. 7 is a diagram illustrating an angular jump trajectory.
- FIG. 8 is a flowchart illustrating a method for anchored look-around navigation.
- FIGS. 9A-B are diagrams illustrating anchored look-around navigation on a mobile device.
- FIG. 10 is a flowchart illustrating a method for anchored helicopter navigation.
- FIGS. 11A-B are diagrams illustrating anchored helicopter navigation on a mobile device.
- FIG. 12 is a diagram illustrating a two finger gesture for looking around in a three dimensional environment on a mobile device.
- FIG. 13 is a flowchart illustrating a method for navigating a virtual camera based on an orientation of a mobile device.
- FIGS. 14A-C are diagrams illustrating navigating a virtual camera based on an orientation of a mobile device.
- FIG. 15 is a flowchart illustrating a method for navigating a virtual camera using a pinch momentum.
- FIGS. 16A-C are diagrams illustrating navigating a virtual camera through a three dimensional environment on a mobile device using a pinch momentum.
- FIG. 17 is a flowchart illustrating a method for panning on a mobile device.
- FIGS. 18A-B are diagrams illustrating panning through a three dimensional environment on a mobile device.
- FIGS. 19A-C are diagrams illustrating different panning modes which may be used in navigation on a mobile device.
- The drawing in which an element first appears is typically indicated by the leftmost digit or digits in the corresponding reference number. In the drawings, like reference numbers may indicate identical or functionally similar elements.
- Embodiments of the present invention provide for navigation in a three dimensional environment on a mobile device. In the detailed description of embodiments that follows, references to “one embodiment”, “an embodiment”, “an example embodiment”, etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
- This detailed description is divided into sections. The first section provides an introduction to navigation through a three dimensional environment on a mobile device. The second section describes a system that accepts user interface gestures to navigate in a three dimensional environment on a mobile device. The next several sections describe the user interface gestures in greater detail. The third section describes an angular zoom user interface gesture. The fourth section describes two anchored navigation gestures. The fifth section describes a dual finger look-around gesture. The sixth section describes accelerometer navigation. The seventh section describes pinch momentum and a two-finger touch and rotate gesture. Finally, the eighth section describes panning in a three dimensional environment on a mobile device.
- This section provides an overview of navigation in a three dimensional environment on a mobile device.
FIG. 1 is a diagram illustrating a mobile device 100 that can navigate through a three dimensional environment. In embodiments, mobile device 100 may be a PDA, cell phone, handheld game console or other handheld mobile device as known to those of skill in the art. In an example, mobile device 100 may be an IPHONE device, available from Apple Inc. In another example, mobile device 100 may be a device running an ANDROID platform, available from Google Inc. In other further embodiments, mobile device 100 may be a tablet computer, laptop computer, or other mobile device larger than a handheld mobile device but still easily carried by a user. These examples are illustrative and are not meant to limit the present invention.
Mobile device 100 may have a touch screen that accepts touch input from the user. The user may touch the screen with his fingers, stylus, or other means known to those skilled in the art. Mobile device 100 also may have an accelerometer that detects when the mobile device accelerates or detects mobile device 100's orientation relative to gravity. It should be noted that other devices may be used to determine mobile device 100's orientation, and this invention is not meant to be limited to an accelerometer. Further, one or more accelerometers may be used. Further, mobile device 100 may have a location receiver, such as a GPS receiver, and may be connected to one or more networks such as the Internet.
Mobile device 100 has a view 102. As mentioned earlier, mobile device 100 may accept touch input when a user touches view 102. Further, view 102 may output images to the user. In an example, mobile device 100 may render a three dimensional environment and may display the three dimensional environment to the user in view 102 from the perspective of a virtual camera.
Mobile device 100 enables the user to navigate a virtual camera through a three dimensional environment. In an example, the three dimensional environment may include a three dimensional model, such as a three dimensional model of the Earth. A three dimensional model of the Earth may include satellite imagery texture mapped to three dimensional terrain. The three dimensional model of the Earth may also include models of buildings and other points of interest. This example is merely illustrative and is not meant to limit the present invention. - In response to user input,
mobile device 100 may change a perspective of the virtual camera. Based on the virtual camera's new perspective,mobile device 100 may render a new image intoview 102. Various user interface gestures that change the virtual camera's perspective and result in a new image are described in detail below. -
FIG. 2 shows a diagram 200 illustrating a virtual camera in a three dimensional environment. Diagram 200 includes avirtual camera 202.Virtual camera 202 is directed to view a threedimensional terrain 210. Threedimensional terrain 210 may be a portion of a larger three dimensional model, such as a three dimensional model of the Earth. - As mentioned earlier, user input may cause a mobile device, such as
mobile device 100 inFIG. 1 , to movevirtual camera 202 to a new location. Further, user input may causevirtual camera 202 to change orientation, such as pitch, yaw, or roll. - In this way, user interface gestures on a mobile device cause a virtual camera to navigate through a three dimensional environment on a mobile device. The various system components and details of the user interface gestures are described below.
- This section describes a system that navigates a virtual camera through a three dimensional environment on a mobile device in response to user interface gestures.
FIG. 3 is a diagram illustrating asystem 300 that accepts user interface gestures for navigation in a three dimensional environment on a mobile device. -
System 300 includes a client 302 having a user interaction module 310 and a renderer module 322. User interaction module 310 includes a motion model 314. In general, client 302 operates as follows. User interaction module 310 receives user input regarding a location that a user desires to view and, through motion model 314, constructs a view specification defining the virtual camera. Renderer module 322 uses the view specification to decide what data is to be drawn and draws the data. If renderer module 322 needs to draw data that system 300 does not have, system 300 sends a request to a server for the additional data across one or more networks, such as the Internet, using a network interface 350.
Motion model 314 constructs a view specification. The view specification defines the virtual camera's viewable volume within a three dimensional space, known as a frustum, and the position and orientation of the frustum in the three dimensional environment. In an embodiment, the frustum is in the shape of a truncated pyramid. The frustum has minimum and maximum view distances that can change depending on the viewing circumstances. Thus, changing the view specification changes the geographic data culled to the virtual camera's viewable volume. The culled geographic data is drawn by renderer module 322.
- To construct a view specification, user interaction module 310 receives user input.
Client 302 has various mechanisms for receiving input. For example,client 302 may receive input using sensors including atouch receiver 340, anaccelerometer 342, and alocation module 344. Each of the sensors will now be described in turn. -
Touch receiver 340 may be any type of touch receiver that accepts input from a touch screen. Touch receiver 340 may receive touch input on a view such as the view 102 in FIG. 1. The touch input received may include a position that the user touched as defined by an X and Y coordinate on the screen. The user may touch the screen with a finger, stylus, or other object. Touch receiver 340 may be able to receive multiple touches simultaneously if, for example, the user selects multiple locations on the screen. The screen may detect touches using any technology known in the art including, but not limited to, resistive, capacitive, infrared, surface acoustic wave, strain gauge, optical imaging, acoustic pulse recognition, frustrated total internal reflection, and diffused laser imaging technologies.
Accelerometer 342 may be any type of accelerometer as known to those skilled in the art. Accelerometer 342 may be able to detect when the mobile device moves. Accelerometer 342 also may be able to detect the orientation of a mobile device relative to gravity.
Location receiver 344 detects the location of the mobile device. Location receiver 344 may detect a location of the mobile device from, for example, a GPS receiver. A GPS receiver determines a location of the mobile device using signals from GPS satellites. In other examples, location receiver 344 may detect the location of the mobile device by, for example, collecting information from nearby cell towers and wi-fi hotspots. Location receiver 344 may use information from cell towers, wi-fi hotspots, and GPS satellites together to determine the location of the mobile device quickly and accurately.
momentum module 316, anangular jump module 312, anavigation module 318, ananchor module 320, apan module 348, and atarget module 346. Each of these modules is described below. - The modules in user interaction module 310 may change a virtual camera's perspective according to a target location. A target location may be determined by a
target module 346. In an embodiment,target module 346 may extend a ray from a focal point of the virtual camera. The target location may be an intersection of the ray with a three dimensional model, such as a three dimensional model of the Earth. The ray may be extended according to a position on the view selected by a user. Alternatively, the ray may be extended through a center of the view frustum of the virtual camera. The operation oftarget module 346 is described in more detail with respect toFIGS. 6A-B . - One module that uses
target module 346 isangular jump module 312. In response to a user selecting a feature in the three dimensional environment,angular jump module 312 moves the virtual camera toward the feature. In an embodiment,touch receiver 340 receives a user input indicating that a user has selected a position of a view. In an example, a user may select a position on the view and initiate an angular jump by double tapping on the position. Based on the position selected by the user,target module 346 determines a target location. Using the target location,angular jump module 312 moves the virtual camera.Angular jump module 312 may move the virtual camera toward the target location and may rotate the virtual camera toward the target location. As the virtual camera moves,angular jump module 312 may change the virtual camera's roll to simulate an airplane banking.Angular jump module 312 may orient the virtual camera such that the target location appears approximately at the center of the view. To orient the virtual camera,angular jump module 312 may change pitch or yaw values of the virtual camera. In this way, a user can double tap on a screen with one hand and easily navigate the virtual camera towards the target. Further, the smooth transition of the virtual camera to its new location may create a pleasing effect to a user. -
Anchor module 320 moves the virtual camera in response to other user interface gestures. In an embodiment,anchor module 320 is called whentouch receiver 340 receives a two finger touch with one finger stationary and the other finger in motion. The relative initial positions of the stationary and moving fingers may activate one of two navigation modes—an anchored look-around mode or an anchored helicopter mode. In an embodiment, the anchored look-around mode is activated when the initial position of the first stationary finger is below the initial position of the second finger. The anchored helicopter mode is activated when the initial position of the first stationary finger is above the initial position of the second finger. The anchored look-around mode be executed by a look-aroundmodule 326, and the anchored helicopter mode may be executed by ahelicopter module 324. - Look-around
module 326 changes an orientation of the virtual camera according to movement of the second finger.Touch receiver 340 may receive the direction of the second finger's movement and send the direction to look-aroundmodule 326. Based on the direction, look-aroundmodule 326 may rotate the virtual camera along different axes. Look-aroundmodule 326 may change a yaw of the virtual camera when finger moves toward the left or right of the mobile device. Similarly, look-aroundmodule 326 may change a pitch of the virtual camera when the finger moves toward the top or bottom of the mobile device. The operation of look-aroundmodule 326 is described in more detail with respect to FIGS. 8 and 9A-B. In an embodiment, look-aroundmodule 326 also may change an orientation of the virtual camera in response to movement of two fingers. This embodiment is described with respect toFIG. 12 . -
Helicopter module 324 moves the virtual camera when the position of the stationary finger is initially below the moving finger. In an embodiment, whentouch receiver 340 receives a two finger touch with the stationary finger below the moving finger,target module 346 may determine a target location. The target location may be determined by extending a ray based on the position of the stationary finger. Alternatively, the target location may be determined by extending a ray through a center of the virtual camera's view frustum. Determining a target location is described in more detail later with respect toFIGS. 6A-B . -
Touch receiver 340 may send a direction of the moving finger tohelicopter module 324. Based on the direction of the moving finger,helicopter module 324 may move the virtual camera in different directions, keeping a distance between the target location and the position of the virtual camera approximately constant.Helicopter module 324 may allow for small changes in the distance. For example, new terrain may be streamed into the client that causes the distance to change. -
Helicopter module 324 may extend a ray upwards from the target location determined bytarget module 346. When the direction of the moving finger is towards the top or bottom of the mobile device,helicopter module 324 may change a tilt angle relative to the ray. Changing the tilt angle may move the virtual camera up or down. When the direction of the moving finger is towards the left or right of the mobile device,helicopter module 324 may change an azimuth angle relative to the ray. Changing an azimuth angle may move the virtual camera around the target location while maintaining a constant elevation. Further, when the direction of the moving finger has components on both axes of the mobile device,helicopter module 324 may change both the tilt and azimuth angles. In this way,helicopter module 324 enables a user to navigate easily around a target location. Helicopter module may also move the virtual camera when two fingers rotate on a screen of a mobile device as described forFIG. 16C . - In an embodiment,
helicopter module 324 also may change a distance between the target location and the virtual camera. For example, the virtual camera may move into or away from the target location. For example, movement of the initially stationary finger may result in translating the virtual camera in towards or away from the target. - In an embodiment,
helicopter module 324 may change an azimuth angle while allowingnavigation module 318 to change a tilt angle based on an orientation of the mobile device relative to gravity. The operation ofhelicopter module 324 is described in more detail with respect toFIG. 10 andFIGS. 11A-B . -
Navigation module 318 orients and positions the virtual camera in the three dimensional environment according to orientation and position information received fromaccelerometer 342 andlocation receiver 344.Navigation module 318 includes an accelerometer navigation module 330. In an embodiment,accelerometer 342 receives an orientation of the mobile device relative to gravity. Based on the orientation of the mobile device, accelerometer navigation module 330 changes a position or orientation of the virtual camera. Based on the orientation of the mobile device, accelerometer navigation module 330 may change a pitch of the virtual camera, causing the virtual camera to look up and down. Alternatively, accelerometer navigation module 330 may change a tilt of the virtual camera relative to a target location, causing the virtual camera to move up or down. -
Location receiver 344 may receive a heading value of the mobile device. For example,location receiver 344 may receive the cardinal direction (north, east, south, west) that the mobile device faces. Based on the heading value,navigation module 318 may orient the virtual camera in the direction of the mobile device. Also,location receiver 344 may receive a location value of the mobile device. For example,location receiver 344 may receive may receive a latitude, longitude and altitude of the mobile device. Based on the location of the mobile device,navigation module 318 may position a virtual camera in the three dimensional environment. The three dimensional environment may include a three dimensional model of the Earth. In this way,navigation module 318 may position and orient the virtual camera in the virtual Earth to correspond to the position and orientation of the mobile device in the real Earth.Navigation module 318 may continually update the position and orientation of the virtual camera to track the mobile device. The operation ofnavigation module 318 is described in more detail with respect toFIG. 13 andFIGS. 14A-B . - Each of
angular jump module 312,momentum module 316, accelerometer navigation module 330, look-aroundmodule 326, andhelicopter module 324 accept user interface gestures to move the virtual camera. Each of those modules may coordinate withmomentum module 316 to continue the motion of the virtual camera after the user interface is gesture is complete.Momentum module 316 may gradually decelerate the motion after the gesture is complete. In this way,momentum module 316 simulates the virtual camera having a momentum and simulates the virtual camera being subjected to friction, such as air resistance. - As described above,
anchor module 316 navigates a virtual camera whentouch receiver 340 receives a two finger touch with one finger stationary and the other in motion. According to a further feature, when both fingers are in motion,momentum module 316 also may navigate the virtual camera. A two finger touch with both fingers in motion is sometimes described herein as a pinch gesture with the fingers either moving away from each other or towards each other.Momentum module 316 may determine a speed that the fingers relative to each other. Based on the finger speed,momentum module 316 may determine a speed of the virtual camera and may move the virtual camera at the determined speed. Moving the fingers towards each other may cause the virtual camera to move forward, whereas moving the fingers away from each other may cause the virtual camera to move backwards.Momentum module 316 may simulate air resistance and consequently may reduce the speed of the virtual camera gradually. - Alternatively, the virtual camera may remain stationary and a three dimensional model, such as a three dimensional model of the Earth, may move according to the finger speed.
Momentum module 316 may rotate a model of the Earth at an angular velocity determined according to a finger speed. The operation ofmomentum module 316 is described in more detail with respect toFIG. 15 andFIG. 16A-B . - A three dimensional model, such as a three dimensional model of the Earth, may also be rotated by
pan module 348. In an embodiment,touch receiver 340 may receive a user input indicating that a user has touched a first position on a view of the mobile device and moved his finger to a second position on the view (a touch-and-drag gesture). Based on the first and second positions,target module 346 may determine first and second points in the three dimensional environment. Based on the first and second points,pan module 348 may move the three dimensional model relative to the virtual camera. This movement may be referred to herein as “panning.” In an example,pan module 348 may move the three dimensional model by determining a rotation axis on the three dimensional model and rotating the three dimensional model around the rotation axis. - In an embodiment, the operation of
pan module 348 may change according to the orientation of the virtual camera. As mentioned earlier, the orientation of the virtual camera may be determined according to an orientation of the mobile device relative to gravity. In an example, when the virtual camera faces the ground, the user may pan in any direction. However, when the virtual camera faces the horizon, the user may pan only forward and backwards. Finger movements to the left and right instead may result in the virtual camera looking to the left or right. The operation ofpan module 348 is described in greater detail with respect toFIG. 17 ,FIGS. 18A-B , andFIGS. 19A-C . - Each of the components of
system 300 may be implemented in hardware, software, firmware, or any combination thereof. - In the following sections, the operation of
angular jump module 312,target module 346,anchor module 320,momentum module 316,navigation module 318 andpan module 348 is described in greater detail. - This section describes a method for angular jump navigation with respect to
FIGS. 4-5 , 6A-B, and 7. Angular jump navigation enables a user to navigate easily and intuitively in a three dimensional environment on a mobile device. In general, in response to a user double tapping on a location, the method navigates a virtual camera toward a location and angles the virtual camera toward the location. -
FIG. 4 is a flowchart illustrating amethod 400 for angular jump navigation.Method 400 begins with receiving a user input indicating that a user has double tapped on a location of a view at step 402. Step 402 is illustrated inFIG. 5 .FIG. 5 shows a diagram 500 illustrating angular jump navigation on a mobile device. Diagram 500 showsmobile device 100 withview 102. A user double taps at alocation 504. Angular jump navigation navigates along atrajectory 502 as is described in the remaining steps ofmethod 400. - Based on a location of the tap location received in step 402, a target location is determined at
step 404. Determining a target location is illustrated inFIGS. 6A-B .FIG. 6A shows a diagram 600 illustrating extending a screen ray to determine a target location. Diagram 600 shows a virtual camera with afocal point 602. The virtual camera has afocal length 606 and aviewport 604. Onviewport 604,point 610 corresponds to a point selected by a user on a view of the mobile device. Fromfocal point 602, aray 612 is extended throughpoint 610.Ray 612 intersects with a threedimensional model 616 to determine atarget location 614. In this way,target location 614 is determined based on the point selected (e.g., double tapped) by the user. - While being easy for user, double tapping a view with a finger can be imprecise. Mobile devices tend to have small views (handheld mobile devices, for example, may have views generally not larger than 4 inches). As result, a finger touch may occupy a substantial portion of the view. When the user selects a position that is close to the horizon, the screen ray may be nearly tangential to the three dimensional model. Small changes in the position of the wide finger may result in large changes in the target location. As result, angular jump navigation may be unstable.
- To deal with potential instability, the user selection may be damped as illustrated in
FIG. 6B .FIG. 6B shows a diagram 650 with a virtual camera havingfocal point 602,focal length 606 andviewport 604. A user selects a point on the view close to a horizon. The point selected by the user corresponds to apoint 652 onviewport 604. Aray 654 extends fromfocal point 602 throughpoint 652 onviewport 604.Ray 654 intersects with a concavevirtual surface 658 at apoint 656.Point 656 may be projected onto threedimensional model 660 to determine a target location. By intersecting a screen ray with a virtual surface, the user's selection is damped, thus improving stability. Diagram 650 shows one method for damping a user selection, but other methods may be used as are known to those of skill in the art. - Referring back to
FIG. 4 , once a target location is determined, the virtual camera moves toward the target location atstep 406. As the virtual camera moves toward the target location, the virtual camera rotates toward the target location atstep 408.Steps FIG. 7 . -
FIG. 7 shows a diagram 700 illustrating an angular jump trajectory. Diagram 700 shows a virtual camera at aninitial position 702. The virtual camera moves along atrajectory 706. The virtual camera may start with an initial forward velocity vector. As the virtual camera continues ontrajectory 706, the virtual camera rotates towards atarget 708. Rotating towards atarget 708 may include changing a pitch or yaw of the virtual camera. As the virtual camera continues ontrajectory 706, the virtual camera may slow down, coming to rest at aposition 704 facing thetarget 708. When the virtual camera comes to rest, thetarget 708 may appear at approximately the center of the view. The approximate center of the view may not be the exact center as small offsets from the center are allowed. - As the virtual camera moves along
trajectory 706, the virtual camera may roll. The roll may simulate an aircraft-like turn toward the destination. The virtual camera may starttrajectory 706 with no roll. The virtual camera's roll may increase as it moves alongtrajectory 706 and may attain the largest amount of roll midway throughtrajectory 706. Then, the virtual camera's roll may decrease returning to zero roll when the virtual camera reaches itsfinal position 704. - In this way, angular jump navigation enables a user to easily navigate towards a target location in a three dimensional environment. Additionally, by determining the target location based on a double touch gesture, the user can navigate towards the location with only one hand. This is useful because often users have one hand holding the mobile device, leaving only one hand free to navigate in the three dimensional environment.
- With one free hand to navigate, several user interface gestures may use two fingers. This section describes two user interface gestures using two fingers—anchored look-around and anchored helicopter. Each user interface gesture has one finger initially stationary with the other moving. The stationary finger may touch the screen before the moving finger. The initial relative position of the stationary and moving fingers may determine whether the user enters an anchored look-around navigation mode or an anchored helicopter navigation mode.
-
FIG. 8 is a flowchart illustrating amethod 800 for anchored look-around navigation.Method 800 begins by receiving a user input for a two finger touch on a view of the mobile device at step 802. One of the two fingers is in motion and the direction of motion (e.g. a motion vector) of the second finger is received at step 804. The two finger touch is illustrated inFIG. 9A . -
FIG. 9A shows a diagram 900. Diagram 900 showsmobile device 100 withview 102. A user has touchedview 102 with afinger 902 and afinger 904.Finger 902 is initially stationary andfinger 904 is initially in motion.Finger 902 may touch the screen at least a certain amount time beforefinger 904. As result of the relative position offinger - Referring back to
FIG. 8 , once the movement of the second finger is received, an orientation of the virtual camera is changed according to the movement of the second finger. How the virtual camera's orientation is changed is illustrated inFIG. 9B . -
FIG. 9B shows a diagram 950 illustrating a virtual camera looking around in a three dimensional environment. Diagram 950 shows threedimensional terrain 210 andvirtual camera 202. By changing its pitch,camera 202 may look up and down as show by anarrow 952. By changing its yaw,camera 202 may look left and right as show by anarrow 954. - In
FIG. 9A , when a user movesfinger 904 to the left or right, as shown byarrows finger 904 up or down, as shown byarrows arrow 920, both a pitch and a yaw of the virtual camera may change. The pitch and yaw may change according to the components of the motion vector along the axes of the mobile device. In this way, by moving a finger, a user can cause the virtual camera to look-around, viewing the three dimensional environment from different perspectives. - In an alternative embodiment, the virtual camera may look to the left and right based on the user input, while looking up and down based on an orientation of a mobile device. An orientation of the mobile device relative to gravity may be received from an accelerometer of the mobile device. A pitch of the virtual camera may be changed according to the orientation of the mobile device. In this way, the user can look up and down by angling the mobile device up and down.
- In an embodiment, an axis of the virtual camera may be determined based on the position of the first, stationary finger. In an example, a target location may be determined based on the position of the stationary finger. The axis is the line connecting the virtual camera and the target location. In this embodiment, movement of the second finger causes the virtual camera to rotate about the axis.
- As mentioned earlier, a user enters an anchored look-around mode when the stationary finger is below the moving finger. However, when the stationary finger is above the moving finger, the user may enter an anchored helicopter mode. Anchored helicopter mode is described with respect to FIGS. 10 and 11A-B.
-
FIG. 10 is a flowchart illustrating amethod 1000 for anchored helicopter navigation.Method 1000 begins by receiving a user input for a two finger touch on a view of the mobile device atstep 1002. One of the two fingers is in motion and the direction of motion (e.g. a motion vector) of the second finger is received at step 1004. The two finger touch is illustrated inFIG. 11A . -
FIG. 11A shows a diagram 1100. Diagram 1100 showsmobile device 100 withview 102. A user has touchedview 102 with afinger 1102 and afinger 1104.Finger 1102 is initially stationary andfinger 1104 is initially in motion. A user may touch view 102 with a finger 1102 a certain time prior to touching the view with afinger 1104. As result of the relative position offinger - Referring back to
FIG. 10 , after receiving user input, a target location is determined at step 1004. In an embodiment, the target location may be determined based on the position of the first, stationary finger. The target location may be determined by extending a screen ray as described inFIG. 6A . Further, the screen ray may be damped as described with respect toFIG. 6B . Alternatively, the target location may be determined by extending a ray through a center of the virtual camera's view frustum. The ray may intersect with a three dimensional model at a target location. These examples are illustrative, and other methods of determining a target location may be used as are known to those of skill in the art. - Once a target location is determined, a tilt or azimuth value relative to the target location is changed according to the movement of the second finger at step 1008. Step 1008 is illustrated in
FIG. 11B . -
FIG. 11B shows a diagram 1150 illustrating anchored helicopter navigation. Diagram 1150 showsvirtual camera 202 directed towards threedimensional terrain 210. Aray 1160 is extended to determine atarget 1158 as described forstep 1006. Fromtarget 1158, avector 1162 directed upwards is determined. Relative tovector 1162,virtual camera 202 has atilt angle 1156 and anazimuth angle 1154. Changingtilt angle 1156 causesvirtual camera 202 to move up or down, and changingazimuth angle 1154 causesvirtual camera 202 to orbit aroundtarget 1158 at a constant elevation. In an embodiment, changingtilt angle 1156 andazimuth angle 1154 does not change the distance betweenvirtual camera 202 andtarget 1158. In this way, changingtilt angle 1156 andazimuth angle 1154 navigates the virtual camera aroundtarget 1158 while staying equidistant totarget 1158. - In
FIG. 11A , when a user movesfinger 1104 left or right, as shown byarrows virtual camera 202 to orbit around thetarget 1158 at a constant elevation. Similarly, when a user movesfinger 904 up or down, as shown byarrows arrow 1116, both the tilt and azimuth angles may change. The tilt and azimuth values may change according the components of the motion vector along the axes of the mobile device. In this way, by moving a finger, a user can cause the virtual camera to move around a target location, viewing a target location from different perspectives. An orientation of the virtual camera may also change such that the virtual camera continues to face the target. - In an example, a user may move
finger 1104 down and to the right. In this example, both a tilt and azimuth value relative to a target location may increase in response to the finger movement. As the tilt value increases the virtual camera moves down towards the elevation of the target location. Meanwhile, the increasing azimuth value causes the virtual camera to rotate around the target location. While the virtual camera is moving, the virtual camera may remain oriented toward the target location. In this way, a user can easily view a feature in the three dimensional environment from different perspectives. - In an embodiment, the distance between the virtual camera and the target location may also change. For example, the virtual camera may swoop into the target by moving the virtual camera into a target while changing a tilt or azimuth value. Also, the virtual camera can move away from the target while changing a tilt or azimuth value.
- In an embodiment, moving
finger 1104 left or right may change an azimuth angle, while a tilt angle is determined according to an orientation of the mobile device. An orientation of the mobile device relative to gravity may be received from an accelerometer of the mobile device. Based on the orientation of the mobile device, the tilt angle is determined. In this way, the user may move the virtual camera up and down by moving the mobile device up and down. - For instance, a user holding the mobile device and viewing a display may move the device relative to the ground. As the device moves to face the ground, the virtual camera may move above the target and face down toward the target. As the device moves to perpendicular to the ground, the virtual camera may move to the target's elevation and view the target from a ground-level view.
- As mentioned earlier, a user may cause a virtual camera to look around by moving one finger and keeping another stationary. This section describes another gesture that may cause a virtual camera to look around. The gesture described in this section includes two fingers touching the display. In general, two fingers move in approximately the same direction by approximately the same distance and the virtual camera moves according to the finger movement.
-
FIG. 12 shows a diagram 1200 illustrating a two finger gesture for looking around in a three dimensional environment on a mobile device. Diagram 1200 shows mobile device 100 with view 102. Two fingers touch view 102 and, while touching view 102, the user moves both fingers across view 102; their motion is shown by two vectors in diagram 1200.
The two vectors are approximately parallel and of approximately the same length, indicating that the fingers have moved in approximately the same direction by approximately the same distance.
Based on the direction and distance that the user moves the fingers, motion data representing the movement of the fingers is determined.
fingers FIG. 9B .FIG. 9B shows a diagram 950 with threedimensional terrain 210 andvirtual camera 202. Diagram 950 shows threedimensional terrain 210 andvirtual camera 202. When the vector of finger movement is to the left or right on the mobile device, the virtual camera's yaw may change. Changing the virtual camera's yaw causes the camera to look left or right as show byarrow 954. Similarly, when the vector of finger movement is up or down on the mobile device, the virtual camera's pitch may change. Changing the virtual camera's pitch causes the camera to look to up or down as shown by arrow 956. - When a user moves his finger at a diagonal, both a pitch and a yaw of the virtual camera may change. The pitch and yaw may change according to the components of the vector of the finger movement along the axes of the mobile device. In this way, by moving two fingers, a user can cause the virtual camera to look-around, viewing the three dimensional environment from different perspectives.
- In an alternative embodiment, the virtual camera may look to the left and right based on the user input, while looking up and down based on an orientation of a mobile device. An orientation of the mobile device relative to gravity may be received from an accelerometer of the mobile device. A pitch of the virtual camera may be changed according to the orientation of the mobile device. In this way, the user can look up and down by angling the mobile device up and down. The orientation of the mobile device may be determined by an accelerometer. The next section describes accelerometer navigation in greater detail.
- This section describes navigating a virtual camera with an accelerometer in greater detail.
FIG. 13 is a flowchart illustrating a method 1300 for navigating a virtual camera based on an orientation of a mobile device.
Method 1300 begins with enabling accelerometer navigation at step 1302. Accelerometer navigation may be enabled, for example, when a user makes a setting change to turn it on or at startup if a default setting is set for accelerometer navigation. In another example, entering a navigation mode such as anchored navigation or look-around navigation may enable accelerometer navigation. Also, accelerometer navigation may be enabled when a change in orientation of the mobile device exceeds a threshold. This way, minor changes in orientation do not unintentionally change the perspective of the virtual camera. Also, the accelerometer navigation may be enabled when an orientation of the mobile device relative to gravity exceeds a threshold. If an orientation of the mobile device relative to gravity is below a threshold, the orientation may be in a “dead zone”. - Once accelerometer navigation is enabled, an orientation of the mobile device is determined at
step 1304. In an embodiment, an accelerometer determines the direction of gravity and an orientation of the mobile device relative to gravity. Based on the orientation of the mobile device, the virtual camera's position or orientation is changed at step 1306. Steps 1304 and 1306 are illustrated in FIGS. 14A-C. Further, the accelerometer readings may be damped.
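The enabling conditions and damping described for method 1300 could be combined roughly as follows. This is a minimal sketch only; the threshold, dead-zone, and smoothing values are illustrative assumptions rather than values from the patent.

```python
class AccelerometerNavigation:
    """Gate and smooth device-orientation input, loosely following method 1300.

    The threshold, dead-zone, and smoothing constants are illustrative
    assumptions, not values taken from the patent.
    """

    def __init__(self, enable_threshold_deg=10.0, dead_zone_deg=5.0, smoothing=0.15):
        self.enable_threshold_deg = enable_threshold_deg
        self.dead_zone_deg = dead_zone_deg
        self.smoothing = smoothing
        self.enabled = False
        self.filtered_deg = 0.0

    def update(self, device_angle_deg):
        # Enable only once the orientation change is large enough, so minor
        # hand movements do not unintentionally move the virtual camera.
        if not self.enabled and abs(device_angle_deg - self.filtered_deg) > self.enable_threshold_deg:
            self.enabled = True
        # Angles inside the dead zone are ignored entirely.
        if abs(device_angle_deg) < self.dead_zone_deg:
            return self.filtered_deg
        # Damp the raw reading with a simple exponential low-pass filter.
        self.filtered_deg += self.smoothing * (device_angle_deg - self.filtered_deg)
        return self.filtered_deg

nav = AccelerometerNavigation()
print(nav.update(45.0), nav.enabled)   # a large change enables navigation and is damped
```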
FIGS. 14A-C each show a mobile device with a different orientation. FIG. 14A shows a profile of a mobile device 1402 facing the ground. Suppose that an orientation of mobile device 1402 is defined relative to a vector normal to the plane of the screen of mobile device 1402. An accelerometer of the mobile device detects that gravity is facing straight down. In other words, gravity is parallel to the orientation of mobile device 1402. As a result, the virtual camera is oriented straight down at a three dimensional model, such as a three dimensional model of the Earth. With the virtual camera facing the ground, the virtual camera may capture an image 1404 of the ground.
FIG. 14B shows a profile of a mobile device 1422 at an angle relative to the ground. An accelerometer of the mobile device detects that gravity has an angle 1426 relative to the orientation of the mobile device. As a result, the virtual camera's pitch may be set to angle 1426. With the virtual camera oriented at angle 1426, an image captured by the virtual camera and displayed to the user may appear as an image 1424. - Alternatively, the virtual camera's pitch may be determined based on
angle 1426. In an embodiment, a range of angles of the mobile device may interpolate smoothly to a range of angles of the virtual camera. The interpolation may be a linear interpolation. In an example, suppose the range of angles of the mobile device is 30 degrees to 90 degrees. That range may interpolate to a range of angles of the virtual camera of 0 degrees to 90 degrees. In that example, if a user holds the device at 60 degrees, an angle of the virtual camera may be set to 45 degrees. This example is merely illustrative.
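The interpolation in this example can be written as a small helper that maps the device-angle range onto the camera-angle range. This sketch assumes linear interpolation with clamping at the ends of the range; the clamping behavior is an assumption.

```python
def interpolate_camera_angle(device_deg, device_range=(30.0, 90.0), camera_range=(0.0, 90.0)):
    """Linearly map a device angle onto a virtual-camera pitch angle.

    The 30-90 degree device range mapping onto a 0-90 degree camera range
    follows the example in the text; clamping outside the range is an
    illustrative assumption.
    """
    d_lo, d_hi = device_range
    c_lo, c_hi = camera_range
    t = (device_deg - d_lo) / (d_hi - d_lo)   # normalized position in the device range
    t = max(0.0, min(1.0, t))                 # clamp outside the range
    return c_lo + t * (c_hi - c_lo)

print(interpolate_camera_angle(60.0))   # 45.0, matching the example above
```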
FIG. 14C shows a profile of a mobile device 1432 normal to the ground. An accelerometer of the mobile device detects that gravity has an angle 1436 relative to the mobile device. As a result, the virtual camera's pitch may be set to angle 1436. With the virtual camera oriented at angle 1436, an image captured by the virtual camera and displayed to the user may appear as an image 1434 facing the horizon. - As illustrated in
FIGS. 14A-C, as the user changes an orientation of the mobile device, an orientation of the virtual camera changes. Thus, as the user directs the mobile device toward the horizon, the virtual camera looks toward the horizon. As the user directs the mobile device towards the sky, the virtual camera looks toward the sky. Finally, as the user directs the mobile device towards the ground, the virtual camera looks toward the ground. - In addition to changing an orientation of the virtual camera, a position of the virtual camera may also change according to an orientation of a mobile device. In an embodiment, a target location and a tilt angle may be determined as described with respect to
FIGS. 11A-B. As the orientation of the mobile device changes, a tilt angle of the virtual camera relative to a target location may change.
- The anchored navigation section discussed a two finger gesture with one finger initially stationary and the other initially in motion. This section describes a two finger gesture with both fingers initially in motion. The two finger gesture may be referred to as a pinch and is described with respect to
FIG. 15 and FIGS. 16A-B. A pinch also may be distinguished from anchored navigation by the timing of the first and second finger touches. For example, when a time between the first and second finger touches is above a threshold, an anchored navigation mode may be activated. When the time between the first and second finger touches is below the threshold, the virtual camera may be moved with a pinch momentum. In an alternative embodiment, the anchored navigation mode may be activated when the time is below a threshold, and the virtual camera may be moved with a pinch momentum when the time is above a threshold.
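The timing-based distinction in the first example above might be expressed as a simple classifier on the delay between the two touches. The threshold value and function name below are illustrative assumptions.

```python
def classify_two_finger_gesture(first_touch_time, second_touch_time, threshold_s=0.25):
    """Pick a navigation mode from the delay between the two finger touches.

    Per the first example above: a long delay activates anchored navigation,
    while a short delay is treated as a pinch (with momentum). The threshold
    value is an illustrative assumption.
    """
    delay = second_touch_time - first_touch_time
    return "anchored" if delay > threshold_s else "pinch"

print(classify_two_finger_gesture(0.00, 0.40))   # anchored
print(classify_two_finger_gesture(0.00, 0.05))   # pinch
```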
FIG. 15 is a flowchart illustrating a method 1500 for navigating a virtual camera using a pinch. Method 1500 begins by receiving an input for a user pinch on the view at step 1502. A user pinch is illustrated in FIG. 16A.
FIG. 16A shows a diagram 1600 illustrating a pinch gesture on a mobile device. Diagram 1600 shows mobile device 100 with view 102. A user has touched view 102 with two fingers. Referring back to FIG. 15, a speed of the pinch is determined at step 1504. Moving the fingers apart or together, as shown by the arrows in diagram 1600, changes the distance between the fingers, and the pinch speed corresponds to how quickly that distance changes. - Based on the pinch speed determined in
step 1504, a virtual camera speed is determined at step 1506. The virtual camera speed may be positive (forward) if the pinch speed is positive, and the virtual camera speed may be negative (reverse) if the pinch speed is negative. In an example, the virtual camera speed may be linearly interpolated from the pinch speed. This is just an illustrative example and is not meant to limit the present invention. - At
step 1508, the virtual camera accelerates to the speed determined at step 1506. At step 1510, the virtual camera may decelerate gradually. To decelerate the virtual camera, a momentum of the virtual camera may be simulated, and the virtual camera may be exposed to a simulated air resistance. Steps 1508 and 1510 are illustrated in FIG. 16B.
FIG. 16B shows a diagram 1650 illustrating a virtual camera subjected to a pinch momentum. Diagram 1650 shows a virtual camera starting at a position 1652 and ending at a position 1654. Diagram 1650 shows the virtual camera at several points in time t0, t1, t2, t3, t4, and t5. As time passes, the virtual camera decelerates. - In another embodiment, both fingers need not be initially in motion. One or both fingers could be initially stationary. Further, a pinch may translate the virtual camera or cause a virtual camera to zoom without any momentum. In that embodiment, the virtual camera zooms or translates according to a distance or speed of the pinch. When the pinch gesture is completed, the virtual camera may stop zooming or translating.
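A rough sketch of the pinch momentum of steps 1508 and 1510: the camera speed starts proportional to the pinch speed and then decays under a simulated air resistance proportional to the current speed. The gain, drag, and time-step values are assumptions.

```python
def simulate_pinch_momentum(pinch_speed, gain=2.0, drag=1.5, dt=0.1, steps=6):
    """Trace the camera speed after a pinch ends, as in FIG. 16B.

    The camera speed starts proportional to the pinch speed (positive means
    forward, negative means reverse) and then decays under a simulated air
    resistance proportional to the current speed. The gain, drag, dt, and
    step count are illustrative assumptions.
    """
    speed = gain * pinch_speed
    trace = [speed]
    for _ in range(steps):
        speed -= drag * speed * dt     # deceleration proportional to speed
        trace.append(speed)
    return trace

# Speeds at times t0..t6 for a forward pinch; each step is slower than the last.
print([round(s, 2) for s in simulate_pinch_momentum(pinch_speed=5.0)])
```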
- In an embodiment, the virtual camera may be translated in a straight line. Alternatively, the virtual camera may stay stationary and the three dimensional model may move. In an example, the three dimensional model may rotate. This motion of the three dimensional model relative to the virtual camera may be referred to as “panning”.
- In another embodiment, the virtual camera is both zoomed (or translated) and rotated. The rotation of the camera is based on the angle between the two fingers, and the zoom is based on the distance between the two fingers. These two actions can be done simultaneously. Neither finger needs to be anchored for this gesture, but either finger may be anchored. This embodiment is illustrated in
FIG. 16C. - In
FIG. 16C, finger 1 and finger 2 are in contact with the touch screen at the same time. Further, finger 1 and finger 2 may be in motion at the same time. Rotating finger 1 and finger 2, as illustrated by the arrows in FIG. 16C, may rotate the virtual camera about a target point. The target point may be determined by extending a screen ray as described with respect to FIGS. 6A-B. In examples, the screen ray may be determined based on the location of one of the fingers, such as the first finger to touch the screen. Alternatively, the screen ray may be determined based on a midpoint between the fingers. In this way, the target point is not covered by one of the user's fingers on the display. - Once the target point is determined, the camera may rotate around the target point. In one embodiment, the camera may rotate around the target point by changing an azimuth value as described for FIG. 11B. In this way, the camera may helicopter around a target point, viewing the target from different perspectives.
- In one embodiment, an “invisible” line may be determined connecting finger 1 and finger 2. When a user rotates finger 1 and finger 2, as illustrated by the arrows in FIG. 16C, the invisible line rotates as well, and the virtual camera may be rotated by a corresponding angle.
- Further, changing a distance between finger 1 and finger 2, as illustrated with arrow 1679, may change a range of the virtual camera, e.g., by zooming or translating the virtual camera. In one example, as the invisible line connecting finger 1 and finger 2 lengthens or shortens, the virtual camera may zoom or translate as described with respect to FIGS. 16A-B. Further, a momentum may be applied to continue the gesture as discussed above. A speed of either the rotation, the zoom, or both may diminish gradually after the fingers are removed, based on a speed at the end of the gesture.
- In one example operation, the user may rotate finger 1 and finger 2 about a point to rotate the virtual camera around the target point, while moving finger 1 and finger 2 closer together or farther apart to move the virtual camera toward or away from the target.
- By zooming and rotating in a single user interface gesture, embodiments enable a user to navigate easily around a target point and to view a target from different perspectives.
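The combined gesture can be reduced to two quantities per frame: the change in the angle of the invisible line between the fingers (rotation) and the ratio of its new length to its old length (zoom). The sketch below illustrates that computation; how the rotation and zoom factor are then applied to the camera is left open, and mapping a shrinking line to zooming in or out is a design choice.

```python
import math

def rotate_and_zoom(prev_a, prev_b, cur_a, cur_b):
    """Derive a rotation and a zoom factor from two moving touch points.

    The rotation is the change in the angle of the (invisible) line joining
    the two fingers; the zoom factor is the ratio of the new to the old
    distance between them. Both can be applied in the same frame. Sketch only.
    """
    def angle(p, q):
        return math.atan2(q[1] - p[1], q[0] - p[0])

    def dist(p, q):
        return math.hypot(q[0] - p[0], q[1] - p[1])

    rotation_deg = math.degrees(angle(cur_a, cur_b) - angle(prev_a, prev_b))
    zoom_factor = dist(cur_a, cur_b) / dist(prev_a, prev_b)
    return rotation_deg, zoom_factor

# Fingers rotate a quarter turn while moving closer together: the line's
# angle changes by 90 degrees and its length halves.
print(rotate_and_zoom((0, 0), (100, 0), (25, -25), (25, 25)))
```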
- This section describes panning a virtual camera through a three dimensional environment on a mobile device. In general, a user pans by selecting a position on the view of the mobile device with a finger. Based on the selected position, a target location is determined. As the user drags his finger, the position of the three dimensional model relative to the virtual camera moves to follow the target location. This may be referred to as a touch-and-drag gesture. In an embodiment, the three dimensional model rotates to follow the user's finger in response to the touch-and-drag gesture.
-
FIG. 17 is a flowchart illustrating a method 1700 for panning on a mobile device. Method 1700 begins at step 1702 with receiving a first and second position selected by a user of a mobile device. Selecting the first and second positions is illustrated in FIG. 18A. Each of the first and second positions may be defined by X and Y coordinates on the view. FIG. 18A shows a diagram 1800 illustrating panning on a mobile device. Diagram 1800 shows mobile device 100 with view 102. A user touches a position 1802 with his finger and drags his finger to a new position 1804. - Based on
position 1802 and position 1804, first and second target points are determined at step 1704. The first and second target points may be determined with rays as described with respect to FIGS. 6A-B. If the ray is nearly tangential to the three dimensional model, the target point may need to be damped as described with respect to FIG. 6B. Each target point may be defined by, for example, a latitude, longitude, and altitude. Altitude (as the term is meant here) may be the distance from the target point to a center of the three dimensional model. In an embodiment, the first target point is determined by intersecting a ray with the three dimensional model and the second target point is determined by intersecting a ray with a virtual surface sphere. Determining the target points is illustrated in FIG. 18B.
FIG. 18B shows a diagram 1800 with virtual camera 202 facing three dimensional terrain 210. As mentioned earlier, three dimensional terrain 210 may be a portion of a three dimensional model. In an embodiment, the first target point (target point 1854) may be determined by extending a ray 1852 to intersect with the three dimensional model at three dimensional terrain 210. Based on target point 1854, a virtual sphere surface 1862 is determined. Virtual sphere surface 1862 may have a center at the center of the three dimensional model and may be tangent to target point 1854. By intersecting a second ray 1864 with virtual sphere surface 1862, a target point 1856 is determined. Alternatively, a virtual surface may not be used and the second target point may be determined by intersecting a ray with the three dimensional model. These two points, target point 1854 and target point 1856, each define a geocentric vector relative to the center of the three dimensional model. - Referring back to
FIG. 17, once the target points are determined, a rotation axis is determined at step 1706. To compute a rotation axis, a cross product between the two target point vectors may be determined. Referring to FIG. 18B, the two target points may be defined by two vectors V1′ and V1. The rotation axis is computed by taking the cross product between V1′ and V1 (V1′×V1). Once the rotation axis is determined, the three dimensional model is rotated at step 1708. The three dimensional model is rotated by an angle α determined from the dot product between the two vectors V1′ and V1 (for normalized vectors, α = arccos(V1′·V1)). A rotation matrix is computed based on the angle α and the rotation axis. Finally, the three dimensional model is rotated based on the rotation matrix. - Upon the completion of the panning motion, the last screen space position of the finger may be recorded. Further, the panning motion may continue after the user gesture is completed. This gives the user the feeling that he is spinning a globe. The speed of rotation may decrease gradually to simulate friction.
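Steps 1706 and 1708 can be sketched as follows: normalize the two geocentric vectors, take their cross product for the rotation axis and the arccosine of their dot product for the rotation angle. Building and applying the rotation matrix itself (for example with Rodrigues' formula) is omitted for brevity; this is an illustrative sketch, not the patent's implementation.

```python
import math

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def pan_rotation(v1, v1_prime):
    """Rotation axis and angle that carry geocentric vector v1 onto v1_prime.

    Mirrors steps 1706-1708: the axis is the cross product of the two target
    vectors and the angle comes from their dot product (via arccos on the
    normalized vectors). Sketch only.
    """
    a = _normalize(v1_prime)
    b = _normalize(v1)
    axis = (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    angle = math.acos(dot)
    return _normalize(axis), angle

# Two target points 30 degrees apart on a unit globe.
axis, angle = pan_rotation((1.0, 0.0, 0.0),
                           (math.cos(math.radians(30)), math.sin(math.radians(30)), 0.0))
print(axis, round(math.degrees(angle), 1))   # axis perpendicular to both vectors; 30.0 degrees
```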
- In this way, a target grabbed by a user with his finger follows the user's finger movements. To the user, it may feel as if he is touching the planet and manipulating it. Due to the size of the view, the first and second positions of the finger cannot be too far apart. This limits the speed at which a user can pan and improves stability of the pan gesture.
- There may be several panning modes. When accelerometer navigation is enabled and the mobile device is angled below a certain value, a touch-and-drag gesture may have a different behavior. In that case, while a touch-and-drag gesture in the vertical direction may cause panning as described above with respect to FIG. 17, a touch-and-drag gesture in the horizontal direction may cause the virtual camera to look around. This is illustrated in FIGS. 19A-C.
- FIG. 19A shows a diagram 1900 illustrating a mobile device 1904. Mobile device 1904 has an accelerometer that detects its angle β relative to gravity. When the angle β of the mobile device is above a threshold α, a user can pan in all directions as illustrated in diagram 1930 in FIG. 19B. When the angle β of the mobile device is below the threshold α, a touch-and-drag gesture to the left and right does not pan, but causes the virtual camera to look left and right as illustrated in diagram 1960 in FIG. 19C. The virtual camera may look to the left and right by changing a yaw value of the virtual camera.
- Note that, in the preceding description, embodiments have for clarity been described with respect to fingers making contact with a touch screen. However, any other object, such as a stylus, may be used, as is known to those of skill in the art.
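The angle-based mode selection illustrated in FIGS. 19A-C could be sketched as a small decision function, shown below. The threshold and the yaw scaling constant are illustrative assumptions.

```python
def pan_or_look(device_angle_deg, drag_dx, drag_dy, threshold_deg=30.0):
    """Decide how a touch-and-drag is interpreted, as in FIGS. 19A-C.

    Above the angle threshold the drag pans in all directions; below it the
    horizontal component changes yaw (look left/right) while the vertical
    component still pans. The threshold value is an illustrative assumption.
    """
    if device_angle_deg > threshold_deg:
        return {"pan": (drag_dx, drag_dy), "yaw_delta": 0.0}
    return {"pan": (0.0, drag_dy), "yaw_delta": 0.1 * drag_dx}

print(pan_or_look(45.0, drag_dx=20.0, drag_dy=-5.0))   # pans in both directions
print(pan_or_look(10.0, drag_dx=20.0, drag_dy=-5.0))   # horizontal drag becomes a look-around
```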
- The Summary and Abstract sections may set forth one or more but not all exemplary embodiments of the present invention as contemplated by the inventor(s), and thus, are not intended to limit the present invention and the appended claims in any way.
- The present invention has been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.
- The foregoing description of the specific embodiments will so fully reveal the general nature of the invention that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present invention. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance.
- The breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
Claims (46)
1. A computer-implemented method for navigating a virtual camera in a three dimensional environment on a mobile device having a touch screen, comprising:
(a) receiving a first user input indicating that a first object is approximately stationary on a touch screen of the mobile device;
(b) receiving a second user input indicating that a second object has moved on the touch screen; and
(c) changing an orientation of the virtual camera according to the second user input.
2. The method of claim 1 , wherein the receiving (a) comprises receiving the first user input
indicating that a first finger is approximately stationary on the touch screen, and
wherein the receiving (b) comprises receiving the second user input indicating that a second finger has moved on the touch screen.
3. The method of claim 2 , wherein the receiving (b) includes receiving a direction of the second finger's motion.
4. The method of claim 3 , wherein the changing (c) includes changing a yaw of the virtual camera when the direction is toward the left or right side of the mobile device.
5. The method of claim 3 , wherein the changing (c) includes changing a pitch of the virtual camera when the direction is toward the top or bottom of the mobile device.
6. The method of claim 2 , further comprising:
(d) receiving an orientation of the mobile device from an accelerometer of the mobile device; and
(e) changing an orientation of the virtual camera according to the orientation of the mobile device.
7. The method of claim 2 , wherein the changing (c) comprises changing the orientation of the virtual camera when a position of the first finger is initially below a position of the second finger, further comprising:
(d) determining a target location in the three dimensional environment; and
(e) changing a position of the virtual camera according to the second user input when the position of the first finger is above the position of the second finger, wherein a distance between the target location and the position of the virtual camera stays approximately constant.
8. The method of claim 1 , further comprising:
(d) determining an axis of the virtual camera based on the first user input, and
wherein the changing (c) comprises rotating the virtual camera about the axis.
9. A system for navigating a virtual camera in a three dimensional environment on a mobile device, comprising:
a touch receiver that receives a first user input indicating that a first object is approximately stationary on a touch screen of the mobile device and receives a second user input indicating that a second object has moved on the touch screen; and
a look around module that changes an orientation of the virtual camera according to the second user input.
10. The system of claim 9 , wherein the first and second objects are fingers.
11. The system of claim 10 , wherein the second user input includes a direction of the second finger's motion.
12. The system of claim 10 , wherein the look around module changes a yaw of the virtual camera when the direction is toward the left or right side of the mobile device.
13. The system of claim 10 , wherein the look around module changes a pitch of the virtual camera when the direction is toward the top or bottom of the mobile device.
14. The system of claim 10 , further comprising:
an accelerometer that determines an orientation of the mobile device; and
a navigation module that changes an orientation of the virtual camera according to the orientation of the mobile device.
15. The system of claim 10 , wherein the look around module changes the orientation of the virtual camera when a position of the first finger is above a position of the second finger, further comprising:
a target module that determines a target location in the three dimensional environment; and
a helicopter module that changes a position of the virtual camera according to the second user input when the position of the first finger is below the position of the second finger, wherein a distance between the target location and the position of the virtual camera stays approximately constant.
16. A computer-implemented method for navigating a virtual camera in a three dimensional environment on a mobile device having a touch screen, comprising:
(a) receiving a first user input indicating that a first object is approximately stationary on a touch screen of the mobile device;
(b) receiving a second user input indicating that a second object has moved on the touch screen;
(c) determining a target location in the three dimensional environment; and
(d) changing a position of the virtual camera according to the second user input, wherein a distance between the target location and the position of the virtual camera stays approximately constant.
17. The method of claim 16 , wherein the receiving (a) comprises receiving the first user input
indicating that a first finger is approximately stationary on the touch screen, and
wherein the receiving (b) comprises receiving the second user input indicating that a second finger has moved on the touch screen.
18. The method of claim 17 , wherein the receiving (b) includes receiving a direction of the second finger's motion.
19. The method of claim 18 , wherein the changing (d) includes changing an azimuth relative to a vector directed upwards from the target when the direction is toward the left or right of the mobile device.
20. The method of claim 18 , wherein the changing (d) includes changing a tilt relative to a vector directed upwards from the target when the direction is toward the top or bottom of the mobile device.
21. The method of claim 17 , further comprising:
(e) receiving an orientation of the mobile device from an accelerometer of the mobile device; and
(f) changing a position of the virtual camera according to the orientation of the mobile device, wherein a distance between the target location and the position of the virtual camera stays approximately constant.
22. The method of claim 17 , wherein the changing (d) comprises changing the position of the virtual camera when the position of the first finger is initially above the position of the second finger, further comprising:
(e) changing an orientation of the virtual camera according to the second user input when the position of the first finger is below the position of the second finger.
23. The method of claim 17 , wherein the determining (c) comprises determining the target location based on a position of the first finger on the touch screen.
24. The method of claim 23 , wherein the determining (c) further comprises damping the target location when the position of the first finger is close to the horizon.
25. The method of claim 17 , wherein the determining (c) comprises determining:
(i) extending a ray based on a position of the virtual camera and a position of the finger; and
(ii) intersecting the ray with a three dimensional model in the three dimensional environment to determine the target location.
26. The method of claim 17 , further comprising:
(e) determining a speed of movement for the virtual camera based on the second user input; and wherein the changing (d) comprises changing the position of the virtual camera at the speed determined in (e) after movement of the second object is complete.
27. The method of claim 26 , further comprising:
(f) slowing the virtual camera gradually.
28. A system for navigating a virtual camera in a three dimensional environment on a mobile device, comprising:
a touch receiver that receives a first user input indicating that a first object is approximately stationary on a touch screen of the mobile device and a second user input indicating that a second object has moved on the touch screen;
a target module that determines a target location in the three dimensional environment; and
a helicopter module that changes a position of the virtual camera according to the second user input, wherein a distance between the target location and the position of the virtual camera stays approximately constant.
29. The system of claim 28 , wherein the first and second objects are fingers.
30. The system of claim 29 , wherein the second user input includes a direction of the second finger's motion.
31. The system of claim 30 , wherein the helicopter module changes an azimuth relative to a vector directed upwards from the target when the direction is toward the left or right of the mobile device.
32. The system of claim 30 , wherein the helicopter module changes a tilt relative to a vector directed upwards from the target when the direction is toward the top or bottom of the mobile device.
33. The system of claim 29 , further comprising:
an accelerometer that determines an orientation of the mobile device; and
a navigation module that changes a position of the virtual camera according to the second user input, wherein a distance between the target location and the position of the virtual camera stays approximately constant.
34. The system of claim 29 , wherein the helicopter module changes the position of the virtual camera when the position of the first finger is above the position of the second finger, and further comprising:
a helicopter module that changes an orientation of the virtual camera according to the second user input when the position of the first finger is below the position of the second finger.
35. The system of claim 29 , wherein the target module determines the target location based on a position of the first finger on the touch screen.
36. The system of claim 35 , wherein the target module damps the target location when the position of the first finger is close to the horizon.
37. The system of claim 29 , wherein the target module extends a ray based on a position of the virtual camera and a position of the finger and intersects the ray with the three dimensional environment to determine the target location.
38. A computer-implemented method for navigating a virtual camera in a three dimensional environment on a mobile device having a touch screen, comprising:
(a) receiving a first user input indicating that a first object is approximately stationary on a touch screen of the mobile device;
(b) receiving a second user input indicating that a second object has moved on the touch screen;
(c) determining a target location in the three dimensional environment;
(d) changing a tilt value of the virtual camera relative to a vector directed upwards from the target location; and
(e) changing an azimuth value of the virtual camera relative to the vector directed upwards from the target location.
39. A computer-implemented method for navigating a virtual camera in a three dimensional environment on a mobile device having a touch screen, comprising:
(a) receiving a first user input indicating that a first object has touched a first point on a touch screen of a mobile device;
(b) receiving a second user input indicating that a second object has touched a second point on the touch screen after the first object touched the first point on the screen; and
(c) determining a navigation mode from a plurality of navigation modes based on the position of the first point relative to the second point.
40. The method of claim 39 , wherein the determining (c) comprises determining the navigation mode from the plurality of navigation modes, wherein the plurality of navigation modes includes a first navigation mode that changes the virtual camera's position and a second navigation mode that changes the virtual camera's orientation.
41. A computer-implemented method for navigating a virtual camera in a three dimensional environment on a mobile device having a touch screen, comprising:
(a) receiving a first user input indicating that a first object has touched a first point on a touch screen of a mobile device;
(b) receiving a second user input indicating that a second object has touched a second point on the touch screen after the first object touched the first point on the screen; and
(c) determining a navigation mode from a plurality of navigation modes based on the position of the first point relative to the second point.
42. A computer-implemented method for navigating a virtual camera in a three dimensional environment on a mobile device having a touch screen, comprising:
(a) receiving a user input indicating that two objects have touched the touch screen of a mobile device and the two objects have moved on the touch screen approximately the same distance in approximately the same direction;
(b) determining motion data representing motion of the two objects on the touch screen; and
(c) changing an orientation of the virtual camera according to the motion data determined in (b).
43. The method of claim 42 , wherein the determining (b) comprises determining a vector representing motion of the two objects on the touch screen.
44. The method of claim 43 , wherein the changing (c) comprises:
(i) changing a yaw of the virtual camera based on a component of the vector on a left-right axis of the mobile device.
45. The method of claim 44 , wherein the changing (c) further comprises:
(ii) changing a pitch of the virtual camera based on a component of the vector on an up-down axis of the mobile device.
46. The method of claim 44 , further comprising:
(d) changing a pitch of the virtual camera based on an orientation of the mobile device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/546,274 US20100045666A1 (en) | 2008-08-22 | 2009-08-24 | Anchored Navigation In A Three Dimensional Environment On A Mobile Device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US9123408P | 2008-08-22 | 2008-08-22 | |
US12/546,274 US20100045666A1 (en) | 2008-08-22 | 2009-08-24 | Anchored Navigation In A Three Dimensional Environment On A Mobile Device |
Publications (1)
Publication Number | Publication Date |
---|---|
US20100045666A1 true US20100045666A1 (en) | 2010-02-25 |
Family
ID=41695929
Family Applications (8)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/546,274 Abandoned US20100045666A1 (en) | 2008-08-22 | 2009-08-24 | Anchored Navigation In A Three Dimensional Environment On A Mobile Device |
US12/546,261 Active 2034-07-16 US9310992B2 (en) | 2008-08-22 | 2009-08-24 | Panning in a three dimensional environment on a mobile device |
US12/546,245 Abandoned US20100045703A1 (en) | 2008-08-22 | 2009-08-24 | User Interface Gestures For Moving a Virtual Camera On A Mobile Device |
US12/546,293 Active 2030-01-04 US8847992B2 (en) | 2008-08-22 | 2009-08-24 | Navigation in a three dimensional environment using an orientation of a mobile device |
US15/095,442 Active 2030-07-13 US10222931B2 (en) | 2008-08-22 | 2016-04-11 | Panning in a three dimensional environment on a mobile device |
US16/291,067 Active US11054964B2 (en) | 2008-08-22 | 2019-03-04 | Panning in a three dimensional environment on a mobile device |
US16/291,063 Active US10942618B2 (en) | 2008-08-22 | 2019-03-04 | Panning in a three dimensional environment on a mobile device |
US17/366,775 Active US12032802B2 (en) | 2008-08-22 | 2021-07-02 | Panning in a three dimensional environment on a mobile device |
Family Applications After (7)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/546,261 Active 2034-07-16 US9310992B2 (en) | 2008-08-22 | 2009-08-24 | Panning in a three dimensional environment on a mobile device |
US12/546,245 Abandoned US20100045703A1 (en) | 2008-08-22 | 2009-08-24 | User Interface Gestures For Moving a Virtual Camera On A Mobile Device |
US12/546,293 Active 2030-01-04 US8847992B2 (en) | 2008-08-22 | 2009-08-24 | Navigation in a three dimensional environment using an orientation of a mobile device |
US15/095,442 Active 2030-07-13 US10222931B2 (en) | 2008-08-22 | 2016-04-11 | Panning in a three dimensional environment on a mobile device |
US16/291,067 Active US11054964B2 (en) | 2008-08-22 | 2019-03-04 | Panning in a three dimensional environment on a mobile device |
US16/291,063 Active US10942618B2 (en) | 2008-08-22 | 2019-03-04 | Panning in a three dimensional environment on a mobile device |
US17/366,775 Active US12032802B2 (en) | 2008-08-22 | 2021-07-02 | Panning in a three dimensional environment on a mobile device |
Country Status (8)
Country | Link |
---|---|
US (8) | US20100045666A1 (en) |
EP (1) | EP2327010A2 (en) |
JP (1) | JP2012501016A (en) |
KR (1) | KR101665034B1 (en) |
CN (3) | CN103324386A (en) |
AU (1) | AU2009282724B2 (en) |
CA (1) | CA2734987A1 (en) |
WO (1) | WO2010022386A2 (en) |
Cited By (103)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090325607A1 (en) * | 2008-05-28 | 2009-12-31 | Conway David P | Motion-controlled views on mobile computing devices |
US20100045667A1 (en) * | 2008-08-22 | 2010-02-25 | Google Inc. | Navigation In a Three Dimensional Environment Using An Orientation Of A Mobile Device |
US20100223093A1 (en) * | 2009-02-27 | 2010-09-02 | Hubbard Robert B | System and method for intelligently monitoring subscriber's response to multimedia content |
US20110190052A1 (en) * | 2010-02-03 | 2011-08-04 | Nintendo Co., Ltd. | Game system, controller device and game method |
EP2362302A1 (en) * | 2010-02-26 | 2011-08-31 | Alcatel Lucent | Method for controlling motions of an object in a 3-dimensional virtual environment |
US20110285622A1 (en) * | 2010-05-20 | 2011-11-24 | Samsung Electronics Co., Ltd. | Rendition of 3d content on a handheld device |
US8108147B1 (en) * | 2009-02-06 | 2012-01-31 | The United States Of America As Represented By The Secretary Of The Navy | Apparatus and method for automatic omni-directional visual motion-based collision avoidance |
EP2413104A1 (en) * | 2010-07-30 | 2012-02-01 | Pantech Co., Ltd. | Apparatus and method for providing road view |
WO2012045513A1 (en) * | 2010-10-08 | 2012-04-12 | Hicat Gmbh | Computer and method for visually navigating in a three-dimensional image data set |
WO2012075435A2 (en) * | 2010-12-03 | 2012-06-07 | Google Inc. | Showing realistic horizons on mobile computing devices |
GB2487039A (en) * | 2010-10-11 | 2012-07-11 | Michele Sciolette | Visualizing Illustrated Books And Comics On Digital Devices |
US20120200510A1 (en) * | 2011-02-09 | 2012-08-09 | Robotzone, Llc | Multichannel controller |
EP2422854A3 (en) * | 2010-08-20 | 2012-08-22 | Nintendo Co., Ltd. | Game system, game device, storage medium storing game program, and game process method |
US20120249786A1 (en) * | 2011-03-31 | 2012-10-04 | Geovs Ltd. | Display System |
US20120264406A1 (en) * | 2011-04-15 | 2012-10-18 | Avaya Inc. | Obstacle warning system and method |
US8339364B2 (en) | 2010-02-03 | 2012-12-25 | Nintendo Co., Ltd. | Spatially-correlated multi-display human-machine interface |
US20120326975A1 (en) * | 2010-06-03 | 2012-12-27 | PixArt Imaging Incorporation, R.O.C. | Input device and input method |
US20130027342A1 (en) * | 2010-05-21 | 2013-01-31 | Nec Corporation | Pointed position determination apparatus of touch panel, touch panel apparatus, electronics apparatus including the same, method of determining pointed position on touch panel, and computer program storage medium |
US20130070852A1 (en) * | 2011-09-19 | 2013-03-21 | Acer Incorporated | Method for assisting video compression by using touch screen and monitoring system |
US8498816B2 (en) * | 2010-06-15 | 2013-07-30 | Brother Kogyo Kabushiki Kaisha | Systems including mobile devices and head-mountable displays that selectively display content, such mobile devices, and computer-readable storage media for controlling such mobile devices |
US20130208127A1 (en) * | 2012-02-13 | 2013-08-15 | Htc Corporation | Auto burst image capture method applied to a mobile device, method for tracking an object applied to a mobile device, and related mobile device |
US20130227493A1 (en) * | 2012-02-27 | 2013-08-29 | Ryan Michael SCHMIDT | Systems and methods for manipulating a 3d object in a 3d model using a software widget and surface constraints |
EP2635876A1 (en) * | 2010-11-01 | 2013-09-11 | Nokia Corp. | Visually representing a three-dimensional environment |
US20130321472A1 (en) * | 2012-06-05 | 2013-12-05 | Patrick S. Piemonte | Method, system and apparatus for selectively obtaining map image data according to virtual camera velocity |
US20130342533A1 (en) * | 2012-06-22 | 2013-12-26 | Matterport, Inc. | Multi-modal method for interacting with 3d models |
US8626434B1 (en) * | 2012-03-01 | 2014-01-07 | Google Inc. | Automatic adjustment of a camera view for a three-dimensional navigation system |
WO2014014806A1 (en) * | 2012-07-15 | 2014-01-23 | Apple Inc. | Disambiguation of multitouch gesture recognition for 3d interaction |
CN103677620A (en) * | 2012-08-31 | 2014-03-26 | 株式会社得那 | System and method for facilitating interaction with virtual space via touch sensitive surface |
US8702514B2 (en) | 2010-11-01 | 2014-04-22 | Nintendo Co., Ltd. | Controller device and controller system |
JP2014511534A (en) * | 2011-03-02 | 2014-05-15 | ザ・ボーイング・カンパニー | System and method for navigating a three-dimensional environment using a multi-input interface |
US8780174B1 (en) | 2010-10-12 | 2014-07-15 | The Boeing Company | Three-dimensional vision system for displaying images taken from a moving vehicle |
US20140229871A1 (en) * | 2011-10-27 | 2014-08-14 | The Hong Kong University Of Science And Technology | System and Method for Constrained Manipulations of 3D Objects by Multitouch Inputs |
US8814686B2 (en) | 2010-02-03 | 2014-08-26 | Nintendo Co., Ltd. | Display device, game system, and game method |
US20140267235A1 (en) * | 2013-03-15 | 2014-09-18 | Legend3D, Inc. | Tilt-based look around effect image enhancement method |
US8845430B2 (en) | 2011-03-08 | 2014-09-30 | Nintendo Co., Ltd. | Storage medium having stored thereon game program, game apparatus, game system, and game processing method |
US8845426B2 (en) | 2011-04-07 | 2014-09-30 | Nintendo Co., Ltd. | Input system, information processing device, storage medium storing information processing program, and three-dimensional position calculation method |
US20140306886A1 (en) * | 2011-10-26 | 2014-10-16 | Konami Digital Entertainment Co., Ltd. | Image processing device, method for controlling image processing device, program, and information recording medium |
US8907943B2 (en) | 2010-07-07 | 2014-12-09 | Apple Inc. | Sensor based display environment |
US8913009B2 (en) | 2010-02-03 | 2014-12-16 | Nintendo Co., Ltd. | Spatially-correlated multi-display human-machine interface |
US8956209B2 (en) | 2010-08-30 | 2015-02-17 | Nintendo Co., Ltd. | Game system, game apparatus, storage medium having game program stored therein, and game process method |
US8977987B1 (en) * | 2010-06-14 | 2015-03-10 | Google Inc. | Motion-based interface control on computing device |
US9021387B2 (en) | 2012-07-31 | 2015-04-28 | Hewlett-Packard Development Company, L.P. | Re-sizing user interface object on touch sensitive display |
CN104769393A (en) * | 2012-09-05 | 2015-07-08 | 赫尔环球有限公司 | Method and apparatus for transitioning from a partial map view to an augmented reality view |
US20150237328A1 (en) * | 2014-02-14 | 2015-08-20 | Dayu Optoelectronics Co., Ltd. | Method for generating three-dimensional images and three-dimensional imaging device |
US9132347B2 (en) | 2010-08-30 | 2015-09-15 | Nintendo Co., Ltd. | Game system, game apparatus, storage medium having game program stored therein, and game process method |
CN104965653A (en) * | 2015-06-15 | 2015-10-07 | 联想(北京)有限公司 | Control method and electronic equipment |
US9199168B2 (en) | 2010-08-06 | 2015-12-01 | Nintendo Co., Ltd. | Game system, game apparatus, storage medium having game program stored therein, and game process method |
US9208698B2 (en) | 2011-12-27 | 2015-12-08 | Apple Inc. | Device, method, and graphical user interface for manipulating a three-dimensional map view based on a device orientation |
US9241147B2 (en) | 2013-05-01 | 2016-01-19 | Legend3D, Inc. | External depth map transformation method for conversion of two-dimensional images to stereoscopic images |
EP2859535A4 (en) * | 2012-06-06 | 2016-01-20 | Google Inc | System and method for providing content for a point of interest |
US20160018981A1 (en) * | 2014-07-17 | 2016-01-21 | Facebook, Inc. | Touch-Based Gesture Recognition and Application Navigation |
US9282321B2 (en) | 2011-02-17 | 2016-03-08 | Legend3D, Inc. | 3D model multi-reviewer system |
US9286941B2 (en) | 2001-05-04 | 2016-03-15 | Legend3D, Inc. | Image sequence enhancement and motion picture project management system |
US9288476B2 (en) | 2011-02-17 | 2016-03-15 | Legend3D, Inc. | System and method for real-time depth modification of stereo images of a virtual reality environment |
WO2016057997A1 (en) * | 2014-10-10 | 2016-04-14 | Pantomime Corporation | Support based 3d navigation |
US9324179B2 (en) | 2010-07-19 | 2016-04-26 | Lucasfilm Entertainment Company Ltd. | Controlling a virtual camera |
US9329750B2 (en) | 2013-09-10 | 2016-05-03 | Google Inc. | Three-dimensional tilt and pan navigation using a single gesture |
WO2016071171A1 (en) * | 2014-11-07 | 2016-05-12 | Thales | Method and device for changing a view point of a three-dimensional map (3d) or of an image of an observed three-dimensional physical object (3d) by recognising a gesture on a multi-point touch screen |
US9375640B2 (en) | 2011-03-08 | 2016-06-28 | Nintendo Co., Ltd. | Information processing system, computer-readable storage medium, and information processing method |
EP2390777A3 (en) * | 2010-05-26 | 2016-07-06 | Sony Ericsson Mobile Communications AB | Touch interface for three-dimensional display control |
US9390617B2 (en) | 2011-06-10 | 2016-07-12 | Robotzone, Llc | Camera motion control system with variable autonomy |
US9407904B2 (en) | 2013-05-01 | 2016-08-02 | Legend3D, Inc. | Method for creating 3D virtual reality from 2D images |
CN105915877A (en) * | 2015-12-27 | 2016-08-31 | 乐视致新电子科技(天津)有限公司 | Free film watching method and device of three-dimensional video |
US9438878B2 (en) | 2013-05-01 | 2016-09-06 | Legend3D, Inc. | Method of converting 2D video to 3D video using 3D object models |
US20160320846A1 (en) * | 2013-12-18 | 2016-11-03 | Nu-Tech Sas Di De Michele Marco & C. | Method for providing user commands to an electronic processor and related processor program and electronic circuit |
US9507513B2 (en) | 2012-08-17 | 2016-11-29 | Google Inc. | Displaced double tap gesture |
US9539511B2 (en) | 2011-03-08 | 2017-01-10 | Nintendo Co., Ltd. | Computer-readable storage medium, information processing system, and information processing method for operating objects in a virtual world based on orientation data related to an orientation of a device |
US9547937B2 (en) | 2012-11-30 | 2017-01-17 | Legend3D, Inc. | Three-dimensional annotation system and method |
EP2437221A3 (en) * | 2010-10-04 | 2017-01-25 | Fujitsu Limited | Device, method, and computer-readable storage medium for operation of a three-dimensional movement of a virtual object |
US9561443B2 (en) | 2011-03-08 | 2017-02-07 | Nintendo Co., Ltd. | Computer-readable storage medium, information processing system, and information processing method |
US20170052684A1 (en) * | 2014-04-07 | 2017-02-23 | Sony Corporation | Display control apparatus, display control method, and program |
US9609307B1 (en) | 2015-09-17 | 2017-03-28 | Legend3D, Inc. | Method of converting 2D video to 3D video using machine learning |
US9643085B2 (en) | 2011-03-08 | 2017-05-09 | Nintendo Co., Ltd. | Computer-readable storage medium, information processing system, and information processing method for controlling a virtual object using attitude data |
US9679413B2 (en) | 2015-08-13 | 2017-06-13 | Google Inc. | Systems and methods to transition between viewpoints in a three-dimensional environment |
US9726463B2 (en) | 2014-07-16 | 2017-08-08 | Robtozone, LLC | Multichannel controller for target shooting range |
US20170269712A1 (en) * | 2016-03-16 | 2017-09-21 | Adtile Technologies Inc. | Immersive virtual experience using a mobile communication device |
US9773346B1 (en) * | 2013-03-12 | 2017-09-26 | Amazon Technologies, Inc. | Displaying three-dimensional virtual content |
US9836211B2 (en) | 2011-12-21 | 2017-12-05 | Apple Inc. | Device, method, and graphical user interface for selection of views in a three-dimensional map based on gesture inputs |
US20170374276A1 (en) * | 2016-06-23 | 2017-12-28 | Intel Corporation | Controlling capturing of a multimedia stream with user physical responses |
US9925464B2 (en) | 2011-03-08 | 2018-03-27 | Nintendo Co., Ltd. | Computer-readable storage medium, information processing system, and information processing method for displaying an image on a display device using attitude data of a display device |
US10007419B2 (en) | 2014-07-17 | 2018-06-26 | Facebook, Inc. | Touch-based gesture recognition and application navigation |
US20180182168A1 (en) * | 2015-09-02 | 2018-06-28 | Thomson Licensing | Method, apparatus and system for facilitating navigation in an extended scene |
US10102552B2 (en) | 2010-02-12 | 2018-10-16 | Mary Anne Fletcher | Mobile device streaming media application |
US10127722B2 (en) | 2015-06-30 | 2018-11-13 | Matterport, Inc. | Mobile capture visualization incorporating three-dimensional and two-dimensional imagery |
US10139985B2 (en) | 2012-06-22 | 2018-11-27 | Matterport, Inc. | Defining, displaying and interacting with tags in a three-dimensional model |
US10140765B2 (en) | 2013-02-25 | 2018-11-27 | Google Llc | Staged camera traversal for three dimensional environment |
US10150033B2 (en) | 2010-08-20 | 2018-12-11 | Nintendo Co., Ltd. | Position calculation system, position calculation device, storage medium storing position calculation program, and position calculation method |
WO2018227230A1 (en) * | 2017-06-16 | 2018-12-20 | Canon Kabushiki Kaisha | System and method of configuring a virtual camera |
US10163261B2 (en) | 2014-03-19 | 2018-12-25 | Matterport, Inc. | Selecting two-dimensional imagery data for display within a three-dimensional model |
US20190087020A1 (en) * | 2016-10-04 | 2019-03-21 | Hewlett-Packard Development Company, L.P. | Three-dimensional input device |
US10298839B2 (en) * | 2009-09-29 | 2019-05-21 | Sony Interactive Entertainment Inc. | Image processing apparatus, image processing method, and image communication system |
US10602200B2 (en) | 2014-05-28 | 2020-03-24 | Lucasfilm Entertainment Company Ltd. | Switching modes of a media content item |
CN110944727A (en) * | 2017-09-19 | 2020-03-31 | 佳能株式会社 | System and method for controlling virtual camera |
AU2018336341B2 (en) * | 2017-09-19 | 2020-05-07 | Canon Kabushiki Kaisha | Control device, control method and program |
US10884576B2 (en) | 2015-06-16 | 2021-01-05 | Nokia Technologies Oy | Mediated reality |
EP3596587A4 (en) * | 2017-03-13 | 2021-01-27 | Rescan, Inc. | Navigation system |
US11061557B2 (en) | 2016-12-22 | 2021-07-13 | ReScan, Inc. | Dynamic single touch point navigation |
EP3865984A1 (en) * | 2020-02-13 | 2021-08-18 | Honeywell International Inc. | Methods and systems for searchlight control for aerial vehicles |
US11172231B2 (en) | 2017-07-07 | 2021-11-09 | Canon Kabushiki Kaisha | Method, apparatus and system for encoding or decoding video data of precincts by using wavelet transform |
JP2022505457A (en) * | 2019-01-30 | 2022-01-14 | テンセント・テクノロジー・(シェンジェン)・カンパニー・リミテッド | How to build buildings in virtual environments, equipment, equipment and programs |
CN114144753A (en) * | 2019-07-30 | 2022-03-04 | 索尼集团公司 | Image processing apparatus, image processing method, and recording medium |
US11625037B2 (en) | 2020-02-13 | 2023-04-11 | Honeywell International Inc. | Methods and systems for searchlight control for aerial vehicles |
US12131357B2 (en) | 2024-01-25 | 2024-10-29 | Weple Ip Holdings Llc | Mobile device streaming media application |
Families Citing this family (105)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8239132B2 (en) * | 2008-01-22 | 2012-08-07 | Maran Ma | Systems, apparatus and methods for delivery of location-oriented information |
US20100053151A1 (en) * | 2008-09-02 | 2010-03-04 | Samsung Electronics Co., Ltd | In-line mediation for manipulating three-dimensional content on a display device |
US20100088632A1 (en) * | 2008-10-08 | 2010-04-08 | Research In Motion Limited | Method and handheld electronic device having dual mode touchscreen-based navigation |
US8294766B2 (en) * | 2009-01-28 | 2012-10-23 | Apple Inc. | Generating a three-dimensional model using a portable electronic device recording |
US9007379B1 (en) * | 2009-05-29 | 2015-04-14 | Two Pic Mc Llc | Methods and apparatus for interactive user control of virtual cameras |
US8933925B2 (en) * | 2009-06-15 | 2015-01-13 | Microsoft Corporation | Piecewise planar reconstruction of three-dimensional scenes |
US8723988B2 (en) * | 2009-07-17 | 2014-05-13 | Sony Corporation | Using a touch sensitive display to control magnification and capture of digital images by an electronic device |
CN101996021B (en) * | 2009-08-12 | 2013-02-13 | 幻音科技(深圳)有限公司 | Handheld electronic equipment and method for controlling display contents thereby |
JP5304544B2 (en) * | 2009-08-28 | 2013-10-02 | ソニー株式会社 | Information processing apparatus, information processing method, and program |
KR20110026066A (en) * | 2009-09-07 | 2011-03-15 | 삼성전자주식회사 | Apparatus and method for changing screen status in portable terminal |
US20110199516A1 (en) * | 2010-02-12 | 2011-08-18 | Honeywell International Inc. | Method of showing video on a touch-sensitive display |
US8638371B2 (en) * | 2010-02-12 | 2014-01-28 | Honeywell International Inc. | Method of manipulating assets shown on a touch-sensitive display |
US8570286B2 (en) * | 2010-02-12 | 2013-10-29 | Honeywell International Inc. | Gestures on a touch-sensitive display |
US20110199386A1 (en) * | 2010-02-12 | 2011-08-18 | Honeywell International Inc. | Overlay feature to provide user assistance in a multi-touch interactive display environment |
JP2011197777A (en) * | 2010-03-17 | 2011-10-06 | Sony Corp | Information processing device, information processing method and program |
US8756522B2 (en) * | 2010-03-19 | 2014-06-17 | Blackberry Limited | Portable electronic device and method of controlling same |
US8640020B2 (en) * | 2010-06-02 | 2014-01-28 | Microsoft Corporation | Adjustable and progressive mobile device street view |
US20110298887A1 (en) * | 2010-06-02 | 2011-12-08 | Maglaque Chad L | Apparatus Using an Accelerometer to Capture Photographic Images |
KR20120005124A (en) * | 2010-07-08 | 2012-01-16 | 삼성전자주식회사 | Apparatus and method for operation according to movement in portable terminal |
US8451192B2 (en) | 2010-08-13 | 2013-05-28 | T-Mobile Usa, Inc. | Utilization of interactive device-adjacent ambiently displayed images |
US8449118B2 (en) * | 2010-08-13 | 2013-05-28 | T-Mobile Usa, Inc. | Device-adjacent ambiently displayed image |
JP5664036B2 (en) * | 2010-09-07 | 2015-02-04 | ソニー株式会社 | Information processing apparatus, program, and control method |
US9001053B2 (en) * | 2010-10-28 | 2015-04-07 | Honeywell International Inc. | Display system for controlling a selector symbol within an image |
US9342998B2 (en) | 2010-11-16 | 2016-05-17 | Microsoft Technology Licensing, Llc | Techniques to annotate street view images with contextual information |
US20120194556A1 (en) * | 2011-01-28 | 2012-08-02 | L3 Communications Avionics Systems, Inc. | 3d avionics viewpoint control system |
US8836802B2 (en) | 2011-03-21 | 2014-09-16 | Honeywell International Inc. | Method of defining camera scan movements using gestures |
JP5918618B2 (en) | 2011-06-03 | 2016-05-18 | 任天堂株式会社 | Information processing program, information processing apparatus, information processing system, and information processing method |
JP5591281B2 (en) * | 2011-06-03 | 2014-09-17 | 任天堂株式会社 | Information processing system, information processing apparatus, information processing program, and moving image reproduction control method |
US8675049B2 (en) * | 2011-06-09 | 2014-03-18 | Microsoft Corporation | Navigation model to render centered objects using images |
US9508002B2 (en) * | 2011-06-14 | 2016-11-29 | Google Inc. | Generating cinematic flyby sequences following paths and GPS tracks |
US8914037B2 (en) | 2011-08-11 | 2014-12-16 | Qualcomm Incorporated | Numerically stable computation of heading without a reference axis |
JP2013084029A (en) * | 2011-10-06 | 2013-05-09 | Sony Corp | Display control device |
CN104094193B (en) * | 2011-12-27 | 2017-11-17 | 英特尔公司 | Full 3D interactions on mobile device |
US10191641B2 (en) | 2011-12-29 | 2019-01-29 | Apple Inc. | Device, method, and graphical user interface for navigation of information in a map-based interface |
US8630458B2 (en) * | 2012-03-21 | 2014-01-14 | Google Inc. | Using camera input to determine axis of rotation and navigation |
JP5959047B2 (en) * | 2012-04-04 | 2016-08-02 | 任天堂株式会社 | Display control system, display control method, display control program, and display control apparatus |
TW201343227A (en) * | 2012-04-25 | 2013-11-01 | Fu Li Ye Internat Corp | Interaction game control method having touch panel device media |
US8983778B2 (en) | 2012-06-05 | 2015-03-17 | Apple Inc. | Generation of intersection information by a mapping service |
US9092900B2 (en) * | 2012-06-05 | 2015-07-28 | Google Inc. | Terrain-based virtual camera tilting, and applications thereof |
US9159153B2 (en) | 2012-06-05 | 2015-10-13 | Apple Inc. | Method, system and apparatus for providing visual feedback of a map view change |
US9311750B2 (en) * | 2012-06-05 | 2016-04-12 | Apple Inc. | Rotation operations in a mapping application |
US9997069B2 (en) | 2012-06-05 | 2018-06-12 | Apple Inc. | Context-aware voice guidance |
US10176633B2 (en) | 2012-06-05 | 2019-01-08 | Apple Inc. | Integrated mapping and navigation application |
US9052197B2 (en) | 2012-06-05 | 2015-06-09 | Apple Inc. | Providing navigation instructions while device is in locked mode |
US9886794B2 (en) | 2012-06-05 | 2018-02-06 | Apple Inc. | Problem reporting in maps |
US9482296B2 (en) | 2012-06-05 | 2016-11-01 | Apple Inc. | Rendering road signs during navigation |
US10156455B2 (en) | 2012-06-05 | 2018-12-18 | Apple Inc. | Context-aware voice guidance |
US9418672B2 (en) | 2012-06-05 | 2016-08-16 | Apple Inc. | Navigation application with adaptive instruction text |
US9025860B2 (en) | 2012-08-06 | 2015-05-05 | Microsoft Technology Licensing, Llc | Three-dimensional object browsing in documents |
GB2505404B (en) * | 2012-08-07 | 2016-08-31 | Samsung Electronics Co Ltd | Portable apparatus with a GUI |
US20150040073A1 (en) * | 2012-09-24 | 2015-02-05 | Google Inc. | Zoom, Rotate, and Translate or Pan In A Single Gesture |
US10492053B2 (en) * | 2012-10-01 | 2019-11-26 | Scott R. Copeland | System for a monitored and reconstructible personal rendezvous session |
US10178188B2 (en) * | 2012-10-01 | 2019-01-08 | Scott R. Copeland | System for a monitored and reconstructible personal rendezvous session |
US20140109016A1 (en) * | 2012-10-16 | 2014-04-17 | Yu Ouyang | Gesture-based cursor control |
CN104769543B (en) * | 2012-10-16 | 2018-10-26 | 田载雄 | Method and system and computer readable recording medium storing program for performing for controlling virtual camera in virtual three-dimensional space |
CN103853471B (en) * | 2012-12-03 | 2017-05-10 | 昆达电脑科技(昆山)有限公司 | User touch behavior based map display method |
US9606709B2 (en) | 2012-12-27 | 2017-03-28 | Google Inc. | System and method for geographic data layer management in a geographic information system |
US20140267600A1 (en) * | 2013-03-14 | 2014-09-18 | Microsoft Corporation | Synth packet for interactive view navigation of a scene |
US9712746B2 (en) | 2013-03-14 | 2017-07-18 | Microsoft Technology Licensing, Llc | Image capture and ordering |
GB2512628A (en) * | 2013-04-04 | 2014-10-08 | Sony Corp | Method and apparatus |
US9417835B2 (en) | 2013-05-10 | 2016-08-16 | Google Inc. | Multiplayer game for display across multiple devices |
US9786075B2 (en) * | 2013-06-07 | 2017-10-10 | Microsoft Technology Licensing, Llc | Image extraction and image-based rendering for manifolds of terrestrial and aerial visualizations |
WO2015066560A1 (en) * | 2013-11-01 | 2015-05-07 | InvenSense, Incorporated | Systems and methods for optical sensor navigation |
WO2015089451A1 (en) * | 2013-12-14 | 2015-06-18 | Handscape Inc. | Method for detecting user gestures from alternative touchpads of a handheld computerized device |
CN104849953B (en) * | 2014-02-19 | 2017-09-12 | 大昱光电股份有限公司 | Stereoscopic image generation method and stereopsis camera device |
US9851880B2 (en) * | 2014-03-14 | 2017-12-26 | Adobe Systems Incorporated | Image rotation based on touch gestures |
US20170220225A1 (en) * | 2014-06-02 | 2017-08-03 | Apelab Sarl | A method and system for providing interactivity within a virtual environment |
WO2015185110A1 (en) | 2014-06-03 | 2015-12-10 | Metaio Gmbh | Method and system for presenting a digital information related to a real object |
US9595130B2 (en) * | 2014-06-17 | 2017-03-14 | Chief Architect Inc. | Virtual model navigation methods and apparatus |
US9575564B2 (en) * | 2014-06-17 | 2017-02-21 | Chief Architect Inc. | Virtual model navigation methods and apparatus |
US10724864B2 (en) | 2014-06-17 | 2020-07-28 | Chief Architect Inc. | Step detection methods and apparatus |
US9589354B2 (en) | 2014-06-17 | 2017-03-07 | Chief Architect Inc. | Virtual model viewing methods and apparatus |
CN104680588B (en) * | 2015-02-13 | 2017-11-24 | 上海同筑信息科技有限公司 | Event marker method and system based on BIM |
EP3337585B1 (en) * | 2015-08-17 | 2022-08-10 | Lego A/S | Method of creating a virtual game environment and interactive game system employing the method |
FR3046229A1 (en) * | 2015-12-29 | 2017-06-30 | Thales Sa | METHOD FOR GRAPHIC REPRESENTATION OF A THREE-DIMENSIONAL SYNTHETIC VIEW OF THE EXTERIOR LANDSCAPE IN AN AIRCRAFT VISUALIZATION SYSTEM |
CN107038682B (en) * | 2016-02-03 | 2020-06-26 | 上海源胜文化传播有限公司 | Scaling system and method of three-dimensional human body model |
US10198861B2 (en) * | 2016-03-31 | 2019-02-05 | Intel Corporation | User interactive controls for a priori path navigation in virtual environment |
US10247568B2 (en) | 2016-06-12 | 2019-04-02 | Apple Inc. | Style sheet driven virtual camera for defining a navigation presentation |
JP6228267B2 (en) * | 2016-06-20 | 2017-11-08 | 株式会社スクウェア・エニックス | Video game processing apparatus, video game processing method, and video game processing program |
US10726673B2 (en) | 2016-09-20 | 2020-07-28 | Acres Technology | Automatic application of a bonus to an electronic gaming device responsive to player interaction with a mobile computing device |
US10041800B2 (en) | 2016-09-23 | 2018-08-07 | Qualcomm Incorporated | Pedestrian sensor assistance in a mobile device during typical device motions |
US10553036B1 (en) | 2017-01-10 | 2020-02-04 | Lucasfilm Entertainment Company Ltd. | Manipulating objects within an immersive environment |
CN108984087B (en) * | 2017-06-02 | 2021-09-14 | 腾讯科技(深圳)有限公司 | Social interaction method and device based on three-dimensional virtual image |
CN107436745B (en) * | 2017-06-19 | 2021-01-08 | 广州励丰文化科技股份有限公司 | Picture display method and device of three-dimensional model based on digital artistic landscape device |
US10663298B2 (en) * | 2017-06-25 | 2020-05-26 | Invensense, Inc. | Method and apparatus for characterizing platform motion |
US20190007672A1 (en) | 2017-06-30 | 2019-01-03 | Bobby Gene Burrough | Method and Apparatus for Generating Dynamic Real-Time 3D Environment Projections |
CN107506038B (en) * | 2017-08-28 | 2020-02-25 | 荆门程远电子科技有限公司 | Three-dimensional virtual earth interaction method based on mobile terminal |
CN109550246B (en) * | 2017-09-25 | 2022-03-25 | 腾讯科技(深圳)有限公司 | Control method and device for game client, storage medium and electronic device |
CN108363531A (en) * | 2018-01-17 | 2018-08-03 | 网易(杭州)网络有限公司 | Interaction method and device in a game |
WO2019164514A1 (en) * | 2018-02-23 | 2019-08-29 | Google Llc | Transitioning between map view and augmented reality view |
JP7045218B2 (en) * | 2018-02-28 | 2022-03-31 | キヤノン株式会社 | Information processing equipment and information processing methods, programs |
CN108509139B (en) * | 2018-03-30 | 2019-09-10 | 腾讯科技(深圳)有限公司 | Control method for movement, device, electronic device and the storage medium of virtual objects |
CN108379844B (en) * | 2018-03-30 | 2020-10-23 | 腾讯科技(深圳)有限公司 | Method, device, electronic device and storage medium for controlling movement of virtual object |
US10964110B2 (en) * | 2018-05-07 | 2021-03-30 | Vmware, Inc. | Managed actions using augmented reality |
JP6916150B2 (en) * | 2018-06-05 | 2021-08-11 | 任天堂株式会社 | Game systems, game programs, game devices, and game processing methods |
CN110610523B (en) * | 2018-06-15 | 2023-04-25 | 杭州海康威视数字技术股份有限公司 | Method and device for calibrating automobile looking around and computer readable storage medium |
JP6830473B2 (en) * | 2018-12-13 | 2021-02-17 | 株式会社スクウェア・エニックス | Video game processor, video game processing method, and video game processing program |
US11216149B2 (en) * | 2019-03-15 | 2022-01-04 | Samsung Electronics Co., Ltd. | 360° video viewer control using smart device |
CN112130551A (en) * | 2019-06-25 | 2020-12-25 | 北京百度网讯科技有限公司 | Decision planning method and device for travel path and speed of unmanned vehicle |
CN110523085A (en) * | 2019-08-30 | 2019-12-03 | 腾讯科技(深圳)有限公司 | Control method, device, terminal and the storage medium of virtual objects |
JP2020113314A (en) * | 2020-03-25 | 2020-07-27 | 任天堂株式会社 | Information processing program, information processing device, information processing system, and information processing method |
CN112087575B (en) * | 2020-08-24 | 2022-03-08 | 广州启量信息科技有限公司 | Virtual camera control method |
CA3146804A1 (en) * | 2020-11-13 | 2022-05-13 | Tencent Technology (Shenzhen) Company Limited | Virtual object control method and apparatus, storage medium, and electronic device |
CN112354179B (en) * | 2020-11-23 | 2023-09-05 | 浙江中控信息产业股份有限公司 | Three-dimensional geographic information content display and interaction method |
US11899204B2 (en) * | 2021-06-09 | 2024-02-13 | Snap Inc. | Soft follow and pitch angle effects for VR/AR interface |
Family Cites Families (111)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5111288A (en) * | 1988-03-02 | 1992-05-05 | Diamond Electronics, Inc. | Surveillance camera system |
US5276785A (en) * | 1990-08-02 | 1994-01-04 | Xerox Corporation | Moving viewpoint with respect to a target in a three-dimensional workspace |
JP2827612B2 (en) * | 1991-10-07 | 1998-11-25 | 富士通株式会社 | A touch panel device and a method for displaying an object on the touch panel device. |
CA2077173C (en) * | 1991-11-22 | 2003-04-22 | Michael Chen | Method and apparatus for direct manipulation of 3-d objects on computer displays |
US5483261A (en) * | 1992-02-14 | 1996-01-09 | Itu Research, Inc. | Graphical input controller and method with rear screen image detection |
US5880411A (en) * | 1992-06-08 | 1999-03-09 | Synaptics, Incorporated | Object position detector with edge motion feature and gesture recognition |
US5557714A (en) * | 1993-01-29 | 1996-09-17 | Microsoft Corporation | Method and system for rotating a three-dimensional model about two orthogonal axes |
JPH0764754A (en) * | 1993-08-24 | 1995-03-10 | Hitachi Ltd | Compact information processor |
US5689628A (en) * | 1994-04-14 | 1997-11-18 | Xerox Corporation | Coupling a display object to a viewpoint in a navigable workspace |
GB9606791D0 (en) * | 1996-03-29 | 1996-06-05 | British Telecomm | Control interface |
US5808613A (en) * | 1996-05-28 | 1998-09-15 | Silicon Graphics, Inc. | Network navigator with enhanced navigational abilities |
JPH1049290A (en) * | 1996-08-05 | 1998-02-20 | Sony Corp | Device and method for processing information |
US7663607B2 (en) * | 2004-05-06 | 2010-02-16 | Apple Inc. | Multipoint touchscreen |
EP1717679B1 (en) * | 1998-01-26 | 2016-09-21 | Apple Inc. | Method for integrating manual input |
US8479122B2 (en) * | 2004-07-30 | 2013-07-02 | Apple Inc. | Gestures for touch sensitive input devices |
US6347290B1 (en) * | 1998-06-24 | 2002-02-12 | Compaq Information Technologies Group, L.P. | Apparatus and method for detecting and executing positional and gesture commands corresponding to movement of handheld computing device |
US6029854A (en) * | 1998-09-16 | 2000-02-29 | Wissen; William T. | Portable liquid dispenser |
US6400376B1 (en) * | 1998-12-21 | 2002-06-04 | Ericsson Inc. | Display control for hand-held data processing device |
US7119819B1 (en) * | 1999-04-06 | 2006-10-10 | Microsoft Corporation | Method and apparatus for supporting two-dimensional windows in a three-dimensional environment |
US6288704B1 (en) * | 1999-06-08 | 2001-09-11 | Vega, Vista, Inc. | Motion detection and tracking system to control navigation and display of object viewers |
US6466198B1 (en) * | 1999-11-05 | 2002-10-15 | Innoventions, Inc. | View navigation and magnification of a hand-held device with a display |
US6388655B1 (en) * | 1999-11-08 | 2002-05-14 | Wing-Keung Leung | Method of touch control of an input device and such a device |
US6980690B1 (en) * | 2000-01-20 | 2005-12-27 | Canon Kabushiki Kaisha | Image processing apparatus |
US6636210B1 (en) * | 2000-03-03 | 2003-10-21 | Muse Corporation | Method and system for auto-navigation in a three dimensional viewing environment |
US7142205B2 (en) * | 2000-03-29 | 2006-11-28 | Autodesk, Inc. | Single gesture map navigation graphical user interface for a personal digital assistant |
US6326846B1 (en) * | 2000-04-11 | 2001-12-04 | National Semiconductor Corporation | Low voltage fet differential amplifier and method |
US7027642B2 (en) * | 2000-04-28 | 2006-04-11 | Orametrix, Inc. | Methods for registration of three-dimensional frames to create three-dimensional virtual models of objects |
US6864886B1 (en) * | 2000-08-10 | 2005-03-08 | Sportvision, Inc. | Enhancing video using a virtual surface |
DE50014953D1 (en) * | 2000-08-24 | 2008-03-20 | Siemens Vdo Automotive Ag | Method and navigation device for querying destination information and navigating in a map view |
US6600475B2 (en) * | 2001-01-22 | 2003-07-29 | Koninklijke Philips Electronics N.V. | Single camera system for gesture-based input and target indication |
US7030861B1 (en) * | 2001-02-10 | 2006-04-18 | Wayne Carl Westerman | System and method for packing multi-touch gestures onto a hand |
EP1363246A4 (en) * | 2001-02-23 | 2006-11-08 | Fujitsu Ltd | Display control device, information terminal device equipped with the display control device, and view point position control device |
US6834249B2 (en) * | 2001-03-29 | 2004-12-21 | Arraycomm, Inc. | Method and apparatus for controlling a computing system |
US6798429B2 (en) * | 2001-03-29 | 2004-09-28 | Intel Corporation | Intuitive mobile device interface to virtual spaces |
FI117488B (en) * | 2001-05-16 | 2006-10-31 | Myorigo Sarl | Browsing information on screen |
US6907579B2 (en) * | 2001-10-30 | 2005-06-14 | Hewlett-Packard Development Company, L.P. | User interface and method for interacting with a three-dimensional graphical environment |
KR100433628B1 (en) * | 2001-12-27 | 2004-05-31 | 주식회사 케이티 | Method for network adaptive error control in FoIP |
US6690387B2 (en) * | 2001-12-28 | 2004-02-10 | Koninklijke Philips Electronics N.V. | Touch-screen image scrolling system and method |
US20030132913A1 (en) * | 2002-01-11 | 2003-07-17 | Anton Issinski | Touchless computer input device to control display cursor mark position by using stereovision input from two video cameras |
CN1360283A (en) * | 2002-01-15 | 2002-07-24 | 天津大学 | 3D model creating system for actual human head sculpture |
US7324085B2 (en) * | 2002-01-25 | 2008-01-29 | Autodesk, Inc. | Techniques for pointing to locations within a volumetric display |
US7107285B2 (en) * | 2002-03-16 | 2006-09-12 | Questerra Corporation | Method, system, and program for an improved enterprise spatial system |
GB2387519B (en) | 2002-04-08 | 2005-06-22 | Canon Europa Nv | Viewing controller for three-dimensional computer graphics |
JP3090450U (en) * | 2002-06-04 | 2002-12-13 | 株式会社ワコー | Portable information processing device with azimuth display function |
US7042449B2 (en) * | 2002-06-28 | 2006-05-09 | Autodesk Canada Co. | Push-tumble three dimensional navigation system |
US20040125114A1 (en) * | 2002-12-31 | 2004-07-01 | Hauke Schmidt | Multiresolution image synthesis for navigation |
DE10300527A1 (en) * | 2003-01-09 | 2004-07-22 | Realtime Technology Ag | Display system for virtual three-dimensional scenes, adjusts spatial angles of camera model when trackball is rotated, so that view of virtual three-dimensional scene varies accordingly |
JP4100195B2 (en) * | 2003-02-26 | 2008-06-11 | ソニー株式会社 | Three-dimensional object display processing apparatus, display processing method, and computer program |
US7259778B2 (en) * | 2003-07-01 | 2007-08-21 | L-3 Communications Corporation | Method and apparatus for placing sensors using 3D models |
WO2005010623A2 (en) * | 2003-07-24 | 2005-02-03 | Zebra Imaging, Inc. | Enhanced environment visualization using holographic stereograms |
WO2006002298A2 (en) * | 2004-06-22 | 2006-01-05 | Sarnoff Corporation | Method and apparatus determining camera pose |
US7519223B2 (en) * | 2004-06-28 | 2009-04-14 | Microsoft Corporation | Recognizing gestures and using gestures for interacting with software applications |
US7743348B2 (en) * | 2004-06-30 | 2010-06-22 | Microsoft Corporation | Using physical objects to adjust attributes of an interactive display application |
CN100510623C (en) * | 2004-07-15 | 2009-07-08 | 阿莫善斯有限公司 | Mobile terminal device |
US7719523B2 (en) * | 2004-08-06 | 2010-05-18 | Touchtable, Inc. | Bounding box gesture recognition on a touch detecting interactive display |
US7728821B2 (en) * | 2004-08-06 | 2010-06-01 | Touchtable, Inc. | Touch detecting interactive display |
JP2006122241A (en) * | 2004-10-27 | 2006-05-18 | Nintendo Co Ltd | Game device and game program |
JP4515221B2 (en) * | 2004-10-29 | 2010-07-28 | 任天堂株式会社 | Game program |
US7934893B2 (en) * | 2004-12-10 | 2011-05-03 | A.V. Custom Style B.V. | Quick-change and plug eject arbor for a hole saw |
KR100641182B1 (en) * | 2004-12-30 | 2006-11-02 | 엘지전자 주식회사 | Apparatus and method for moving virtual screen in a mobile terminal |
US20060164382A1 (en) * | 2005-01-25 | 2006-07-27 | Technology Licensing Company, Inc. | Image manipulation in response to a movement of a display |
US20080300780A1 (en) * | 2005-02-07 | 2008-12-04 | Dmitry Domnin | Method for automating task with portable device |
US7605804B2 (en) * | 2005-04-29 | 2009-10-20 | Microsoft Corporation | System and method for fine cursor positioning using a low resolution imaging touch screen |
US20060271281A1 (en) | 2005-05-20 | 2006-11-30 | Myron Ahn | Geographic information knowledge systems |
JP5063871B2 (en) * | 2005-06-15 | 2012-10-31 | 株式会社デンソー | Map display system for portable devices |
US10198521B2 (en) * | 2005-06-27 | 2019-02-05 | Google Llc | Processing ambiguous search requests in a geographic information system |
JP4783603B2 (en) * | 2005-08-26 | 2011-09-28 | 株式会社デンソー | MAP DISPLAY DEVICE, MAP DISPLAY METHOD, MAP DISPLAY PROGRAM, AND RECORDING MEDIUM CONTAINING THE PROGRAM |
US20070046661A1 (en) * | 2005-08-31 | 2007-03-01 | Siemens Medical Solutions Usa, Inc. | Three or four-dimensional medical imaging navigation methods and systems |
JP4404830B2 (en) * | 2005-09-28 | 2010-01-27 | シャープ株式会社 | Operation system |
JP4246195B2 (en) * | 2005-11-01 | 2009-04-02 | パナソニック株式会社 | Car navigation system |
US7587684B2 (en) * | 2006-01-23 | 2009-09-08 | Nokia Corporation | Mobile communication terminal and method therefore |
US20070206030A1 (en) * | 2006-03-06 | 2007-09-06 | The Protomold Company, Inc. | Graphical user interface for three-dimensional manipulation of a part |
US9070402B2 (en) * | 2006-03-13 | 2015-06-30 | Autodesk, Inc. | 3D model presentation system with motion and transitions at each camera view point of interest (POI) with imageless jumps to each POI |
US8077153B2 (en) * | 2006-04-19 | 2011-12-13 | Microsoft Corporation | Precise selection techniques for multi-touch screens |
JP2009535727A (en) * | 2006-05-02 | 2009-10-01 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | 3D input / navigation device with freeze and resume functions |
US20070257891A1 (en) * | 2006-05-03 | 2007-11-08 | Esenther Alan W | Method and system for emulating a mouse on a multi-touch sensitive surface |
US7707516B2 (en) * | 2006-05-26 | 2010-04-27 | Google Inc. | Embedded navigation interface |
US7643673B2 (en) * | 2006-06-12 | 2010-01-05 | Google Inc. | Markup language for interactive geographic information system |
US20080062126A1 (en) * | 2006-07-06 | 2008-03-13 | Algreatly Cherif A | 3D method and system for hand-held devices |
WO2008014486A2 (en) * | 2006-07-28 | 2008-01-31 | Accelerated Pictures, Inc. | Improved camera control |
US8106856B2 (en) * | 2006-09-06 | 2012-01-31 | Apple Inc. | Portable electronic device for photo management |
US8277316B2 (en) * | 2006-09-14 | 2012-10-02 | Nintendo Co., Ltd. | Method and apparatus for using a common pointing input to control 3D viewpoint and object targeting |
US8130203B2 (en) * | 2007-01-03 | 2012-03-06 | Apple Inc. | Multi-touch input discrimination |
US7752555B2 (en) * | 2007-01-31 | 2010-07-06 | Microsoft Corporation | Controlling multiple map application operations with a single gesture |
CN101030982A (en) * | 2007-03-22 | 2007-09-05 | 宇龙计算机通信科技(深圳)有限公司 | Apparatus and method for automatically adjusting display-screen content and direction |
US8487957B1 (en) * | 2007-05-29 | 2013-07-16 | Google Inc. | Displaying and navigating within photo placemarks in a geographic information system, and applications thereof |
US8681104B2 (en) * | 2007-06-13 | 2014-03-25 | Apple Inc. | Pinch-throw and translation gestures |
US8302033B2 (en) * | 2007-06-22 | 2012-10-30 | Apple Inc. | Touch screen device, method, and graphical user interface for providing maps, directions, and location-based information |
TW200907764A (en) * | 2007-08-01 | 2009-02-16 | Unique Instr Co Ltd | Three-dimensional virtual input and simulation apparatus |
US10504285B2 (en) * | 2007-09-26 | 2019-12-10 | Autodesk, Inc. | Navigation system for a 3D virtual scene |
US8803881B2 (en) * | 2007-09-26 | 2014-08-12 | Autodesk, Inc. | Navigation system for a 3D virtual scene |
JP5390115B2 (en) * | 2008-03-31 | 2014-01-15 | 株式会社バンダイナムコゲームス | Program, game system |
KR101626037B1 (en) * | 2008-04-14 | 2016-06-13 | 구글 인코포레이티드 | Panning using virtual surfaces |
CN102067179A (en) * | 2008-04-14 | 2011-05-18 | 谷歌公司 | Swoop navigation |
US10180714B1 (en) * | 2008-04-24 | 2019-01-15 | Pixar | Two-handed multi-stroke marking menus for multi-touch devices |
US8799821B1 (en) * | 2008-04-24 | 2014-08-05 | Pixar | Method and apparatus for user inputs for three-dimensional animation |
US8375336B2 (en) * | 2008-05-23 | 2013-02-12 | Microsoft Corporation | Panning content utilizing a drag operation |
US20090303251A1 (en) * | 2008-06-10 | 2009-12-10 | Andras Balogh | Displaying, processing and storing geo-located information |
US8700301B2 (en) * | 2008-06-19 | 2014-04-15 | Microsoft Corporation | Mobile computing devices, architecture and user interfaces based on dynamic direction information |
WO2010022386A2 (en) * | 2008-08-22 | 2010-02-25 | Google Inc. | Navigation in a three dimensional environment on a mobile device |
KR20100041006A (en) * | 2008-10-13 | 2010-04-22 | 엘지전자 주식회사 | A user interface controlling method using three dimension multi-touch |
KR20100050103A (en) * | 2008-11-05 | 2010-05-13 | 엘지전자 주식회사 | Method of controlling 3 dimension individual object on map and mobile terminal using the same |
US8788977B2 (en) * | 2008-11-20 | 2014-07-22 | Amazon Technologies, Inc. | Movement recognition as input mechanism |
US8294766B2 (en) * | 2009-01-28 | 2012-10-23 | Apple Inc. | Generating a three-dimensional model using a portable electronic device recording |
US20100188397A1 (en) * | 2009-01-28 | 2010-07-29 | Apple Inc. | Three dimensional navigation using deterministic movement of an electronic device |
US20100208033A1 (en) * | 2009-02-13 | 2010-08-19 | Microsoft Corporation | Personal Media Landscapes in Mixed Reality |
US20110205229A1 (en) | 2010-02-23 | 2011-08-25 | Google Inc. | Portable Globe Creation for a Geographical Information System |
US8683387B2 (en) * | 2010-03-03 | 2014-03-25 | Cast Group Of Companies Inc. | System and method for visualizing virtual objects on a mobile device |
US8321166B2 (en) * | 2010-03-17 | 2012-11-27 | Qualcomm Incorporated | Methods and systems for wireless platform attitude determination |
US9134799B2 (en) * | 2010-07-16 | 2015-09-15 | Qualcomm Incorporated | Interacting with a projected user interface using orientation sensors |
US20120019522A1 (en) * | 2010-07-25 | 2012-01-26 | Raytheon Company | ENHANCED SITUATIONAL AWARENESS AND TARGETING (eSAT) SYSTEM |
- 2009
- 2009-08-24 WO PCT/US2009/054727 patent/WO2010022386A2/en active Application Filing
- 2009-08-24 KR KR1020117006535A patent/KR101665034B1/en active IP Right Grant
- 2009-08-24 US US12/546,274 patent/US20100045666A1/en not_active Abandoned
- 2009-08-24 US US12/546,261 patent/US9310992B2/en active Active
- 2009-08-24 CN CN201310095018XA patent/CN103324386A/en active Pending
- 2009-08-24 CN CN201911350849.0A patent/CN111522493A/en active Pending
- 2009-08-24 CN CN2009801413564A patent/CN102187309A/en active Pending
- 2009-08-24 US US12/546,245 patent/US20100045703A1/en not_active Abandoned
- 2009-08-24 EP EP09791827A patent/EP2327010A2/en not_active Withdrawn
- 2009-08-24 JP JP2011524067A patent/JP2012501016A/en active Pending
- 2009-08-24 CA CA2734987A patent/CA2734987A1/en not_active Abandoned
- 2009-08-24 US US12/546,293 patent/US8847992B2/en active Active
- 2009-08-24 AU AU2009282724A patent/AU2009282724B2/en not_active Ceased
- 2016
- 2016-04-11 US US15/095,442 patent/US10222931B2/en active Active
- 2019
- 2019-03-04 US US16/291,067 patent/US11054964B2/en active Active
- 2019-03-04 US US16/291,063 patent/US10942618B2/en active Active
- 2021
- 2021-07-02 US US17/366,775 patent/US12032802B2/en active Active
Patent Citations (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6597347B1 (en) * | 1991-11-26 | 2003-07-22 | Itu Research Inc. | Methods and apparatus for providing touch-sensitive input in multiple degrees of freedom |
US6034692A (en) * | 1996-08-01 | 2000-03-07 | U.S. Philips Corporation | Virtual environment navigation |
US8045837B2 (en) * | 1997-04-21 | 2011-10-25 | Sony Corporation | Controller for photographing apparatus and photographing system |
US7184037B2 (en) * | 1997-10-14 | 2007-02-27 | Koninklijke Philips Electronics N.V. | Virtual environment navigation aid |
US20050225559A1 (en) * | 2001-03-29 | 2005-10-13 | Microsoft Corporation | 3D navigation techniques |
US6452544B1 (en) * | 2001-05-24 | 2002-09-17 | Nokia Corporation | Portable map display system for presenting a 3D map image and method thereof |
US7159194B2 (en) * | 2001-11-30 | 2007-01-02 | Palm, Inc. | Orientation dependent functionality of an electronic device |
US7411594B2 (en) * | 2002-01-15 | 2008-08-12 | Canon Kabushiki Kaisha | Information processing apparatus and method |
US7791618B2 (en) * | 2002-01-15 | 2010-09-07 | Canon Kabushiki Kaisha | Information processing apparatus and method |
US7277571B2 (en) * | 2002-05-21 | 2007-10-02 | Sega Corporation | Effective image processing, apparatus and method in virtual three-dimensional space |
US20040051709A1 (en) * | 2002-05-31 | 2004-03-18 | Eit Co., Ltd. | Apparatus for controlling the shift of virtual space and method and program for controlling same |
US20040128071A1 (en) * | 2002-10-23 | 2004-07-01 | Stefan Schradi | Method and apparatus for generating a GPS simulation scenario |
US7589732B2 (en) * | 2002-11-05 | 2009-09-15 | Autodesk, Inc. | System and method of integrated spatial and temporal navigation |
US6975959B2 (en) * | 2002-12-03 | 2005-12-13 | Robert Bosch Gmbh | Orientation and navigation for a mobile device using inertial sensors |
US20050187015A1 (en) * | 2004-02-19 | 2005-08-25 | Nintendo Co., Ltd. | Game machine and data storage medium having stored therein game program |
US7839405B2 (en) * | 2006-04-10 | 2010-11-23 | Sony Corporation | Apparatus, method, and program for projection of 3D spatial image into planar images using visual points |
US8094204B2 (en) * | 2006-08-28 | 2012-01-10 | Sony Corporation | Image movement based device control method, program, and apparatus |
US20080094358A1 (en) * | 2006-09-15 | 2008-04-24 | Industrial Light & Magic | Constrained Virtual Camera Control |
US7877707B2 (en) * | 2007-01-06 | 2011-01-25 | Apple Inc. | Detecting and interpreting real-world and security gestures on touch and hover sensitive devices |
US20080180406A1 (en) * | 2007-01-31 | 2008-07-31 | Han Jefferson Y | Methods of interfacing with multi-point input devices and multi-point input systems employing interfacing techniques |
US20090046110A1 (en) * | 2007-08-16 | 2009-02-19 | Motorola, Inc. | Method and apparatus for manipulating a displayed image |
US20090256837A1 (en) * | 2008-04-11 | 2009-10-15 | Sidhartha Deb | Directing camera behavior in 3-d imaging system |
Non-Patent Citations (5)
Title |
---|
Jung et al., Adapting X3D for Multi-touch Environments, Conference Web3D 2008, 13th International Conference on 3D Web Technology, ACM, August 9-10, 2008 *
Kim et al., HCI (Human Computer Interactions) Using Multi-touch Tabletop Display, Communications, Computers and Signal Processing, 2007, PacRim 2007, IEEE Pacific Rim Conference, August 2007, pages 391-394 *
Lee et al., A multi-touch three dimensional touch-sensitive tablet, CHI '85 Proceedings of the SIGCHI conference on Human factors in computing systems, ACM, April 1985, pages 21-25 * |
Ware et al., Exploration and Virtual Camera Control in Virtual Three Dimensional Environments, ACM 1990, pages 175-183 * |
Zeleznik et al., UniCam-2D Gestural Camera Controls for 3D Environments, 1999 ACM Symposium on Interactive 3D Graphics, pages 169-173 * |
Cited By (200)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9529440B2 (en) | 1999-01-25 | 2016-12-27 | Apple Inc. | Disambiguation of multitouch gesture recognition for 3D interaction |
US10782873B2 (en) | 1999-01-25 | 2020-09-22 | Apple Inc. | Disambiguation of multitouch gesture recognition for 3D interaction |
US9286941B2 (en) | 2001-05-04 | 2016-03-15 | Legend3D, Inc. | Image sequence enhancement and motion picture project management system |
US8948788B2 (en) | 2008-05-28 | 2015-02-03 | Google Inc. | Motion-controlled views on mobile computing devices |
US20090325607A1 (en) * | 2008-05-28 | 2009-12-31 | Conway David P | Motion-controlled views on mobile computing devices |
US12032802B2 (en) * | 2008-08-22 | 2024-07-09 | Google Llc | Panning in a three dimensional environment on a mobile device |
US20220100350A1 (en) * | 2008-08-22 | 2022-03-31 | Google Llc | Panning in a three dimensional environment on a mobile device |
US10222931B2 (en) * | 2008-08-22 | 2019-03-05 | Google Llc | Panning in a three dimensional environment on a mobile device |
US11054964B2 (en) | 2008-08-22 | 2021-07-06 | Google Llc | Panning in a three dimensional environment on a mobile device |
US10942618B2 (en) | 2008-08-22 | 2021-03-09 | Google Llc | Panning in a three dimensional environment on a mobile device |
US20100045667A1 (en) * | 2008-08-22 | 2010-02-25 | Google Inc. | Navigation In a Three Dimensional Environment Using An Orientation Of A Mobile Device |
US20160224204A1 (en) * | 2008-08-22 | 2016-08-04 | Google Inc. | Panning in a Three Dimensional Environment on a Mobile Device |
US8847992B2 (en) * | 2008-08-22 | 2014-09-30 | Google Inc. | Navigation in a three dimensional environment using an orientation of a mobile device |
US8108147B1 (en) * | 2009-02-06 | 2012-01-31 | The United States Of America As Represented By The Secretary Of The Navy | Apparatus and method for automatic omni-directional visual motion-based collision avoidance |
US20100223093A1 (en) * | 2009-02-27 | 2010-09-02 | Hubbard Robert B | System and method for intelligently monitoring subscriber's response to multimedia content |
US10298839B2 (en) * | 2009-09-29 | 2019-05-21 | Sony Interactive Entertainment Inc. | Image processing apparatus, image processing method, and image communication system |
US9776083B2 (en) | 2010-02-03 | 2017-10-03 | Nintendo Co., Ltd. | Spatially-correlated multi-display human-machine interface |
US20110190052A1 (en) * | 2010-02-03 | 2011-08-04 | Nintendo Co., Ltd. | Game system, controller device and game method |
US8961305B2 (en) | 2010-02-03 | 2015-02-24 | Nintendo Co., Ltd. | Game system, controller device and game method |
US8814686B2 (en) | 2010-02-03 | 2014-08-26 | Nintendo Co., Ltd. | Display device, game system, and game method |
US9358457B2 (en) | 2010-02-03 | 2016-06-07 | Nintendo Co., Ltd. | Game system, controller device, and game method |
US8317615B2 (en) | 2010-02-03 | 2012-11-27 | Nintendo Co., Ltd. | Display device, game system, and game method |
US20110190061A1 (en) * | 2010-02-03 | 2011-08-04 | Nintendo Co., Ltd. | Display device, game system, and game method |
US8684842B2 (en) | 2010-02-03 | 2014-04-01 | Nintendo Co., Ltd. | Display device, game system, and game process method |
US8339364B2 (en) | 2010-02-03 | 2012-12-25 | Nintendo Co., Ltd. | Spatially-correlated multi-display human-machine interface |
US8896534B2 (en) | 2010-02-03 | 2014-11-25 | Nintendo Co., Ltd. | Spatially-correlated multi-display human-machine interface |
US8913009B2 (en) | 2010-02-03 | 2014-12-16 | Nintendo Co., Ltd. | Spatially-correlated multi-display human-machine interface |
US12118591B1 (en) | 2010-02-12 | 2024-10-15 | Weple Ip Holdings Llc | Mobile device streaming media application |
US11074627B2 (en) | 2010-02-12 | 2021-07-27 | Mary Anne Fletcher | Mobile device streaming media application |
US10565628B2 (en) | 2010-02-12 | 2020-02-18 | Mary Anne Fletcher | Mobile device streaming media application |
US11734730B2 (en) | 2010-02-12 | 2023-08-22 | Weple Ip Holdings Llc | Mobile device streaming media application |
US11605112B2 (en) | 2010-02-12 | 2023-03-14 | Weple Ip Holdings Llc | Mobile device streaming media application |
US10102553B2 (en) | 2010-02-12 | 2018-10-16 | Mary Anne Fletcher | Mobile device streaming media application |
US10909583B2 (en) | 2010-02-12 | 2021-02-02 | Mary Anne Fletcher | Mobile device streaming media application |
US12112357B2 (en) | 2010-02-12 | 2024-10-08 | Weple Ip Holdings Llc | Mobile device streaming media application |
US10102552B2 (en) | 2010-02-12 | 2018-10-16 | Mary Anne Fletcher | Mobile device streaming media application |
US11966952B1 (en) | 2010-02-12 | 2024-04-23 | Weple Ip Holdings Llc | Mobile device streaming media application |
KR101513343B1 (en) | 2010-02-26 | 2015-04-17 | 알까뗄 루슨트 | Method for controlling motions of an object in a 3-dimensional virtual environment |
US8907946B2 (en) | 2010-02-26 | 2014-12-09 | Alcatel Lucent | Method for controlling motions of an object in a 3-dimensional virtual environment |
EP2362302A1 (en) * | 2010-02-26 | 2011-08-31 | Alcatel Lucent | Method for controlling motions of an object in a 3-dimensional virtual environment |
CN102770836A (en) * | 2010-02-26 | 2012-11-07 | 阿尔卡特朗讯公司 | Method for controlling motions of an object in a 3-dimensional virtual environment |
WO2011104154A1 (en) * | 2010-02-26 | 2011-09-01 | Alcatel Lucent | Method for controlling motions of an object in a 3-dimensional virtual environment |
US8937592B2 (en) * | 2010-05-20 | 2015-01-20 | Samsung Electronics Co., Ltd. | Rendition of 3D content on a handheld device |
US20110285622A1 (en) * | 2010-05-20 | 2011-11-24 | Samsung Electronics Co., Ltd. | Rendition of 3d content on a handheld device |
US20130027342A1 (en) * | 2010-05-21 | 2013-01-31 | Nec Corporation | Pointed position determination apparatus of touch panel, touch panel apparatus, electronics apparatus including the same, method of determining pointed position on touch panel, and computer program storage medium |
EP2390777A3 (en) * | 2010-05-26 | 2016-07-06 | Sony Ericsson Mobile Communications AB | Touch interface for three-dimensional display control |
US20120326975A1 (en) * | 2010-06-03 | 2012-12-27 | PixArt Imaging Incorporation, R.O.C. | Input device and input method |
US9075436B1 (en) | 2010-06-14 | 2015-07-07 | Google Inc. | Motion-based interface control on computing device |
US8977987B1 (en) * | 2010-06-14 | 2015-03-10 | Google Inc. | Motion-based interface control on computing device |
US8498816B2 (en) * | 2010-06-15 | 2013-07-30 | Brother Kogyo Kabushiki Kaisha | Systems including mobile devices and head-mountable displays that selectively display content, such mobile devices, and computer-readable storage media for controlling such mobile devices |
US8907943B2 (en) | 2010-07-07 | 2014-12-09 | Apple Inc. | Sensor based display environment |
US9626786B1 (en) * | 2010-07-19 | 2017-04-18 | Lucasfilm Entertainment Company Ltd. | Virtual-scene control device |
US9781354B2 (en) | 2010-07-19 | 2017-10-03 | Lucasfilm Entertainment Company Ltd. | Controlling a virtual camera |
US10142561B2 (en) | 2010-07-19 | 2018-11-27 | Lucasfilm Entertainment Company Ltd. | Virtual-scene control device |
US9324179B2 (en) | 2010-07-19 | 2016-04-26 | Lucasfilm Entertainment Company Ltd. | Controlling a virtual camera |
EP2413104A1 (en) * | 2010-07-30 | 2012-02-01 | Pantech Co., Ltd. | Apparatus and method for providing road view |
CN102420936A (en) * | 2010-07-30 | 2012-04-18 | 株式会社泛泰 | Apparatus and method for providing road view |
US9199168B2 (en) | 2010-08-06 | 2015-12-01 | Nintendo Co., Ltd. | Game system, game apparatus, storage medium having game program stored therein, and game process method |
EP2422854A3 (en) * | 2010-08-20 | 2012-08-22 | Nintendo Co., Ltd. | Game system, game device, storage medium storing game program, and game process method |
US8690675B2 (en) | 2010-08-20 | 2014-04-08 | Nintendo Co., Ltd. | Game system, game device, storage medium storing game program, and game process method |
US8337308B2 (en) | 2010-08-20 | 2012-12-25 | Nintendo Co., Ltd. | Game system, game device, storage medium storing game program, and game process method |
US10150033B2 (en) | 2010-08-20 | 2018-12-11 | Nintendo Co., Ltd. | Position calculation system, position calculation device, storage medium storing position calculation program, and position calculation method |
US9132347B2 (en) | 2010-08-30 | 2015-09-15 | Nintendo Co., Ltd. | Game system, game apparatus, storage medium having game program stored therein, and game process method |
US8956209B2 (en) | 2010-08-30 | 2015-02-17 | Nintendo Co., Ltd. | Game system, game apparatus, storage medium having game program stored therein, and game process method |
EP2437221A3 (en) * | 2010-10-04 | 2017-01-25 | Fujitsu Limited | Device, method, and computer-readable storage medium for operation of a three-dimensional movement of a virtual object |
WO2012045513A1 (en) * | 2010-10-08 | 2012-04-12 | Hicat Gmbh | Computer and method for visually navigating in a three-dimensional image data set |
EP3474114A1 (en) | 2010-10-08 | 2019-04-24 | Hicat GmbH | Computer and method for visual navigation in a three-dimensional image data set |
GB2487039A (en) * | 2010-10-11 | 2012-07-11 | Michele Sciolette | Visualizing Illustrated Books And Comics On Digital Devices |
US8780174B1 (en) | 2010-10-12 | 2014-07-15 | The Boeing Company | Three-dimensional vision system for displaying images taken from a moving vehicle |
EP2635876A4 (en) * | 2010-11-01 | 2014-09-03 | Nokia Corp | Visually representing a three-dimensional environment |
US9889384B2 (en) | 2010-11-01 | 2018-02-13 | Nintendo Co., Ltd. | Controller device and controller system |
US9026359B2 (en) | 2010-11-01 | 2015-05-05 | Nokia Corporation | Visually representing a three-dimensional environment |
US8702514B2 (en) | 2010-11-01 | 2014-04-22 | Nintendo Co., Ltd. | Controller device and controller system |
US8827818B2 (en) | 2010-11-01 | 2014-09-09 | Nintendo Co., Ltd. | Controller device and information processing device |
EP2635876A1 (en) * | 2010-11-01 | 2013-09-11 | Nokia Corp. | Visually representing a three-dimensional environment |
US8804326B2 (en) | 2010-11-01 | 2014-08-12 | Nintendo Co., Ltd. | Device support system and support device |
US9272207B2 (en) | 2010-11-01 | 2016-03-01 | Nintendo Co., Ltd. | Controller device and controller system |
US8814680B2 (en) | 2010-11-01 | 2014-08-26 | Nintendo Co., Ltd. | Controller device and controller system |
US8326528B2 (en) | 2010-12-03 | 2012-12-04 | Google Inc. | Showing realistic horizons on mobile computing devices |
US8380427B2 (en) | 2010-12-03 | 2013-02-19 | Google Inc. | Showing realistic horizons on mobile computing devices |
WO2012075435A3 (en) * | 2010-12-03 | 2012-11-08 | Google Inc. | Showing realistic horizons on mobile computing devices |
WO2012075435A2 (en) * | 2010-12-03 | 2012-06-07 | Google Inc. | Showing realistic horizons on mobile computing devices |
US9823825B2 (en) | 2011-02-09 | 2017-11-21 | Robotzone, Llc | Multichannel controller |
US8791911B2 (en) * | 2011-02-09 | 2014-07-29 | Robotzone, Llc | Multichannel controller |
US20120200510A1 (en) * | 2011-02-09 | 2012-08-09 | Robotzone, Llc | Multichannel controller |
US9282321B2 (en) | 2011-02-17 | 2016-03-08 | Legend3D, Inc. | 3D model multi-reviewer system |
US9288476B2 (en) | 2011-02-17 | 2016-03-15 | Legend3D, Inc. | System and method for real-time depth modification of stereo images of a virtual reality environment |
JP2014511534A (en) * | 2011-03-02 | 2014-05-15 | ザ・ボーイング・カンパニー | System and method for navigating a three-dimensional environment using a multi-input interface |
US9632677B2 (en) | 2011-03-02 | 2017-04-25 | The Boeing Company | System and method for navigating a 3-D environment using a multi-input interface |
US9345962B2 (en) | 2011-03-08 | 2016-05-24 | Nintendo Co., Ltd. | Storage medium having stored thereon information processing program, information processing apparatus, information processing system, and information processing method |
US9370712B2 (en) | 2011-03-08 | 2016-06-21 | Nintendo Co., Ltd. | Information processing system, information processing apparatus, storage medium having information processing program stored therein, and image display method for controlling virtual objects based on at least body state data and/or touch position data |
US9205327B2 (en) | 2011-03-08 | 2015-12-08 | Nintendo Co., Ltd. | Storage medium having information processing program stored thereon, information processing apparatus, information processing system, and information processing method |
US9539511B2 (en) | 2011-03-08 | 2017-01-10 | Nintendo Co., Ltd. | Computer-readable storage medium, information processing system, and information processing method for operating objects in a virtual world based on orientation data related to an orientation of a device |
US9492742B2 (en) | 2011-03-08 | 2016-11-15 | Nintendo Co., Ltd. | Storage medium having stored thereon information processing program, information processing apparatus, information processing system, and information processing method |
US9561443B2 (en) | 2011-03-08 | 2017-02-07 | Nintendo Co., Ltd. | Computer-readable storage medium, information processing system, and information processing method |
US9925464B2 (en) | 2011-03-08 | 2018-03-27 | Nintendo Co., Ltd. | Computer-readable storage medium, information processing system, and information processing method for displaying an image on a display device using attitude data of a display device |
US9643085B2 (en) | 2011-03-08 | 2017-05-09 | Nintendo Co., Ltd. | Computer-readable storage medium, information processing system, and information processing method for controlling a virtual object using attitude data |
US9375640B2 (en) | 2011-03-08 | 2016-06-28 | Nintendo Co., Ltd. | Information processing system, computer-readable storage medium, and information processing method |
US8845430B2 (en) | 2011-03-08 | 2014-09-30 | Nintendo Co., Ltd. | Storage medium having stored thereon game program, game apparatus, game system, and game processing method |
US9526981B2 (en) | 2011-03-08 | 2016-12-27 | Nintendo Co., Ltd. | Storage medium having stored thereon information processing program, information processing apparatus, information processing system, and information processing method |
US9522323B2 (en) | 2011-03-08 | 2016-12-20 | Nintendo Co., Ltd. | Storage medium having stored thereon information processing program, information processing apparatus, information processing system, and information processing method |
EP2497548A3 (en) * | 2011-03-08 | 2014-11-26 | Nintendo Co., Ltd. | Information processing program, information processing apparatus, information processing system, and information processing method |
US9492743B2 (en) | 2011-03-08 | 2016-11-15 | Nintendo Co., Ltd. | Storage medium having stored thereon information processing program, information processing apparatus, information processing system, and information processing method |
US10235804B2 (en) * | 2011-03-31 | 2019-03-19 | Srt Marine System Solutions Limited | Display system |
GB2489685B (en) * | 2011-03-31 | 2017-01-25 | Geovs Ltd | A Display System |
US20120249786A1 (en) * | 2011-03-31 | 2012-10-04 | Geovs Ltd. | Display System |
GB2489685A (en) * | 2011-03-31 | 2012-10-10 | Geovs Ltd | Displaying a Plurality of Viewpoints of an Environment |
US8845426B2 (en) | 2011-04-07 | 2014-09-30 | Nintendo Co., Ltd. | Input system, information processing device, storage medium storing information processing program, and three-dimensional position calculation method |
US20120264406A1 (en) * | 2011-04-15 | 2012-10-18 | Avaya Inc. | Obstacle warning system and method |
US8760275B2 (en) * | 2011-04-15 | 2014-06-24 | Avaya Inc. | Obstacle warning system and method |
US9390617B2 (en) | 2011-06-10 | 2016-07-12 | Robotzone, Llc | Camera motion control system with variable autonomy |
US20130070852A1 (en) * | 2011-09-19 | 2013-03-21 | Acer Incorporated | Method for assisting video compression by using touch screen and monitoring system |
US20140306886A1 (en) * | 2011-10-26 | 2014-10-16 | Konami Digital Entertainment Co., Ltd. | Image processing device, method for controlling image processing device, program, and information recording medium |
US9542068B2 (en) * | 2011-10-27 | 2017-01-10 | The Hong Kong University Of Science And Technology | System and method for constrained manipulations of 3D objects by multitouch inputs |
US20140229871A1 (en) * | 2011-10-27 | 2014-08-14 | The Hong Kong University Of Science And Technology | System and Method for Constrained Manipulations of 3D Objects by Multitouch Inputs |
US9836211B2 (en) | 2011-12-21 | 2017-12-05 | Apple Inc. | Device, method, and graphical user interface for selection of views in a three-dimensional map based on gesture inputs |
US9208698B2 (en) | 2011-12-27 | 2015-12-08 | Apple Inc. | Device, method, and graphical user interface for manipulating a three-dimensional map view based on a device orientation |
TWI501165B (en) * | 2012-02-13 | 2015-09-21 | Htc Corp | Auto burst image capture method applied to a mobile device, method for tracking an object in a scene applied to a mobile device, and related mobile device |
US20130208127A1 (en) * | 2012-02-13 | 2013-08-15 | Htc Corporation | Auto burst image capture method applied to a mobile device, method for tracking an object applied to a mobile device, and related mobile device |
US9124800B2 (en) * | 2012-02-13 | 2015-09-01 | Htc Corporation | Auto burst image capture method applied to a mobile device, method for tracking an object applied to a mobile device, and related mobile device |
US10481754B2 (en) | 2012-02-27 | 2019-11-19 | Autodesk, Inc. | Systems and methods for manipulating a 3D object in a 3D model using a software widget and surface constraints |
US9594487B2 (en) * | 2012-02-27 | 2017-03-14 | Autodesk, Inc | Systems and methods for manipulating a 3D object in a 3D model using a software widget and surface constraints |
US20130227493A1 (en) * | 2012-02-27 | 2013-08-29 | Ryan Michael SCHMIDT | Systems and methods for manipulating a 3d object in a 3d model using a software widget and surface constraints |
US8626434B1 (en) * | 2012-03-01 | 2014-01-07 | Google Inc. | Automatic adjustment of a camera view for a three-dimensional navigation system |
US20130321472A1 (en) * | 2012-06-05 | 2013-12-05 | Patrick S. Piemonte | Method, system and apparatus for selectively obtaining map image data according to virtual camera velocity |
US9200919B2 (en) * | 2012-06-05 | 2015-12-01 | Apple Inc. | Method, system and apparatus for selectively obtaining map image data according to virtual camera velocity |
EP2859535A4 (en) * | 2012-06-06 | 2016-01-20 | Google Inc | System and method for providing content for a point of interest |
US10775959B2 (en) | 2012-06-22 | 2020-09-15 | Matterport, Inc. | Defining, displaying and interacting with tags in a three-dimensional model |
US11422671B2 (en) | 2012-06-22 | 2022-08-23 | Matterport, Inc. | Defining, displaying and interacting with tags in a three-dimensional model |
US12086376B2 (en) | 2012-06-22 | 2024-09-10 | Matterport, Inc. | Defining, displaying and interacting with tags in a three-dimensional model |
US10139985B2 (en) | 2012-06-22 | 2018-11-27 | Matterport, Inc. | Defining, displaying and interacting with tags in a three-dimensional model |
US9786097B2 (en) * | 2012-06-22 | 2017-10-10 | Matterport, Inc. | Multi-modal method for interacting with 3D models |
US11062509B2 (en) * | 2012-06-22 | 2021-07-13 | Matterport, Inc. | Multi-modal method for interacting with 3D models |
US20130342533A1 (en) * | 2012-06-22 | 2013-12-26 | Matterport, Inc. | Multi-modal method for interacting with 3d models |
US10304240B2 (en) | 2012-06-22 | 2019-05-28 | Matterport, Inc. | Multi-modal method for interacting with 3D models |
US11551410B2 (en) | 2012-06-22 | 2023-01-10 | Matterport, Inc. | Multi-modal method for interacting with 3D models |
US11422694B2 (en) | 2012-07-15 | 2022-08-23 | Apple Inc. | Disambiguation of multitouch gesture recognition for 3D interaction |
WO2014014806A1 (en) * | 2012-07-15 | 2014-01-23 | Apple Inc. | Disambiguation of multitouch gesture recognition for 3d interaction |
US9021387B2 (en) | 2012-07-31 | 2015-04-28 | Hewlett-Packard Development Company, L.P. | Re-sizing user interface object on touch sensitive display |
US9507513B2 (en) | 2012-08-17 | 2016-11-29 | Google Inc. | Displaced double tap gesture |
CN103677620A (en) * | 2012-08-31 | 2014-03-26 | 株式会社得那 | System and method for facilitating interaction with virtual space via touch sensitive surface |
EP2711059A3 (en) * | 2012-08-31 | 2015-03-04 | DeNA Co., Ltd. | System and method for facilitating interaction with a virtual space |
US9700787B2 (en) | 2012-08-31 | 2017-07-11 | DeNA Co., Ltd. | System and method for facilitating interaction with a virtual space via a touch sensitive surface |
CN104769393A (en) * | 2012-09-05 | 2015-07-08 | 赫尔环球有限公司 | Method and apparatus for transitioning from a partial map view to an augmented reality view |
US9547937B2 (en) | 2012-11-30 | 2017-01-17 | Legend3D, Inc. | Three-dimensional annotation system and method |
US10140765B2 (en) | 2013-02-25 | 2018-11-27 | Google Llc | Staged camera traversal for three dimensional environment |
US9773346B1 (en) * | 2013-03-12 | 2017-09-26 | Amazon Technologies, Inc. | Displaying three-dimensional virtual content |
US9007404B2 (en) * | 2013-03-15 | 2015-04-14 | Legend3D, Inc. | Tilt-based look around effect image enhancement method |
US20140267235A1 (en) * | 2013-03-15 | 2014-09-18 | Legend3D, Inc. | Tilt-based look around effect image enhancement method |
US9438878B2 (en) | 2013-05-01 | 2016-09-06 | Legend3D, Inc. | Method of converting 2D video to 3D video using 3D object models |
US9407904B2 (en) | 2013-05-01 | 2016-08-02 | Legend3D, Inc. | Method for creating 3D virtual reality from 2D images |
US9241147B2 (en) | 2013-05-01 | 2016-01-19 | Legend3D, Inc. | External depth map transformation method for conversion of two-dimensional images to stereoscopic images |
US10606360B2 (en) | 2013-09-10 | 2020-03-31 | Google Llc | Three-dimensional tilt and pan navigation using a single gesture |
US9329750B2 (en) | 2013-09-10 | 2016-05-03 | Google Inc. | Three-dimensional tilt and pan navigation using a single gesture |
US20160320846A1 (en) * | 2013-12-18 | 2016-11-03 | Nu-Tech Sas Di De Michele Marco & C. | Method for providing user commands to an electronic processor and related processor program and electronic circuit |
US10372223B2 (en) * | 2013-12-18 | 2019-08-06 | Nu-Tech Sas Di Michele Marco & C. | Method for providing user commands to an electronic processor and related processor program and electronic circuit |
US20150237328A1 (en) * | 2014-02-14 | 2015-08-20 | Dayu Optoelectronics Co., Ltd. | Method for generating three-dimensional images and three-dimensional imaging device |
US10163261B2 (en) | 2014-03-19 | 2018-12-25 | Matterport, Inc. | Selecting two-dimensional imagery data for display within a three-dimensional model |
US11600046B2 (en) | 2014-03-19 | 2023-03-07 | Matterport, Inc. | Selecting two-dimensional imagery data for display within a three-dimensional model |
US10909758B2 (en) | 2014-03-19 | 2021-02-02 | Matterport, Inc. | Selecting two-dimensional imagery data for display within a three-dimensional model |
US20170052684A1 (en) * | 2014-04-07 | 2017-02-23 | Sony Corporation | Display control apparatus, display control method, and program |
US10602200B2 (en) | 2014-05-28 | 2020-03-24 | Lucasfilm Entertainment Company Ltd. | Switching modes of a media content item |
US10600245B1 (en) * | 2014-05-28 | 2020-03-24 | Lucasfilm Entertainment Company Ltd. | Navigating a virtual environment of a media content item |
US11508125B1 (en) | 2014-05-28 | 2022-11-22 | Lucasfilm Entertainment Company Ltd. | Navigating a virtual environment of a media content item |
US9726463B2 (en) | 2014-07-16 | 2017-08-08 | Robotzone, LLC | Multichannel controller for target shooting range |
US10007419B2 (en) | 2014-07-17 | 2018-06-26 | Facebook, Inc. | Touch-based gesture recognition and application navigation |
US20160018981A1 (en) * | 2014-07-17 | 2016-01-21 | Facebook, Inc. | Touch-Based Gesture Recognition and Application Navigation |
US10324619B2 (en) | 2014-07-17 | 2019-06-18 | Facebook, Inc. | Touch-based gesture recognition and application navigation |
US9430142B2 (en) * | 2014-07-17 | 2016-08-30 | Facebook, Inc. | Touch-based gesture recognition and application navigation |
WO2016057997A1 (en) * | 2014-10-10 | 2016-04-14 | Pantomime Corporation | Support based 3d navigation |
FR3028330A1 (en) * | 2014-11-07 | 2016-05-13 | Thales Sa | METHOD AND DEVICE FOR CHANGING THE POINT OF VIEW OF A 3D MAP OR IMAGE OF A 3D PHYSICAL OBJECT BY RECOGNIZING A GESTURE ON A TOUCH SCREEN |
WO2016071171A1 (en) * | 2014-11-07 | 2016-05-12 | Thales | Method and device for changing a view point of a three-dimensional map (3d) or of an image of an observed three-dimensional physical object (3d) by recognising a gesture on a multi-point touch screen |
CN104965653A (en) * | 2015-06-15 | 2015-10-07 | 联想(北京)有限公司 | Control method and electronic equipment |
US10884576B2 (en) | 2015-06-16 | 2021-01-05 | Nokia Technologies Oy | Mediated reality |
US10127722B2 (en) | 2015-06-30 | 2018-11-13 | Matterport, Inc. | Mobile capture visualization incorporating three-dimensional and two-dimensional imagery |
US9679413B2 (en) | 2015-08-13 | 2017-06-13 | Google Inc. | Systems and methods to transition between viewpoints in a three-dimensional environment |
US11699266B2 (en) * | 2015-09-02 | 2023-07-11 | Interdigital Ce Patent Holdings, Sas | Method, apparatus and system for facilitating navigation in an extended scene |
US20180182168A1 (en) * | 2015-09-02 | 2018-06-28 | Thomson Licensing | Method, apparatus and system for facilitating navigation in an extended scene |
US9609307B1 (en) | 2015-09-17 | 2017-03-28 | Legend3D, Inc. | Method of converting 2D video to 3D video using machine learning |
CN105915877A (en) * | 2015-12-27 | 2016-08-31 | 乐视致新电子科技(天津)有限公司 | Free film watching method and device of three-dimensional video |
US20170269712A1 (en) * | 2016-03-16 | 2017-09-21 | Adtile Technologies Inc. | Immersive virtual experience using a mobile communication device |
US20170374276A1 (en) * | 2016-06-23 | 2017-12-28 | Intel Corporation | Controlling capturing of a multimedia stream with user physical responses |
US10712836B2 (en) * | 2016-10-04 | 2020-07-14 | Hewlett-Packard Development Company, L.P. | Three-dimensional input device |
US20190087020A1 (en) * | 2016-10-04 | 2019-03-21 | Hewlett-Packard Development Company, L.P. | Three-dimensional input device |
US11061557B2 (en) | 2016-12-22 | 2021-07-13 | ReScan, Inc. | Dynamic single touch point navigation |
EP3596587A4 (en) * | 2017-03-13 | 2021-01-27 | Rescan, Inc. | Navigation system |
WO2018227230A1 (en) * | 2017-06-16 | 2018-12-20 | Canon Kabushiki Kaisha | System and method of configuring a virtual camera |
US11172231B2 (en) | 2017-07-07 | 2021-11-09 | Canon Kabushiki Kaisha | Method, apparatus and system for encoding or decoding video data of precincts by using wavelet transform |
CN110944727A (en) * | 2017-09-19 | 2020-03-31 | 佳能株式会社 | System and method for controlling virtual camera |
AU2018336341B2 (en) * | 2017-09-19 | 2020-05-07 | Canon Kabushiki Kaisha | Control device, control method and program |
US11003350B2 (en) | 2017-09-19 | 2021-05-11 | Canon Kabushiki Kaisha | Control apparatus, control method, and storage medium |
JP2022505457A (en) * | 2019-01-30 | 2022-01-14 | Tencent Technology (Shenzhen) Company Limited | Method, apparatus, device, and program for constructing a building in a virtual environment |
US11675488B2 (en) | 2019-01-30 | 2023-06-13 | Tencent Technology (Shenzhen) Company Limited | Method and apparatus for constructing building in virtual environment, device, and storage medium |
JP7224459B2 | 2019-01-30 | 2023-02-17 | Tencent Technology (Shenzhen) Company Limited | Method, apparatus, equipment and program for constructing a building in a virtual environment |
US20220253198A1 (en) * | 2019-07-30 | 2022-08-11 | Sony Corporation | Image processing device, image processing method, and recording medium |
CN114144753A (en) * | 2019-07-30 | 2022-03-04 | Sony Group Corporation | Image processing apparatus, image processing method, and recording medium |
US11625037B2 (en) | 2020-02-13 | 2023-04-11 | Honeywell International Inc. | Methods and systems for searchlight control for aerial vehicles |
EP3865984A1 (en) * | 2020-02-13 | 2021-08-18 | Honeywell International Inc. | Methods and systems for searchlight control for aerial vehicles |
US12131357B2 (en) | 2024-01-25 | 2024-10-29 | Weple Ip Holdings Llc | Mobile device streaming media application |
US12131356B2 (en) | 2024-01-25 | 2024-10-29 | Weple Ip Holdings Llc | Mobile device streaming media application |
Also Published As
Publication number | Publication date |
---|---|
US20100045667A1 (en) | 2010-02-25 |
US9310992B2 (en) | 2016-04-12 |
CN103324386A (en) | 2013-09-25 |
US20190196691A1 (en) | 2019-06-27 |
KR20110049873A (en) | 2011-05-12 |
AU2009282724B2 (en) | 2014-12-04 |
EP2327010A2 (en) | 2011-06-01 |
KR101665034B1 (en) | 2016-10-24 |
CN111522493A (en) | 2020-08-11 |
CN102187309A (en) | 2011-09-14 |
US11054964B2 (en) | 2021-07-06 |
CA2734987A1 (en) | 2010-02-25 |
US12032802B2 (en) | 2024-07-09 |
US20190205009A1 (en) | 2019-07-04 |
US10222931B2 (en) | 2019-03-05 |
US20100053219A1 (en) | 2010-03-04 |
US20220100350A1 (en) | 2022-03-31 |
US20160224204A1 (en) | 2016-08-04 |
JP2012501016A (en) | 2012-01-12 |
US10942618B2 (en) | 2021-03-09 |
AU2009282724A1 (en) | 2010-02-25 |
US8847992B2 (en) | 2014-09-30 |
WO2010022386A3 (en) | 2010-11-04 |
WO2010022386A2 (en) | 2010-02-25 |
US20100045703A1 (en) | 2010-02-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12032802B2 (en) | | Panning in a three dimensional environment on a mobile device |
US20150040073A1 (en) | | Zoom, Rotate, and Translate or Pan In A Single Gesture |
US11989826B2 (en) | | Generating a three-dimensional model using a portable electronic device recording |
US8253649B2 (en) | | Spatially correlated rendering of three-dimensional content on display components having arbitrary positions |
US8994644B2 (en) | | Viewing images with tilt control on a hand-held device |
US11922588B2 (en) | | Cooperative augmented reality map interface |
EP3616035A1 (en) | | Augmented reality interface for interacting with displayed maps |
US20100188397A1 (en) | | Three dimensional navigation using deterministic movement of an electronic device |
US20150169119A1 (en) | | Major-Axis Pinch Navigation In A Three-Dimensional Environment On A Mobile Device |
KR20110104096A (en) | | User interface for mobile devices |
WO2012007745A2 (en) | | User interactions |
Kokaji et al. | | User Interface Input by Device Movement |
Joshi et al. | | Looking At You |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: GOOGLE INC., CALIFORNIA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: KORNMANN, DAVID; BIRCH, PETER; REEL/FRAME: 023490/0299; Effective date: 20091030 |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION |
| AS | Assignment | Owner name: GOOGLE LLC, CALIFORNIA; Free format text: CHANGE OF NAME; ASSIGNOR: GOOGLE INC.; REEL/FRAME: 044142/0357; Effective date: 20170929 |