CA2281229A1 - Method and system for navigating a virtual environment - Google Patents
- Publication number
- CA2281229A1 (application number CA002281229A)
- Authority
- CA
- Canada
- Prior art keywords
- scene
- user
- focal point
- processing unit
- virtual environment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
A method and system for navigating a three-dimensional virtual environment is provided. The present invention is particularly suited to three-dimensional graphs representing computer programs, where nodes represent objects or other programming structures, and arcs between the nodes represent function calls or other types of relationships between the structures. The collective nodes and arcs represent a specialized type of virtual environment.
At least a portion of the virtual environment is displayed on a monitor, the virtual environment portion being represented by a frustum, the frustum having a narrow portion at the front of the monitor and diverging towards the back of the monitor to give the perception of depth to a user. The frustum remains stationary, and has a focal point, usually at the centre of the monitor screen. When a user selects an object displayed on the monitor, the scene on the monitor is scaled by a predetermined amount. In addition, the scene is moved such that the selected object is translated towards the focal point. The scaled and moved scene is then displayed on the monitor. The steps of scaling and moving are repeated until the desired navigation is achieved. Rates of scaling and translating are preferably chosen so that the animation of the navigation is smooth. By making the navigation rapid, a user can be provided with both focus and context when viewing a three-dimensional graph. Other navigational features can be included, such as automatic selection of the entity, or rotation of the virtual environment about the focal point.
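The scale-and-translate navigation described in the abstract can be sketched in code. The following is an illustrative reconstruction, not the patented implementation; the names (`Vec3`, `navigate_step`) and parameter values are hypothetical, chosen only to demonstrate the technique of scaling the scene about a fixed focal point while shifting it so the selected entity converges on that point.

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def navigate_step(points, selected, focal, scale_factor, step_fraction):
    """One animation frame: scale every point about the focal point,
    then translate the whole scene so the selected entity advances a
    fraction of its remaining distance towards the focal point."""
    # Scale the scene about the (stationary) focal point.
    scaled = [Vec3(focal.x + (p.x - focal.x) * scale_factor,
                   focal.y + (p.y - focal.y) * scale_factor,
                   focal.z + (p.z - focal.z) * scale_factor)
              for p in points]
    # Shift the entire scene along the line from the selected entity
    # to the focal point, preserving relative positions (context).
    s = scaled[selected]
    dx = (focal.x - s.x) * step_fraction
    dy = (focal.y - s.y) * step_fraction
    dz = (focal.z - s.z) * step_fraction
    return [Vec3(p.x + dx, p.y + dy, p.z + dz) for p in scaled]
```

Repeating the step each frame yields the smooth zoom-and-centre animation the abstract describes: the scene grows about the focal point while the selected entity is drawn towards it, so the user keeps both focus and context.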
Claims (15)
1. A method for navigating a three-dimensional virtual environment containing a plurality of nodes interconnected by at least one arc, said nodes representing entities in an information structure, said at least one arc representing a relationship between said entities, at least a portion of said virtual environment being displayed in a scene, said method comprising the steps of:
receiving a selection of an entity within said virtual environment;
scaling said scene about a focal point;
moving said scene in a direction such that said entity is translated towards said focal point; and displaying said scene.
2. The method according to claim 1 further comprising the step of repeating said scaling and moving steps until a desired level of navigation has been achieved.
3. The method according to claim 2 further comprising the step of rotating said scene about said focal point.
4. The method according to claim 1 wherein said scaling is an increase based on a multiple determined by a predetermined rate-of-scaling and a given frame-rate.
5. The method according to claim 4 wherein said predetermined rate-of-scaling is about one-hundred-and-sixty-five percent per second.
6. The method according to claim 4 wherein said predetermined rate-of-scaling is from about fifty percent per second to about four-hundred percent per second.
7. The method according to claim 1 wherein said translation is along a line between said selected entity and said focal point.
8. The method according to claim 2 wherein said translation occurs at a rate-of-deceleration such that said selected entity reaches the focal point in about one second.
9. The method according to claim 2 wherein said translation occurs at a rate-of-deceleration such that said selected entity reaches the focal point in a range of from about 0.25 seconds to about 4.0 seconds.
10. The method according to claim 1 wherein said selected entity is displayed in said scene.
11. The method according to claim 1 wherein said selected entity is not visible in said scene and said step of receiving said selection is based on an operation that determines said selected entity has a relationship with a displayed entity of interest to a user.
12. A system for navigating a three-dimensional virtual environment containing one or more entities comprising:
a processing unit having a microprocessor, random access memory, a video output card and a rendering engine, said processing unit being operable to store said three-dimensional virtual environment;
a user-output device connected to said processing unit, said processing unit being operable to model said portion as a frustum and present said frustum on said user-output device, said user-output device being operable to present said frustum as a viewable scene to a user, said frustum having a focal point;
a user-input device connected to said processing unit, said user-input device being operable to allow said user to select an entity presented in said scene;
said processing unit being operable to perform at least one scaling operation scaling said scene about said focal point and to perform at least one movement operation that shifts said scene in a direction such that said selected entity is translated towards said focal point; and said processing unit being further operable to present said scaled and moved scene to said user-output device.
13. The system according to claim 12 wherein said user-input device is a mouse and said processing unit is operable to display a cursor on said user-output device in response to movements of said mouse; said processing unit being further operable to select said entity via placement of said cursor over said entity and depressing a button on said mouse.
14. The system according to claim 12 wherein said processing unit continues to scale and move said scene until said selected entity is translated to said focal point.
15. The system according to claim 12 further comprising a second user-output device connected to said processing unit, said processing unit being further operable to present said scene as a stereoscopic image on said user-output devices.
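The rate parameters recited in claims 4 through 9 can be sketched numerically. The sketch below assumes that "165 percent per second" means a 1.65x scale multiplier per second (compounded across frames), and models the claimed rate-of-deceleration as a quadratic ease-out that brings the selected entity to rest at the focal point after the stated duration. Both function names are illustrative, not from the patent.

```python
def per_frame_scale(rate_per_second, frames_per_second):
    """Per-frame scale multiplier such that applying it once per frame
    compounds to the per-second rate-of-scaling (claim 4), e.g. a 1.65x
    rate per claim 5, at a given frame-rate."""
    return rate_per_second ** (1.0 / frames_per_second)

def remaining_fraction(t, total_time):
    """Fraction of the selected entity's original distance to the focal
    point remaining at time t, under a quadratic ease-out: speed falls
    linearly to zero, so the entity decelerates and arrives at exactly
    total_time (about one second per claim 8; 0.25 to 4.0 seconds per
    claim 9)."""
    u = min(max(t / total_time, 0.0), 1.0)
    return (1.0 - u) ** 2
```

For example, at 30 frames per second a 165-percent-per-second rate would give a per-frame multiplier of about 1.0168, and the ease-out halves the remaining distance by roughly 0.3 seconds into a one-second translation.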
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA002281229A CA2281229A1 (en) | 1999-08-31 | 1999-08-31 | Method and system for navigating a virtual environment |
Publications (1)
Publication Number | Publication Date |
---|---|
CA2281229A1 (en) | 2001-02-28 |
Family
ID=4164052
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002281229A (abandoned) | Method and system for navigating a virtual environment | 1999-08-31 | 1999-08-31 |
Country Status (1)
Country | Link |
---|---|
CA (1) | CA2281229A1 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2017709A1 (en) * | 2006-05-03 | 2009-01-21 | Sony Computer Entertainment Inc. | Multimedia reproducing device and background image display method |
EP2017709A4 (en) * | 2006-05-03 | 2010-06-16 | Sony Computer Entertainment Inc | Multimedia reproducing device and background image display method |
CN102012906A (en) * | 2010-10-27 | 2011-04-13 | 南京聚社数字科技有限公司 | Three-dimensional scene management platform based on SaaS architecture and editing and browsing method |
CN102012906B (en) * | 2010-10-27 | 2012-01-25 | 南京聚社数字科技有限公司 | Three-dimensional scene management platform based on SaaS architecture and editing and browsing method |
CN113225387A (en) * | 2021-04-22 | 2021-08-06 | 国网山东省电力公司淄博供电公司 | Visual monitoring method and system for machine room |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7310619B2 (en) | Detail-in-context lenses for interacting with objects in digital image presentations | |
US9304651B2 (en) | Method of real-time incremental zooming | |
US8478026B2 (en) | Method and system for transparency adjustment and occlusion resolution for urban landscape visualization | |
JP4434541B2 (en) | Navigation method in composition of 3D image by operation of 3D image “Navigation 3D” | |
US7486302B2 (en) | Fisheye lens graphical user interfaces | |
US8350872B2 (en) | Graphical user interfaces and occlusion prevention for fisheye lenses with line segment foci | |
US20040085335A1 (en) | System and method of integrated spatial and temporal navigation | |
Pastoor et al. | An experimental multimedia system allowing 3-D visualization and eye-controlled interaction without user-worn devices | |
US20090172587A1 (en) | Dynamic detail-in-context user interface for application access and content access on electronic displays | |
KR20010104376A (en) | Video sample rate conversion to achieve 3-D effects | |
GB2380382B (en) | Method for navigating in a multi-scale three-dimensional scene | |
Marton et al. | Natural exploration of 3D massive models on large-scale light field displays using the FOX proximal navigation technique | |
US6828962B1 (en) | Method and system for altering object views in three dimensions | |
EP1821258B1 (en) | Method and apparatus for automated dynamics of three-dimensional graphics scenes for enhanced 3D visualization | |
CN115175004A (en) | Method and device for video playing, wearable device and electronic device | |
Jáuregui et al. | Design and evaluation of 3D cursors and motion parallax for the exploration of desktop virtual environments | |
CA2281229A1 (en) | Method and system for navigating a virtual environment | |
CN114327174A (en) | Virtual reality scene display method and cursor three-dimensional display method and device | |
CN113332712B (en) | Game scene picture moving method and device and electronic equipment | |
Bowman et al. | Effortless 3D Selection through Progressive Refinement. | |
CN115564932A (en) | Data processing method and data processing device | |
CN116466854A (en) | Virtual decoration display method, device, equipment and storage medium | |
Yu et al. | Interactive Object Manipulation Strategies for Browsing Object Centered Image Set |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
EEER | Examination request | ||
FZDE | Dead |
Effective date: 20140416 |