WO2001016694A9 - Automatic conversion between sets of text urls and cohesive scenes of visual urls - Google Patents

Automatic conversion between sets of text urls and cohesive scenes of visual urls

Info

Publication number
WO2001016694A9
WO2001016694A9 (PCT/US2000/024067)
Authority
WO
WIPO (PCT)
Prior art keywords
visually
linked objects
file references
user
cohesive
Prior art date
Application number
PCT/US2000/024067
Other languages
French (fr)
Other versions
WO2001016694A1 (en)
Inventor
Brian Backus
Nathaniel Kushman
Original Assignee
Ububu Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ububu Inc filed Critical Ububu Inc
Priority to AU73419/00A (AU7341900A)
Publication of WO2001016694A1
Publication of WO2001016694A9

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/954 Navigation, e.g. using categorised browsing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval of video data; Database structures therefor; File system structures therefor
    • G06F16/74 Browsing; Visualisation therefor
    • G06F16/748 Hypervideo
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/955 Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
    • G06F16/9558 Details of hyperlinks; Management of linked annotations

Abstract

Methods and systems for automatic conversion between textual file references and visual file references are described. According to one aspect of the present invention, textual file references are automatically converted into visual file references by providing a conversion interface enabling a user to identify a set of textual file references (504), creating a set of visually-linked objects (460) corresponding to the set of textual file references identified by the user (508), and integrating the set of visually-linked objects into a cohesive scene (510, 800).
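The abstract's pipeline (identify a set of textual file references, create corresponding visually-linked objects, integrate them into a cohesive scene) can be sketched in Python. Every name below is an illustrative assumption; the publication does not disclose source code.

```python
# Illustrative sketch of the abstract's conversion pipeline.
# All names and the scene layout are assumptions for illustration.

def convert_to_scene(text_urls, default_icon="sphere"):
    """Map each textual file reference (URL) to a visually-linked
    object, then group the objects into one cohesive scene."""
    objects = [
        {"url": url, "icon": default_icon, "label": url.split("/")[-1] or url}
        for url in text_urls
    ]
    # A "cohesive scene" here is simply the object set plus a shared
    # visual metaphor (e.g. planets orbiting a sun).
    return {"metaphor": "solar_system", "objects": objects}

scene = convert_to_scene([
    "http://example.com/news",
    "http://example.com/news/today.html",
])
```

A renderer would then place each object in the scene according to the chosen metaphor; clicking an object follows its stored URL.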

Claims

CLAIMS

What is claimed is:
1. A method for automatically converting textual file references into visual file references, the method comprising: creating a set of visually-linked objects corresponding to a set of textual file references identified by a user using a conversion interface; and integrating the set of visually-linked objects into a cohesive scene.
2. The method of claim 1 wherein the set of visually-linked objects is integrated using a real world visual metaphor as a cohesive scene.
3. The method of claim 1 wherein the set of visually-linked objects is integrated using a non-real world visual metaphor as a cohesive scene.
4. The method of claim 1 wherein the cohesive scene is two-dimensional.
5. The method of claim 1 wherein the cohesive scene is three-dimensional.
6. The method of claim 1 wherein the conversion interface enables the user to enter the set of textual file references.
7. The method of claim 1 wherein the conversion interface enables the user to specify a file containing a plurality of textual file references.
8. The method of claim 7 wherein the file is a URL of a source web page and the plurality of textual file references are recursively URLs of web pages referred to by the source web page.
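Claim 8's recursive harvesting of URLs from a source web page can be sketched as follows. The fetch step is faked with an in-memory dict of pages so the example runs offline; a real implementation would download each page over HTTP. All names here are hypothetical.

```python
# Sketch of claim 8: starting from a source page URL, collect the URLs
# of pages it links to, recursively, avoiding revisits.
from html.parser import HTMLParser

PAGES = {  # hypothetical mini-web standing in for real fetches
    "http://a.test/": '<a href="http://a.test/x">x</a><a href="http://b.test/">b</a>',
    "http://a.test/x": '<a href="http://a.test/">home</a>',
    "http://b.test/": "",
}

class LinkParser(HTMLParser):
    """Collects href values from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [v for k, v in attrs if k == "href"]

def harvest(url, seen=None):
    seen = seen if seen is not None else set()
    if url in seen or url not in PAGES:
        return seen
    seen.add(url)
    parser = LinkParser()
    parser.feed(PAGES[url])
    for link in parser.links:
        harvest(link, seen)       # recurse into referred pages
    return seen

urls = harvest("http://a.test/")
```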
9. The method of claim 7 wherein the conversion interface further enables the user to select the set of textual file references from the plurality of textual file references.
10. The method of claim 2 wherein the real world visual metaphor is represented as planets, solar systems, galaxies, clusters, universes, land masses, cities, buildings, floors, and rooms.
11. The method of claim 1 wherein creating the set of visually-linked objects is performed in response to user selection of graphical representations for the set of visually-linked objects.
12. The method of claim 1 wherein the set of visually-linked objects is created using a default set of graphical representations.
13. The method of claim 1 wherein creating the set of visually-linked objects includes selecting graphical representations for the set of visually-linked objects using user personal information.
14. The method of claim 1 wherein creating the set of visually-linked objects includes selecting graphical representations for the set of visually-linked objects using visual association with either content referred to by the textual file references or the textual file references.
15. The method of claim 1 further comprising:
preserving a hierarchical structure contained in the set of textual file references when creating the set of corresponding visually-linked objects; and representing the set of visually-linked objects as visual hierarchies using the preserved hierarchical structure.
16. The method of claim 15 wherein the visual hierarchies are represented as planets, solar systems, galaxies, clusters, universes, land masses, cities, buildings, floors, and rooms.
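Claims 15-16 describe preserving the hierarchical structure implicit in the textual references and rendering it as nested visual levels. A minimal sketch, assuming the hierarchy is derived from the URL's host and path segments and that the level names are drawn from claim 16's metaphors:

```python
# Sketch of claims 15-16: derive a hierarchy from URL host/path segments
# and map each level onto a nested visual metaphor. The level names and
# the path-based grouping are assumptions, not the patent's method.
from urllib.parse import urlparse

LEVELS = ["galaxy", "solar_system", "planet"]

def build_hierarchy(urls):
    root = {}
    for url in urls:
        parts = urlparse(url)
        segments = [parts.netloc] + [s for s in parts.path.split("/") if s]
        node = root
        # Deeper path segments than available levels are truncated.
        for depth, seg in enumerate(segments[:len(LEVELS)]):
            node = node.setdefault((LEVELS[depth], seg), {})
    return root

tree = build_hierarchy([
    "http://example.com/docs/intro.html",
    "http://example.com/docs/api.html",
])
```

Here the host becomes a "galaxy", the first path segment a "solar system", and each file a "planet", so the URL hierarchy survives as nesting in the scene.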
17. The method of claim 1 further comprising adding a visually-linked object to the cohesive scene using drag and drop graphical tools.
18. The method of claim 1 further comprising modifying attributes pertaining to the visually-linked objects within the cohesive scene to reflect user perspective regarding the visually-linked objects.
19. A method for automatically converting visually-linked objects into textual file references, the method comprising: receiving a request to convert a set of visually-linked objects into a set of textual file references; and creating the set of textual file references corresponding to visually-linked objects within the cohesive scene.
20. The method of claim 19 further comprising:
preserving a hierarchical structure contained within the cohesive scene of visually-linked objects; and incorporating the preserved hierarchical structure into the set of textual file references.
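Claims 19-20 cover the reverse conversion: walking a cohesive scene and emitting textual file references while keeping the scene's hierarchy. A sketch, assuming the scene is stored as a nested mapping from each object's URL to its child objects, with indentation standing in for the preserved hierarchy (much like a bookmarks file):

```python
# Sketch of claims 19-20: convert a cohesive scene back into textual
# file references, preserving hierarchy as indentation. The nested-dict
# scene representation is an assumption for illustration.

SCENE = {  # hypothetical scene: url -> visually-linked child objects
    "http://example.com/": {
        "http://example.com/docs/": {
            "http://example.com/docs/api.html": {},
        },
    },
}

def scene_to_text(node, depth=0):
    lines = []
    for url, children in node.items():
        lines.append("  " * depth + url)   # indentation encodes hierarchy
        lines.extend(scene_to_text(children, depth + 1))
    return lines

refs = scene_to_text(SCENE)
```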
21. A method for automatically converting between cohesive scenes of visually-linked objects, the method comprising: extracting a set of visually-linked objects identified by a user using a conversion interface from a first cohesive scene of visually-linked objects; and converting the extracted set of visually-linked objects into a set of visually-linked objects of a second cohesive scene.
22. The method of claim 21 further comprising preserving a hierarchical structure contained within the set of visually-linked objects of the first cohesive scene when converting the extracted set of visually-linked objects into the second cohesive scene of visually-linked objects.
23. A system for automatically converting textual file references into visual file references, the system comprising: means for creating a set of visually-linked objects identified by a user using a conversion interface corresponding to the set of textual file references identified by the user; and means for integrating the set of visually-linked objects into a cohesive scene.
24. A system for automatically converting visual file references into textual file references, the system comprising: means for receiving a request to convert a set of visually-linked objects into a set of textual file references; and means for creating the set of textual file references corresponding to visually-linked objects within the cohesive scene.
25. A system for automatically converting between cohesive scenes of visually-linked objects, the system comprising: means for extracting a set of visually-linked objects identified by a user using a conversion interface from a first cohesive scene of visually-linked objects; and means for converting the extracted set of visually-linked objects into a set of visually-linked objects of a second cohesive scene.
26. A machine-readable medium that provides instructions, which when executed by a processor, cause the processor to perform operations comprising:
creating a set of visually-linked objects identified by a user using a conversion interface corresponding to the set of textual file references identified by the user; and integrating the set of visually-linked objects into a cohesive scene.
27. A machine-readable medium that provides instructions, which when executed by a processor, cause the processor to perform operations comprising: receiving a request to convert a set of visually-linked objects into a set of textual file references; and creating the set of textual file references corresponding to visually-linked objects within the cohesive scene.
28. A machine-readable medium that provides instructions, which when executed by a processor, cause the processor to perform operations comprising:
extracting a set of visually-linked objects identified by a user using a conversion interface from a first cohesive scene of visually-linked objects; and converting the extracted set of visually-linked objects into a set of visually-linked objects of a second cohesive scene.
29. An apparatus for automatically converting textual file references into visual file references, the apparatus comprising: a controller to create a set of visually-linked objects identified by a user using a conversion interface corresponding to the set of textual file references identified by the user; and a scene renderer to integrate the set of visually-linked objects into a cohesive scene.
30. The apparatus of claim 29 wherein the set of visually-linked objects is integrated using a real world visual metaphor as a cohesive scene.
31. The apparatus of claim 29 wherein the set of visually-linked objects is integrated using a non-real world visual metaphor as a cohesive scene.
32. The apparatus of claim 29 wherein the cohesive scene is two-dimensional.
33. The apparatus of claim 29 wherein the cohesive scene is three-dimensional.
34. The apparatus of claim 29 wherein the graphical user interface enables the user to enter the set of textual file references.
35. The apparatus of claim 29 wherein the graphical user interface enables the user to specify a file containing a plurality of textual file references.
36. The apparatus of claim 35 wherein the file is a URL of a source web page and the plurality of textual file references are recursively URLs of web pages referred to by the source web page.
37. The apparatus of claim 35 wherein the graphical user interface further enables the user to select the set of textual file references from the plurality of textual file references.
38. The apparatus of claim 30 wherein the real world visual metaphor is represented as planets, solar systems, galaxies, clusters, universes, land masses, cities, buildings, floors, and rooms.
39. The apparatus of claim 29 further comprising a user interface to enable the user to select graphical representations for the set of visually-linked objects.
40. The apparatus of claim 29 wherein the set of visually-linked objects is created using a default set of graphical representations.
41. The apparatus of claim 29 wherein the controller is capable of selecting graphical representations for the set of visually-linked objects using user personal information.
42. The apparatus of claim 29 wherein the controller is capable of selecting graphical representations for the set of visually-linked objects using visual association with content referred to by the textual file references.
43. The apparatus of claim 29 wherein the controller is capable of preserving a hierarchical structure contained in the set of textual file references when creating the set of corresponding visually-linked objects, and the scene renderer is capable of representing the set of visually-linked objects as visual hierarchies using the preserved hierarchical structure.
44. The apparatus of claim 43 wherein the visual hierarchies are represented as planets, solar systems, galaxies, clusters, universes, land masses, cities, buildings, floors, and rooms.
45. The apparatus of claim 29 further comprising drag and drop graphical tools to add a visually-linked object to the cohesive scene.
46. The apparatus of claim 29 further comprising graphical tools to modify attributes pertaining to the visually-linked objects within the cohesive scene to reflect user perspective regarding the visually-linked objects.
47. An apparatus for automatically converting visually-linked objects into textual file references, the apparatus comprising: a resource manager to receive a request to convert a cohesive scene of visually-linked objects into a set of textual file references; and a controller to create the set of textual file references corresponding to visually-linked objects within the cohesive scene.
48. The apparatus of claim 47 wherein the controller is capable of preserving a hierarchical structure contained within the cohesive scene of visually-linked objects, and incorporating the preserved hierarchical structure into the set of textual file references.
49. An apparatus for automatically converting between cohesive scenes of visually-linked objects, the apparatus comprising:
a controller to extract a set of visually-linked objects identified by a user using a conversion interface from a first cohesive scene of visually-linked objects; and a scene renderer to convert the extracted set of visually-linked objects into a set of visually-linked objects of a second cohesive scene.
50. The apparatus of claim 49 wherein the controller is capable of preserving a hierarchical structure contained within the set of visually-linked objects of the first cohesive scene when converting the extracted set of visually-linked objects into the second cohesive scene of visually-linked objects.
PCT/US2000/024067 1999-08-31 2000-08-31 Automatic conversion between sets of text urls and cohesive scenes of visual urls WO2001016694A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU73419/00A AU7341900A (en) 1999-08-31 2000-08-31 Automatic conversion between sets of text urls and cohesive scenes of visual urls

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
US15214199P 1999-08-31 1999-08-31
US15167299P 1999-08-31 1999-08-31
US60/151,672 1999-08-31
US60/152,141 1999-08-31
US54086000A 2000-03-31 2000-03-31
US54043300A 2000-03-31 2000-03-31
US09/540,433 2000-03-31
US09/540,860 2000-03-31
US65167100A 2000-08-30 2000-08-30
US09/651,671 2000-08-30

Publications (2)

Publication Number Publication Date
WO2001016694A1 WO2001016694A1 (en) 2001-03-08
WO2001016694A9 WO2001016694A9 (en) 2001-10-18

Family

ID=27538405

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/024067 WO2001016694A1 (en) 1999-08-31 2000-08-31 Automatic conversion between sets of text urls and cohesive scenes of visual urls

Country Status (2)

Country Link
AU (1) AU7341900A (en)
WO (1) WO2001016694A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8464302B1 (en) 1999-08-03 2013-06-11 Videoshare, Llc Method and system for sharing video with advertisements over a network
US20020056123A1 (en) 2000-03-09 2002-05-09 Gad Liwerant Sharing a streaming video
US9304985B1 (en) 2012-02-03 2016-04-05 Google Inc. Promoting content
US9378191B1 (en) 2012-02-03 2016-06-28 Google Inc. Promoting content
US9471551B1 (en) 2012-02-03 2016-10-18 Google Inc. Promoting content
US10824313B2 (en) 2013-04-04 2020-11-03 P.J. Factory Co., Ltd. Method and device for creating and editing object-inserted images
KR101501028B1 (en) * 2013-04-04 2015-03-12 박정환 Method and Apparatus for Generating and Editing a Detailed Image

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5528735A (en) * 1993-03-23 1996-06-18 Silicon Graphics Inc. Method and apparatus for displaying data within a three-dimensional information landscape
US5877775A (en) * 1996-08-08 1999-03-02 Theisen; Karen E. Method of generating a 3-D representation of a hierarchical data structure
US5835094A (en) * 1996-12-31 1998-11-10 Compaq Computer Corporation Three-dimensional computer environment
US6094196A (en) * 1997-07-03 2000-07-25 International Business Machines Corporation Interaction spheres of three-dimensional objects in three-dimensional workspace displays
US6069630A (en) * 1997-08-22 2000-05-30 International Business Machines Corporation Data processing system and method for creating a link map

Also Published As

Publication number Publication date
WO2001016694A1 (en) 2001-03-08
AU7341900A (en) 2001-03-26

Similar Documents

Publication Publication Date Title
AU2017210597B2 (en) System and method for the online editing of pdf documents
US6363404B1 (en) Three-dimensional models with markup documents as texture
CN102262528B (en) The method of instant communication client and dragging on embedded webpage thereof
Huang et al. GeoVR: a web-based tool for virtual reality presentation from 2D GIS data
US6789263B1 (en) Data conversion method and apparatus
WO2001060142A2 (en) Method and apparatus for a three-dimensional web-navigator
EP0982669A3 (en) Property based context portals
EP0982671A3 (en) Dynamic object properties
CA2218593A1 (en) Method and system for automatic persistence of controls in a windowing environment
WO2001016694A9 (en) Automatic conversion between sets of text urls and cohesive scenes of visual urls
CN114491775A (en) Method for stylized migration of three-dimensional architectural model of metauniverse
Trapp et al. A prototype for a WWW-based visualization service
Crossley Three-dimensional internet developments
Tsukamoto Image-based pseudo-3D visualization of real space on WWW
CN116107972A (en) Lightweight ocean scalar field visualization method
WO2001016683A1 (en) Using the placement of visual urls to communicate and assess their relationship to each other
Ahn et al. Webizing mobile AR contents
Andrews et al. Hooking up 3-space: three-dimensional models as fully-fledged hypermedia documents
Hudson-Smith et al. Public domain GIS, mapping & imaging using web-based services
KR100268631B1 (en) Data conversion method and apparatus
Ressler Approaches using virtual environments with mosaic
Shen Design of 3D Exhibition Hall System of Art Museum Based On Virtual Reality
Kalkofen et al. Adaptive visualization in outdoor AR displays
Benford et al. The populated web: Browsing, searching and inhabiting the WWW using collaborative virtual environments
Gaborit et al. A collaborative virtual environment for public consultation in the urban planning process

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
AK Designated states

Kind code of ref document: C2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: C2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

COP Corrected version of pamphlet

Free format text: PAGES 1/18-18/18, DRAWINGS, REPLACED BY NEW PAGES 1/18-18/18; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: JP