WO2008073135A2 - Virtual and real navigation systems - Google Patents

Virtual and real navigation systems

Info

Publication number
WO2008073135A2
WO2008073135A2 (PCT/US2007/008189)
Authority
WO
WIPO (PCT)
Prior art keywords
sources
network
users
headset
enabled
Prior art date
Application number
PCT/US2007/008189
Other languages
French (fr)
Other versions
WO2008073135A3 (en)
Inventor
Arjuna I. Rajasingham
Original Assignee
Rajasingham Arjuna I
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US11/639,088 external-priority patent/US8251444B2/en
Priority claimed from PCT/US2006/048016 external-priority patent/WO2007075467A2/en
Priority claimed from US11/730,161 external-priority patent/US9063633B2/en
Application filed by Rajasingham Arjuna I filed Critical Rajasingham Arjuna I
Priority to EP07867048.6A priority Critical patent/EP1999741A4/en
Priority to JP2009503068A priority patent/JP2009538461A/en
Publication of WO2008073135A2 publication Critical patent/WO2008073135A2/en
Publication of WO2008073135A3 publication Critical patent/WO2008073135A3/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60NSEATS SPECIALLY ADAPTED FOR VEHICLES; VEHICLE PASSENGER ACCOMMODATION NOT OTHERWISE PROVIDED FOR
    • B60N2/00Seats specially adapted for vehicles; Arrangement or mounting of seats in vehicles
    • B60N2/24Seats specially adapted for vehicles; Arrangement or mounting of seats in vehicles for particular purposes or particular vehicles
    • B60N2/42Seats specially adapted for vehicles; Arrangement or mounting of seats in vehicles for particular purposes or particular vehicles the seat constructed to protect the occupant from the effect of abnormal g-forces, e.g. crash or safety seats
    • B60N2/427Seats or parts thereof displaced during a crash
    • B60N2/42727Seats or parts thereof displaced during a crash involving substantially rigid displacement
    • B60N2/42736Seats or parts thereof displaced during a crash involving substantially rigid displacement of the whole seat
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/131Protocols for games, networked simulations or virtual reality
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8082Virtual reality

Definitions

  • the Network in the preferred embodiment will comprise a distributed network of wireless bandwidth providers (Providers) that, for a fee from the Network administrator, will provide bandwidth locally to User Members that are a part of the Network. User Members will pay for bandwidth. Bandwidth can be provided at hot spots in local regions within a city, where many such local regions provide "cellular" coverage for neighborhoods and even cities.
  • the Providers may be required to have installed in their hubs the computing infrastructure to stitch together the available Sources in the local neighborhood, and the intelligence to accept and hand off dialog with Sources/Users as they move through local neighborhoods.
  • Providers in this preferred embodiment will initially provide bandwidth for a fixed fee set by the Active Network Administrator, and then, after the initial probation period, set a rate based on the market. Local providers will compete to provide bandwidth. Users will have options for selecting the lowest-cost or highest-available-bandwidth (and quality) options when selecting Providers.
  • bandwidth selection may be a background process that is set by the User. Users may select from multiple Providers (which may be other Users or Sources) based on possible fees such Providers may charge through a Network Administrator. This arrangement forms a new business model for distributed bandwidth.
  • An additional embodiment incorporates stitching algorithms for interpolation of the fields available in the Network as source nodes of other users, thereby giving the user a continuous or near-continuous range of viewpoints and fields even between the available sources.
  • the Active Network can recreate the fixed 3D local landscape and use that video information for interpolation for the navigation of users.
  • such stitching together of local landscapes and cityscapes can allow User Members to physically navigate from available Source Users in one local neighborhood to another using "Real Maps" created by this interpolation of the fixed landscape. While such interpolation will not be able to give perfect views, the user has the choice of using the available Source nodes, an interpolation, or a combination for zooming out to encompass a wide panorama.
  • such interpolation processing may be enabled by distributed processing on the Active Network, given the computing resources needed for each of the neighborhoods.
  • the Network will need intelligence to locate the physical location of the sources. This can be done in some embodiments with GPS, and in others simply by using triangulation from the transmit points of the sources to the recent hubs or host devices that the device has latched onto recently.
  • the invention enables User members to join parties and gatherings of Source Members and hop from the "eyes" of one Source Member to another.
  • One of the unique characteristics of the video Network is that the locations of the Sources are relevant for content, unlike conventional media such as mobile phones and Network-based phone services. Moreover, this invention enables navigation in that video field, which is a new and unique characteristic of this invention.
  • the virtual navigation system may be used to navigate in 2D visual or in text based lexicographic or other ordering of Sources.
  • the Stimuli from the Sources may be Audio or Visual or both, and may be mono (audio), 2D, or 3D/Surround for audio and visual.
  • Some embodiments may utilize either live or programmed 2D video data to synthesize 3D fields by using time lagged images to create parallax information from objects identified with object recognition algorithms.
  • a simple alternative embodiment of the User/Source interface for 2D video is a computer or a mobile/cell phone (with cameras for Source functionality).
  • the headset in an alternative embodiment may be similar to conventional eye glasses supported on the nose bridge and ears.
  • a fixed wired interface adapter may be used to interface between the wireless Universal Interface and conventional video game consoles and computers with only conventional controller outputs/inputs.
  • a split mobile unit is an alternative embodiment where the functionality of the device is split between a light headset and a connected wearable interface (wired or wireless).
  • the headset here has only interface functions.
  • the wearable computer has all data and computing functions.
  • the headset embodiment may have minimal componentry on the headset to lower weight and offload much of the processing to a wire connected or wireless connected wearable interface.
  • One possible embodiment would have a pair of cameras for gaze and eye tracking, a microphone and the processing required for gaze tracking outputs in parametric form along with one or more audio channels for voice commands and communication. There will also be earphones for one or both ears.
  • An alternative to the headset/interface is a fixed device with a 3D projection device and cameras to track the gaze and the eyes, along with a microphone. It is an alternative to the mobile version and may also use wired Internet connections for the user.
  • the rear of the headset may be used to contain heavy components to balance out the load. If the rear section is lighter than the front section, the headset may rest on the external occipital protuberance and allow articulation of the head at the occipital condyle without significant disturbance of the headset position.
  • the World Cameras may be wide angle or clusters to give a broader field of view for network members, along with network intelligence to navigate within the panoramic field data.
  • Satellite antennae can be attached to the sun shield/cap independently of or incorporated with the solar cells.
  • Visual displays - transparent option: to allow for real navigation in the native real space, the visual display may have the option to become transparent or semi-transparent.
  • Pan & Zoom on "World Cameras": pan and zoom capabilities in the World Camera will give, in a 1-1 arrangement between a User and a Source (usually for a fee), the ability to orient the camera to the desired field within the available Source hardware.
  • Audio/Video player data: the audio and video files may be stored on the device rather than on a wearable computer or remotely.
  • Organizer: may be on the interface device rather than on a wearable computer.
  • the headset may include a basic computer with related functionality for stand alone use.
  • CASE II: OCCUPANT SUPPORT IN A VEHICLE - DRAWING DESCRIPTION
  • a flat bed sleeper in an aircraft cabin: the arrangement shown in the figures illustrates a flat bed that provides safety during rapid deceleration of the aircraft in all positions that the occupant chooses, thereby allowing the occupant to sleep through takeoffs and landings.
  • The figures illustrate the different positions of the Sleeper.
  • Shock absorbers control the motion of the lateral movement of the sleeper under acceleration conditions, to lower the peak accelerations and to reorient the occupant and support the occupant better during the acceleration conditions.
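The triangulation of a Source's physical position from its transmit points to recently used hubs, mentioned earlier in this section, can be sketched as a small linear solve over the range equations. A minimal 2D sketch under the assumption of three hubs with known coordinates and estimated ranges (the function name and geometry are illustrative, not the patent's implementation):

```python
def trilaterate(hubs, dists):
    """Estimate a 2D position from three known hub positions and
    measured distances, by linearizing the range equations."""
    (x1, y1), (x2, y2), (x3, y3) = hubs
    r1, r2, r3 = dists
    # Subtracting the first range equation from the other two gives
    # two linear equations in the unknown position (x, y).
    a1, b1 = 2 * (x1 - x2), 2 * (y1 - y2)
    c1 = r2**2 - r1**2 - (x2**2 + y2**2) + (x1**2 + y1**2)
    a2, b2 = 2 * (x1 - x3), 2 * (y1 - y3)
    c2 = r3**2 - r1**2 - (x3**2 + y3**2) + (x1**2 + y1**2)
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # collinear hubs: the position is ambiguous
    # Cramer's rule for the 2x2 system.
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det
```

With hubs at (0, 0), (10, 0) and (0, 10) and ranges measured from the point (3, 4), the solve recovers (3, 4). A GPS-equipped Source would bypass this entirely, as the text notes.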

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Mechanical Engineering (AREA)
  • Transportation (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Processing Or Creating Images (AREA)
  • Seats For Vehicles (AREA)
  • Information Transfer Between Computers (AREA)
  • Navigation (AREA)

Abstract

Systems for Virtual Navigation in real and Virtual Spaces. Such navigation in virtual spaces may be applied particularly in 3D computer games and 3D programming of motion pictures. Such navigation in real spaces creates a new paradigm in communication and media, particularly with the unique 3D video interface devices disclosed in this invention. A new occupant support for vehicles is also presented.

Description

APPLICANT:
A.I.Rajasingham.
6024 Bradley Boulevard, Bethesda MD 20817, United States
Granrisstigen 2, 181-62 Lidingo, Stockholm, Sweden
Email: Indi@StanfordAlumni.org
TITLE: Virtual and Real Navigation Systems.
CROSS REFERENCE TO RELATED APPLICATIONS
US 60/787,444; PCT US 2006/048016; EP 02 076388.4; US 11/639,088. All of which are hereby incorporated herein by reference and to which priority is claimed in the applicable jurisdictions.
SUMMARY
The present inventions are systems for safe travel in virtual and real spaces - both virtually and physically. There are four such combinations. The First system provides a unique mechanism for the first three and the Second System provides an occupant support for the fourth of these. The virtual system in this invention includes a set of interface devices and a unique network that provides a new approach for virtual navigation of physical and virtual environments. The second related invention provides a safe occupant support in vehicles for real or physical travel (in real environments).
The invention for virtual navigation in virtual and real spaces will be considered first in this disclosure.
This virtual system comprises a set of interface devices and a unique network that provides a new approach for virtual navigation of physical and virtual environments.
There are two distinct applications for such virtual navigation: first, navigation in a virtual field; and second, navigation in a real field.
Navigation in a virtual field covers applications such as hands-free navigation in 3-D video games, 3-D computer environments with 3D needs as in CAD applications, and special 3D interfaces that can speed navigation. Such virtual 3D fields can also be multi-source 3D TV or motion picture programming, where the viewer can have some degrees of navigational freedom in the field of view. In some cases such multiple sources can be interpolated or "stitched" to form a continuum of possible points of view in the 3D field.
Navigation in a Real Field would be an application where the device provides 3D virtual navigation in a real 3D field, in real-time live programming at any location, with possibly multiple sources from a network of members. Such members may be Sources or Users. The Sources provide live programming and will have one or more "World Camera(s)" (and/or "World microphone sets" for mono, stereo or 3D surround audio fields) aimed in the direction of sight of the Source, to see (or hear) what these members see. Multiple World Cameras or Microphones can give a 3D virtual space rendition. The User members have interface devices to receive the Source programming and navigation devices to navigate the 2D or 3D field of the programming. Such navigation may be with mice or joysticks, or may use the Intelligent Eye (US 7,091,928) for such 3D navigation directly by using the eyes.
In such virtual navigation of real fields, an Active Network of members is formed in which Sources agree to share their programming with Users. The same member may be both a Source and a User. Arrangements among such members may be reciprocal or for a fee. While such a network is not necessarily active and may simply transfer information from a source node to a user node, an active Network with distributed processing of video and audio information will in most cases provide the distributed computing resources needed to operate this infrastructure.
The Occupant support for travel is a unique arrangement for reorienting the occupant in a vehicle during rapid acceleration or deceleration of the vehicle such that the occupant presents a larger surface area for supporting the related G-forces and concurrently allowing such reorientation to reduce the peak accelerations that the occupant is subjected to in such situations.
CASE I: VIRTUAL NAVIGATION IN REAL AND VIRTUAL SPACES
DRAWING DESCRIPTION
Figure 1 illustrates the block diagram for a headset/interface for the Virtual navigation system for Virtual and real spaces.
PREFERRED EMBODIMENT
THE HEADSET/INTERFACE OF THE USER/SOURCE
The preferred form of the interface is a universal interface for audio and 3D/2D video communications for the user. A basic form of the device will have a 3-D or stereo video source providing the native field, possibly with dual displays, and one or more cameras for tracking the gaze, eye blink patterns and other possible signaling with the eye appendages of the User, thereby permitting the user to navigate in the available 3D field. In addition there could be intelligence for interpreting the gaze directions and computing the 3D point of focus, and the headset may have the processing required for converting gaze tracking outputs to parametric form in a 3D field. The device may also have multiple microphones, both for voice commands of the user and for capturing the sound field of the user for programming (mono, stereo or surround fields with multiple appropriately placed microphones), and earphones with appropriate sound processing technology for recreating surround sound fields (ideally calibrated to the user's anatomy). The field of view may have icons in 3D that may be selected; one possible icon can be to return to the native field of the user. Communication of the headset will use one or more standard cellular or wired network protocols. Moreover, the headset unit may have one or more clients and protocols for interfacing with different kinds of hosts, and switching software to meet the communications requirements of the user.
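The intelligence for computing the 3D point of focus from the two gaze directions can be sketched as ray triangulation: the fixation point is taken as the midpoint of the shortest segment between the two gaze rays. A minimal sketch, assuming each eye contributes a position and a gaze direction (the function name and eye geometry are illustrative, not the patent's implementation):

```python
def point_of_focus(p1, d1, p2, d2, eps=1e-9):
    """Estimate the 3D fixation point as the midpoint of the shortest
    segment between two gaze rays (eye position p, gaze direction d)."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    w0 = tuple(a - b for a, b in zip(p1, p2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if denom < eps:
        return None  # (near-)parallel gaze rays: no finite fixation point
    s = (b * e - c * d) / denom  # parameter along the first gaze ray
    t = (a * e - b * d) / denom  # parameter along the second gaze ray
    q1 = tuple(p + s * v for p, v in zip(p1, d1))
    q2 = tuple(p + t * v for p, v in zip(p2, d2))
    return tuple((u + v) / 2 for u, v in zip(q1, q2))
```

For eyes 6 cm apart both converging on a point 1 m ahead, the estimate returns that point; the parallel-ray case corresponds to gazing at infinity.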
For example the headset may comprise clients and protocols for multiple functions such as a computer interface for inputs and outputs for 2&3-D navigation and audio I/O; email and browser clients for TCP/IP and other internet protocols; and an interface for a wearable computer or storage device (which may have the same communications clients and protocols as other host computers) with navigable MP3 or other music and video files for playing and recording by the universal interface. Examples of possible protocols are Wi-Fi (802.11) protocols, Bluetooth, GPRS, EDGE, satellite protocols, and revisions and improvements on these.
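The switching software that picks among these clients can be sketched as a preference-ordered selection over whichever protocols are currently reachable. A minimal sketch (the class name, protocol labels, and preference order are illustrative assumptions):

```python
class ProtocolSwitch:
    """Pick a communications client from those on the headset, by a
    user-set preference order over currently reachable protocols."""

    def __init__(self, preference):
        self.preference = list(preference)  # most-preferred first
        self.clients = {}

    def register(self, protocol, client):
        self.clients[protocol] = client

    def select(self, reachable):
        # Choose the most-preferred protocol that is both reachable
        # right now and has a registered client on the device.
        for proto in self.preference:
            if proto in reachable and proto in self.clients:
                return proto, self.clients[proto]
        return None  # no usable link: stay offline and retry later
```

The same mechanism would let the headset fall back from a local 802.11 hub to a cellular protocol such as GPRS as the user moves between neighborhoods.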
There may also be clients and protocols to connect directly with other users in the local neighborhood of the user for peer-to-peer connection without channeling through the Network. For communication in the local peer-to-peer network or in the Network, when any of these hosts are in range, connection in this embodiment will be preceded by a command from the user and by authentication as part of a "trusted" network at some predetermined level for the user.
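The connection gate just described, an explicit user command plus authentication at a predetermined trust level, can be sketched as follows (the trust-level names and API are illustrative assumptions, not the patent's protocol):

```python
# Illustrative trust levels; the patent only requires "some
# predetermined level" set for the user.
TRUST_LEVELS = {"public": 0, "known": 1, "trusted": 2}

def may_connect(user_command_given, peer_level, required_level="trusted"):
    """Allow a peer-to-peer or Network connection only after an explicit
    user command AND authentication at or above the user's threshold."""
    if not user_command_given:
        return False  # never auto-connect without a user command
    return TRUST_LEVELS.get(peer_level, -1) >= TRUST_LEVELS[required_level]
```

An unknown peer level defaults to below every threshold, so unauthenticated hosts in range can never trigger a connection.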
Ergonomic and comfort conditions could make it desirable to locate the heavier parts of the head set at the base of the head/neck and derive additional support from the ears. There may or may not be a need for contact with the front of the face or the nose bridge as in conventional headphones in these embodiments.
In the stand-alone environment the Headset will be used with computers or other conventional devices as a user interface, where the eyes and eye appendages navigate to (V-clicks) and select locations in the field. These may be 3-dimensional fields. In the case of 3D navigation the headset may be used for navigation with the eyes in 3D video games and 3D motion picture programming (stereo, or synthesized with parallax information from moving objects). Moreover, with multiple source cameras the user may move in the video field to different points and related perspectives, thereby offering a new paradigm in programmed entertainment and motion pictures.
In addition the User may be a Source Member of a Network and would therefore have the apparatus for this. The device headset may therefore have a pair of cluster "World" cameras pointing in the direction of the face of the user (the field of view of the User) to capture 3D video inputs for use by the Source Member, for distribution on the Network or recording on a local device as off-line programming.
WEARABLE COMPUTER
This embodiment will also have an organizer and other basic personal mobility resources.
The preferred embodiment will also have a wearable computer that can be accessed by the universal interface using one of the wireless protocols, for communication to databases, computing capabilities, and large MP3 and other audio and video files for playback by the interface. The wearable computer may also be synchronized to the databases on the universal interface.
THE NETWORK
The Network provides multiple Sources for real-time video at known physical locations derived from Source members in the Network, thereby enabling members of the network to navigate among these source points and even to interpolated points between them. Considering that source points may be moving, such movement will provide additional information for the reconstruction of fixed objects in the local 3D space of users in any local neighborhood.
Therefore a User Member may choose a physical location or a special interest location and the Network will locate Sources in that location that will appear for selection by the User on the interface. This may be in the form of icons in a 3D field of real time programming derived from a Source. Selection of other Sources by the User Member may be by clicking or V-clicking on the desired Source. This action will transfer the User Member to the Source World Cameras and microphones of the selected Source Member. Such navigation may be induced by visual or audio stimuli from the currently selected Source Member.
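Locating the Sources at a chosen physical location, as described above, amounts to a radius query over the Source positions known to the Network. A minimal sketch using the haversine great-circle distance (the source IDs and coordinates in the test are illustrative):

```python
import math

def nearby_sources(sources, lat, lon, radius_km):
    """Return (distance_km, source_id) pairs within radius_km of the
    chosen location, nearest first, using the haversine formula."""
    found = []
    for sid, (slat, slon) in sources.items():
        dlat = math.radians(slat - lat)
        dlon = math.radians(slon - lon)
        a = (math.sin(dlat / 2) ** 2
             + math.cos(math.radians(lat)) * math.cos(math.radians(slat))
             * math.sin(dlon / 2) ** 2)
        dist = 2 * 6371.0 * math.asin(math.sqrt(a))  # mean Earth radius, km
        if dist <= radius_km:
            found.append((dist, sid))
    return sorted(found)  # nearest Source first
```

The sorted result maps naturally onto the selectable Source icons in the 3D field, with distance available for the color coding mentioned below.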
For example, the preferred embodiment can be used for interactive access to real video (and audio) fields through the Network. To enable such access the device will have a menu that can select a local environment (some of these can be on a list of bookmarks, or can even be the native real environment), and the resulting displayed field will show a starting point as requested and several Sources that may be color coded for accessibility, cost, and quality, to which the user can navigate and select with a V-click. On selection, the user's video display shows the perspective of the selected Source, and the user will passively follow the Source unless the Source has wide-angle or cluster World Cameras with the Active Network enabled to pan and zoom in this field, in which case the user has some navigational freedom even within one Source. However, the remaining nodes may still be displayed for further selection and navigation thereto (there will be menu options to turn off these node displays as well).
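The Source-listing step described above can be sketched in Python. This is an illustrative sketch only: the names (`Source`, `pick_sources`) and the attribute set are assumptions for exposition, not taken from the specification; the sort order stands in for the color coding by accessibility, cost, and quality.

```python
from dataclasses import dataclass

@dataclass
class Source:
    source_id: str
    location: tuple   # (latitude, longitude) of the Source Member
    accessible: bool  # whether the Source currently accepts viewers
    fee: float        # access fee, if any
    quality: int      # stream quality score, e.g. 1 (low) to 5 (high)

def pick_sources(sources, region, max_fee=None):
    """List accessible Sources inside a bounding box, best first.

    `region` is ((lat_min, lat_max), (lon_min, lon_max)); ranking is by
    highest quality, then lowest fee.
    """
    (lat_min, lat_max), (lon_min, lon_max) = region
    hits = [s for s in sources
            if s.accessible
            and lat_min <= s.location[0] <= lat_max
            and lon_min <= s.location[1] <= lon_max
            and (max_fee is None or s.fee <= max_fee)]
    return sorted(hits, key=lambda s: (-s.quality, s.fee))
```

Each returned Source would then be rendered as a selectable icon in the displayed field, and a V-click on one would hand the User over to that Source's World Cameras.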
The user can therefore hop from one Source Member to another and see (and hear, if enabled by the Source) the perspective of the Source. In addition, if enabled by any given Source (usually for a fee), the user may engage in interactive dialog with the Source and even request the Source to "show" a preferred field of interest.
This will amount to virtual navigation in a real field.
The Network may be hosted in part by a community of distributed wireless hub providers and other network resource providers, who (possibly for a fee) provide local wireless bandwidth and other hub intelligence and link into the internet. Such an infrastructure with distributed resources will also enable the high computational needs for reconstruction of 3D local neighborhoods and video processing bandwidth.
The infrastructure of this invention enables new approaches for TV programming, with access for viewers to navigate in the real environments of interest in news or documentaries and even interview (possibly for a fee) the Source Members of choice in the relevant local environment.
The Network in the preferred embodiment will comprise a distributed network of wireless bandwidth providers (Providers) that, for a fee from the Network Administrator, will provide bandwidth locally to User Members that are a part of the Network. User Members will pay for bandwidth. Bandwidth can be provided as hot spots in a local region within a city, where many such local regions provide "cellular" coverage for neighborhoods and even cities. These may be substituted for or supplemented with conventional cellular bandwidth, with standard billing to the Administrator, and indeed satellite bandwidth in remote locations. The Providers may be required to have installed in their hubs the computing infrastructure to stitch together the available Sources in the local neighborhood, and the intelligence to accept and hand off dialog with Sources/Users as they move through local neighborhoods. Providers in this preferred embodiment will initially provide bandwidth for a fixed fee set by the Active Network Administrator, and then after an initial probation period set a rate based on the market. Local Providers will compete to provide bandwidth. Users will have options for selecting the lowest-cost or highest-available-bandwidth (and quality) option when selecting Providers. Such bandwidth selection may be a background process that is set by the User. Users may select from multiple Providers (which may be other Users or Sources) based on possible fees such Providers may charge through a Network Administrator. This arrangement forms a new business model for distributed bandwidth.
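The User's background selection among competing Providers might look like the following sketch. It is purely illustrative: `Provider`, `select_provider`, and the policy names are assumed for exposition and do not appear in the specification.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    fee_per_gb: float      # fee billed through the Network Administrator
    bandwidth_mbps: float  # bandwidth currently on offer
    quality: float         # link-quality score in [0, 1]

def select_provider(providers, policy="lowest_cost"):
    """Pick one Provider under the User's preset selection policy."""
    if policy == "lowest_cost":
        return min(providers, key=lambda p: p.fee_per_gb)
    if policy == "highest_bandwidth":
        # Ties on raw bandwidth are broken by link quality.
        return max(providers, key=lambda p: (p.bandwidth_mbps, p.quality))
    raise ValueError(f"unknown policy: {policy!r}")
```

Because the policy is evaluated each time the device latches onto a new local hub, the market among Providers operates continuously without user intervention.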
An additional embodiment incorporates stitching algorithms for interpolation of fields available in the Network as source nodes of other users, thereby giving the user a continuous or near-continuous range of viewpoints and fields even between the available Sources. As the Sources are moving, the Active Network can recreate the fixed 3D local landscape and use that video information for interpolation for navigation of users. Such stitching together of local landscapes and cityscapes can allow User Members to physically navigate from available Source Users in one local neighborhood to another using "Real Maps" created by this interpolation of the fixed landscape. While such interpolation will not be able to give perfect views, the user has the choice of using the available Source nodes, or an interpolation, or a combination for zooming out to encompass a wide panorama. Such interpolation processing may be enabled by distributed processing on the Active Network, given the computing resources needed for each of the neighborhoods.
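As a minimal sketch of the interpolation idea, a virtual viewpoint between two known Source camera positions can be blended linearly. This is an assumption-laden simplification: real stitching would also use the reconstructed fixed 3D landscape, and the function name is illustrative.

```python
def interpolate_viewpoint(pos_a, pos_b, t):
    """Linear blend of two 3D camera positions, with t in [0, 1].

    t = 0 returns pos_a (the first Source), t = 1 returns pos_b (the
    second); intermediate values give the interpolated points between
    the available Sources mentioned above.
    """
    if not 0.0 <= t <= 1.0:
        raise ValueError("t must be between 0 and 1")
    return tuple((1.0 - t) * a + t * b for a, b in zip(pos_a, pos_b))
```

Rendering the view from the interpolated position is the expensive step that would be farmed out to the distributed processing described above.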
The Network will need intelligence to locate the physical location of the Sources. This can be done in some embodiments with GPS, and in others simply by using triangulation from the transmit points of the Sources to the recent hubs or host devices that the device has latched onto recently.
If interpolation algorithms are used, these will also have interpolated locations derived from the source locations.
To supplement this, there are well-established web-based approaches to find physical locations of sources on the internet.
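The triangulation alternative to GPS can be illustrated with a textbook 2D trilateration calculation from three hubs at known positions; this is a standard derivation offered as a sketch, not the algorithm claimed in the specification.

```python
def trilaterate(hubs, dists):
    """Locate a source from its distances to three hubs with known
    2D positions, by linearizing the three circle equations
    (x - xi)^2 + (y - yi)^2 = ri^2 and solving the resulting 2x2 system.
    """
    (x0, y0), (x1, y1), (x2, y2) = hubs
    r0, r1, r2 = dists
    # Subtracting the first circle equation from the other two removes
    # the quadratic terms, leaving two linear equations in (x, y).
    a1, b1 = 2 * (x1 - x0), 2 * (y1 - y0)
    c1 = r0**2 - r1**2 + x1**2 + y1**2 - x0**2 - y0**2
    a2, b2 = 2 * (x2 - x0), 2 * (y2 - y0)
    c2 = r0**2 - r2**2 + x2**2 + y2**2 - x0**2 - y0**2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        raise ValueError("hubs are collinear; position is ambiguous")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

In practice the distances would come from signal timing or strength at the hubs the device has latched onto recently, so a least-squares fit over more than three hubs would be preferred.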
APPLICATIONS OF NETWORK
All the above functionality enables the use of this infrastructure to create virtual tours and expeditions to remote parts of the world or places of interest, where the "tour guide" is a Source Member who, for a fee, travels to or through a local neighborhood with User Members selecting (on the Network) the tour guide's Source.
The nature of documentary and news programs will be transformed by the use of the Active Network and Sources as a verification source and a source of additional information in real time, available in the local environment of the issue and location of interest. For a fee, User Members may even interview first hand the Sources in situ, for example in breaking news. A new concept of "truth" media is enabled with this invention, where Users of the Network can travel virtually to any part of the globe where there are Source Members and discover the reality of news coverage themselves using the "World Cameras" and World Microphones of Source Members. (Such Source Members may of course be reciprocal members and User Members as well, and therefore utilize the headset infrastructure for their own purposes.)
Virtual parties and meetings: The invention enables User Members to join parties and gatherings of Source Members and hop from the "eyes" of one Source Member to another.
One of the unique characteristics of the video Network is that the location of the Sources is relevant for content, unlike conventional media such as mobile phones and Network-based phone services. Moreover, this invention enables navigation in that video field, which is a new and unique characteristic of this invention.
ALTERNATIVE EMBODIMENTS
The virtual navigation system may be used to navigate in 2D visual fields or in text-based lexicographic or other orderings of Sources. The stimuli from the Sources may be audio or visual or both, and may be mono (audio), 2D, or 3D/Surround for audio and visual.
Some embodiments may utilize either live or programmed 2D video data to synthesize 3D fields by using time lagged images to create parallax information from objects identified with object recognition algorithms.
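The parallax synthesis just described rests on the standard stereo relation Z = f * B / d (depth from focal length, baseline, and disparity). The following is a minimal sketch under the assumption that the camera translated sideways by a known baseline between the time-lagged frames and that the object was matched by the recognition step; the names are illustrative.

```python
def depth_from_parallax(focal_px, baseline_m, disparity_px):
    """Coarse depth of a matched object from two time-lagged frames.

    focal_px:     camera focal length, in pixels
    baseline_m:   sideways camera translation between frames, in metres
    disparity_px: horizontal shift of the matched object, in pixels
    Returns estimated depth Z = f * B / d, in metres.
    """
    if disparity_px <= 0:
        raise ValueError("object must show positive parallax")
    return focal_px * baseline_m / disparity_px
```

Applying this per matched object yields the parallax information from which a 3D field can be synthesized from 2D video.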
A simple alternative embodiment of the User/Source interface for 2D video is a computer or a mobile/cell phone (with cameras for Source functionality).
The headset in an alternative embodiment may be similar to conventional eye glasses supported on the nose bridge and ears.
A fixed wired interface adapter may be used to interface between the wireless Universal Interface and conventional video game consoles and computers with only conventional controller outputs/inputs.
A split mobile unit is an alternative embodiment where the functionality of the device is split between a light headset and a connected wearable interface (wired or wireless). Here the headset has only interface functions; the wearable computer has all data and computing functions.
The headset embodiment may have minimal componentry on the headset to lower weight and offload much of the processing to a wire-connected or wirelessly connected wearable interface. One possible embodiment would have a pair of cameras for gaze and eye tracking, a microphone, and the processing required for gaze-tracking outputs in parametric form, along with one or more audio channels for voice commands and communication. There will also be earphones for one or both ears. An alternative to the headset/interface is a fixed device with a 3D projection device and cameras to track gaze and the eyes, along with a microphone. It is an alternative to the mobile version and may also use wired Internet connections for the user.
ADDITIONAL EMBODIMENTS
ERGONOMICS OF THE HEADSET
Support on the ear, and/or the ear canal, and possibly the bone orifice of the ear: Considering that a significant portion of the weight of the headset will be in the front of the head, the rear of the headset may be used to contain heavy components to balance out the load. If the rear section is lighter than the front section, the headset may rest on the external occipital protuberance and allow articulation of the head at the occipital condyle without significant disturbance of the headset position.
Considering that many users of the headset may not wish to have their hair compressed by a fitted helmet, another design allows for a large number of soft-headed thin fingers that touch the scalp through the hair. The impact on hair will not be much different from that of a comb. Moreover, such a design will provide excellent ventilation in a helmet design.
HARDWARE ON HEADSET
Rear-view camera: Particularly when the module with all the heavier componentry is located at the base of the head at the neck, there are synergies in having a rear-view "World" camera on the module as well.
Wide-angle or multi-camera World Cameras with network directional intelligence: The World Cameras may be wide-angle or clusters to give a broader field of view for network members, along with network intelligence to navigate within the panoramic field data.
Outdoor version (solar cells/cap on sun shield): Solar cells will reduce the physical load of batteries on the head and synergistically provide protection for the user.
For use in remote locations- satellite antenna: Satellite antennae can be attached to the sun shield/cap independently of or incorporated with the solar cells.
Visual displays (transparent option): To allow for real navigation in the native real space (!), the visual display may have the option to become transparent or semi-transparent.
Pan & zoom on World Cameras: Pan and zoom capabilities in the World Camera will give, in a 1-1 arrangement between a User and a Source (usually for a fee), the ability to orient the camera to the desired field within the available Source hardware.
Audio/Video Player data: The Audio and video files may be stored on the device rather than on a wearable computer or remotely.
Organizer: May be on the Interface device rather than on a wearable computer.
Computer: The headset may include a basic computer with related functionality for stand-alone use.
CASE H: OCCUPANT SUPPORT IN A VEHICLE
DRAWING DESCRIPTION
Drawings in the priority applications are hereby incorporated herein by reference. Figures A1-A7 are annotated with the descriptions of the figures.
PREFERRED EMBODIMENT
A flat-bed sleeper in an aircraft cabin. The arrangement as shown in the figures illustrates a flat bed that provides safety during rapid deceleration of the aircraft in all positions that the occupant chooses, thereby allowing the occupant to sleep through take-offs and landings.
The figures illustrate the different positions of the sleeper. Shock absorbers control the lateral movement of the sleeper under acceleration conditions to lower the peak accelerations, and to reorient the occupant and support the occupant better during the acceleration conditions.


CLAIMS:
1. A system for virtual navigation of one or both of real and virtual spaces, comprising a network of members that are either or both of Sources and Users, wherein Sources provide real-time media input to the network and Users consume real-time media input provided by one or more Sources, and wherein each of the Sources is enabled with one or more of one or both of video and audio recording devices, and said Users are enabled with one or both of video and audio reception devices, and wherein the experiences of the Sources may be transmitted to the Users, and wherein the Users are enabled to move among one or more Sources.
2. A system as in claim 1, wherein the Sources are enabled with stereoscopic World Cameras as media input devices and are thereby enabled to transmit 3D virtual environments to the Users in the Network.
3. A system as in claim 1, wherein said Users are enabled to navigate to location points of Sources in the field of view of a current Source by selection of an icon representing the desired Source in the field of view of the current Source.
4. A network as in claim 1, further comprising distributed bandwidth providers, wherein the bandwidth providers participate in a market for bandwidth for one or both of video and audio for User Members, wherein User Members pay for such bandwidth.
5. A network as in claim 4, wherein the bandwidth providers participate in a local market for one or both of computational resources and storage resources, for video processing for the generation of local fixed landscapes and topologies.
6. A headset for 3D virtual navigation, wherein gaze location and facial movement in the region of the eyes are used for such navigation by the user of said headset.
7. A headset as in claim 6, wherein said headset is supported by the ears, and the rear of the headset is lighter than the front of the headset and is supported by the external occipital protuberance of the skull, allowing articulation of the head at the occipital condyle without significant disturbance of the headset position.
8. A network as in claim 1, wherein stimuli available to Source Members enable a real-time verifiable media network where Users can get real-time coverage of eye-witnesses and are enabled to move among a plurality of eye-witness Sources to get one or both of different video and audio inputs for a balanced perspective.
9. A network as in claim 8, further comprising a website which captures the real-time programming from Sources at events for debates among Sources and Users of the Network.
10. An occupant support mechanism that allows discretionary movement by the occupant of the plurality of support sections of the support mechanism, furthermore enabled to provide a predetermined controlled motion of the occupant in the event of a rapid deceleration of the vehicle.
11. An occupant support mechanism as in claim 10, deployed in an aircraft cabin, wherein the orientation of the occupant support allows each occupant support in said aircraft cabin to have independent access to a cabin aisle (Fig A1).