WO2016079470A1 - Mixed reality information and entertainment system and method


Info

Publication number
WO2016079470A1
Authority
WO
WIPO (PCT)
Prior art keywords
environment
mixed reality
user
reality environment
processor
Prior art date
2014-11-19
Application number
PCT/GB2015/053362
Other languages
French (fr)
Inventor
Julian David Wright
Nicholas Giacomo Robert Colosimo
Christopher James Whiteford
Original Assignee
BAE Systems plc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2014-11-19
Filing date
2015-11-06
Publication date
2016-05-26
Application filed by BAE Systems plc
Publication of WO2016079470A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B64: AIRCRAFT; AVIATION; COSMONAUTICS
    • B64D: EQUIPMENT FOR FITTING IN OR TO AIRCRAFT; FLIGHT SUITS; PARACHUTES; ARRANGEMENT OR MOUNTING OF POWER PLANTS OR PROPULSION TRANSMISSIONS IN AIRCRAFT
    • B64D11/00: Passenger or crew accommodation; Flight-deck installations not otherwise provided for
    • B64D11/0015: Arrangements for entertainment or communications, e.g. radio, television
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00: Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01: Head-up displays
    • G02B27/017: Head mounted
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00: Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01: Head-up displays
    • G02B27/0101: Head-up displays characterised by optical features
    • G02B2027/0138: Head-up displays characterised by optical features comprising image capture systems, e.g. camera

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Optics & Photonics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A mixed reality information and/or entertainment system for use onboard a vehicle, the system comprising a headset (100) for placing over a user's eyes, in use, said headset including a screen (102), image capture means (106) for capturing images of the real world environment in the vicinity of a user, and a processor (104) configured to generate a selected three-dimensional virtual reality environment and blend images of said real world environment into said three-dimensional virtual reality environment to create a mixed reality environment and display said mixed reality environment on said screen (102), the system further comprising a control function for enabling selection of a portion of said images of said real world environment, the processor (104) being further configured to, in response to control signals from said control function, selectively remove said selected portions from, or blend said selected portions into, said mixed reality environment.

Description

MIXED REALITY INFORMATION AND ENTERTAINMENT SYSTEM
AND METHOD
This invention relates generally to a mixed reality information and entertainment system and method and, more particularly, to a mixed reality information and entertainment system and method for use onboard a vehicle, for example, onboard an aircraft.
It is common for information and entertainment systems to be provided for passengers in a vehicle, particularly those onboard public transport such as commercial passenger aircraft. Such conventional systems tend to comprise a screen, typically a touch screen, mounted within the back of the seat in front of the intended user, a set of basic controls in the user's armrest, and a headphone jack for receiving a set of headphones, to enable the user to listen to the audio output of the system without disturbing other passengers.
However, there are a number of limitations associated with traditional information and entertainment systems of this type. The fixed infrastructure and configuration, for example, lead to a lack of overall flexibility and little scope for personalisation by the user. Size, weight and power considerations limit the scope for change or extension of the existing infrastructure, and through-life costs tend to be high, as does the cost of change.
Aspects of the present invention seek to address at least some of these issues and, in accordance with one aspect of the present invention, there is provided a mixed reality information and entertainment system for use onboard a vehicle, the system comprising a headset for placing over a user's eyes, in use, said headset including a screen, image capture means for capturing images of the real world environment in the vicinity of a user, and a processor configured to generate a selected three-dimensional virtual reality environment and blend images of said real world environment into said three-dimensional virtual reality environment to create a mixed reality environment and display said mixed reality environment on said screen, the system further comprising a control function for enabling selection of a portion of said images of said real world environment, the processor being further configured to, in response to control signals from said control function, selectively remove said selected portions from, or blend said selected portions into, said mixed reality environment.
The system may be configured, in use, to allow a user to select and/or manipulate elements of said mixed reality environment by means of hand gestures. In one exemplary embodiment, the system may be configured to allow a user, in use, to select objects, portions or people from their real world environment to be included within, or removed from, said mixed reality environment, by means of one or more predefined hand gestures.
The processor may be configured to generate said three dimensional virtual environment including at least one virtual display for displaying selected data. In this case, the system may be configured, in use, to allow a user, in use, to manipulate data displayed on said virtual display by means of hand gestures.
In an exemplary embodiment, the system may comprise a pair of spatially separated image capture devices for capturing respective images of the real world environment in the vicinity of the user, said processor being configured to define a depth map using respective image frame pairs to produce three dimensional image data. The image capture devices may be mounted on said headset. In this case, the image capture devices may be mounted on said headset so as to be substantially aligned with a user's eyes, in use.
The processor may be configured to receive image data captured externally of said vehicle and to selectively blend said image data into said three dimensional virtual environment.
Aspects of the invention extend to a mixed reality information and/or entertainment apparatus for use onboard a vehicle, the apparatus comprising at least two systems as described above, and being configured to display the same mixed reality environment, or at least a portion thereof, on each screen.
Another aspect of the invention extends to a method of providing an information and entertainment facility onboard a vehicle, the method comprising providing at least one mixed reality system as described above, and configuring the processor to:
- generate a selected three-dimensional virtual reality environment and blend images of said real world environment into said three-dimensional virtual reality environment to create a mixed reality environment and display said mixed reality environment on said screen;
- receive control signals from said control function representative of selected portions of said real world environment within said captured images; and
- in response to control signals from said control function, selectively remove said selected portions from, or blend said selected portions into, said mixed reality environment.
The information and entertainment facility may be an in-flight information and entertainment facility for use on an aircraft, and the method may include the step of configuring said processor to automatically render and display within said mixed reality environment, one or more predefined data items. These and other aspects of the present invention will be apparent from the following specific description in which embodiments of the present invention are described in more detail, by way of examples only, and with reference to the accompanying drawings, in which:
Figure 1 is a front perspective view of a headset for use in a control system according to an exemplary embodiment of the present invention;
Figure 2 is a schematic block diagram of a control system according to an exemplary embodiment of the present invention; and
Figure 3 is a schematic user's view of a mixed reality information and entertainment environment created by a system according to an exemplary embodiment of the present invention.
Referring to Figure 1 of the drawings, a system according to an exemplary embodiment of the present invention may comprise a headset comprising a visor 10 having a pair of arms 12 hingedly attached at opposing sides thereof in order to allow the visor to be secured onto a user's head, over their eyes, in use, by placing the curved ends of the arms 12 over and behind the user's ears, in a manner similar to conventional spectacles. It will be appreciated that, whilst the headset is illustrated herein in the form of a visor, it may alternatively comprise a helmet for placing over a user's head, or even a pair of contact lenses or the like, for placing within a user's eyes, and the present invention is not intended to be in any way limited in this regard. Also provided on the headset is a pair of image capture devices 14 for capturing images of the environment, such image capture devices being mounted as closely as possible aligned with the user's eyes, in use.
The system of the present invention further comprises a processor, which is communicably connected in some way to a screen which is provided inside the visor 10. Such communicable connection may be a hard wired electrical connection, in which case the processor and associated circuitry will also be mounted on the headset. However, in an alternative exemplary embodiment, the processor may be configured to wirelessly communicate with the visor, for example, by means of Bluetooth or similar wireless communication protocol, in which case, the processor need not be mounted on the headset but can instead be located remotely from the headset, with the relative allowable distance between them being dictated and limited only by the wireless communication protocol being employed. For example, the processor could be mounted on, or formed integrally with, the user's clothing, or instead located remotely from the user, either as a stand-alone unit or as an integral part of a larger control unit, for example.
Referring to Figure 2 of the drawings, a system according to an exemplary embodiment of the invention comprises, generally, a headset 100, incorporating a screen 102, a processor 104, and a pair of external digital image capture devices (only one shown) 106.
The processor 104 is configured to display interactive virtual data items and portals, simultaneously if required, within a three dimensional (3D) virtual environment. Thus, for example, the user may be provided with a virtual rendering of their own computing device (stored elsewhere) or a generic PC terminal, an audio and/or video menu, a telepresence/video conferencing panel, a control panel for enabling personalisation/customisation of their visual environment, a refreshment/goods ordering panel, etc. The user can select which items to display and the user is able to configure the data items displayed within the virtual environment in any way they wish to, simply by means of hand gestures, for example, as will be described in more detail later. For example, and referring additionally to Figure 3 of the drawings, the selected item(s) could appear in the conventional manner on a "screen" 40 in front of the user. Alternatively, however, the user could choose to have one or more large "screens" 20 appearing to float or hover around them within their 3D virtual environment. Some of the items displayed, such as safety announcements and the like, may be preconfigured to be fed into all users' 3D virtual environments at any required time, but the user could then be provided with options as to how that information is displayed, the language/format in which the information is provided, etc.
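By way of illustration only (the patent does not prescribe any particular data model), a minimal sketch of how such a set of user-configurable virtual screens might be represented follows; every class, field and value here is hypothetical:
```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical sketch: one floating virtual screen in the user's 3D environment.
@dataclass
class VirtualPanel:
    name: str                             # e.g. "movie", "safety"
    position: Tuple[float, float, float]  # metres, relative to the user's head
    size: Tuple[float, float]             # width and height in metres
    mandatory: bool = False               # safety announcements cannot be dismissed

@dataclass
class UserEnvironment:
    panels: List[VirtualPanel] = field(default_factory=list)

    def dismiss(self, name: str) -> None:
        # Remove a panel unless it carries mandatory (e.g. safety) content.
        self.panels = [p for p in self.panels if p.mandatory or p.name != name]

env = UserEnvironment([
    VirtualPanel("movie", (0.0, 0.0, -2.0), (1.6, 0.9)),
    VirtualPanel("safety", (0.4, 0.3, -1.5), (0.4, 0.3), mandatory=True),
])
env.dismiss("movie")   # a hand gesture removes the floating movie screen
env.dismiss("safety")  # ignored: preconfigured safety content stays visible
```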
Digital video image frames of the user's real world environment 42, 50 are captured by the image capture devices provided on the headset 10, and two image capture devices are used in this exemplary embodiment to capture respective images such that the data representative thereof can be blended to produce a stereoscopic depth map which enables the processor to determine depth within the captured images without any additional infrastructure being required. The user can select portions or objects from these images to be blended into the virtual environment being displayed on the screen. Alternatively, the user may initially be presented with a rendered and blended image of the entire real world environment and may then be able to select portions thereof to be removed from the displayed image. In either case, the resultant displayed image is continuously updated as the user's field of view changes, either due to their own movement or movement within their environment.
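As a minimal sketch of the stereoscopic depth step just described, using OpenCV's block matcher (the file names, focal length and camera baseline below are assumptions, not values given in the patent):
```python
import cv2
import numpy as np

# Grayscale frames from the two headset-mounted cameras (file names assumed).
left = cv2.imread("left_frame.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_frame.png", cv2.IMREAD_GRAYSCALE)

# Block matching along epipolar lines; OpenCV returns fixed-point
# disparities scaled by 16, and disparity is inversely proportional to depth.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Assumed intrinsics: focal length f in pixels, baseline b in metres
# (roughly the separation of two cameras aligned with the user's eyes).
f, b = 700.0, 0.06
depth = np.zeros_like(disparity)
valid = disparity > 0
depth[valid] = f * b / disparity[valid]   # depth map in metres
```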
In this way, the user can selectively "build" or configure their virtual environment, virtually unconstrained by their real world environment. For example, the user may choose to remove all of their real world environment from the displayed environment and, instead, replace it with an entirely different environment, or simply remove elements of the real world environment from their view in order to reduce distractions, whilst maintaining an awareness of what is going on around them. Data relating to any of the items displayed within the 3D virtual environment may be received by the processor over a wide bandwidth data link from several different sources via, for example, an Internet connection, a secure LAN, Bluetooth, or any other communications protocol, which may be wireless or otherwise, and that data is fed to, and updated within, the displayed items in the user's three-dimensional virtual environment, as required.
The general concept of real time image blending for augmented reality is known, and several different techniques have been proposed. The present invention is not necessarily intended to be in any way limited in this regard. However, for completeness, one exemplary method for image blending will be briefly described. Thus, once an object has been selected from a real world image to be blended into the virtual environment, a threshold function may be applied in order to extract that object from the background image. Its relative location and orientation may also be extracted and preserved by means of marker data. Next, the image and marker data are converted to a binary image, possibly by means of adaptive thresholding (although other methods are known). The marker data and binary image are then transformed into a set of coordinates which match the location within the virtual environment in which they will be blended. Such blending is usually performed using black and white image data. Thus, if necessary, colour data sampled from the source image can be backward warped, using homography, to each pixel in the resultant virtual scene. All of these computational steps require minimal processing and time and can, therefore, be performed quickly and in real (or near real) time. Thus, if the selected object is moving, for example, a person, the corresponding image data within the virtual environment can be updated in real time.
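A rough sketch of that exemplary pipeline in OpenCV terms (the marker coordinates, frame size and virtual-scene dimensions below are illustrative assumptions):
```python
import cv2
import numpy as np

src = cv2.imread("captured_frame.png")            # assumed 640x480 source frame
gray = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)

# Adaptive thresholding produces the binary image separating the selected
# object from its background.
binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 11, 2)

# Marker data: four reference points in the source frame mapped to the
# coordinates where the object should appear in the virtual scene.
src_pts = np.float32([[0, 0], [639, 0], [639, 479], [0, 479]])
dst_pts = np.float32([[100, 120], [700, 100], [720, 560], [80, 580]])
H, _ = cv2.findHomography(src_pts, dst_pts)

# Warp the colour data through the homography into scene coordinates
# (warpPerspective samples the source per destination pixel, i.e. backward).
scene_w, scene_h = 800, 600
warped = cv2.warpPerspective(src, H, (scene_w, scene_h))
warped_mask = cv2.warpPerspective(binary, H, (scene_w, scene_h))

# Composite: virtual pixels where the mask is empty, object pixels elsewhere.
scene = np.zeros((scene_h, scene_w, 3), dtype=np.uint8)  # stand-in virtual scene
blended = np.where(warped_mask[..., None] > 0, warped, scene)
```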
As stated above, selection of real world elements from the captured images and also selection and manipulation of, and interaction with, the displayed data may be effected by, for example, hand gestures made by the user. Several different techniques for automated recognition of hand gestures are known, and the present invention is not in any way intended to be limited in this regard. For example, predefined hand gestures may be provided that are associated with specific actions, in which case, the processor is preconfigured to recognise those specific predefined hand gestures and cause the associated action to be performed in respect of the selected data item. Alternatively, a passive (physical or virtual) control panel or keyboard may be provided which the user "operates" like a normal keyboard or control panel, except that the user's actions in respect thereof are captured by the image capture devices, and the processor is configured to employ image recognition techniques to determine which keys or icons the user has pressed on the keyboard or control panel, and cause the required action to be performed in respect of the selected data item.
Either way, it will be appreciated that the image capture devices provided in the system described above can be used to capture video images of the user's hands (which can be selected to be blended into the 3D virtual environment displayed on the user's screen). Thus, one relatively simple method of automated hand gesture recognition and control using captured digital video images involves the use of a database of images of predefined hand gestures and the command to which they relate, or indeed a database of images of predefined hand positions (in relation to a passive keyboard or control panel) and the action or "key" to which they relate. Thus, an auto threshold function is first performed on the image to extract the hand from the background. The wrist is then removed from the hand shape, using a so-called "blob" image superposed over the palm of the hand, to separate out the individual parts of the hand so that the edge of the blob defines the border of the image. The parts outside of the border (i.e. the wrist) are then removed from the image, following which shape recognition software can be used to extract and match the shape of a hand to a predefined hand gesture, or "markers" associated with the configuration of the keyboard or control panel can be used to determine the relative position and hand action, and call the associated command accordingly.
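A simplified sketch of that database-matching approach, using Otsu auto-thresholding and Hu-moment shape comparison in OpenCV 4 (the wrist-removal "blob" step is omitted, and the gesture table and file names are hypothetical):
```python
import cv2

def hand_contour(path: str):
    # Auto-threshold (Otsu) extracts the hand from the background, then the
    # largest external contour is taken as the hand outline.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

# Hypothetical database of predefined gesture silhouettes and their commands.
templates = {"select": "gesture_select.png", "dismiss": "gesture_dismiss.png"}

frame = hand_contour("current_frame.png")

# cv2.matchShapes compares Hu-moment signatures; lower scores mean a closer
# match, so the best-matching predefined gesture wins.
scores = {command: cv2.matchShapes(frame, hand_contour(image),
                                   cv2.CONTOURS_MATCH_I1, 0.0)
          for command, image in templates.items()}
print(min(scores, key=scores.get))   # command associated with the gesture
```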
In the present invention, where the user has the ability to first select the area of the virtual environment they wish to manipulate before actually performing any manipulation, it is necessary to provide some form of direction and orientation sensing, such that it can be determined where in the virtual environment the user is pointing. This can be done by any known means, for example, by image recognition within the captured images of the user's hands relative to a marker within the image, or by means of an accelerometer or similar orientation sensor mounted or otherwise provided on or in relation to the user's arms or hands.
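However the hand direction is sensed, resolving where in the virtual environment the user is pointing reduces to intersecting a pointing ray with the plane of a virtual item; a minimal sketch with illustrative coordinates:
```python
import numpy as np

def ray_plane_hit(origin, direction, plane_point, plane_normal):
    # Return the 3D point where the ray meets the plane, or None if the ray
    # is parallel to the plane or points away from it.
    denom = float(np.dot(direction, plane_normal))
    if abs(denom) < 1e-6:
        return None
    t = float(np.dot(plane_point - origin, plane_normal)) / denom
    return origin + t * direction if t > 0 else None

hand = np.array([0.2, -0.1, 0.0])  # tracked hand position, metres (assumed)
aim = np.array([0.0, 0.1, -1.0])   # direction from image/orientation sensing
aim = aim / np.linalg.norm(aim)

# A virtual screen 2 m in front of the user, facing towards them.
hit = ray_plane_hit(hand, aim,
                    np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0]))
print(hit)   # point on the screen being pointed at, or None
```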
It is envisaged that passengers travelling together could be given the option to share the same virtual environment (30), which would enable, for example, families to share the same movie, provide a shared internet browsing experience, or facilitate a business discussion during a journey.
Thus, embodiments of the present invention provide the ability to create a mixed reality information and entertainment environment which, in its most basic form for example, allows a passenger to view their environment as normal but simply have a large display screen 20 'floating' around them. Selected from a library of features, the screen 20 could be used to provide facilities such as movies, internet browsing or business functionality, and permits user interactivity for configuration and manipulation of the screen itself and/or the data displayed thereon. However, the mixed reality environment provided by aspects of the present invention enables a user to completely change their surroundings if they wish to, for example, by substituting the aircraft environment for one more suited to their comfort. In one exemplary embodiment, image capture devices may additionally be provided on the outside of the vehicle, for capturing external images, and the processor may be configured to blend the resultant images into the user's 3D virtual environment thus, for example, creating the illusion of large windows. Even in this case, the user may still wish to retain the ability to see their own body, for example, and the system enables the user to select which real world elements they wish to retain within their view.
It is also envisaged that the system may include the ability to store user accounts, such that a user's personal preferences in respect of the system can be stored and recalled for future use.
It will be apparent to a person skilled in the art, from the foregoing, that modifications and variations can be made to the described embodiments without departing from the scope of the present invention as claimed.

Claims

1. A mixed reality information and/or entertainment system for use onboard a vehicle, the system comprising a headset for placing over a user's eyes, in use, said headset including a screen, image capture means for capturing images of the real world environment in the vicinity of a user, and a processor configured to generate a selected three-dimensional virtual reality environment and blend images of said real world environment into said three-dimensional virtual reality environment to create a mixed reality environment and display said mixed reality environment on said screen, the system further comprising a control function for enabling selection of a portion of said images of said real world environment, the processor being further configured to, in response to control signals from said control function, selectively remove said selected portions from, or blend said selected portions into, said mixed reality environment.
2. A system according to claim 1, configured, in use, to allow a user to select and/or manipulate elements of said mixed reality environment by means of hand gestures.
3. A system according to claim 1 or claim 2, configured to allow a user, in use, to select objects, portions or people from their real world environment to be included within, or removed from, said mixed reality environment, by means of one or more predefined hand gestures.
4. A system according to any of the preceding claims, wherein said processor is configured to generate said three dimensional virtual environment including at least one virtual display for displaying selected data.
5. A system according to claim 4, configured, in use, to allow a user, in use, to manipulate data displayed on said virtual display by means of hand gestures.
6. A system according to any of the preceding claims, comprising a pair of spatially separated image capture devices for capturing respective images of the real world environment in the vicinity of the user, said processor being configured to define a depth map using respective image frame pairs to produce three dimensional image data.
7. A system according to claim 6, wherein said image capture devices are mounted on said headset.
8. A system according to claim 7, wherein said image capture devices are mounted on said headset so as to be substantially aligned with a user's eyes, in use.
9. A system according to any of the preceding claims, wherein said processor is configured to receive image data captured externally of said vehicle and to selectively blend said image data into said three dimensional virtual environment.
10. A mixed reality information and/or entertainment apparatus for use onboard a vehicle, the apparatus comprising at least two systems according to any of the preceding claims, and being configured to display the same mixed reality environment, or at least a portion thereof, on each screen.
11. A method of providing an information and entertainment facility onboard a vehicle, the method comprising providing at least one mixed reality system according to any of claims 1 to 10, and configuring the processor to:
- generate a selected three-dimensional virtual reality environment and blend images of said real world environment into said three- dimensional virtual reality environment to create a mixed reality environment and display said mixed reality environment on said screen;
- receive control signals from said control function representative of selected portions of said real world environment within said captured images; and
- in response to control signals from said control function, selectively remove said selected portions from, or blend said selected portions into, said mixed reality environment.
12. A method according to claim 11, wherein said information and entertainment facility is an in-flight information and entertainment facility for use on an aircraft, wherein the method includes the step of configuring said processor to automatically render and display within said mixed reality environment, one or more predefined data items.
PCT/GB2015/053362, priority date 2014-11-19, filed 2015-11-06: Mixed reality information and entertainment system and method, published as WO2016079470A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1420568.6 2014-11-19
GB1420568.6A GB2532462A (en) 2014-11-19 2014-11-19 Mixed reality information and entertainment system and method

Publications (1)

Publication Number Publication Date
WO2016079470A1 2016-05-26

Family

ID=52248609

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2015/053362 WO2016079470A1 (en) 2014-11-19 2015-11-06 Mixed reality information and entertainment system and method

Country Status (2)

Country Link
GB (1) GB2532462A (en)
WO (1) WO2016079470A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107918210A (en) * 2016-10-07 2018-04-17 Panasonic Avionics Corporation Hand-held device with virtual reality glasses
US10373385B2 (en) 2016-12-14 2019-08-06 Microsoft Technology Licensing, Llc Subtractive rendering for augmented and virtual reality systems

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201404134D0 * 2014-03-10 2014-04-23 BAE Systems plc Interactive information display
CN107054660A (en) * 2017-05-08 2017-08-18 Foshan Shenfeng Aviation Technology Co., Ltd. A kind of VR experience apparatus on passenger plane
ES2704373B2 (en) * 2017-09-15 2020-05-29 Seat Sa Method and system to display virtual reality information in a vehicle
US10991138B2 (en) * 2017-12-22 2021-04-27 The Boeing Company Systems and methods for in-flight virtual reality displays for passenger and crew assistance
JP7043845B2 * 2018-01-17 2022-03-30 Toyota Motor Corporation Display linkage control device for vehicles
CN108667798A (en) * 2018-03-27 2018-10-16 Shanghai Linqi Intelligent Technology Co., Ltd. A kind of method and system of virtual viewing

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102007045835B4 (en) * 2007-09-25 2012-12-20 Metaio Gmbh Method and device for displaying a virtual object in a real environment
US9480919B2 (en) * 2008-10-24 2016-11-01 Excalibur Ip, Llc Reconfiguring reality using a reality overlay device
US8817092B2 (en) * 2008-11-25 2014-08-26 Stuart Leslie Wilkinson Method and apparatus for generating and viewing combined images
US20140132595A1 (en) * 2012-11-14 2014-05-15 Microsoft Corporation In-scene real-time design of living spaces

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090319902A1 (en) * 2008-06-18 2009-12-24 Heidi Joy Kneller Multipurpose information transfer medium eyepiece
US20120162254A1 (en) * 2010-12-22 2012-06-28 Anderson Glen J Object mapping techniques for mobile augmented reality applications
US20130093788A1 (en) * 2011-10-14 2013-04-18 James C. Liu User controlled real object disappearance in a mixed reality display
US20130326364A1 (en) * 2012-05-31 2013-12-05 Stephen G. Latta Position relative hologram interactions
US20140180508A1 (en) * 2012-12-21 2014-06-26 Airbus Sas Aircraft with a cockpit including a viewing surface for piloting which is at least partially virtual

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NIKOS FRANGAKIS: "SEVENTH FRAMEWORK PROGRAMME FP7 VR-HYPERSPACE Research Roadmap, Deliverable D6.4", 30 September 2014 (2014-09-30), pages 1 - 76, XP055237907, Retrieved from the Internet <URL:http://www.vr-hyperspace.eu/www.vr-hyperspace.eu/files/VR-HYPERSPACE_D6_4_Roadmap_Final/index.pdf> [retrieved on 20151221] *


Also Published As

Publication number Publication date
GB2532462A (en) 2016-05-25
GB201420568D0 (en) 2014-12-31

Similar Documents

Publication Publication Date Title
US10262465B2 (en) Interactive control station
US11803055B2 (en) Sedentary virtual reality method and systems
US11127217B2 (en) Shared environment for a remote user and vehicle occupants
WO2016079470A1 (en) Mixed reality information and entertainment system and method
EP3117290B1 (en) Interactive information display
US10096166B2 (en) Apparatus and method for selectively displaying an operational environment
EP3676745B1 (en) Privacy screen for computer simulated reality
US10891800B1 (en) Providing features of an electronic product in an augmented reality environment
US20180218631A1 (en) Interactive vehicle control system
US20210240331A1 (en) Selecting a text input field using eye gaze
US11900520B1 (en) Specifying effects for entering or exiting a computer-generated reality environment
CN112116716A (en) Virtual content located based on detected objects
GB2525304B (en) Interactive information display
US20230308495A1 (en) Asymmetric Presentation of an Environment
EP2919094A1 (en) Interactive information display
JP7332823B1 (en) program
US11181973B2 (en) Techniques related to configuring a display device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15794249

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15794249

Country of ref document: EP

Kind code of ref document: A1