WO2016135471A1 - Interactive information system - Google Patents

Interactive information system

Info

Publication number
WO2016135471A1
Authority
WO
WIPO (PCT)
Prior art keywords
user, processor, image data, data, optical label
Application number
PCT/GB2016/050452
Other languages
English (en)
Inventor
Christopher James Whiteford
Nicholas Giacomo Robert Colosimo
Julian David Wright
Original Assignee
BAE Systems plc
Priority claimed from EP15275045.1A (EP3062218A1)
Priority claimed from GB1503112.3A (GB2535727A)
Application filed by BAE Systems plc
Publication of WO2016135471A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces
    • G06F 13/00: Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/14: Handling requests for interconnection or transfer
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality

Definitions

  • This invention relates generally to an interactive information system, and method for providing same, and more particularly, but not necessarily exclusively, to a mixed reality system configured to enable a number of users to work together in a collaborative environment that requires sharing and exchange of traditional paper-based information sources.
  • In accordance with a first aspect of the present invention, there is provided a mixed reality system comprising a headset for placing over a user's eyes, in use, the headset including a screen, the system further comprising a processor configured to generate a three dimensional virtual environment, and an image capture device for capturing visible image data representative of the real world environment in the vicinity of a user, the processor being configured to blend said visible image data into said three dimensional virtual environment to create and display on said screen a continuously updated mixed reality environment representative of a user's field of view, the processor being further configured to extract, from an optical image captured in respect of the real world environment in the vicinity of said user, an optical label representative of a digital location at which data or electronic media is stored and can be accessed, the system further comprising an optical label reader for decoding said extracted optical label to determine said digital location and accessing said data or electronic media stored therein, the processor being further configured to blend image data representative of said accessed data or electronic media into said mixed reality environment displayed on said screen.
  • The processor may be configured to extract, from said visible image data, an optical label that is visible to the naked eye.
  • The system may, alternatively or in addition, comprise a spectral camera for capturing multispectral image data representative of the real world environment in the vicinity of a user, wherein said processor may be configured to extract, from said multispectral image data, an optical label that is detectable at one or more optical wavelengths outside the visible wavelength band.
  • The system may be configured to scan captured image data representative of the real world environment in the vicinity of a user so as to identify one or more optical labels present therein, and display, on said screen, data representative of identified optical labels.
  • The system may further comprise a selection function actuatable by a user so as to select an optical label, from said identified optical labels, to be decoded and the data or electronic media stored in the digital location associated therewith accessed.
  • The system may, alternatively or in addition, further comprise a selection function, actuatable by a user, to select an optical label, from within said mixed reality environment displayed on said screen, to be decoded and the data or electronic media stored in the digital location associated therewith accessed.
  • The above-mentioned selection function may be actuatable by a predefined bodily movement of a user.
  • The processor may be configured to identify and extract a predefined user bodily movement from said visible image data captured in respect of the real world environment in the vicinity of said user, identify therefrom a required selection action, and generate a control signal to effect said selection action.
  • The selection function may be actuatable by a predefined hand gesture made by a user.
  • The system may comprise a pair of spatially separated image capture devices for capturing respective images of the real world environment in the vicinity of the user, said processor being configured to define a depth map using respective image frame pairs to produce three-dimensional image data.
  • The image capture devices may be mounted on said headset so as to be substantially aligned with a user's eyes, in use.
  • Another aspect of the present invention extends to a method of displaying data or electronic media, the method comprising: applying an optical label to a physical object, said optical label being representative of a digital location at which data or electronic media is stored and can be accessed; providing a mixed reality system comprising a headset for placing over a user's eyes, in use, the headset including a screen, the system further comprising a processor configured to generate a three dimensional virtual environment, and an image capture device for capturing visible image data representative of the real world environment in the vicinity of a user, the processor being configured to blend said visible image data into said three dimensional virtual environment to create and display on said screen a continuously updated mixed reality environment representative of a user's field of view; configuring said processor to extract, from an optical image captured in respect of the real world environment in the vicinity of said user, said optical label; configuring an optical label reader to decode said extracted optical label to determine said digital location and access said data or electronic media stored therein; and configuring the processor to blend image data representative of said accessed data or electronic media into said mixed reality environment displayed on said screen.
  • Figure 1 is a front perspective view of a headset for use in a control apparatus according to an exemplary embodiment of the present invention.
  • Figure 2 is a schematic block diagram of a control apparatus according to an exemplary embodiment of the present invention.
  • Figure 3 is a schematic diagram illustrating a system according to an exemplary embodiment of the present invention, in use.
  • Virtual reality systems are known, comprising a headset which, when placed over a user's eyes, creates and displays a three-dimensional virtual environment in which a user feels immersed and with which the user can interact in a manner dependent on the application.
  • For example, the virtual environment created may comprise a game zone within which a user can play a game.
  • However, where a user needs to see and interact with real-world objects, people and information sources, such systems are unsuitable.
  • Augmented and mixed reality systems have therefore been developed, wherein an image of a real world object can be captured, rendered and placed within a 3D virtual reality environment, such that it can be viewed and manipulated within that environment in the same way as virtual objects therein.
  • A system may comprise a headset 100 comprising a visor 10 having a pair of arms 12 hingedly attached at opposing sides thereof in order to allow the visor to be secured onto a user's head, over their eyes, in use, by placing the curved ends of the arms 12 over and behind the user's ears, in a manner similar to conventional spectacles.
  • While the headset is illustrated herein in the form of a visor, it may alternatively comprise a helmet for placing over a user's head, or even a pair of contact lenses or the like, for placing within the user's eyes, and the present invention is not intended to be in any way limited in this regard.
  • The headset also includes a pair of image capture devices 14 for capturing images of the environment, such image capture devices being mounted so as to be roughly aligned with a user's eyes, in use.
  • The system of the present invention further comprises a processor which is communicably connected in some way to a screen which is provided inside the visor 10.
  • Such communicable connection may be a hard wired electrical connection, in which case the processor and associated circuitry will also be mounted on the headset.
  • Alternatively, the processor may be configured to wirelessly communicate with the visor, for example by means of Bluetooth or a similar wireless communication protocol, in which case the processor need not be mounted on the headset but can instead be located remotely from the headset, with the relative allowable distance between them being dictated and limited only by the wireless communication protocol being employed.
  • Thus, the processor could be mounted on or formed integrally with the user's clothing, or instead located remotely from the user, either as a stand-alone unit or as an integral part of a larger control unit, for example.
  • A system according to an exemplary embodiment of the invention comprises, generally, a headset 100 incorporating a screen 102, a processor 104, and a pair of external digital image capture devices (only one shown) 14.
  • The user's headset 100 includes two image capture devices, as stated previously, which may be used to capture respective images of the real world environment in the vicinity of the user; data representative thereof can be blended to produce a stereoscopic depth map which enables the processor to determine depth within the captured images without any additional infrastructure being required.
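Purely as an illustrative sketch (the patent does not prescribe any particular algorithm), such a stereoscopic depth map could be derived from a rectified frame pair using OpenCV's block matcher; the camera indices, calibration and parameter values below are assumptions.

```python
# Illustrative sketch only: disparity (inverse depth) from a rectified
# stereo pair using OpenCV block matching. Camera indices and matcher
# parameters are assumptions, not taken from the patent.
import cv2

left_cam = cv2.VideoCapture(0)   # assumed index of the left headset camera
right_cam = cv2.VideoCapture(1)  # assumed index of the right headset camera

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

ok_left, left = left_cam.read()
ok_right, right = right_cam.read()
if ok_left and ok_right:
    gray_left = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
    gray_right = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)
    # Disparity is inversely proportional to depth; with the calibrated
    # baseline and focal length it could be converted to metric depth.
    disparity = stereo.compute(gray_left, gray_right)
```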
  • The processor 104 is configured to generate and display on the screen 102 a three dimensional virtual environment, and all or selected portions of the 3D images captured by the image capture devices 14 can be blended into the virtual environment being displayed on the screen.
  • The general concept of real time image blending for augmented and mixed reality is known, and several techniques have been proposed.
  • The present invention is not intended to be in any way limited in this regard. However, for completeness, one exemplary method for image blending will be briefly described.
  • First, a threshold function may be applied in order to extract the image data of a selected object from any background images. Its relative location and orientation may also be extracted and preserved by means of marker data.
  • The image and marker data is converted to a binary image, possibly by means of adaptive thresholding (although other methods are known).
  • The marker data and binary image are then transformed into a set of coordinates which match the location within the virtual environment in which they will be blended.
  • Such blending is usually performed using black and white image data.
  • Finally, colour data sampled from the source image may be backward warped, using homography, to each pixel in the resultant virtual scene. All of these computational steps require minimal processing capacity and time and can therefore be performed quickly and in real time.
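A minimal sketch of the blending steps just described, assuming OpenCV and four hand-picked corner correspondences between the marker in the source image and its target position in the virtual scene; the function name and inputs are illustrative, not the patent's implementation.

```python
# Minimal sketch of the described pipeline: adaptive thresholding to a
# binary image, a homography from marker coordinates to virtual-scene
# coordinates, and a backward warp of the sampled colour data. The corner
# correspondences are assumed inputs.
import cv2
import numpy as np

def blend_into_scene(frame_bgr, scene_bgr, src_corners, dst_corners):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Binary image via adaptive thresholding, as mentioned above.
    mask = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 11, 2)
    # Homography mapping the marker's source coordinates to the matching
    # location within the virtual environment.
    H, _ = cv2.findHomography(np.float32(src_corners), np.float32(dst_corners))
    h, w = scene_bgr.shape[:2]
    # warpPerspective samples the source through the inverse transform,
    # i.e. a backward warp of the colour data into the virtual scene.
    warped = cv2.warpPerspective(frame_bgr, H, (w, h))
    warped_mask = cv2.warpPerspective(mask, H, (w, h))
    scene_bgr[warped_mask > 0] = warped[warped_mask > 0]
    return scene_bgr
```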
  • If the selected object is moving, for example the user's own body, the corresponding image data within the virtual environment can be updated in real time.
  • The user 200 is thus provided with a three dimensional mixed reality view 202 which is representative of the surroundings within their field of view, including any people and objects with which they may be required to interact.
  • A paper document or book 204 may be required to be shared amongst a number of users.
  • An optical label 206 is provided within the pages of the book 204.
  • The optical label may take any one of a number of different forms; for example, it may comprise a barcode or QR code, but the present invention is not necessarily intended to be limited in this regard.
  • The label 206 contains, or is representative of, a digital location in which digital data or media is stored which is relevant to the resource to which it has been applied.
  • The digital data and media may take any known form, including document files, image files, media files having active content and feedback options, adverts, e-commerce portals or even games.
  • The printed document is still perfectly usable, in a conventional manner, without a mixed reality system.
  • The addition of a mixed reality headset serves to enhance the functionality of the document, by enabling the label to be identified and the associated content accessed.
  • When the labelled document is within the user's field of view, image data representative thereof will be captured by the image capture devices 14 on the user's headset.
  • Image recognition means within the processor may be configured to continuously scan images captured by the image capture devices to identify and extract optical labels in the vicinity of the user. These could then be provided as options from which the user can select in order to access any required data or media.
  • Alternatively, the user themselves may identify, from the mixed reality environment displayed on their screen, an optical label which they wish to utilise, and actuate an image extraction process in order to extract the required optical label from the captured image data. In either case, selection may be effected in one of a number of different ways.
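For a QR-code label of the kind mentioned above, the continuous scan could look like the following sketch; `cv2.QRCodeDetector` is a standard OpenCV class, while the camera index and loop structure are assumptions.

```python
# Sketch: continuously scan captured frames for QR-code optical labels.
# detectAndDecode returns the decoded payload (e.g. a URL identifying the
# digital location) plus the label's corner points, which could be used
# to highlight the label on screen as a selectable option.
import cv2

detector = cv2.QRCodeDetector()
cam = cv2.VideoCapture(0)  # assumed headset camera index

while True:
    ok, frame = cam.read()
    if not ok:
        break
    payload, points, _ = detector.detectAndDecode(frame)
    if payload:
        print("Optical label found; digital location:", payload)
        break
```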
  • The image capture devices 14 on the user's headset also capture image data representative of parts of the user's body, including their arms, hands and fingers.
  • Selection could, in some exemplary embodiments of the invention, be effected by means of predefined hand gestures in respect of the optical label to be utilised.
  • The image capturing module provided in the system described above can be used to capture video images of the user's hands, such that hand gestures provide a convenient method of selecting optical labels within the 3D mixed reality environment.
  • One relatively simple method of automated hand gesture recognition and control using captured digital video images involves the use of a database of images of predefined hand gestures and the command to which they relate.
  • An auto threshold function is first performed on the image to extract the hand from the background.
  • The wrist is then removed from the hand shape, using a so-called "blob" image superposed over the palm of the hand, to separate out the individual parts of the hand, such that the edge of the blob defines the border of the image; the parts outside of that border (i.e. the wrist) are then discarded.
  • Shape recognition software can then be used to extract and match the shape of the hand to a predefined hand gesture, and call the associated command accordingly.
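One way such a template lookup might be sketched, with the gesture database, contour-based matching and distance threshold all being assumptions rather than the patent's method:

```python
# Sketch of the described gesture lookup: auto-threshold the frame to
# isolate the hand, extract its outline, and compare it against stored
# gesture outlines using Hu-moment shape matching. The gesture database
# and match threshold are illustrative assumptions.
import cv2

def recognise_gesture(frame_bgr, gesture_db, max_distance=0.1):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu's method serves as the "auto threshold" separating hand from background.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)  # assume largest blob is the hand
    best_command, best_distance = None, max_distance
    for command, template in gesture_db.items():  # e.g. {"select": contour, ...}
        distance = cv2.matchShapes(hand, template, cv2.CONTOURS_MATCH_I1, 0.0)
        if distance < best_distance:
            best_command, best_distance = command, distance
    return best_command  # None if nothing matched closely enough
```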
  • Other methods of selecting optical labels within the 3D mixed reality environment may alternatively or additionally be employed, including other bodily movements, such as a user's head movement.
  • Gaze detection means and a movement sensor may, for example, be used to determine a movement of a user's head in the direction of a selected optical label.
  • The present invention is not necessarily intended to be limited in this regard.
  • The optical label may not necessarily be visible to the naked eye.
  • Media only detectable using wavelengths other than visible wavelengths may be used to apply the optical labels to the physical objects.
  • A multi-spectral or hyperspectral camera may be provided within the system to detect such optical labels within multi-spectral images captured thereby.
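As a loose sketch of how a label rendered in, say, a near-infrared ink might be recovered from a multispectral image cube; the band index, normalisation and decoding steps are assumptions, since the patent leaves them unspecified.

```python
# Loose sketch: recover a non-visible optical label from a multispectral
# cube of shape (height, width, bands). The band index chosen for the
# label's wavelength and the Otsu threshold are illustrative assumptions.
from typing import Optional

import cv2
import numpy as np

def extract_nonvisible_label(cube: np.ndarray, band: int = 8) -> Optional[str]:
    layer = cube[:, :, band]  # assumed band outside the visible range
    layer8 = cv2.normalize(layer, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(layer8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    payload, _, _ = cv2.QRCodeDetector().detectAndDecode(mask)
    return payload or None
```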
  • Once an optical label has been selected and decoded, the processor is configured to automatically open the hyperlink represented thereby in order to access the location in which the relevant digital data or media is stored and, once accessed and retrieved, blend the data or media into the mixed reality environment displayed on the user's screen.
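The retrieval step might then be as simple as the following standard-library sketch; treating the decoded payload as a URL, and the hand-off to the renderer, are assumptions about one possible realisation.

```python
# Sketch: treat the decoded label payload as a URL, retrieve the stored
# data or media, and hand the bytes to the display pipeline for blending.
# render_into_scene is a hypothetical placeholder, not a real API.
from urllib.request import urlopen

def fetch_linked_media(decoded_payload: str) -> bytes:
    with urlopen(decoded_payload, timeout=5) as response:
        return response.read()

# media = fetch_linked_media(payload)
# render_into_scene(media)  # hypothetical blending hand-off
```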
  • The present invention is not intended to be limited in relation to the natures and types of digital data and media that can be accessed within the system via the above-mentioned optical labels and, indeed, the invention is not intended to be limited to simply enabling a user to access and view such data or media.
  • Systems according to at least some exemplary embodiments of the present invention may be configured to enable the user to interact fully with the data or media once displayed within their mixed reality environment on the screen, depending of course on the nature and type of the data or media in question.

Abstract

A mixed reality system comprising a headset (100) for placing over a user's eyes, in use, the headset including a screen (102), the system further comprising a processor (104) configured to generate a three dimensional virtual environment, and an image capture device (106) for capturing visible image data representative of the real world environment in the vicinity of a user, the processor (104) being configured to blend said visible image data into said three dimensional virtual environment to create and display on said screen (102) a continuously updated mixed reality environment representative of a user's field of view, the processor being further configured to extract, from an optical image captured in respect of the real world environment in the vicinity of said user, an optical label representative of a digital location at which data or electronic media is stored and can be accessed, the system further comprising an optical label reader for decoding said extracted optical label to determine said digital location and access said data or electronic media stored therein, the processor being further configured to blend image data representative of said accessed data or electronic media into said mixed reality environment displayed on said screen (102). [Figure 2]
PCT/GB2016/050452 2015-02-25 2016-02-23 Interactive information system WO2016135471A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GB1503112.3 2015-02-25
EP15275045.1A EP3062218A1 (fr) 2015-02-25 2015-02-25 Interactive information system
GB1503112.3A GB2535727A (en) 2015-02-25 2015-02-25 Interactive information system
EP15275045.1 2015-02-25

Publications (1)

Publication Number Publication Date
WO2016135471A1 (fr) 2016-09-01

Family

ID=55453209

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2016/050452 WO2016135471A1 (fr) 2015-02-25 2016-02-23 Interactive information system

Country Status (1)

Country Link
WO (1) WO2016135471A1 (fr)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130069985A1 (en) * 2011-09-21 2013-03-21 Google Inc. Wearable Computer with Superimposed Controls and Instructions for External Device
US20150049113A1 (en) * 2013-08-19 2015-02-19 Qualcomm Incorporated Visual search in real world using optical see-through head mounted display with augmented reality and user interaction tracking

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHRIS CAMERON: "You Can Now Use Layar on Google Glass | Layar Blog", 18 March 2014 (2014-03-18), XP055195670, Retrieved from the Internet <URL:https://www.layar.com/news/blog/2014/03/18/you-can-now-use-layar-on-google-glass/> [retrieved on 20150615] *
JOHN BRODKIN: "ER doctors use Google Glass and QR codes to identify patients | Ars Technica", 12 March 2014 (2014-03-12), XP055195657, Retrieved from the Internet <URL:http://arstechnica.com/information-technology/2014/03/er-doctors-use-google-glass-and-qr-codes-to-identify-patients/> [retrieved on 20150615] *
LAYAR: "Layar Augmented Reality for Google Glasses", 20 March 2014 (2014-03-20), XP054975918, Retrieved from the Internet <URL:https://www.youtube.com/watch?v=rBPmG5mqWfI&feature=player_embedded> [retrieved on 20150615] *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106886287A (zh) * 2017-03-23 2017-06-23 广州三星通信技术研究有限公司 Method and apparatus for sharing a picture in a virtual reality device
CN106886287B (zh) * 2017-03-23 2020-06-26 广州三星通信技术研究有限公司 Method and apparatus for sharing a picture in a virtual reality device

Similar Documents

Publication Publication Date Title
US11587297B2 (en) Virtual content generation
US20200310532A1 (en) Systems, apparatuses, and methods for gesture recognition and interaction
US10089794B2 (en) System and method for defining an augmented reality view in a specific location
US9160993B1 (en) Using projection for visual recognition
US10186084B2 (en) Image processing to enhance variety of displayable augmented reality objects
CN102959616B (zh) Interactive reality augmentation for natural interaction
JP6801263B2 (ja) Display control program, display control method and display control device
US11302086B1 (en) Providing features of an electronic product in an augmented reality environment
US10296359B2 (en) Interactive system control apparatus and method
US20110310260A1 (en) Augmented Reality
WO2015130383A2 (fr) Biometric identification system
KR20100138863A (ko) Method of providing augmented reality and personalized content corresponding to a code on a camera-mounted terminal
US20220100265A1 (en) Dynamic configuration of user interface layouts and inputs for extended reality systems
WO2012077715A1 (fr) Content providing system using invisible information, invisible-information embedding device, recognition device, embedding method, recognition method, embedding program, and recognition program
KR20230042277A (ko) Obfuscated control interfaces for extended reality
CN107408186A (zh) Display of private content
WO2016135471A1 (fr) Interactive information system
GB2535727A (en) Interactive information system
JP2017182647A (ja) Book system linking a physical book with an electronic book
EP3062218A1 (fr) Interactive information system
GB2525304A (en) Interactive information display
Beglov Object information based on marker recognition
JP2016201050A (ja) Information processing apparatus, information processing method and program
JP6404526B2 (ja) Captured image sharing system, captured image sharing method and program
WO2018135272A1 (fr) Information processing device, display method, program, and computer-readable recording medium

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 16707514

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in the European phase

Ref document number: 16707514

Country of ref document: EP

Kind code of ref document: A1