GB2546983A - Entertainment system - Google Patents

Entertainment system

Info

Publication number
GB2546983A
Authority
GB
United Kingdom
Prior art keywords
interaction
user
virtual
content
sound
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1601842.6A
Other versions
GB201601842D0 (en)
Inventor
Gumbleton Simon
Ward-Foxton Nicholas
Van Mourik Jelle
Mauricio Carvalho Corvo Pedro
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Interactive Entertainment Europe Ltd
Original Assignee
Sony Computer Entertainment Europe Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Computer Entertainment Europe Ltd filed Critical Sony Computer Entertainment Europe Ltd
Priority to GB1601842.6A
Publication of GB201601842D0
Publication of GB2546983A
Status: Withdrawn

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/211Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/213Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/25Output arrangements for video game devices
    • A63F13/26Output arrangements for video game devices having at least one additional display device, e.g. on the game controller or outside a game booth
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/52Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525Changing parameters of virtual cameras
    • A63F13/5255Changing parameters of virtual cameras according to dedicated instructions from a player, e.g. using a secondary joystick to rotate the camera around a player's character
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0172Head mounted characterised by optical features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/212Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0138Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0141Head-up displays characterised by optical features characterised by the informative content of the display
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • G02B2027/0187Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An entertainment system for providing virtual reality content comprises a microphone 1070 operable to capture a sound input, a head-mountable display (HMD) device 1060 operable to display a virtual environment and an entertainment device 1000. An audio input analysis unit 1020 characterises sound captured by the microphone, and an interaction determination unit 1030 determines an interaction with the virtual environment in dependence upon the results of the audio input analysis and the relative positions of the user and a virtual object in the virtual environment. A content generation unit 1040 then generates content to modify the virtual environment, the content corresponding to the determined interaction. The captured sound may be a non-verbal input by a user. The sound may be characterised based upon at least one of volume, perceived pitch, frequency content and duration.

Description

ENTERTAINMENT SYSTEM
This disclosure relates to entertainment systems.
As background, an example head-mountable display (HMD) for use in an entertainment system will be discussed.
An HMD is an image or video display device which may be worn on the head or as part of a helmet. Either one eye or both eyes are provided with small electronic display devices.
Some HMDs allow a displayed image to be superimposed on a real-world view. This type of HMD can be referred to as an optical see-through HMD and generally requires the display devices to be positioned somewhere other than directly in front of the user's eyes. Some way of deflecting the displayed image so that the user may see it is then required. This might be through the use of a partially reflective mirror placed in front of the user's eyes so as to allow the user to see through the mirror but also to see a reflection of the output of the display devices. In another arrangement, disclosed in EP-A-1 731 943 and US-A-2010/0157433, the contents of which are incorporated herein by reference, a waveguide arrangement employing total internal reflection is used to convey a displayed image from a display device disposed to the side of the user's head so that the user may see the displayed image but still see a view of the real world through the waveguide. Once again, in either of these types of arrangement, a virtual image of the display is created (using known techniques) so that the user sees the virtual image at an appropriate size and distance to allow relaxed viewing. For example, even though the physical display device may be tiny (for example, 10 mm x 10 mm) and may be just a few millimetres from the user's eye, the virtual image may be arranged so as to be perceived by the user at a distance of (for example) 20 m from the user, having a perceived size of 5 m x 5m.
Other HMDs, however, allow the user only to see the displayed images, which is to say that they obscure the real world environment surrounding the user. This type of HMD can position the actual display devices in front of the user's eyes, in association with appropriate lenses which place a virtual displayed image at a suitable distance for the user to focus in a relaxed manner - for example, at a similar virtual distance and perceived size as the optical see-through HMD described above. This type of device might be used for viewing movies or similar recorded content, or for viewing so-called virtual reality content representing a virtual space surrounding the user. It is of course however possible to display a real-world view on this type of HMD, for example by using a forward-facing camera to generate images for display on the display devices.
Although the original development of HMDs was perhaps driven by the military and professional applications of these devices, HMDs are becoming more popular for use by casual users in, for example, computer game or domestic computing applications.
An HMD such as that described above may be used to display a virtual reality environment to a user, for example in a gaming, or other entertainment, context. A user may be able to interact with such an environment in a number of ways, such as changing their position (which may be sensed by tracking the location of the HMD) or using a peripheral that enables inputs such as the Dualshock®4 controller that allows a user to interact with a Sony® PlayStation® 4 (which is an example of a processing device that may be used to provide content to the HMD).
However, a device that only allows a user to interact with a provided virtual environment using inputs such as a change of position or controller inputs may limit the number of possible interactions that may be experienced by the user in an immersive fashion. The user of an HMD generally seeks an immersive virtual reality experience, and so there is a need for alternative input methods.
Arrangements according to the present disclosure are provided in order to alleviate the problem of a lack of suitable inputs for an HMD system.
This disclosure is defined by claim 1.
Further respective aspects and features of the disclosure are defined in the appended claims.
Embodiments of the disclosure will now be described with reference to the accompanying drawings, in which:
Figure 1 schematically illustrates an HMD worn by a user;
Figure 2 is a schematic plan view of an HMD;
Figure 3 schematically illustrates the formation of a virtual image by an HMD;
Figure 4 schematically illustrates another type of display for use in an HMD;
Figure 5 schematically illustrates a pair of stereoscopic images;
Figure 6 schematically illustrates a change of view of a user of an HMD;
Figures 7a and 7b schematically illustrate HMDs with motion sensing;
Figure 8 schematically illustrates a position sensor based on optical flow detection;
Figure 9 schematically illustrates the generation of images in response to HMD position or motion detection;
Figure 10 schematically illustrates a system for processing audio inputs;
Figure 11 schematically illustrates a user’s position in a virtual reality environment;
Figure 12 schematically illustrates a method for providing interaction with a virtual object.
Referring now to Figure 1, a user 10 is wearing an HMD 20 on the user's head 30. The HMD comprises a frame 40, in this example formed of a rear strap and a top strap, and a display portion 50.
The HMD of Figure 1 completely obscures the user's view of the surrounding environment. All that the user can see is the pair of images displayed within the HMD.
The HMD has associated headphone earpieces 60 which fit into the user's left and right ears 70. The earpieces 60 replay an audio signal provided from an external source, which may be the same as the video signal source which provides the video signal for display to the user's eyes.
The HMD 20 may also comprise a microphone 90, although in some embodiments it is more appropriate that a microphone is located elsewhere such as at an entertainment device associated with content provided to the HMD 20 or at a breakout box associated with this entertainment device. In other examples the microphone could be at a distal end of an extension bar or member such that, when the HMD is being worn, the microphone lies close to the wearer’s mouth.
In operation, a video signal is provided for display by the HMD. This could be provided by an external video signal source 80 such as a video games machine or data processing apparatus (such as a personal computer), in which case the signals could be transmitted to the HMD by a wired or a wireless connection. Examples of suitable wireless connections include Bluetooth® connections. Audio signals for the earpieces 60 can be carried by the same connection. Similarly, any control signals passed from the HMD to the video (audio) signal source may be carried by the same connection.
Accordingly, the arrangement of Figure 1 provides an example of a head-mountable display system comprising a frame to be mounted onto an observer’s head, the frame defining one or two eye display positions which, in use, are positioned in front of a respective eye of the observer and a display element mounted with respect to each of the eye display positions, the display element providing a virtual image of a video display of a video signal from a video signal source to that eye of the observer.
Figure 1 shows just one example of an HMD. Other formats are possible: for example an HMD could use a frame more similar to that associated with conventional eyeglasses, namely a substantially horizontal leg extending back from the display portion to the top rear of the user's ear, possibly curling down behind the ear. In other examples, the user's view of the external environment may not in fact be entirely obscured; the displayed images could be arranged so as to be superposed (from the user's point of view) over the external environment. An example of such an arrangement will be described below with reference to Figure 4.
In the example of Figure 1, a separate respective display is provided for each of the user's eyes. A schematic plan view of how this is achieved is provided as Figure 2, which illustrates the positions 100 of the user's eyes and the relative position 110 of the user's nose. The display portion 50, in schematic form, comprises an exterior shield 120 to mask ambient light from the user's eyes and an internal shield 130 which prevents one eye from seeing the display intended for the other eye. The combination of the user's face, the exterior shield 120 and the interior shield 130 form two compartments 140, one for each eye. In each of the compartments there is provided a display element 150 and one or more optical elements 160. The way in which the display element and the optical element(s) cooperate to provide a display to the user will be described with reference to Figure 3.
Referring to Figure 3, the display element 150 generates a displayed image which is (in this example) refracted by the optical elements 160 (shown schematically as a convex lens but which could include compound lenses or other elements) so as to generate a virtual image 170 which appears to the user to be larger than and significantly further away than the real image generated by the display element 150. As an example, the virtual image may have an apparent image size (image diagonal) of more than 1 m and may be disposed at a distance of more than 1 m from the user's eye (or from the frame of the HMD). In general terms, depending on the purpose of the HMD, it is desirable to have the virtual image disposed a significant distance from the user. For example, if the HMD is for viewing movies or the like, it is desirable that the user's eyes are relaxed during such viewing, which requires a distance (to the virtual image) of at least several metres. In Figure 3, solid lines (such as the line 180) are used to denote real optical rays, whereas broken lines (such as the line 190) are used to denote virtual rays.
An alternative arrangement is shown in Figure 4. This arrangement may be used where it is desired that the user's view of the external environment is not entirely obscured. However, it is also applicable to HMDs in which the user's external view is wholly obscured. In the arrangement of Figure 4, the display element 150 and optical elements 200 cooperate to provide an image which is projected onto a mirror 210, which deflects the image towards the user's eye position 220. The user perceives a virtual image to be located at a position 230 which is in front of the user and at a suitable distance from the user.
In the case of an HMD in which the user's view of the external surroundings is entirely obscured, the mirror 210 can be a substantially 100% reflective mirror. The arrangement of Figure 4 then has the advantage that the display element and optical elements can be located closer to the centre of gravity of the user's head and to the side of the user's eyes, which can produce a less bulky HMD for the user to wear. Alternatively, if the HMD is designed not to completely obscure the user's view of the external environment, the mirror 210 can be made partially reflective so that the user sees the external environment, through the mirror 210, with the virtual image superposed over the real external environment.
In the case where separate respective displays are provided for each of the user's eyes, it is possible to display stereoscopic images. An example of a pair of stereoscopic images for display to the left and right eyes is shown in Figure 5. The images exhibit a lateral displacement relative to one another, with the displacement of image features depending upon the (real or simulated) lateral separation of the cameras by which the images were captured, the angular convergence of the cameras and the (real or simulated) distance of each image feature from the camera position.
Note that the lateral displacements in Figure 5 (and those in Figure 15 to be described below) could in fact be the other way round, which is to say that the left eye image as drawn could in fact be the right eye image, and the right eye image as drawn could in fact be the left eye image. This is because some stereoscopic displays tend to shift objects to the right in the right eye image and to the left in the left eye image, so as to simulate the idea that the user is looking through a stereoscopic window onto the scene beyond. However, some HMDs use the arrangement shown in Figure 5 because this gives the impression to the user that the user is viewing the scene through a pair of binoculars. The choice between these two arrangements is at the discretion of the system designer.
In some situations, an HMD may be used simply to view movies and the like. In this case, there is no change required to the apparent viewpoint of the displayed images as the user turns the user's head, for example from side to side. In other uses, however, such as those associated with virtual reality (VR) or augmented reality (AR) systems, the user's viewpoint needs to track movements with respect to a real or virtual space in which the user is located.
This tracking is carried out by detecting motion of the HMD and varying the apparent viewpoint of the displayed images so that the apparent viewpoint tracks the motion.
Figure 6 schematically illustrates the effect of a user head movement in a VR or AR system.
Referring to Figure 6, a virtual environment is represented by a (virtual) spherical shell 250 around a user. Because of the need to represent this arrangement on a two-dimensional paper drawing, the shell is represented by a part of a circle, at a distance from the user equivalent to the separation of the displayed virtual image from the user. A user is initially at a first position 260 and is directed towards a portion 270 of the virtual environment. It is this portion 270 which is represented in the images displayed on the display elements 150 of the user's HMD.
Consider the situation in which the user then moves his head to a new position and/or orientation 280. In order to maintain the correct sense of the virtual reality or augmented reality display, the displayed portion of the virtual environment also moves so that, at the end of the movement, a new portion 290 is displayed by the HMD.
So, in this arrangement, the apparent viewpoint within the virtual environment moves with the head movement. If the head rotates to the right side, for example, as shown in Figure 6, the apparent viewpoint also moves to the right from the user's point of view. If the situation is considered from the aspect of a displayed object, such as a displayed object 300, this will effectively move in the opposite direction to the head movement. So, if the head movement is to the right, the apparent viewpoint moves to the right but an object such as the displayed object 300 which is stationary in the virtual environment will move towards the left of the displayed image and eventually will disappear off the left-hand side of the displayed image, for the simple reason that the displayed portion of the virtual environment has moved to the right whereas the displayed object 300 has not moved in the virtual environment. Similar considerations apply to the up-down component of any motion.
Figures 7a and 7b schematically illustrate HMDs with motion sensing. The two drawings are in a similar format to that shown in Figure 2. That is to say, the drawings are schematic plan views of an HMD, in which the display element 150 and optical elements 160 are represented by a simple box shape. Many features of Figure 2 are not shown, for clarity of the diagrams. Both drawings show examples of HMDs with a motion detector for detecting motion of the observer’s head.
In Figure 7a, a forward-facing camera 320 is provided on the front of the HMD. This does not necessarily provide images for display to the user (although it could do so in an augmented reality arrangement). Instead, its primary purpose in the present embodiments is to allow motion sensing. A technique for using images captured by the camera 320 for motion sensing will be described below in connection with Figure 8. In these arrangements, the motion detector comprises a camera mounted so as to move with the frame; and an image comparator operable to compare successive images captured by the camera so as to detect inter-image motion.
Figure 7b makes use of a hardware motion detector 330. This can be mounted anywhere within or on the HMD. Examples of suitable hardware motion detectors are piezoelectric accelerometers or optical fibre gyroscopes. It will of course be appreciated that both hardware motion detection and camera-based motion detection can be used in the same device, in which case one sensing arrangement could be used as a backup when the other one is unavailable, or one sensing arrangement (such as the camera) could provide data for changing the apparent viewpoint of the displayed images, whereas the other (such as an accelerometer) could provide data for image stabilisation.
Figure 8 schematically illustrates one example of motion detection using the camera 320 of Figure 7a.
The camera 320 is a video camera, capturing images at an image capture rate of, for example, 25 images per second. As each image is captured, it is passed to an image store 400 for storage and is also compared, by an image comparator 410, with a preceding image retrieved from the image store. The comparison uses known block matching techniques (so-called “optical flow” detection) to establish whether substantially the whole image captured by the camera 320 has moved since the time at which the preceding image was captured. Localised motion might indicate moving objects within the field of view of the camera 320, but global motion of substantially the whole image would tend to indicate motion of the camera rather than of individual features in the captured scene, and in the present case because the camera is mounted on the HMD, motion of the camera corresponds to motion of the HMD and in turn to motion of the user’s head.
The displacement between one image and the next, as detected by the image comparator 410, is converted to a signal indicative of motion by a motion detector 420. If required, the motion signal is converted to a position signal by an integrator 430.
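By way of illustration only, the following sketch (in Python, using numpy) shows one possible way of estimating a single global image shift between successive camera frames by exhaustive block matching, and of integrating the resulting per-frame motion signal into a position signal in the manner of the motion detector 420 and integrator 430. The search window size, the scaling constant and the class names are assumptions made for the example and are not taken from the present disclosure.

```python
import numpy as np

def estimate_global_shift(prev_frame, curr_frame, search=8):
    """Estimate a single (dx, dy) translation between two grayscale frames by
    exhaustive block matching over a small search window, i.e. a simplified
    form of the 'optical flow' comparison performed by the image comparator 410."""
    h, w = prev_frame.shape
    block = curr_frame[search:h - search, search:w - search].astype(np.int32)
    best_sad, best_offset = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ref = prev_frame[search + dy:h - search + dy,
                             search + dx:w - search + dx].astype(np.int32)
            sad = np.abs(ref - block).sum()       # sum of absolute differences
            if best_sad is None or sad < best_sad:
                best_sad, best_offset = sad, (dx, dy)
    return best_offset                            # apparent image motion, pixels per frame

class MotionIntegrator:
    """Accumulates per-frame displacement into a running position estimate,
    mirroring the motion detector 420 feeding the integrator 430."""
    def __init__(self):
        self.position = np.zeros(2)

    def update(self, shift, pixels_per_degree=10.0):
        # Global image motion is opposite in sense to head motion; the scale
        # factor is an assumed camera calibration constant.
        self.position -= np.asarray(shift, dtype=float) / pixels_per_degree
        return self.position
```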
As mentioned above, as an alternative to, or in addition to, the detection of motion by detecting inter-image motion between images captured by a video camera associated with the HMD, the HMD can detect head motion using a mechanical or solid state detector 330 such as an accelerometer. This can in fact give a faster response in respect of the indication of motion, given that the response time of the video-based system is at best the reciprocal of the image capture rate. In some instances, therefore, the detector 330 can be better suited for use with higher frequency motion detection. However, in other instances, for example if a high image rate camera is used (such as a 200 Hz capture rate camera), a camera-based system may be more appropriate. In terms of Figure 8, the detector 330 could take the place of the camera 320, the image store 400 and the comparator 410, so as to provide an input directly to the motion detector 420. Or the detector 330 could take the place of the motion detector 420 as well, directly providing an output signal indicative of physical motion.
Other position or motion detecting techniques are of course possible. For example, a mechanical arrangement by which the HMD is linked by a moveable pantograph arm to a fixed point (for example, on a data processing device or on a piece of furniture) may be used, with position and orientation sensors detecting changes in the deflection of the pantograph arm. In other embodiments, a system of one or more transmitters and receivers, mounted on the HMD and on a fixed point, can be used to allow detection of the position and orientation of the HMD by triangulation techniques. For example, the HMD could carry one or more directional transmitters, and an array of receivers associated with known or fixed points could detect the relative signals from the one or more transmitters. Or the transmitters could be fixed and the receivers could be on the HMD. Examples of transmitters and receivers include infra-red transducers, ultrasonic transducers and radio frequency transducers. The radio frequency transducers could have a dual purpose, in that they could also form part of a radio frequency data link to and/or from the HMD, such as a Bluetooth® link.
Figure 9 schematically illustrates image processing carried out in response to a detected position or change in position of the HMD.
As mentioned above in connection with Figure 6, in some applications such as virtual reality and augmented reality arrangements, the apparent viewpoint of the video being displayed to the user of the HMD is changed in response to a change in actual position or orientation of the user’s head.
With reference to Figure 9, this is achieved by a motion sensor 450 (such as the arrangement of Figure 8 and/or the motion detector 330 of Figure 7b) supplying data indicative of motion and/or current position to a required image position detector 460, which translates the actual position of the HMD into data defining the required image for display. An image generator 480 accesses image data stored in an image store 470 if required, and generates the required images from the appropriate viewpoint for display by the HMD. The external video signal source can provide the functionality of the image generator 480 and act as a controller to compensate for the lower frequency component of motion of the observer’s head by changing the viewpoint of the displayed image so as to move the displayed image in the opposite direction to that of the detected motion so as to change the apparent viewpoint of the observer in the direction of the detected motion. The image generator 480 may act on the basis of metadata such as so-called view matrix data.
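As an illustrative sketch of the view-matrix handling mentioned above, the following Python fragment builds a simple rotation-only view matrix from a detected head yaw and pitch, so that scene geometry is rotated by the inverse of the head rotation and the displayed image therefore moves opposite to the detected head motion. The decomposition into yaw and pitch, and the function name, are assumptions made for the purpose of the example.

```python
import numpy as np

def yaw_pitch_to_view_matrix(yaw, pitch):
    """Build a 3x3 rotation-only view matrix from detected head yaw and pitch
    (in radians). Scene geometry is rotated by the inverse (transpose) of the
    head rotation, so the displayed image moves opposite to the detected head
    motion and the apparent viewpoint follows the head."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    rot_yaw = np.array([[cy, 0.0, sy],
                        [0.0, 1.0, 0.0],
                        [-sy, 0.0, cy]])
    rot_pitch = np.array([[1.0, 0.0, 0.0],
                          [0.0, cp, -sp],
                          [0.0, sp, cp]])
    head_rotation = rot_yaw @ rot_pitch     # head orientation in world space
    return head_rotation.T                  # view matrix applied to the scene
```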
The change in position of the HMD may also be used in order to provide an input to the processing device that provides content to the HMD. For example, a user’s position in a generated virtual environment could be modified in response to the detected position, or an interaction with an object may be performed by detecting the direction of the user’s gaze. In the present disclosure, it is considered that such an input may be combined with audio inputs in order to provide a greater range of possible interactions whilst mitigating the problem of a loss of immersion. In some cases, such a combined input may increase a user’s sense of immersion.
Figure 10 schematically illustrates a system for providing such inputs in accordance with embodiments of the present disclosure.
In Figure 10 an entertainment device 1000 is in communication with an HMD 1060 (a device such as the HMD 20 of Figure 1, for example) via a connection 1050, and the system also comprises a microphone 1070 that may be coupled to either the HMD 1060 or the entertainment device 1000 via connections 1080a or 1080b respectively. It should be understood that the microphone 1070 may be a free-standing piece of equipment that is separate from any other part of the system or it may be a component of either the HMD 1060 or entertainment device 1000; it may be the case that a microphone is located at each of these devices. The (or a) microphone 1070 may also (or instead) be located in any other associated piece of equipment such as a game controller or a breakout box associated with an entertainment system. Any of the connections illustrated may be implemented either as wired or wireless connections.
The entertainment device 1000 comprises a plurality of processing units representing different functions that are performed by the device. Each of these units may be embodied in a separate processing unit, or the units may be representative of functions performed by a single processing unit. In some embodiments of the current disclosure, the entertainment device 1000 may be formed as a part of the HMD 1060 such that a separate device is not used (or at least is not required) to produce the content provided to the user via the HMD 1060. So, the disclosure includes embodiments in which some, all or none of the functionality of the entertainment device 1000 is provided by circuitry or other processing physically located at the HMD.
An HMD position detection unit 1010 performs processing in order to detect the position of the HMD. The HMD position detection unit 1010 may, for example, use input data from cameras associated with the entertainment device that are used to image the HMD, although any method of position detection could be used instead of or in addition to this, such as those described with reference to Figures 7a and 7b.
An audio input analysis unit 1020 is operable to characterise sound captured by the microphone. The audio input analysis unit 1020 receives an audio input from a microphone 1070 and performs processing so as to characterise the input in terms of at least one of its properties; example properties include duration, frequency content, whether the input is voiced or unvoiced, perceived pitch, spectral distribution and volume. The audio input analysis unit 1020 may further perform filtering on the audio input, for example a band-pass filtering to obtain sound between two frequency values. The audio input analysis unit 1020 is operable to output the results of its analysis to either a memory module (for example a hard drive that is located in the entertainment device 1000, but not shown) or directly to the interaction determination unit 1030.
Here, the term “perceived pitch” indicates a dominant pitch (in the case of a complex sound) as it would be perceived by a typical listener. A mapping between frequency content and perceived pitch can be pre-stored and maintained by the audio input analysis unit. So, in one sense, perceived pitch is itself a characterisation of the captured sound.
The characterisation of the audio input from the microphone 1070 comprises a detection of the properties of the sound and a subsequent classification of the sound based upon the detected properties. A classification may comprise the comparing of a property of the sound to a particular threshold or range, for example, such as a sound being classified as ‘loud’ if it exceeds a certain threshold volume or lies in a range of volumes. Alternatively, a sound that comprises a particular spectral distribution may be classified as being a particular type of input sound, such as a blowing sound, based upon the detection. In other words, the classification of a sound corresponds to a detection of the properties of the sound and then a comparison of the detected properties to predetermined criteria by which allowed input sounds are defined.
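Purely by way of example, a minimal Python sketch of such a characterisation and classification might compute volume, duration, a dominant ('perceived') pitch and the relative energy in selected frequency bands, and then compare these properties against predetermined criteria. All threshold values, band edges and class names below are illustrative assumptions rather than values specified by the present disclosure.

```python
import numpy as np

def characterise(samples, sample_rate=48000):
    """Characterise a captured sound (a 1-D numpy array of float samples) in
    terms of the properties mentioned above: volume (RMS), duration, dominant
    ('perceived') pitch and relative energy in selected frequency bands."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    total = max(float(spectrum.sum()), 1e-12)

    def band_energy(low, high):
        return float(spectrum[(freqs >= low) & (freqs < high)].sum()) / total

    return {
        "volume": float(np.sqrt(np.mean(samples ** 2))),
        "duration": len(samples) / sample_rate,
        "perceived_pitch": float(freqs[np.argmax(spectrum)]),
        "band_500_1k": band_energy(500.0, 1000.0),
        "band_2k_4k": band_energy(2000.0, 4000.0),
    }

def classify(features, loud_threshold=0.05, min_duration=0.3):
    """Compare detected properties against predetermined criteria to decide
    which allowed input sound, if any, has been captured."""
    if features["volume"] < loud_threshold or features["duration"] < min_duration:
        return "ignore"                  # too quiet or too short to be intentional
    if features["band_2k_4k"] > 0.4:
        return "blow"                    # blowing assumed to emphasise 2-4 kHz
    return "loud"
```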
An interaction determination unit 1030 is operable to determine an interaction with the virtual environment in dependence upon the results of the audio input analysis and the relative positions of the user and a virtual object in the virtual environment. In other words, the interaction determination unit 1030 is operable to make a determination as to whether an interaction with an object in the virtual reality environment should be performed. Such a determination may be performed based upon one or more inputs, including that from the audio input analysis unit 1020 and the results of a detection by the HMD position detection unit 1010. The interaction determination unit 1030 may also examine a current game state to determine an appropriate interaction; alternatively, or in addition to this, the interaction determination unit 1030 may also access information about a previous game state to determine an appropriate interaction. An example of an examination of the game state includes the measuring of the position of virtual objects in the virtual environment. The results of these inputs may be compared to stored interaction information that links an action in the virtual environment to that of a user of the HMD.
If the interaction determination unit 1030 determines that an action in the virtual environment should be performed, a content generation unit 1040 is operable to generate content to modify the virtual environment, the content corresponding to the determined interaction. In embodiments the content generation unit 1040 is operable to generate at least video content to be displayed to the user, the video content being responsive to the detected user inputs. Audio content may also be generated to accompany the video content, and any appropriate haptic feedback may also be provided to the user via peripherals.
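The cooperation between the interaction determination unit 1030 and the content generation unit 1040 may be illustrated by the following hypothetical sketch, in which an interaction is selected from the classified sound and the distance between the user and a virtual object, and corresponding content updates are then produced. The data structures, the 0.3 m range and the interaction names are assumptions made only for the example.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    name: str
    position: tuple        # (x, y, z) position in the virtual environment
    interactions: dict     # sound class -> interaction name

def determine_interaction(sound_class, user_position, obj, max_range=0.3):
    """Sketch of the interaction determination unit 1030: an interaction is
    returned only if the classified sound maps onto an interaction offered
    by the object and the object lies within range of the user."""
    offsets = [a - b for a, b in zip(obj.position, user_position)]
    distance = sum(d * d for d in offsets) ** 0.5
    if distance > max_range:
        return None                      # object outside the interaction area
    return obj.interactions.get(sound_class)

def generate_content(interaction):
    """Stand-in for the content generation unit 1040: return the video and
    audio updates that modify the virtual environment for this interaction."""
    if interaction is None:
        return []
    return [("video", interaction), ("audio", interaction + "_sound")]

# Example usage with a hypothetical virtual straw:
straw = VirtualObject("straw", (0.0, 1.6, 0.1), {"blow": "bubbles", "suck": "drink"})
updates = generate_content(determine_interaction("blow", (0.0, 1.6, 0.0), straw))
```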
The HMD 1060 comprises at least a display in order to present visual content to a user, and may comprise any number of other optional features in line with the above description of an HMD such as headphones in order to provide audio output to a user. The HMD is operable to display a virtual environment to the user, or at least to present virtual objects that overlay a real environment from the user’s perspective.
The microphone 1070 is operable to capture a sound input (such as a user’s voice) and provide it to the entertainment device 1000, either directly through connection 1080b or via the HMD 1060 through the connection 1080a, for processing by the audio input analysis unit 1020. As noted, the microphone may be used to capture the sound of a user’s vocalisation, or the sound captured by the microphone may comprise a non-vocal input by a user.
Figure 11 is a schematic illustration of a two-dimensional representation of a user’s position in a virtual reality environment. Generally a virtual reality environment will be three-dimensional, but a two-dimensional representation is discussed here for clarity. A boundary 1100 represents the extent of the virtual reality environment; this could be the viewable environment, the area of a virtual environment that may be occupied by a user, or the surface of a sphere (or other shape) upon which generated images are currently rendered. In any case, the user occupies a region 1110 in the virtual environment. This is taken to mean that a viewpoint is presented to the user as if they were occupying the region 1110 in the virtual environment, as the user is of course not physically present in the virtual environment.
An area (or volume, in a three-dimensional application) of interaction 1120 surrounds the position of the user in the virtual environment. This is a region of space which a virtual object must be within in order for interaction with the user to occur. Any virtual object in the virtual environment may have an associated interaction area in which it must be located in order for a user to perform a specified interaction with the virtual object. The properties of this area could be chosen to correspond to a physical characteristic of the user, such as the maximum reach of the user, or an area that corresponds to a user’s in-game characteristics for example. In embodiments this area may vary in size or location (for example) when considering different objects that may be interacted with by the user or for different potential interactions; this will be discussed in more detail below.
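One possible, purely illustrative representation of such interaction areas is sketched below: each area is modelled as a sphere with its own radius and its own set of permitted interactions, so that the size of the area and the interactions available within it can differ per object and per potential interaction. The spherical shape, the radii and the interaction names are assumptions made for the example.

```python
import math
from dataclasses import dataclass

@dataclass
class InteractionArea:
    """A region of the virtual environment associated with a virtual object.
    A sphere is assumed purely for illustration; the radius and the set of
    permitted interactions may differ per object and per interaction."""
    centre: tuple
    radius: float
    allowed: tuple          # interactions available while the object is inside

    def contains(self, point):
        return math.dist(self.centre, point) <= self.radius

# Illustrative areas: a small region near the expected mouth position for
# mouth-based interactions, and a larger region matching the user's reach.
near_mouth = InteractionArea(centre=(0.0, 1.6, 0.05), radius=0.08, allowed=("sip", "blow"))
within_reach = InteractionArea(centre=(0.0, 1.2, 0.0), radius=0.9, allowed=("grab",))
```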
It should also be noted that in some embodiments of the present arrangement an augmented reality application is provided. In such an embodiment the user is able to view the real surrounding environment, and virtual objects are generated by the entertainment device 1000 and displayed to the user via the HMD 1060 in a manner so as to overlay real-world features. The user would then be able to similarly interact with virtual objects in a real environment in the manner described below.
Figure 12 schematically illustrates a method for implementing interactions in response to audio inputs.
At a step 1200 the position of the HMD associated with a user is detected. This is taken to be indicative of a user’s position, and is used to determine a user’s position in the virtual environment. The orientation of the HMD may also be detected at this step, in order to determine the portion of the virtual environment that is visible to the user.
At a step 1210 a sound input is captured by a microphone (such as the microphone 1070 of Figure 10). A sound input may comprise any sound, although those generated by the user are of greater use in the present arrangement as objects in the virtual environment should react to such sounds as these are likely to be intentional indications of a desired interaction. User generated sounds may comprise key words or phrases that are associated with actions, or they may be non-verbal such as humming, blowing or breathing, or non-vocal such as clapping.
At a step 1220 a target in the virtual environment is identified. This may be performed in a number of ways, for example by voice command (such as the user naming a virtual object as the target), gaze detection, proximity in the virtual environment, being picked up by a user’s avatar, or any other appropriate method. In some embodiments, a target may only be identified if it is within the area of interaction as described above with reference to Figure 11.
At a step 1230 the captured sound input is analysed in order to characterise the input sound. As described above, this may take the form of a measurement of characteristics of the sound such as pitch or volume.
At a step 1240 an interaction with the virtual environment (or a virtual object in the virtual environment) that corresponds to the results of the analysis is identified. In this step, the results of the audio analysis are compared to possible interactions with the identified target in the virtual environment. A closest possible match from the set of possible interactions may then be selected and implemented. Such an interaction may be triggered once in response to an evaluation of a captured input sound, or there may be a continuous evaluation based upon the duration of the captured input sound (for example) depending on the interaction that is determined. For some interactions, such as blowing out a virtual candle, it may only be intended that a single interaction need be performed and only a single measurement will be needed (for example, is the captured sound input indicative of a blowing of sufficient force to blow out a candle). With regards to other interactions, a continuous evaluation may be implemented; an example of this is the exhalation of smoke example that is discussed below in which the captured sound input may be evaluated in real-time. This may be advantageous in that the intensity of the exhalation may vary over the duration of the sound, and the content generated in response to the captured input sound may be provided both more responsively and at a time that is closer to that at which the sound is captured.
At a step 1250 content corresponding to the identified interaction is generated by the entertainment device. This may comprise visual content, such as the movement or modification of a virtual object in the virtual environment, audio content, such as a sound associated with the target object, haptic feedback or any other content that may make it apparent to a user that an interaction has taken place in response to their action.
At a step 1260 the generated content is presented to the user. For example, visual content may be provided to the user via displays associated with the HMD and audio content may be provided to the user via headphones that are also associated with the HMD.
It should be noted that the above steps may not each be performed; for example, the position of the HMD may already be known and thus rather than detecting the position it could be read from a stored data file. Equally, the steps of the described method may be carried out in an order other than that illustrated. For example, the target identification may be carried out before the capture of the sound, as may the identification of potential interactions. Similarly the target identification may be performed much later, such that analysis of the captured sound and the identification of corresponding potential interactions may be used to identify the target object.
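The distinction drawn above between a single evaluation of a captured sound and a continuous evaluation over its duration could, for example, be sketched as follows; the thresholds, the gain factor and the feature names (which follow the earlier characterisation sketch) are illustrative assumptions.

```python
def one_shot_interaction(features, force_threshold=0.1):
    """Single evaluation: e.g. was a captured blow strong enough to blow out
    a virtual candle? 'features' follows the earlier characterisation sketch;
    the threshold is an assumed value."""
    return features["volume"] >= force_threshold and features["band_2k_4k"] > 0.4

def continuous_interaction(frame_volumes, gain=5.0):
    """Continuous evaluation: yield an intensity value per audio frame so
    that generated content (smoke density, say) can track the input sound in
    real time as its intensity varies over the duration of the sound."""
    for volume in frame_volumes:
        yield min(1.0, volume * gain)
```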
Examples of the use of arrangements according to embodiments of the present disclosure are provided below.
In a first example a user is provided with a virtual environment such as a restaurant in which the user may interact with a virtual drink that is served in a glass with a straw. In such a scenario, a user is presented with the opportunity to ‘drink’ the virtual drink by sucking the straw (although it is of course apparent that as there is no corresponding real drink the user will not experience a liquid in their mouth after sucking) or to create bubbles in the virtual drink by blowing into the straw.
In earlier examples of such a scenario in a gaming context, the user would generally be invited to press a button on a gamepad in order to perform either action. In the present arrangement however, a more immersive and intuitive interaction may be provided.
At a first step (corresponding to step 1200 of Figure 12) the position of the user in the virtual environment is determined, for example based upon a detection of the position of the HMD. When the straw is brought to the area which a user’s mouth would be expected to occupy, either as part of a predetermined script associated with the scenario or by the user’s own action (such as by interacting with a peripheral), it enters an area of interaction 1120. The area of interaction in this case may be small; this corresponds to the user’s expectation that the straw would have to be in their mouth before they could use it to drink in the equivalent real-world scenario. However, the area of interaction may be larger than the user’s expected real-world equivalent, in order both to account for different head sizes/proportions and to prevent frustration for the user if they are unable to locate the area of interaction. By moving the straw into the area of interaction 1120, the straw is identified as the target for interaction in the virtual environment (step 1220).
Sounds are then captured (step 1210) by a microphone and analysed to determine whether or not interaction with the virtual object should be performed. For example, if the user makes a sucking noise the virtual drink could be depleted, if the user coughs it could be spilt or if the user makes a blowing noise then bubbles could appear in the virtual drink. The range of interactions could be selected based upon ease of differentiation between the corresponding inputs, ease of implementation of the virtual interaction that results or any other factor. Below it is considered that an arrangement is provided that is able to differentiate between a blowing and a sucking noise, although either of these interactions could be changed for other interactions (such as the cough to spill the drink) if that were not the case.
Firstly, captured sounds should be analysed (step 1230) to determine whether a threshold volume is met or exceeded; an intentional blowing or sucking will be louder than breathing. If the threshold volume is met or exceeded, then a frequency analysis may be performed to recognise the sound. For example, a band-pass filter could be used to differentiate between blowing and sucking. In this case, the sound may be filtered such that only the portion between two frequencies is analysed as this frequency band may be considered to be a reliable indicator; in one example the input sound is filtered such that only the sound between 2 kHz and 4 kHz is considered, although any appropriate range may be used to identify an intended interaction. Frequency analysis may be performed either by noting sound at a particular number of frequencies that are identified as being related to a particular sound, or by a comparison of frequency spectra with a stored sample. The stored sample may be an aggregation of a number of recorded sounds associated with the interaction, for example.
The duration of the blowing or sucking is another factor that may be considered when considering if an interaction is performed. This may be measured in units of seconds or elapsed frames of video content, for example, or any other appropriate measurement of time. If the blowing is not sustained for a threshold amount of time it may be considered unintentional. In such a case, the interaction would not be performed as it is not clear that the user intended to interact with the virtual object in that manner. If the threshold duration is met or exceeded by the sound, then the interaction between the user and the virtual object that corresponds to the characteristics of the input sound is performed. For example, in this scenario the intention to perform a ‘suck’ or ‘blow’ interaction may be derived from the characteristics of the input sound.
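By way of illustration, the analysis described above for the straw example might be sketched as follows: the input is gated on volume and duration, band-limited to the 2 kHz to 4 kHz range, and then matched against stored reference spectra (for example aggregated recordings of 'blow' and 'suck' sounds prepared with the same band limits and window length). The numeric thresholds and function names are assumptions made for the example only.

```python
import numpy as np

def band_limited_spectrum(samples, sample_rate, low=2000.0, high=4000.0):
    """Return a normalised magnitude spectrum restricted to the 2-4 kHz band
    discussed above (band edges are parameters, so other ranges may be used)."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    band = spectrum[(freqs >= low) & (freqs <= high)]
    return band / max(np.linalg.norm(band), 1e-12)

def classify_blow_or_suck(samples, sample_rate, references,
                          volume_threshold=0.05, min_duration=0.5):
    """Gate on volume and duration, then pick whichever stored reference
    spectrum ('blow' or 'suck', computed with the same band limits and window
    length) best matches the band-limited input."""
    volume = float(np.sqrt(np.mean(samples ** 2)))
    duration = len(samples) / sample_rate
    if volume < volume_threshold or duration < min_duration:
        return None                      # treated as unintentional
    band = band_limited_spectrum(samples, sample_rate)
    scores = {name: float(band @ ref) for name, ref in references.items()}
    return max(scores, key=scores.get)   # 'blow' or 'suck'
```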
If a ‘suck’ interaction is identified (step 1240), content is generated (step 1250) so as to provide appropriate feedback to the user via the virtual environment. Possible content could include modification of the appearance of the virtual cup to indicate that the amount of liquid in the cup has been reduced as a result of drinking, or a slurping noise that may be associated with such an action. If a ‘blow’ interaction is identified then the user may be presented with a view of the liquid in the cup bubbling and a corresponding sound may be provided to the user.
Finally, the generated content is presented to the user (step 1260) via a display associated with the HMD 1060.
It is apparent that an interaction such as that described above would be considered more natural to the user than the use of a gamepad; the user is performing an action that more readily corresponds to that which they would be expected to perform in a corresponding real-world scenario. The use of input sound analysis allows for the interaction to more closely match the intentions of the user, for example allowing the user to control the rate of drinking without relying upon multiple button presses to communicate this.
In a second example, a user is presented with a scenario in which they are able to smoke a cigar. This scenario illustrates how an area of interaction may vary for the same object based upon a number of different factors. A first interaction that may be considered in the present scenario is that of sucking on the cigar to allow for inhalation of the smoke. A method for determining that such an interaction is to take place is as follows.
Firstly, a detection (step 1200) is made that the cigar is sufficiently close to the user’s mouth (the area of interaction 1120 is set to a sphere of a small radius about the expected position of the user’s mouth, for example); as in the previous example, the user cannot suck on it if it is not in their mouth. As above, performing an action to cause the cigar to enter the area of interaction 1120 causes the cigar to be identified (step 1220) as the target for interaction within the virtual environment.
Sound input is again captured (step 1210) and analysed (step 1230); for example, a detection of the volume may be performed. A band pass filtering may also be performed to isolate a range of frequencies for analysis; in this example a frequency range that may be useful for identifying the input sound is that between 500 Hz and 1 kHz, although any appropriate frequency range may be considered. The duration of the sound is measured and compared to a threshold value in order to determine whether the interaction should be performed. If it is determined that the sound does correspond to an interaction, the interaction is identified (1240) and content is generated (1250) that corresponds to this interaction.
The user is then presented (step 1260) with a modified view of the virtual environment to reflect this interaction; the intensity of the glow of the tip of the cigar may be increased, for example. The intensity of the glowing of the tip of the cigar and the amount of smoke that is present after the corresponding exhalation interaction is performed may be proportional to the volume of the captured sound or the duration of the sound, for example. The duration of time for which the tip of the cigar glows with increased intensity may also be linked to the duration of the sound input.
In the corresponding exhalation interaction, optionally a first determination is that the cigar is not near the user’s mouth. For example, this could be either by recognising that the cigar is outside of the interaction area for the cigar or by defining the interaction area as occupying the whole (or a significant portion) of the virtual environment except for the area around a user’s mouth. A second determination, which may be performed before the first determination in order to identify the interaction area associated with the cigar, is performed in order to confirm that the player has previously performed an inhalation interaction without a corresponding exhalation interaction having been performed; such a determination makes it apparent that an exhalation is expected by the user. Such a determination is based upon the position of the cigar in the virtual environment and a detection of the position of the HMD (step 1200) to locate the user in the virtual environment. The cigar is identified (step 1220) as the target for interaction in the virtual environment as above.
An input sound is captured (step 1210) and the volume of the captured sound is measured as part of the analysis of the captured sound (step 1230); again a band-pass filtering may be performed in order to isolate sounds associated with exhalation. In this case a different frequency band may be selected, for example the 4 kHz to 8 kHz band may be more useful for identifying exhalation noises than the 500 Hz to 1 kHz band used above for detecting inhalation noises. Alternatively, or in addition, contextual clues from the virtual environment or previous interactions may be used to identify an intended interaction. For example, in the cigar-smoking scenario it may be possible to distinguish between an inhale and exhale interaction on the basis of the previous action that was performed; this assumes that an exhale follows an inhale and vice-versa. To provide another example, the position of the cigar may be used to identify the intended interaction; if the cigar is nowhere near the user’s mouth then it is immediately apparent that an inhale interaction should not be performed. Thus, more generally, the classification of an otherwise similar sound (such as inhale or exhale) may be made dependent upon an existing state within the virtual environment.
The duration of the captured sound is then measured and, optionally, if a threshold duration is met or exceeded an intended exhalation interaction is identified.
Once the exhalation interaction is identified (step 1240), content is generated (step 1250) to convey this to a user of the HMD 1060. For example, smoke may be shown issuing from the position of the user’s mouth in the virtual environment (the amount of smoke may be proportional to the volume or duration of the captured exhalation sound, in addition to or instead of being varied by the inhalation sound as described above), and any in-game avatars may react accordingly, for example to remind the user that smoking is not permitted in that particular virtual environment. The generated content is then presented (step 1260) to the user of the HMD 1060.
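For illustration, the overall flow of steps 1200 to 1260 could be orchestrated as below, with each step supplied as a callable; the helper names and the per-frame structure are assumptions of this sketch rather than features of the disclosed system.

```python
# Illustrative sketch: one pass through the interaction pipeline of Figure 12
# (steps 1200-1260), with each step provided as a function.
def interaction_frame(detect_hmd_position, identify_target, capture_sound,
                      analyse_sound, identify_interaction, generate_content,
                      present_content):
    user_pos = detect_hmd_position()                         # step 1200: locate the user
    target = identify_target(user_pos)                       # step 1220: e.g. the cigar
    sound = capture_sound()                                   # step 1210
    features = analyse_sound(sound)                           # step 1230: volume, bands, duration
    interaction = identify_interaction(target, features)      # step 1240
    if interaction is not None:
        content = generate_content(interaction, features)     # step 1250
        present_content(content)                              # step 1260
```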
The association of different interactions with the cigar based upon its position relative to the user is an example of a virtual object having one or more associated interaction areas that each correspond to a different set of potential interactions; the user cannot perform an inhale interaction when the cigar is in an interaction area that corresponds to exhale interactions. It is apparent that each of the associated interaction areas has a different correspondence between the characteristics of sound inputs and interactions, such that different interactions may be associated with an identical input sound in dependence upon the interaction area that the virtual object occupies.
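A minimal data structure capturing this idea is sketched below; the area and sound labels are hypothetical names used only to show how one input sound can map to different interactions per area.

```python
# Illustrative sketch: each interaction area associated with a virtual object
# maps the same characterised sound to a different interaction.
AREA_INTERACTIONS = {
    'near_mouth':   {'breath_sound': 'inhale'},
    'rest_of_room': {'breath_sound': 'exhale'},
}

def interaction_for(area, sound_label):
    # The same input sound ('breath_sound') yields a different interaction
    # depending on which interaction area the object currently occupies.
    return AREA_INTERACTIONS.get(area, {}).get(sound_label)
```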
In some embodiments, however, desired interactions may not be identifiable based only on the interaction area; for example, in the cigar-smoking scenario a user would expect to be able to perform an exhale interaction even if the cigar is still in or around their mouth. This area is therefore not exclusive to an inhalation interaction, and other contextual clues or input information may need to be considered in order to distinguish between the intended interactions. Potential interactions may therefore rely on different interaction areas, different game states, or a combination of the two in order to be identified correctly.
It should be appreciated that not all interactions need involve such a high degree of proximity to the user. In a further example, a user may interact with a virtual pet. A number of different interaction areas may be defined for this pet; we consider here a near interaction area and a far interaction area. Each of these areas may have different criteria for an input sound to correspond to an interaction; for example, a higher threshold volume may be required when the virtual pet is in the far interaction area, corresponding to the user’s real-world expectation that their pet would not be able to hear them from afar if they spoke at the same volume as they would use at very short range.
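Such distance-dependent criteria could be expressed as in the following sketch; the radius and the two volume thresholds are illustrative assumptions.

```python
# Illustrative sketch: different volume thresholds for a virtual pet's
# near and far interaction areas.
def pet_hears(rms_volume, pet_distance_m, near_radius_m=2.0,
              near_threshold=0.02, far_threshold=0.10):
    if pet_distance_m <= near_radius_m:
        return rms_volume >= near_threshold   # a quiet voice suffices up close
    return rms_volume >= far_threshold        # the pet must be called loudly from afar
```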
In such an example, especially if there are multiple virtual pets, it may be advantageous to be able to identify a target for interaction in ways other than the contextual clues described above (such as knowing that a sucking sound near a straw would mean that the user was attempting to take a drink). Possible examples of this include speech recognition and gaze detection. In this context, gaze detection could be used to determine which of the virtual pets the user is attempting to communicate with, and as a result only potential interactions with this virtual pet will be considered. In one embodiment of the present arrangement, gaze detection is performed using cameras mounted upon the HMD that are able to image the user’s eyes.
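One possible way of selecting a target from a detected gaze direction is sketched below; the angular tolerance and the nearest-to-gaze selection rule are assumptions made for the example.

```python
# Illustrative sketch: choose which virtual pet is the target for interaction
# by finding the pet whose direction lies closest to the user's gaze ray.
import numpy as np

def gaze_target(eye_pos, gaze_dir, pets, max_angle_deg=10.0):
    gaze_dir = np.asarray(gaze_dir, dtype=float)
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    best, best_angle = None, np.radians(max_angle_deg)
    for name, pos in pets.items():
        to_pet = np.asarray(pos, dtype=float) - np.asarray(eye_pos, dtype=float)
        to_pet = to_pet / np.linalg.norm(to_pet)
        angle = np.arccos(np.clip(np.dot(gaze_dir, to_pet), -1.0, 1.0))
        if angle < best_angle:
            best, best_angle = name, angle
    return best   # None if no pet lies close enough to the gaze direction
```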
By defining different interaction areas, similar sound inputs may correspond to different interactions depending on the relative position of the user and the virtual object to be interacted with. In the example of a user interacting with a virtual pet, a loud shouting noise may have the purpose of scaring the virtual pet away when it is in the near interaction area, or summoning it when it is in the far interaction area.
In this disclosure it is assumed that sound inputs by the user are non-verbal (i.e. they do not comprise words); however, this need not be the case. Verbal inputs may be treated similarly, even if the specific words themselves are not identified by the device; for example, volume and pitch could be used to determine the general emotion being conveyed. In the context of a virtual pet, this could provide a system by which a nearby virtual pet in the virtual environment may be comforted by the user without the need for speech recognition to be performed.
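A very rough sketch of such an emotion estimate is given below; the thresholds, the dominant-frequency pitch estimate and the emotion labels are all assumptions introduced for illustration and are not taken from this disclosure.

```python
# Illustrative sketch: estimate the general emotion of a verbal input from its
# volume and pitch alone, without performing speech recognition.
import numpy as np

def estimate_emotion(samples, sample_rate, loud_rms=0.1, high_pitch_hz=250.0):
    samples = np.asarray(samples, dtype=float)
    rms = float(np.sqrt(np.mean(samples ** 2)))
    # Very rough pitch estimate: dominant frequency of the magnitude spectrum.
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), 1.0 / sample_rate)
    pitch = float(freqs[1:][np.argmax(spectrum[1:])])   # skip the DC component
    if rms >= loud_rms:
        return 'angry' if pitch < high_pitch_hz else 'excited'
    return 'soothing' if pitch < high_pitch_hz else 'gentle'
```

A 'soothing' or 'gentle' result might then be used as the trigger for comforting a nearby virtual pet.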
The techniques described above may be implemented in hardware, software or combinations of the two. In the case that a software-controlled data processing apparatus is employed to implement one or more features of the embodiments, it will be appreciated that such software, and a storage or transmission medium such as a non-transitory machine-readable storage medium by which such software is provided, are also considered as embodiments of the disclosure.

Claims (15)

1. An entertainment system for providing virtual reality content, the system comprising: a microphone operable to capture a sound input; a head-mountable display device operable to display a virtual environment; and, an entertainment device comprising: an audio input analysis unit operable to characterise sound captured by the microphone, an interaction determination unit operable to determine an interaction with the virtual environment in dependence upon the results of the audio input analysis and the relative positions of the user and a virtual object in the virtual environment, and a content generation unit operable to generate content to modify the virtual environment, the content corresponding to the determined interaction.
2. A system according to claim 1, wherein the sound captured by the microphone comprises a nonverbal input by a user.
3. A system according to claim 1, wherein the microphone is a component of the head-mountable display device.
4. A system according to claim 1, wherein the audio input analysis unit characterises sounds based upon at least one of volume, perceived pitch, frequency content, and duration.
5. A system according to claim 1, wherein the interaction determination unit examines a current game state to determine an appropriate interaction.
6. A system according to claim 1, wherein the interaction determination unit may access information about a previous game state to determine an appropriate interaction.
7. A system according to claim 5 or claim 6, wherein examination of the game state includes measuring the position of one or more virtual objects in the virtual environment.
8. A system according to claim 1, wherein the virtual object has an associated interaction area in which it must be located in order for a user to perform a specified interaction with the virtual object.
9. A system according to claim 8, wherein the virtual object has one or more associated interaction areas that each correspond to at least a first respective potential interaction.
10. A system according to claim 9, wherein each of the associated interaction areas has a different interaction associated with an identical input sound.
11. A system according to claim 1, wherein the content generated by the content generation unit comprises at least one of video content, audio content or haptic feedback.
12. A system according to claim 1, wherein the virtual object is identified for interaction using gaze detection.
13. A method for providing virtual reality content, the method comprising: capturing a sound input; characterising the captured sound input; determining an interaction with a virtual environment in dependence upon the results of the characterisation of the captured sound input and the relative positions of the user and a virtual object in the virtual environment; generating content to modify the virtual environment, the content corresponding to the determined interaction; and displaying a virtual environment comprising the generated content.
14. Computer software which, when executed by a computer, causes the computer to carry out the method of claim 13.
15. A machine-readable non-transitory storage medium which stores computer software according to claim 14.
GB1601842.6A 2016-02-02 2016-02-02 Entertainment system Withdrawn GB2546983A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1601842.6A GB2546983A (en) 2016-02-02 2016-02-02 Entertainment system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1601842.6A GB2546983A (en) 2016-02-02 2016-02-02 Entertainment system

Publications (2)

Publication Number Publication Date
GB201601842D0 GB201601842D0 (en) 2016-03-16
GB2546983A 2017-08-09

Family

ID=55590539

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1601842.6A Withdrawn GB2546983A (en) 2016-02-02 2016-02-02 Entertainment system

Country Status (1)

Country Link
GB (1) GB2546983A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3534241A1 (en) * 2018-03-01 2019-09-04 Nokia Technologies Oy Method, apparatus, systems, computer programs for enabling mediated reality
WO2019173566A1 (en) * 2018-03-08 2019-09-12 Bose Corporation Augmented reality software development kit

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113920226A (en) * 2021-09-30 2022-01-11 北京有竹居网络技术有限公司 User interaction method and device, storage medium and electronic equipment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110004327A1 (en) * 2008-03-26 2011-01-06 Pierre Bonnat Method and System for Controlling a User Interface of a Device Using Human Breath
US9223786B1 (en) * 2011-03-15 2015-12-29 Motion Reality, Inc. Communication in a sensory immersive motion capture simulation environment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Head Start Design, 28 March 2015, "Breath Tech Demo", YouTube [online], Available from: https://www.youtube.com/watch?v=MwhW8EhDYuQ&index=1&list=PLZQhOmC9oG9qCQgIraMj1Ge6JHaYIFPSh [Accessed 29 June 2016] *
Project 260, 23 April 2015, "VR Breathing Tech Demo", iamluciddreaming.com [online], Available from: http://www.iamluciddreaming.com/260/3/day-54-vr-breathing-tech-demo/ [Accessed 29 June 2016] *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3534241A1 (en) * 2018-03-01 2019-09-04 Nokia Technologies Oy Method, apparatus, systems, computer programs for enabling mediated reality
WO2019166272A1 (en) * 2018-03-01 2019-09-06 Nokia Technologies Oy Methods, apparatus, systems, computer programs for enabling mediated reality
WO2019173566A1 (en) * 2018-03-08 2019-09-12 Bose Corporation Augmented reality software development kit
US10915290B2 (en) 2018-03-08 2021-02-09 Bose Corporation Augmented reality software development kit

Also Published As

Publication number Publication date
GB201601842D0 (en) 2016-03-16

Similar Documents

Publication Publication Date Title
EP3427103B1 (en) Virtual reality
US10198866B2 (en) Head-mountable apparatus and systems
US11500459B2 (en) Data processing apparatus and method
JP7218376B2 (en) Eye-tracking method and apparatus
US20200089333A1 (en) Virtual reality
US11762459B2 (en) Video processing
US20230015732A1 (en) Head-mountable display systems and methods
GB2546983A (en) Entertainment system
US11045733B2 (en) Virtual reality
US11314082B2 (en) Motion signal generation
GB2571286A (en) Virtual reality
WO2018115842A1 (en) Head mounted virtual reality display
GB2569576A (en) Audio generation system
US12093448B2 (en) Input generation system and method
EP4231649A1 (en) Apparatus and method for displaying a composite video by a head-mounted display

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)