GB2536020A - System and method of virtual reality feedback - Google Patents

System and method of virtual reality feedback


Publication number
GB2536020A
GB2536020A (application GB1503639.5A / GB201503639A)
Authority
GB
United Kingdom
Prior art keywords
user
virtual
virtual environment
acoustic
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1503639.5A
Other versions
GB201503639D0 (en)
Inventor
Ward-Foxton Nicholas
Mauricio De Sa Carvalho Corvo Pedro
Van Mourik Jelle
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Interactive Entertainment Europe Ltd
Original Assignee
Sony Computer Entertainment Europe Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Computer Entertainment Europe Ltd filed Critical Sony Computer Entertainment Europe Ltd
Priority to GB1503639.5A priority Critical patent/GB2536020A/en
Publication of GB201503639D0 publication Critical patent/GB201503639D0/en
Publication of GB2536020A publication Critical patent/GB2536020A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012 Head tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/165 Management of the audio stream, e.g. setting of volume, audio stream path
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Abstract

A method of virtual reality (VR) feedback comprises generating a virtual environment for display to a head-mounted display (HMD); associating acoustic properties (e.g. absorption coefficient; impulse response) with elements of the virtual environment; detecting the virtual location of the user within the virtual environment; receiving audio from the user; designating the virtual location of the user as a source location for the audio within the virtual environment; calculating acoustic responses to the audio by elements of the virtual environment; calculating a return acoustic signal from the element to the user's virtual location, dependent on the calculated acoustic response; and outputting the return signal for audio reproduction to the user. Calculating the return signal may involve estimating the acoustic reverberation time for the current virtual environment, and may utilise points of intersection between the virtual environment and lines originating at the source, with the lines pointing in different directions. A volume of space bounded by the intersection points may be calculated, with acoustic properties of the environment therein being retrieved and the reverberation time estimated as a function of volume and acoustic properties.

Description

SYSTEM AND METHOD OF VIRTUAL REALITY FEEDBACK

The present invention relates to a system and method of virtual reality feedback.
Virtual and augmented reality devices and applications are becoming increasingly popular. One class of such applications is games, where augmented and particularly virtual reality provide a high level of immersion within a game's virtual environment due to the stereoscopic presentation of that environment with a wide field of view that follows the user's head movements.
However, vision is only one of the senses that may be immersed within virtual reality. To provide a more immersive experience, it is desirable for sounds to be convincing within the virtual environment as well.
The present invention seeks to address or alleviate this problem.
In a first aspect, a method of virtual reality feedback is provided in accordance with claim 1. In another aspect, an entertainment device is provided in accordance with claim 13.
In another aspect, an entertainment system is provided in accordance with claim 14.
Further respective aspects and features of the invention are defined in the appended claims.
Embodiments of the present invention will now be described by way of example with reference to the accompanying drawings, in which: Figure 1 is a flow diagram of a method of virtual reality feedback in accordance with an embodiment of the present invention.
Figure 2 is a schematic diagram of an entertainment system in accordance with an embodiment of the present invention.
A system and method of virtual reality feedback are disclosed. In the following description, a number of specific details are presented in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to a person skilled in the art that these specific details need not be employed to practice the present invention. Conversely, specific details known to the person skilled in the art are omitted for the purposes of clarity where appropriate.
Referring now to Figure 1, in an embodiment of the present invention, a method of virtual reality feedback comprises the following steps.
In a first step s110, generating in real time (i.e. on a per video frame basis) a virtual environment for display to a user of a head-mounted display. In an embodiment of the present invention this is achieved by a videogame console or other computing device (more generally, an 'entertainment device') comprising a central processing unit, operating in conjunction with a graphics processing unit, under suitable software instruction. The generated virtual environment may be conveyed to the head-mounted display by a wireless or wired connection, such as via an HDMI connector.
In a second step s120, associating one or more acoustic properties with one or more elements of the virtual environment. It will be appreciated that these acoustic properties may be defined for the or each element of the virtual environment during development of the application that uses the virtual environment. The association between elements of the virtual environment and the acoustic properties may be through a look-up table or by use of metadata for individual elements or classes of elements. Hence as a non-limiting example, all virtual objects notionally made of rock or brick may share the same acoustic properties.
The acoustic properties may include the absorption coefficient of the material from which the virtual object is supposedly made, optionally for a plurality of different frequency bands (for example in P octave bands, where P may for example be 3); and optionally an impulse response, for example to model a response to being hit by a bullet fired by the user within a game, or a response to acoustic excitation.
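By way of non-limiting illustration, the look-up-table association described above might be sketched as follows in Python. The material names and coefficient values are assumptions for illustration only, not values taken from this specification:

```python
# Hypothetical look-up table associating material classes with acoustic
# properties; absorption coefficients are given per octave band (P = 3 here).
# All names and values are illustrative assumptions.
MATERIAL_ACOUSTICS = {
    "rock":   {"absorption": (0.01, 0.02, 0.04)},
    "brick":  {"absorption": (0.02, 0.03, 0.05)},
    "carpet": {"absorption": (0.10, 0.35, 0.65)},
}

def acoustic_properties(material):
    """Return the acoustic properties shared by all elements of a material class."""
    return MATERIAL_ACOUSTICS[material]
```

In such a scheme, every virtual object tagged with the same material class retrieves the same entry, as described above.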
In a third step s130, detecting the virtual location of the user within the virtual environment. In an embodiment of the present invention this takes the form of obtaining co-ordinates within the virtual environment corresponding to the virtual camera whose viewpoint is used as a basis for rendering images of the virtual environment. Where the output is being generated for a head mounted display, a stereoscopic pair of virtual cameras may be used. In this case the co-ordinates may correspond to a centre point between the camera positions, or in either case may be located behind the camera position(s) at a position corresponding to the centre of the user's head, or may be located below the camera position(s) at a position corresponding to the user's mouth.
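A minimal sketch of deriving the source co-ordinates from a stereoscopic camera pair, as described above, is given below. The downward mouth offset is an illustrative assumption:

```python
def source_position(left_cam, right_cam, mouth_offset=(0.0, -0.15, 0.0)):
    """Approximate the user's mouth position from the stereoscopic camera
    pair: the midpoint of the two camera positions plus an offset
    (here downward, toward the mouth). The offset value is an assumption."""
    mid = tuple((l + r) / 2.0 for l, r in zip(left_cam, right_cam))
    return tuple(m + o for m, o in zip(mid, mouth_offset))
```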
In a fourth step s140, receiving audio originating from the user of the head-mounted display. In an embodiment of the present invention, the head-mounted display comprises one or more microphones operable to receive the user's utterances (for example speech, exclamations and other vocalisations). These may then be transmitted to the entertainment device either wirelessly or via a wired connection, which in turn receives the audio as per this fourth step.
In a fifth step s150, designating the virtual location of the user within the virtual environment as a source location for the received audio within the virtual environment. Hence for the purposes of subsequent calculations, the received audio is treated as if it was generated within the virtual environment at the virtual location of the user.
In a sixth step s160, calculating an acoustic response to the received audio by one or more elements of the currently generated virtual environment associated with the one or more acoustic properties. In this step, a modification of the received audio is calculated for the or each of these elements. Hence for example, an attenuation of the received audio is calculated responsive to an attenuation coefficient associated with an element, optionally for each of a plurality of frequency bands. A scheme for selecting a subset of elements for use in such a calculation is described later herein.
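The per-band attenuation mentioned above can be sketched as follows; this is a simplified model (reflected level = incident level scaled by one minus the absorption coefficient), assumed here for illustration:

```python
def attenuate(band_levels, absorption):
    """Attenuate per-band signal levels by an element's absorption
    coefficients: reflected level = incident level * (1 - alpha).
    Both sequences have one entry per frequency band (e.g. P = 3)."""
    return [level * (1.0 - alpha) for level, alpha in zip(band_levels, absorption)]
```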
In a seventh step s170, calculating a return acoustic signal from one or more elements of the virtual environment associated with the one or more acoustic properties to the virtual location of the user, dependent upon the calculated acoustic response. This step may be combined with the sixth step to form one calculation.
The calculated return acoustic signal may take several forms, either singly or together. In a first example, the returned acoustic signal may be a reverberant signal calculated as described later herein. Alternatively or in addition, in a second example, the returned acoustic signal may be a plurality of reflections, having relative timings corresponding to the acoustic propagation time of the round trip between the virtual location of the user and the respective reflective element within the virtual environment, as described later herein.
Then, in an eighth step s180, outputting the calculated return acoustic signal for audio reproduction to the user. In an embodiment of the present invention, a wireless or wired connection is used (such as an HDMI cable) to transmit the acoustic signal to the head-mounted display, or to headphones being used in conjunction with the head-mounted display.
Notably, in an embodiment of the present invention, the received audio originating from the user is not output back to the user for audio reproduction, but only the calculated return acoustic signal (together with any other in-game audio as applicable). Hence the user will hear their own utterances directly, and hear the estimated acoustic response of the virtual environment through their headphones.
The psychoacoustic effect makes the user feel as if they are vocalising within the virtual space, receiving their own voice directly (which is not normally mediated by environmental features to a discernible extent) and then also receiving reflections and/or reverberations caused by their vocalisations that are consistent with the virtual world they can see. This advantageously increases the user's sense of immersion within the virtual world.

In an instance of this method, the step of calculating a return acoustic signal (optionally in conjunction with calculating an acoustic response) comprises dynamically estimating a reverberation time for the current state of the virtual environment. In other words, the reverberation time is calculated in response to the current virtual location of the user and the current state of the virtual environment, both of which can change on a video frame-by-frame basis. Preferably the reverberation time is also calculated on a frame-by-frame basis, but alternatively may be calculated every M frames, where M = 2, 3, 4, 5, 6, etc., and/or may be recalculated in response to a threshold change in the virtual user location and/or a threshold change in the layout of the virtual environment within a predetermined distance from the virtual user location. Calculation of the reverberation time is now described in more detail.
In an instance of this method, the step of calculating a return acoustic signal comprises the step of calculating points of intersection between the virtual environment and a plurality of lines originating at the source location, the plurality of lines respectively pointing in different directions. This approach provides a means to sub-sample the virtual environment, thereby simplifying the calculations required to estimate the returned acoustic signal. The lines may comprise a pre-computed set of vectors whose origins are set to the current virtual location of the user. In an embodiment of the present invention, a set of six orthogonal lines parallel to axes of the in-game co-ordinate system is used. Alternatively or in addition a plurality of other lines may be used, totalling for example 64. These lines may be equally distributed spherically, or their distribution may be biased in a particular direction, for example to include more lines within the visible field of view of the user.
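One way to pre-compute a spherically well-distributed set of line directions is the Fibonacci-sphere construction. This particular construction is an implementation choice assumed here; the description above only requires an even spherical distribution:

```python
import math

def fibonacci_sphere_directions(n=64):
    """Generate n roughly uniformly distributed unit direction vectors
    via the Fibonacci sphere. In use, the origins of the resulting rays
    would be set to the user's current virtual location."""
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))
    directions = []
    for i in range(n):
        y = 1.0 - 2.0 * (i + 0.5) / n   # latitude component in [-1, 1]
        r = math.sqrt(1.0 - y * y)      # radius of the latitude circle
        theta = golden_angle * i        # longitude
        directions.append((math.cos(theta) * r, y, math.sin(theta) * r))
    return directions
```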
In this instance, the method may proceed by estimating a volume of space bounded by the points of intersection. Optionally the major dimensions of the volume, for example corresponding to the 6 axes described previously, may be estimated to enable the calculation of separate modal reverberation times.
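For the six-axis case described above, the bounded volume can be approximated as a rectilinear box. This is a sketch under that simplifying assumption:

```python
def box_volume_from_distances(distances):
    """Estimate the bounded volume as a rectilinear box from the six
    axis-aligned intersection distances, ordered (+x, -x, +y, -y, +z, -z).
    The three per-axis sums are also the major dimensions that could be
    used for separate modal reverberation estimates."""
    dx = distances[0] + distances[1]
    dy = distances[2] + distances[3]
    dz = distances[4] + distances[5]
    return dx * dy * dz
```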
Then, for one or more points of intersection, one or more acoustic properties of the corresponding element of the virtual environment are retrieved. A reverberation decay time (such as for example RT60, the time taken for a sound to attenuate by 60 dB) is then estimated as a function of the estimated volume of space and the one or more retrieved acoustic properties, for example using a known equation such as the Sabine equation or the Eyring equation. These equations assume an average attenuation coefficient for the estimated volume, and so this may be calculated based upon the retrieved acoustic properties.
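As an illustration, the Sabine equation in metric form is RT60 = 0.161 · V / A, where V is the room volume and A is the total absorption. A minimal sketch, with the sampled surfaces supplied as (area, coefficient) pairs:

```python
def sabine_rt60(volume_m3, surfaces):
    """Estimate RT60 via the Sabine equation, RT60 = 0.161 * V / A,
    where A = sum of (surface area * absorption coefficient).
    `surfaces` is a list of (area_m2, alpha) pairs, e.g. sampled
    from the elements at the points of intersection."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    if total_absorption <= 0.0:
        return float("inf")  # no absorption: sound never decays
    return 0.161 * volume_m3 / total_absorption
```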
Where the estimated volume of space is substantially rectilinear, resonant modes may be calculated corresponding to standing waves between opposing sides of the space. Different reverberation decay times may then be calculated for these modes; for example where absorption coefficients are provided for a plurality of frequency bands, and/or where the dimensions of the estimated volume differ by a threshold amount (for example in a long corridor) meaning that reflections in one direction (e.g. transverse) would be more rapid than in another (e.g. longitudinal), resulting in different energy absorption rates for different modes and hence different reverberation decay times.
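The axial standing-wave modes mentioned above occur at frequencies f_n = n·c/(2L) for a pair of opposing surfaces separated by L. A sketch of this standard relation:

```python
SPEED_OF_SOUND = 343.0  # m/s, in air at roughly 20 degrees C

def axial_mode_frequencies(dimension_m, n_modes=3):
    """Frequencies of the first n axial standing-wave modes between
    opposing parallel surfaces separated by dimension_m: f_n = n*c/(2L)."""
    return [n * SPEED_OF_SOUND / (2.0 * dimension_m) for n in range(1, n_modes + 1)]
```

In a long corridor, the transverse dimension yields much higher (and more rapidly absorbed) modes than the longitudinal one, consistent with the differing decay times described above.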
Optionally, a modified form of this calculation comprises the steps of detecting whether the volume of space comprises an unbounded portion corresponding to where one or more lines did not intersect the virtual environment within a predetermined distance, and estimating the reverberation decay time as a function of the estimated volume of space, the one or more retrieved acoustic properties, and an energy leakage factor dependent upon the relative size of the unbounded portion to the volume of space.
Hence in this instance, the basic calculation is the same, but takes account of unbounded regions of the local environment. This may occur when a user is standing next to an open door within the virtual environment, or is standing outside between two buildings, or indeed when they are standing on a field in an open environment. The proportion of the environment that is unbounded varies with each situation (in these examples, increasing each time).
In this case it would be incorrect to simply average the absorption coefficients for a sub-sampling of the environment and use these to derive a reverberation time; for example, in the case of a user standing in an open field, half of a sphere surrounding the user points at an unbounded environment; however the reverberation time is not merely half that of the material on the ground. Rather, there is no reverberation at all because there are no return reflections within the environment; all the energy escapes. Hence in this instance an energy loss factor is incorporated into the reverberation calculation.
In an embodiment of the present invention, this in turn may be derived from the expected number of reflections of any acoustic ray before it leaves the volume through an unbounded region. This expected number of reflections in a partially bounded space may be approximated as
E(p) = Σ_{k=0}^{∞} k · p^k · (1 − p) = p / (1 − p)

where p is the fraction of acoustic rays that are reflected, and is in turn a function of the relative size of the unbounded portion to the volume of space (in particular, the relative surface areas of the bounded and unbounded regions of the volume). This number can then be used as an adjustment parameter in the reverberation time calculation.

In either case, the estimated reverberation decay time may then be used to generate a reverb audio signal responsive to the received audio. This can then be output as a calculated return acoustic signal for audio reproduction to the user.
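The expected-reflection count above (a geometric series that sums to p/(1 − p)) can be computed directly; a minimal sketch:

```python
def expected_reflections(p):
    """Expected number of reflections before an acoustic ray escapes
    through an unbounded region: E(p) = sum over k >= 0 of k * p**k * (1-p),
    which sums in closed form to p / (1 - p). Here p is the fraction
    of acoustic rays that are reflected rather than escaping."""
    if not 0.0 <= p < 1.0:
        raise ValueError("p must lie in [0, 1)")
    return p / (1.0 - p)
```

For the open-field case above, p is small and E(p) approaches zero, consistent with essentially no reverberation.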
As was noted previously, the calculated return acoustic signal may take several forms, including a reverberant signal based on the reverberation time described previously herein, and also a plurality of reflections. This second calculation is now described in more detail.
In an embodiment of the present invention, the points of intersection with the virtual environment described previously herein are used when calculating a plurality of reflections. Hence when both a plurality of reflections and a reverberation are being generated, the intersections can be re-used.
In this embodiment, the method of calculating a return acoustic signal comprising a plurality of reflections includes the initial step of detecting the N closest points of intersection to the source location, where N is a predetermined number. A user will subconsciously estimate the size of a surrounding space based on the timings of the first few acoustic reflections of self-generated sound that they hear. Consequently, in an embodiment of the present invention, N is in the range 2 to 20. More preferably, N is in the range 4 to 15. Still more preferably, N is in the range 6 to 10.

Next, one or more acoustic properties of elements of the virtual environment at said N closest points of intersection are retrieved. Then the acoustic response to the received audio is calculated in dependence upon the or each retrieved acoustic property at said N closest points of intersection. For example, the source audio may be attenuated responsive to the attenuation coefficient associated with the element of the virtual environment at a given point. Where the attenuation coefficient is provided at a plurality of frequency bands, the attenuation is applied to those bands accordingly. The attenuation may be implemented by a filter.
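Selecting the N closest intersection points might be sketched as follows; N = 8 is used as an illustrative value within the preferred range of 6 to 10:

```python
import math

def n_closest_intersections(source, points, n=8):
    """Select the N intersection points nearest the source location.
    source and points are (x, y, z) co-ordinate tuples; N = 8 here
    as an illustrative value."""
    return sorted(points, key=lambda p: math.dist(source, p))[:n]
```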
Then (or as part of the calculation of the acoustic response to the received audio, as discussed previously), a return acoustic signal from each of said N closest points of intersection to the source location is calculated, dependent upon the calculated acoustic response.
As part of this calculation, respective delays to the calculated return acoustic signal from each of said N closest points of intersection are estimated, corresponding to the acoustic propagation time for the return distance to each of said N closest points of intersection from the source location. Subsequently the respective calculated return acoustic signals for each of said N closest points of intersection are output for audio reproduction to the user, at respective delays corresponding to the acoustic propagation time for the return distance to each of said N closest points of intersection. In this way, the N closest reflections to the user within the virtual environment are heard by the user, subconsciously giving him or her a sense of the size of the space around them.
As with the reverberation calculation, optional refinements may be considered.
It will be understood that if the N closest points are re-selected for each iteration of the calculated return acoustic signal as the game unfolds, these points may change rapidly. In particular, where for example a user is proceeding down the middle of a corridor, the N closest points may flip between one side of the corridor and the other, for example in time to a built-in walking motion within the game. This change in the location of reflection would appear unnatural (since in reality, all reflections are heard, not just a subset). This would be particularly noticeable if the calculated return acoustic signal is presented stereophonically or binaurally, making localisation of the changing reflections very clear.
Accordingly, in one instance, respective ones of the N closest points of intersection are re-used when the virtual location of the user within the virtual environment changes, or the state of the virtual environment changes, until a respective termination criterion is met. This results in consistency between updated calculations, until a respective one of the points should be dropped in favour of another. Termination criteria thus include one or more from the list consisting of: an elapsed period of time as one of the N closest points; reaching a threshold maximum distance for a closest point; or dropping down a ranking of closest points by or to a predetermined number, such as for example to 12th place.

Similarly, it will be understood that the acoustic propagation delay for the closest points of reflection may be short, in the order of 3 milliseconds for 1 metre. Consequently, a fixed delay caused by wireless transmission of the user-originating audio from a microphone of the head-mounted display to the entertainment device may equate to a significant proportion of this simulated acoustic delay. Similarly, other aspects of the calculations, such as applying filters to the user-originating audio, may incur system delays. However, these in-built system delays can be estimated and factored into the simulated acoustic delay.
Accordingly in an embodiment of the present invention, the respective delays are each reduced by a predetermined amount corresponding to a system latency incurred when receiving the audio and implementing steps of the method of virtual reality feedback preceding output of the calculated return acoustic signals for audio reproduction to the user.
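The round-trip delay and latency compensation described above can be sketched as follows, assuming a nominal speed of sound of 343 m/s:

```python
SPEED_OF_SOUND = 343.0  # m/s, nominal value in air

def reflection_delay_s(distance_m, system_latency_s=0.0):
    """Round-trip acoustic propagation delay for a reflector at
    distance_m, reduced by the estimated in-built system latency
    (microphone transmission, filtering, etc.) and clamped at zero."""
    return max(0.0, 2.0 * distance_m / SPEED_OF_SOUND - system_latency_s)
```

A reflector at about 1.7 m thus yields a round-trip delay on the order of 10 ms, from which any known system latency is subtracted before scheduling the reflection for playback.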
Finally, it will be appreciated that whilst the above description refers to calculating return acoustic signals to the location of the user / the source location, and this may be done for stereo output, optionally calculating a return acoustic signal comprises calculating the return acoustic signals for binaural positions at the virtual location of the user. In other words, return acoustic signals are calculated for two return locations, preferably at positions corresponding to the expected positions of the user's ears within the virtual environment. Further optionally, a head-related transfer function may be used to estimate the head-shadow of a typical user for each ear position.
In a variant embodiment of the present invention, a virtual environment may be occupied by multiple users. In this case, for each user, return acoustic signals may be calculated for that user's location as described previously, and also for the locations of the other users within the environment. In this way, all players' voices sound consistent with the virtual environment to a given user, and not just that user's own voice.
Optionally in this case, only reverb audio is calculated for users other than the originating user. Similarly optionally in this case, the originating audio of one user may also be transmitted to other users (particularly in the case where the users are remote from each other and cannot hear each other directly, as may be the case if they are not all in the same room in reality).
In a further variant, an additional user does not receive any calculated return acoustic signals themselves; for example, an additional user may have a companion application on a smartphone or tablet that allows them to track the user's progress (for example, acting as a navigator or guide for the user). This person may be able to talk into their smartphone or tablet, and their voice may be placed within the virtual world at a position corresponding, for example, to a tannoy or radio. A return acoustic signal may then be calculated for any user wearing a head-mounted display so that the audio from the tannoy/radio or the like sounds consistent with the virtual environment. Hence more generally it will be understood that such user-originating audio may be received via a network connection, such as over the internet.
Referring now to Figure 2, as discussed previously herein, an entertainment device (10) may implement the above methods.
Consequently such an entertainment device comprises a graphics processing means (20, 20B) operable to generate a virtual environment for display to a user (60) of a head mounted display (53), and a processing means (20, 20A) adapted to associate one or more acoustic properties with one or more elements of the virtual environment. The processing means is also operable to detect the virtual location of the user within the virtual environment. The entertainment device is operable to receive audio originating from the user of the virtual reality device. The processing means is adapted to designate the virtual location of the user within the virtual environment as a source location for the received audio within the virtual environment. The processing means is adapted to calculate an acoustic response to the received audio by one or more elements of the virtual environment associated with the one or more acoustic properties. The processing means is adapted to calculate a return acoustic signal from one or more elements of the virtual environment associated with the one or more acoustic properties to the virtual location of the user, dependent upon the calculated acoustic response. The entertainment device also comprises output means (39, 35, 34, 33, 32) operable to output the calculated return acoustic signal for audio reproduction to the user.
It will be appreciated therefore that this entertainment device is adapted to implement the method steps s110-s180 described previously herein. It will also be appreciated that the entertainment device may be adapted to implement any of the methods described herein. In an embodiment of the present invention, the adaptation is by the use of suitable software instruction.
It will therefore be appreciated that the above methods may be carried out on conventional hardware suitably adapted as applicable by software instruction or by the inclusion or substitution of dedicated hardware. Thus the required adaptation to existing parts of a conventional equivalent device may be implemented in the form of a computer program product comprising processor implementable instructions stored on a tangible non-transitory machine-readable medium such as a floppy disk, optical disk, hard disk, PROM, RAM, flash memory or any combination of these or other storage media, or realised in hardware as an ASIC (application specific integrated circuit) or an FPGA (field programmable gate array) or other configurable circuit suitable to use in adapting the conventional equivalent device. Separately, such a computer program may be transmitted via data signals on a network such as an Ethernet, a wireless network, the Internet, or any combination of these or other networks.
An example of the entertainment device described herein is the Sony® PlayStation 4®. Figure 2 schematically illustrates the overall system architecture of a Sony® PlayStation 4® entertainment device. A system unit 10 is provided, with various peripheral devices connectable to the system unit. The system unit 10 comprises an accelerated processing unit (APU) 20 being a single chip that in turn comprises a central processing unit (CPU) 20A and a graphics processing unit (GPU) 20B.
The APU 20 has access to a random access memory (RAM) unit 22.
The APU 20 communicates with a bus 40, optionally via an I/O bridge 24, which may be a discrete component or part of the APU 20.
Connected to the bus 40 are data storage components such as a hard disk drive 37, and a Blu-ray ® drive 36 operable to access data on compatible optical discs 36A. Additionally the RAM unit 22 may communicate with the bus 40.
Optionally also connected to the bus 40 is an auxiliary processor 38. The auxiliary processor 38 may be provided to run or support the operating system.
The system unit 10 communicates with peripheral devices as appropriate via an audio/visual input port 31, an Ethernet ® port 32, a Bluetooth ® wireless link 33, a Wi-Fi ® wireless link 34, or one or more universal serial bus (USB) ports 35. Audio and video may be output via an AV output 39, such as an HDMI port.
The peripheral devices may include a monoscopic or stereoscopic video camera 41 such as the PlayStation Eye ®; wand-style videogame controllers 42 such as the PlayStation Move ® and conventional handheld videogame controllers 43 such as the DualShock 4 ®; portable entertainment devices 44 such as the PlayStation Portable ® and PlayStation Vita ®; a keyboard 45 and/or a mouse 46; a media controller 47, for example in the form of a remote control; and a headset 48. Other peripheral devices may similarly be considered such as a printer, or a 3D printer (not shown).
The GPU 20B, optionally in conjunction with the CPU 20A, generates video images and audio for output via the AV output 39. Optionally the audio may be generated in conjunction with, or instead by, an audio processor (not shown). The video and optionally the audio may be presented to a television 51. Where supported by the television, the video may be stereoscopic. The audio may be presented to a home cinema system 52 in one of a number of formats such as stereo, 5.1 surround sound or 7.1 surround sound.
Video and audio may likewise be presented to a head mounted display 53 worn by a user 60.
In an embodiment of the present invention, the entertainment device and the head mounted display together form an entertainment system. In this system, the entertainment device comprises a receiver (such as AV Out 39, supporting two-way communication, or USB 35, or Bluetooth ® 33, or Wi-Fi ® 34); the head mounted display comprises a microphone and a corresponding transmitter (not shown); and the head mounted display is operable to transmit to the entertainment device the audio originating from the user of the head-mounted display that is detected by the microphone.
In this way, the system provides the user-originating audio to the entertainment device for use with the above-described method.
Once the above-described method has been implemented, the estimated acoustic response of the virtual environment to the user-originating audio is played back to the user.
Accordingly, in an embodiment of the present invention the entertainment device comprises a transmitter (such as AV Out 39, or USB 35, or Bluetooth ® 33, or Wi-Fi ® 34), and the head mounted display comprises a receiver and headphones (not shown); the entertainment device is operable to transmit to the head mounted display the calculated return acoustic signal, and the head mounted display is operable to audibly reproduce the calculated return acoustic signal to the user.
In an alternative embodiment, the headphones are separate to the head mounted display. In this case the entertainment device is operable to transmit the calculated return acoustic signal to the separate headphones using similar communication means.
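The capture-process-playback round trip described above can be sketched in outline. The following is a minimal illustration only, not the patented implementation: the environment-specific return acoustic signal is stood in for by a generic exponentially decaying reverb tail, and all function and variable names (`apply_reverb`, `reverb_tail`, `voice`) are hypothetical.

```python
import math
import random

SAMPLE_RATE = 8_000  # Hz; kept low so this pure-Python sketch stays fast

def reverb_tail(rt60: float) -> list[float]:
    """Exponentially decaying noise tail whose energy falls by 60 dB
    over rt60 seconds -- a generic stand-in for the environment's
    calculated acoustic response described in the text above."""
    rng = random.Random(0)
    n = int(rt60 * SAMPLE_RATE)
    return [rng.gauss(0.0, 1.0) * 10.0 ** (-3.0 * (i / SAMPLE_RATE) / rt60)
            for i in range(n)]

def apply_reverb(dry: list[float], rt60: float) -> list[float]:
    """Convolve the captured (dry) voice with the decaying tail to
    produce the return signal played back over the headset."""
    tail = reverb_tail(rt60)
    wet = [0.0] * (len(dry) + len(tail) - 1)
    for i, d in enumerate(dry):
        for j, t in enumerate(tail):
            wet[i + j] += d * t
    return wet

# One pass of the loop: 0.1 s of captured "speech" in, return signal out.
rng = random.Random(1)
voice = [rng.gauss(0.0, 1.0) for _ in range(SAMPLE_RATE // 10)]
return_signal = apply_reverb(voice, rt60=0.3)
```

In a real system the dry signal would arrive from the head mounted display's microphone over one of the links enumerated above, and `return_signal` would be transmitted back for reproduction over the headset's headphones.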

Claims (15)

  1. A method of virtual reality feedback, comprising: generating a virtual environment in real-time for display to a user of a head-mounted display; associating one or more acoustic properties with one or more elements of the virtual environment; detecting the virtual location of the user within the virtual environment; receiving audio originating from the user of the head-mounted display; designating the virtual location of the user within the virtual environment as a source location for the received audio within the virtual environment; calculating an acoustic response to the received audio by one or more elements of the currently generated virtual environment associated with the one or more acoustic properties; calculating a return acoustic signal from one or more elements of the virtual environment associated with the one or more acoustic properties to the virtual location of the user, dependent upon the calculated acoustic response; and outputting the calculated return acoustic signal for audio reproduction to the user.
  2. A method of virtual reality feedback according to claim 1, in which the step of calculating a return acoustic signal comprises dynamically estimating a reverberation time for the current state of the virtual environment.
  3. A method of virtual reality feedback according to claim 1 or claim 2, in which the step of calculating a return acoustic signal comprises the step of calculating points of intersection between the virtual environment and a plurality of lines originating at the source location, the plurality of lines respectively pointing in different directions.
  4. A method of virtual reality feedback according to claim 3, comprising the steps of: estimating a volume of space bounded by the points of intersection; retrieving one or more acoustic properties of elements of the virtual environment at one or more of the points of intersection; and estimating a reverberation decay time as a function of the estimated volume of space and the one or more retrieved acoustic properties.
  5. A method of virtual reality feedback according to claim 4, comprising the steps of: detecting whether the volume of space comprises an unbounded portion corresponding to where one or more lines did not intersect the virtual environment within a predetermined distance; and estimating the reverberation decay time as a function of the estimated volume of space, the one or more retrieved acoustic properties, and an energy leakage factor dependent upon the relative size of the unbounded portion to the volume of space.
  6. A method of virtual reality feedback according to claim 4 or claim 5, comprising the steps of: generating a reverb audio signal responsive to the received audio and the estimated reverberation decay time; and outputting the reverb audio signal as a calculated return acoustic signal for audio reproduction to the user.
  7. A method of virtual reality feedback according to claim 3, comprising the steps of: detecting the N closest points of intersection to the source location, where N is a predetermined number; retrieving one or more acoustic properties of elements of the virtual environment at said N closest points of intersection; calculating the acoustic response to the received audio in dependence upon the or each retrieved acoustic property at said N closest points of intersection; calculating a return acoustic signal from each of said N closest points of intersection to the source location, dependent upon the calculated acoustic response; and outputting, for audio reproduction to the user, the respective calculated return acoustic signals for each of said N closest points of intersection, at respective delays corresponding to the acoustic propagation time for the return distance to each of said N closest points of intersection.
  8. A method of virtual reality feedback according to claim 7, where respective ones of the N closest points of intersection are re-used when the virtual location of the user within the virtual environment changes, or the state of the virtual environment changes, until a respective termination criterion is met.
  9. A method of virtual reality feedback according to claim 7, in which the respective delays are each reduced by a predetermined amount corresponding to a system latency incurred when receiving the audio and implementing steps of the method of virtual reality feedback preceding output of the calculated return acoustic signals for audio reproduction to the user.
  10. A method according to any one of the preceding claims, in which the step of calculating a return acoustic signal comprises calculating the return acoustic signals for binaural positions at the virtual location of the user.
  11. A method of virtual reality feedback according to any one of the preceding claims, in which an acoustic property may be one selected from the list consisting of: i. absorption coefficient(s); and ii. impulse response.
  12. A computer program product for implementing the steps of any preceding method claim.
  13. An entertainment device, comprising: a graphics processing means operable to generate a virtual environment for display to a user of a head mounted display; a processing means adapted to associate one or more acoustic properties with one or more elements of the virtual environment; the processing means being operable to detect the virtual location of the user within the virtual environment; the entertainment device being operable to receive audio originating from the user of the head mounted display; the processing means being adapted to designate the virtual location of the user within the virtual environment as a source location for the received audio within the virtual environment; the processing means being adapted to calculate an acoustic response to the received audio by one or more elements of the virtual environment associated with the one or more acoustic properties; the processing means being adapted to calculate a return acoustic signal from one or more elements of the virtual environment associated with the one or more acoustic properties to the virtual location of the user, dependent upon the calculated acoustic response; and output means operable to output the calculated return acoustic signal for audio reproduction to the user.
  14. An entertainment system, comprising: the entertainment device of claim 12 or claim 13, comprising a receiver; and a head mounted display comprising a microphone and a transmitter, and in which the head mounted display is operable to transmit to the entertainment device audio originating from a user of the head-mounted display that is detected by the microphone.
  15. The entertainment system of claim 14, in which the entertainment device comprises a transmitter; and the head mounted display comprises a receiver and headphones, and in which the entertainment device is operable to transmit to the head mounted display the calculated return acoustic signal; and the head mounted display is operable to audibly reproduce the calculated return acoustic signal to the user.
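Claims 3 to 5 describe casting lines from the source location, bounding a volume of space with the points of intersection, and deriving a reverberation decay time, with a leakage factor for lines that escape. The following is a rough sketch of that idea under stated assumptions, not the claimed implementation: the environment is modelled as an axis-aligned box room, the bounded volume is approximated by a sphere whose radius is the mean hit distance, and the generic Sabine formula (0.161 V / S·α) supplies the decay time; all names are illustrative.

```python
import math
import random

SPEED_OF_SOUND = 343.0  # m/s, used for claim 7's propagation delays

def exit_distance(p, d, half):
    """Distance along unit direction d from an interior point p to the
    wall of an axis-aligned box with the given half extents."""
    ts = []
    for i in range(3):
        if abs(d[i]) > 1e-9:
            wall = half[i] if d[i] > 0 else -half[i]
            ts.append((wall - p[i]) / d[i])
    return min(ts)

def estimate_rt60(source, half, absorption, n_rays=256, max_dist=100.0):
    """Cast rays in random directions (claim 3), bound a volume with the
    hit points (claim 4), count rays that escape beyond max_dist as an
    energy-leakage factor (claim 5), and return a Sabine-style RT60."""
    rng = random.Random(0)
    hits, unbounded = [], 0
    for _ in range(n_rays):
        d = [rng.gauss(0.0, 1.0) for _ in range(3)]
        norm = math.sqrt(sum(x * x for x in d))
        d = [x / norm for x in d]
        t = exit_distance(source, d, half)
        if t > max_dist:
            unbounded += 1
        else:
            hits.append(t)
    r = sum(hits) / len(hits)                  # mean hit distance
    volume = (4.0 / 3.0) * math.pi * r ** 3    # sphere approximation
    surface = 4.0 * math.pi * r ** 2
    leakage = unbounded / n_rays               # claim 5's leakage factor
    rt60 = 0.161 * volume / (surface * absorption) * (1.0 - leakage)
    return rt60, hits

# A 6 m x 8 m x 5 m room, user near the centre, moderately absorbent walls.
rt60, hits = estimate_rt60(source=[0.0, 0.5, 0.0],
                           half=[3.0, 4.0, 2.5],
                           absorption=0.3)
# Claim 7: playback delay for the round trip to and from each hit point.
delays = [2.0 * dist / SPEED_OF_SOUND for dist in hits]
```

In this closed room no rays escape, so the leakage factor is zero and the full Sabine estimate is used; opening one wall of the box (or lowering `max_dist`) would raise `leakage` and shorten the estimated decay, as claim 5 describes.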
GB1503639.5A 2015-03-04 2015-03-04 System and method of virtual reality feedback Withdrawn GB2536020A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1503639.5A GB2536020A (en) 2015-03-04 2015-03-04 System and method of virtual reality feedback


Publications (2)

Publication Number Publication Date
GB201503639D0 GB201503639D0 (en) 2015-04-15
GB2536020A true GB2536020A (en) 2016-09-07

Family

ID=52876480

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1503639.5A Withdrawn GB2536020A (en) 2015-03-04 2015-03-04 System and method of virtual reality feedback

Country Status (1)

Country Link
GB (1) GB2536020A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019161313A1 (en) 2018-02-15 2019-08-22 Magic Leap, Inc. Mixed reality virtual reverberation
WO2019212794A1 (en) * 2018-05-03 2019-11-07 Dakiana Research Llc Method and device for sound processing for a synthesized reality setting
CN111713121A (en) * 2018-02-15 2020-09-25 奇跃公司 Dual listener position for mixed reality
EP3698201A4 (en) * 2017-10-17 2020-12-09 Magic Leap, Inc. Mixed reality spatial audio
US11304017B2 (en) 2019-10-25 2022-04-12 Magic Leap, Inc. Reverberation fingerprint estimation
EP4092515A1 (en) * 2018-02-17 2022-11-23 Varjo Technologies Oy System and method of enhancing user's immersion in mixed reality mode of display apparatus
EP4102859A1 (en) * 2021-06-08 2022-12-14 Atos Information Technology GmbH Virtual reality method and equipment for enhancing the user's feeling of immersion
US11546716B2 (en) 2018-10-05 2023-01-03 Magic Leap, Inc. Near-field audio rendering
CN112106020B (en) * 2018-05-03 2024-05-10 苹果公司 Method and apparatus for sound processing of synthetic reality scenes

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115272617B (en) * 2022-08-29 2023-05-02 北京京航计算通讯研究所 Virtual simulation display method and system for object acoustics

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6723910B1 (en) * 2002-11-18 2004-04-20 Silicon Integrated Systems Corp. Reverberation generation processor
US20080252637A1 (en) * 2007-04-14 2008-10-16 Philipp Christian Berndt Virtual reality-based teleconferencing
US20140132628A1 (en) * 2012-11-12 2014-05-15 Sony Computer Entertainment Inc. Real world acoustic and lighting modeling for improved immersion in virtual reality and augmented reality environments


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Applied Acoustics, Vol. 73, Issue 4, April 2012 (M. Yadav et al.); "A system for simulating room acoustical environments for one's own voice"; pages 409-414 *
iJET, Volume 3, Number 2, June 2008 (M. North et al.); "Virtual Reality Training in Aid of Communication Apprehension in Classroom Environments"; pages 34-37 *
M. Yadav et al.; "Audio Engineering Society - 133rd Convention"; 26-29 October 2012; Convention Paper 8781; "Simulating autophony with auralized oral-binaural room impulse responses" *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3698201A4 (en) * 2017-10-17 2020-12-09 Magic Leap, Inc. Mixed reality spatial audio
US11895483B2 (en) 2017-10-17 2024-02-06 Magic Leap, Inc. Mixed reality spatial audio
US11212636B2 (en) 2018-02-15 2021-12-28 Magic Leap, Inc. Dual listener positions for mixed reality
US11956620B2 (en) 2018-02-15 2024-04-09 Magic Leap, Inc. Dual listener positions for mixed reality
CN111713091A (en) * 2018-02-15 2020-09-25 奇跃公司 Mixed reality virtual reverberation
CN114679677B (en) * 2018-02-15 2024-02-20 奇跃公司 Dual listener position for mixed reality
EP3753238A4 (en) * 2018-02-15 2021-04-07 Magic Leap, Inc. Mixed reality virtual reverberation
JP2021514081A (en) * 2018-02-15 2021-06-03 マジック リープ, インコーポレイテッドMagic Leap,Inc. Mixed reality virtual echo
US20230007332A1 (en) * 2018-02-15 2023-01-05 Magic Leap, Inc. Mixed reality virtual reverberation
CN111713121A (en) * 2018-02-15 2020-09-25 奇跃公司 Dual listener position for mixed reality
US11800174B2 (en) * 2018-02-15 2023-10-24 Magic Leap, Inc. Mixed reality virtual reverberation
CN114679677A (en) * 2018-02-15 2022-06-28 奇跃公司 Dual listener position for mixed reality
IL276510B1 (en) * 2018-02-15 2023-10-01 Magic Leap Inc Mixed reality virtual reverberation
US11477510B2 (en) * 2018-02-15 2022-10-18 Magic Leap, Inc. Mixed reality virtual reverberation
WO2019161313A1 (en) 2018-02-15 2019-08-22 Magic Leap, Inc. Mixed reality virtual reverberation
US11736888B2 (en) 2018-02-15 2023-08-22 Magic Leap, Inc. Dual listener positions for mixed reality
US11589182B2 (en) 2018-02-15 2023-02-21 Magic Leap, Inc. Dual listener positions for mixed reality
EP4092515A1 (en) * 2018-02-17 2022-11-23 Varjo Technologies Oy System and method of enhancing user's immersion in mixed reality mode of display apparatus
US11743645B2 (en) * 2018-05-03 2023-08-29 Apple Inc. Method and device for sound processing for a synthesized reality setting
US20220322006A1 (en) * 2018-05-03 2022-10-06 Apple Inc. Method and device for sound processing for a synthesized reality setting
US11363378B2 (en) * 2018-05-03 2022-06-14 Apple Inc. Method and device for sound processing for a synthesized reality setting
CN112106020A (en) * 2018-05-03 2020-12-18 苹果公司 Method and apparatus for sound processing of a synthetic reality setting
WO2019212794A1 (en) * 2018-05-03 2019-11-07 Dakiana Research Llc Method and device for sound processing for a synthesized reality setting
CN112106020B (en) * 2018-05-03 2024-05-10 苹果公司 Method and apparatus for sound processing of synthetic reality scenes
US11546716B2 (en) 2018-10-05 2023-01-03 Magic Leap, Inc. Near-field audio rendering
US11778411B2 (en) 2018-10-05 2023-10-03 Magic Leap, Inc. Near-field audio rendering
US11540072B2 (en) 2019-10-25 2022-12-27 Magic Leap, Inc. Reverberation fingerprint estimation
US11778398B2 (en) 2019-10-25 2023-10-03 Magic Leap, Inc. Reverberation fingerprint estimation
US11304017B2 (en) 2019-10-25 2022-04-12 Magic Leap, Inc. Reverberation fingerprint estimation
EP4102859A1 (en) * 2021-06-08 2022-12-14 Atos Information Technology GmbH Virtual reality method and equipment for enhancing the user's feeling of immersion

Also Published As

Publication number Publication date
GB201503639D0 (en) 2015-04-15

Similar Documents

Publication Publication Date Title
GB2536020A (en) System and method of virtual reality feedback
US10911882B2 (en) Methods and systems for generating spatialized audio
CN112567767B (en) Spatial audio for interactive audio environments
US7113610B1 (en) Virtual sound source positioning
KR20100021387A (en) Apparatus and method to perform processing a sound in a virtual reality system
JP2008507006A (en) Horizontal perspective simulator
US11902772B1 (en) Own voice reinforcement using extra-aural speakers
CN111158459A (en) Application of geometric acoustics in immersive Virtual Reality (VR)
US11651762B2 (en) Reverberation gain normalization
JP2022525902A (en) Audio equipment and its processing method
JP2023168544A (en) Low-frequency interchannel coherence control
KR20230165851A (en) Audio device and method therefor
WO2023199673A1 (en) Stereophonic sound processing method, stereophonic sound processing device, and program

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)