GB2575492A - An ambisonic microphone apparatus - Google Patents

Info

Publication number
GB2575492A
GB2575492A (Application GB1811458.7A)
Authority
GB
United Kingdom
Prior art keywords
microphone
sound
ambisonic
virtual
sound field
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1811458.7A
Other versions
GB201811458D0 (en)
Inventor
Mcardle Stephen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Centricam Tech Ltd
Original Assignee
Centricam Tech Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Centricam Tech Ltd filed Critical Centricam Tech Ltd
Priority to GB1811458.7A
Publication of GB201811458D0
Publication of GB2575492A
Legal status: Withdrawn

Classifications

    • H04S 3/008 — Systems employing more than two channels, e.g. quadraphonic, in which the audio signals are in digital form, i.e. employing more than two discrete digital channels
    • H04R 1/406 — Arrangements for obtaining desired directional characteristic only by combining a number of identical transducers (microphones)
    • G10L 19/008 — Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • H04R 19/005 — Electrostatic transducers using semiconductor materials
    • H04R 3/005 — Circuits for combining the signals of two or more microphones
    • H04S 7/00 — Indicating arrangements; control arrangements, e.g. balance control
    • H04S 7/304 — Electronic adaptation of the sound field to listener position or orientation, for headphones
    • H04R 19/04 — Electrostatic microphones
    • H04R 2201/401 — 2D or 3D arrays of transducers
    • H04R 2201/405 — Non-uniform arrays of transducers or a plurality of uniform arrays with different transducer spacing
    • H04R 7/04 — Plane diaphragms
    • H04R 7/20 — Securing diaphragm or cone resiliently to support by flexible material, springs, cords, or strands
    • H04S 2400/15 — Aspects of sound capture and related signal processing for recording or reproduction
    • H04S 2420/11 — Application of ambisonics in stereophonic audio systems

Abstract

An apparatus comprising a first Ambisonic microphone unit 601 providing at least one virtual microphone 608 having a sound field size oriented in a fixed direction within a panoramic sound field, and a second Ambisonic microphone unit 701 providing a virtual microphone 702 having a sound field size movable within the panoramic sound field. Preferably, the first unit provides two virtual microphones 608A, 608B which are fixed to provide left-side and right-side sound feeds for a user, who may be in a virtual reality environment. Preferably, the sound field size for the movable virtual microphone is narrower than that of the fixed virtual microphones. Preferably, the sound signal from the second microphone unit is inserted into the sound signal from the first microphone unit. Also claimed is a system comprising an Ambisonic unit providing a virtual microphone with a sound field in a fixed direction which provides a first sound signal, a speaker for outputting the first sound signal, and a processor for injecting an out-of-field sound signal into the first sound signal.

Description

An Ambisonic Microphone Apparatus
Field of the Invention.
The invention relates to an Ambisonic microphone apparatus and particularly, but not exclusively, to an Ambisonic microphone apparatus for a virtual reality (VR) system.
Background of the Invention.
For a truly immersive experience using VR systems, it is necessary to have three-dimensional (3D) spatial sound, i.e. 3D audio. In general, 3D audio can be considered as channel-based, scene-based or object-based. In channel-based systems, the audio content is delivered to a loudspeaker set-up, usually with one channel per speaker for audio playback, e.g. left and right speakers for stereo. In object-based systems, the audio content describes where a certain audio object is placed within the sound field, and data processing is used to calculate its playback on an appropriate 3D speaker set-up.
VR typically relies on scene-based sound capture, where sound at a specific point in the sound field is captured. This might use binaural, quad binaural or Ambisonic systems. Binaural capture uses a dummy head with microphones placed within its artificial ears, whereas quad binaural requires four binaural dummy heads. Both techniques capture the sound field with pre-rendered binaural information, which provides static binaural playback on standard stereo headphones.
Ambisonic systems capture a channel-independent representation of the sound field. Ambisonics is a 3D sound format that captures the full spherical sound field, thereby providing an enveloping surround sound experience, not only in the horizontal plane but also including height information. It can be decoded to any existing speaker layout and also allows for dynamic binaural playback on headphones by applying a binaural renderer incorporating Head Related Transfer Functions (HRTFs).
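By way of illustration only, steering a first-order virtual microphone within a captured B-format sound field can be sketched as below. The function name, the pattern-blending parameter p and the FuMa-style normalisation (W carrying a 1/√2 factor) are assumptions made for this sketch, not details taken from the patent.

```python
import numpy as np

def virtual_mic(W, X, Y, Z, azimuth, elevation, p=0.5):
    """Steer a first-order virtual microphone within a B-format sound field.

    p blends the omnidirectional and figure-of-eight components:
    p = 1 gives an omni pattern, p = 0.5 a cardioid, p = 0 a figure-of-eight.
    FuMa-style normalisation is assumed (W carries a 1/sqrt(2) factor).
    """
    figure_of_eight = (np.cos(azimuth) * np.cos(elevation) * X
                       + np.sin(azimuth) * np.cos(elevation) * Y
                       + np.sin(elevation) * Z)
    return p * np.sqrt(2.0) * W + (1.0 - p) * figure_of_eight
```

With p = 0.5 the virtual microphone has a cardioid pattern; varying the azimuth and elevation moves the virtual microphone within the panoramic sound field without any mechanical movement, which is what makes fixed and movable virtual microphones possible from the same capture.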
Typically, the microphone apparatus for an Ambisonic system is placed as close to its associated camera as possible to give the video and audio capture a same point of reference which, in most cases, is a forward-looking point of reference from the perspective of the camera.
A truly immersive VR experience also requires that out of field sound events are not ignored as, in a real-world environment, a person’s ears are very good at detecting out of visual field sound events and reacting, e.g. turning to face, such sound events.
Objects of the Invention.
An object of the invention is to mitigate or obviate to some degree one or more problems associated with known Ambisonic microphone apparatuses, particularly Ambisonic microphone apparatuses for VR.
The above object is met by the combination of features of the main claims; the subclaims disclose further advantageous embodiments of the invention.
Another object of the invention is to provide an improved Ambisonic microphone apparatus which captures out of visual field sound events and optionally selectively injects audio for said out of visual field sound events into an audio stream for visual in-field audio events.
One skilled in the art will derive from the following description other objects of the invention. Therefore, the foregoing statements of object are not exhaustive and serve merely to illustrate some of the many objects of the present invention.
Summary of the Invention.
In a first main aspect, the present invention concerns an Ambisonic microphone apparatus comprising a first Ambisonic microphone unit configured to provide at least one virtual microphone having a selected sound field size orientated in a fixed direction within a panoramic sound field; and a second Ambisonic microphone unit configured to provide at least one virtual microphone having a selected sound field size movable within said panoramic sound field.
In a second main aspect, the present invention provides a method of processing sound signals, comprising the steps of processing a first sound signal from at least one virtual microphone of a first Ambisonic microphone unit having a selected sound field size orientated in a fixed direction within a panoramic sound field, outputting said processed first sound signal to a user on a speaker device, processing a second sound signal from at least one virtual microphone of a second Ambisonic microphone unit having a selected sound field size movable within said panoramic sound field, and inserting said processed second sound signal into said processed first sound signal.
In a third main aspect, the present invention provides a non-transitory computer readable medium comprising machine readable instructions which, when executed by a processor of a microphone apparatus according to the first main aspect, implement the steps of the second main aspect.
In a fourth main aspect, the present invention provides a sound system comprising: an Ambisonic microphone unit configured to provide at least one virtual microphone having a selected sound field size orientated in a fixed direction within a panoramic sound field, said at least one virtual microphone providing a first sound signal; a speaker for outputting said first sound signal; and a signal processor device for injecting an out of field sound signal into said first sound signal.
In a fifth main aspect, the present invention provides a method of processing sound signals, comprising the steps of: processing a first sound signal from at least one virtual microphone of a first Ambisonic microphone unit having a selected sound field size orientated in a fixed direction within a panoramic sound field; outputting said processed first sound signal to a user on a speaker device; inserting an out of field sound signal into said processed first sound signal.
In a sixth main aspect, the present invention provides a non-transitory computer readable medium comprising machine readable instructions which, when executed by a processor of a microphone apparatus according to the fourth main aspect, implement the steps of the fifth main aspect.
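The injection referred to in the fourth and fifth aspects could, in the simplest case, be an additive mix of the out-of-field signal into the fixed-direction feed. The sketch below is one minimal interpretation; the function name and the gain and offset parameters are illustrative assumptions, since the patent leaves the insertion method open.

```python
import numpy as np

def inject_out_of_field(fixed_feed, out_of_field, gain=1.0, start_sample=0):
    """Mix an out-of-field sound signal into a fixed-direction sound feed.

    A minimal additive insertion: the out-of-field signal is scaled by
    `gain` and summed into the feed from `start_sample` onwards, without
    modifying the input arrays.
    """
    out = fixed_feed.copy()
    end = min(len(out), start_sample + len(out_of_field))
    out[start_sample:end] += gain * out_of_field[:end - start_sample]
    return out
```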
Other aspects of the invention are in accordance with the appended claims.
The summary of the invention does not necessarily disclose all the features essential for defining the invention; the invention may reside in a sub-combination of the disclosed features.
Brief Description of the Drawings.
The foregoing and further features of the present invention will be apparent from the following description of preferred embodiments which are provided by way of example only in connection with the accompanying figures, of which:
Figure 1 is a schematic block diagram of a typical MEMS microphone assembly;
Figure 2 is a schematic diagram of a membrane shown in isolation for the microphone assembly of Fig. 1;
Figure 3 is a schematic block diagram of a microphone system;
Figure 4A is a cardioid polar pattern at 5 kHz for the microphone system of Fig. 3;
Figure 4B is a cardioid polar pattern at 15 kHz for the microphone system of Fig. 3;
Figure 5 is a schematic diagram of an Ambisonic microphone unit;
Figure 6 is a schematic diagram of another Ambisonic microphone unit;
Figures 7A and 7B illustrate Ambisonic microphone arrays;
Figure 8 is a schematic diagram of a virtual microphone provided by an Ambisonic microphone array;
Figure 9 is a schematic diagram of an Ambisonic microphone unit in accordance with the invention;
Figure 10 is a block schematic diagram of an Ambisonic microphone system in accordance with the invention;
Figure 11 is a schematic diagram of a first Ambisonic microphone apparatus in accordance with the invention; and
Figure 12 is a schematic block diagram of a second Ambisonic microphone apparatus in accordance with the invention.
Description of Preferred Embodiments.
The following description is of preferred embodiments by way of example only and without limitation to the combination of features necessary for carrying the invention into effect.
Reference in this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments, but not other embodiments.
The following description of Figs. 1 to 7 is given by way of example only of microphone assemblies, microphone systems and microphone arrays which could be used to form the Ambisonic microphone units and systems for providing an Ambisonic microphone apparatus in accordance with the invention, but it should be understood that other configurations of microphone elements, assemblies, units, arrays and systems could be employed to implement the microphone units for an Ambisonic microphone apparatus in accordance with the concepts described herein.
Referring to Fig. 1, and provided by way of example only, a typical MEMS microphone assembly 10 comprises a housing 12 accommodating an electrically charged (Vb) floating membrane 14. The membrane is spaced from, and generally arranged parallel to, a fixed conductive plane comprising a conductive backplate 16 which is fixed in position relative to the housing 12. The membrane 14 is supported by springs or other biasing means 18 to enable it to move, e.g. flex, as represented by arrow 20 when a pressure wave, indicated by arrow 22, passing through a front window 24 of the housing 12 is incident upon it. In this example of a MEMS microphone assembly 10, an incident pressure wave may also be admitted to the housing by a rear window 26 of the housing 12. Consequently, the fixed backplate 16 has a perforated structure, although this is not always the case with MEMS microphone assemblies.
Movement of the membrane 14 in response to the incident pressure wave 22 causes variations over time in capacitance between the membrane 14 and the fixed backplate 16. The variations in capacitance can be translated by an operational amplifier (Op-Amp) 28 or the like into an electrical signal representative of the pressure variations experienced by the membrane 14, i.e. into an electrical signal representation of the pressure wave 22 incident on the membrane 14.
The interior volume 30 of the housing 12 comprising the acoustic cavity in which the membrane 14 is suspended by the springs 18 is, in effect, open to the atmosphere all around the membrane 14 which causes the MEMS microphone assembly 10 to be inherently omnidirectional.
Consider now Fig. 2 which shows a membrane 100 for a MEMS microphone assembly (not shown). The membrane 100 is shown in isolation for reasons of clarity, but it will be understood that the membrane 100 forms part of a MEMS microphone assembly of the type shown by way of example in Fig. 1. The membrane 100 is suspended from a fixed frame 102 by a biasing mechanism 104 comprising, in this example, a plurality of spring members 106 arranged around the periphery of the membrane 100. The spring members 106 are integrally formed with the membrane 100 and the fixed frame 102. Arrow 108 depicts the desired direction for a unidirectional response of the membrane 100, but, as will be understood from the foregoing, it is not possible to achieve such a response with a current omnidirectional MEMS microphone assembly as illustrated by Figs. 1 and 2.
Where a pressure wave such as a sound wave approaches a front surface 100A of the membrane as viewed in Fig. 2, at about 343 metres/second (m/s) for a normal room-temperature environment, the pressure wave causes the membrane 100 to move or flex, from which an electrical signal representation of the pressure wave can be obtained in the manner described with respect to Fig. 1, but absent any information on the pressure wave's direction or angle of incidence to the front surface 100A of the membrane 100. Similarly, if an identical pressure wave approaches a rear surface of the membrane 100, it will cause the membrane 100 to move or flex in the same manner, enabling the same electrical signal representation of the pressure wave to be obtained, again absent any information on its direction or angle of incidence to the membrane 100, although, in this case, the electrical signal representation will be inverted with respect to that obtained for the pressure wave incident on the front surface 100A of the membrane 100.
As indicated, an inherent feature of current MEMS microphone assemblies is that they are omnidirectional in nature, namely that their responses to incident pressure waves are independent of the angle of incidence of the pressure wave on the membrane. Consequently, although it is not possible to obtain a unidirectional response from a current omnidirectional MEMS microphone assembly, current MEMS technology allows MEMS microphone assemblies to be manufactured at very low cost and at very small sizes so there is a desire to make use of the cost and size advantages afforded by current MEMS technology.
Fig. 3 shows a schematic block diagram of a microphone system 200. The microphone system 200 comprises a first microphone assembly 202 for providing a first electrical signal representation of a pressure wave incident thereon and a second microphone assembly 204 for providing a second electrical signal representation of the pressure wave. The pressure wave and its preferred direction of propagation are represented by arrow 206. The second microphone assembly 204 is spaced a selected or calculated distance d from the first microphone assembly 202. More specifically, a membrane 208 of the first microphone assembly 202 is spaced the selected or calculated distance d from a membrane 210 of the second microphone assembly 204.
The first and second microphone assemblies 202, 204 may be accommodated in separate respective housings 212, 214 or they may be accommodated in a same housing 216 as depicted by dashed lines 218. Whatever the spaced arrangement of the first and second microphone assemblies 202, 204, however, it is necessary to know the distance d between the respective membranes 208, 210. It is also necessary to know the speed of the pressure wave, although an estimate of said speed may be employed. For example, a sound wave speed or velocity of 343 m/s may be used for envisaged applications of the invention, where the microphone system 200 will typically be utilized in a room-temperature environment for detecting sound signals. That being said, the microphone system 200 may be equipped with a module for measuring the speed or velocity of an incident pressure wave.
Each of the first and second microphone assemblies 202, 204 has a respective fixed backplate 220, 222 placed adjacent to its respective membrane 208, 210.
The first and second microphone assemblies 202, 204 may have generally the same structure as shown for the typical MEMS microphone assembly 10 of Fig. 1, although it will be understood that the microphone system 200 may comprise any combination of two or more known microphone assemblies, particularly two or more known MEMS microphone assemblies.
Whatever the spaced arrangement of the first and second microphone assemblies 202, 204, the outputs 224, 226 of the first and second microphone assemblies 202, 204 are each connected to a signal processor 228 which is configured to combine the first and second electrical signal representations obtained from the first and second microphone assemblies 202, 204 to provide a unidirectional output signal 230. The signal processor 228 includes a non-transitory memory 232 which may be configured to store machine readable instructions which, when executed by the signal processor 228, implement the methods herein described. The signal processor 228 may also provide the module for measuring the speed or velocity of an incident pressure wave. Furthermore, the memory 232 may be arranged to store the output signals from the first and second microphone assemblies 202, 204 including the outputted first and second electrical signal representations of the incident pressure wave 206.
It can be seen from Fig. 3 that the second microphone assembly 204 is placed behind the first microphone assembly 202 with respect to a desired direction of the incident pressure wave as depicted by arrow 206. Placing the second microphone assembly 204 at a selected or calculated distance behind the first microphone assembly 202 makes it possible to gain information on the direction of the incident pressure wave 206.
A pressure wave reaching the first microphone assembly 202 from the desired direction depicted by arrow 206 can be considered as reaching the membrane 208 of said first microphone assembly 202 at a time t and reaching the membrane 210 of the second microphone assembly 204 at a slightly later time of d/c seconds where c is the speed or velocity of the pressure wave and d is the separation distance between the first and second membranes 208, 210 in metres. For convenience, t can be equated to time zero at the first membrane 208.
Conversely, a pressure wave reaching the membrane 208 of the first microphone assembly 202 at time t = 0 from an opposite direction to arrow 206 will already have encountered the membrane 210 of the second microphone assembly 204 at a time (t - d/c) seconds earlier.
It is therefore possible to take the outputs 224, 226 of the first and second microphone assemblies 202, 204 and manipulate these using the signal processor 228 to beamform the output 230 of the microphone system 200.
Taking the outputs 224, 226 of the first and second microphone assemblies 202, 204 for the situation where the incident pressure wave is incident on the membranes 208, 210 in a direction opposite to the desired direction of arrow 206, a method in accordance with the invention comprises the steps at the signal processor 228 of: (a) capturing an electrical signal representation of the pressure wave from the second membrane 210; (b) inverting said captured electrical signal representation; (c) adding a time delay of d/c seconds to said captured electrical signal representation; (d) storing said inverted and time delayed electrical signal representation in the memory 232; (e) capturing an electrical signal representation of the pressure wave from the first membrane 208; and (f) adding the stored inverted and time delayed electrical signal representation from the memory 232 to the captured electrical signal representation of the pressure wave from the first membrane 208. As the pressure wave should result in the generation of identical electrical signal representations by the first and second membranes 208, 210 save for the fact that one is inverted with respect to the other and one occurs at a time d/c seconds before the other, it will be appreciated that the method steps (a) to (f) result in the stored inverted and time delayed electrical signal representation from the memory 232 cancelling out the captured electrical signal representation of the pressure wave from the first membrane 208, i.e. resulting in a zero signal output 230.
It will be understood that steps (b), (c) and (d) may be performed in any order. It will also be appreciated that the method may include storing the captured electrical signal representation of the pressure wave from the first membrane 208 prior to step (f).
As the method defined above is a continuous process, only pressure waves from the exact opposite direction of the desired direction 206 are completely cancelled out. Pressure waves coming from behind, but at less than opposite angles are cancelled to lesser degrees.
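Steps (a) to (f) can be illustrated numerically. The sketch below, using the example figures given later in the description (d = 10 mm, c = 343 m/s, 48 kHz sampling), simulates a rear-arriving tone and shows the invert-delay-add process cancelling it; rounding the delay to whole samples is a simplification of this sketch, as the true delay is fractional (about 1.4 samples here) and exact cancellation in practice would require fractional-delay filtering.

```python
import numpy as np

fs = 48_000              # sampling rate (Hz)
c = 343.0                # speed of sound (m/s)
d = 0.010                # membrane spacing (m)
D = round(fs * d / c)    # inter-membrane delay, rounded to whole samples

n = np.arange(512)
wave = np.sin(2 * np.pi * 1000 * n / fs)        # 1 kHz test tone

# A rear-arriving wave reaches the second membrane (210) first and the
# first membrane (208) D samples later.
x2 = wave
x1 = np.concatenate([np.zeros(D), wave[:-D]])

# Steps (b) and (c): invert the second-membrane signal and delay it by d/c.
inverted_delayed = -np.concatenate([np.zeros(D), x2[:-D]])

# Step (f): add it to the first-membrane signal -- the rear wave cancels.
output = x1 + inverted_delayed
print(np.max(np.abs(output)))   # 0.0 for an exactly rear-arriving wave
```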
Taking the outputs 224, 226 of the first and second microphone assemblies 202, 204 for the situation where the incident pressure wave is incident on the membranes 208, 210 in a same direction as the desired direction of arrow 206, a further method in accordance with the invention comprises the steps at the signal processor 228 of: (i) capturing an electrical signal representation of the pressure wave from the first membrane 208; (ii) storing said captured electrical signal representation in the memory 232; (iii) capturing an electrical signal representation of the pressure wave from the second membrane 210; (iv) inverting said captured electrical signal representation from the second membrane 210; (v) adding a time delay of d/c seconds to said inverted electrical signal representation from the second membrane 210; and (vi) adding the stored electrical signal representation from the memory 232 to the inverted and time delayed electrical signal representation of the pressure wave from the second membrane 210. The net effect is that the stored electrical signal representation from the memory 232 is added to the inverted and time delayed electrical signal representation of the pressure wave from the second membrane 210 at time t = 2d/c seconds. This time delay comprises a first time-delay of d/c seconds being the later time at which the pressure wave is incident on the second membrane 210 and a second time delay of d/c seconds being the time delay added by the signal processor 228.
The pressure wave arriving from the desired direction of arrow 206 is sampled first by the first membrane 208 (t = 0 seconds) and eventually sampled by the second membrane 210 at t = d/c seconds. The addition of the further delay of d/c seconds and the inversion of the signal from the second membrane 210 results in this signal manifesting itself as some distortion in the electrical signal representation of the pressure wave from the first membrane 208, but no complete cancellation occurs.
It will be understood that steps (ii), (iii) and (iv) may be performed in any order. It will also be appreciated that the method may include storing the captured electrical signal representation of the pressure wave from the second membrane 210 prior to step (v).
As the method defined above is a continuous process, pressure waves from the desired direction 206 are largely retained. Consequently, the continuous method results in a cardioid polar pattern for the microphone system 200 as illustrated by Figs. 4A and 4B.
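The cardioid pattern can be checked analytically. For a plane wave arriving at angle θ to the desired direction, the invert-delay-add process has magnitude response |1 − exp(−jωτ(1 + cos θ))| with τ = d/c, which for small ωτ is proportional to the ideal cardioid ½(1 + cos θ). The sketch below is an illustration under that model, not a computation taken from the patent; it compares the two at 1 kHz for d = 10 mm.

```python
import numpy as np

c, d = 343.0, 0.010
f = 1000.0                      # analysis frequency (Hz)
omega, tau = 2 * np.pi * f, d / c

theta = np.linspace(0.0, 2.0 * np.pi, 361)
# Magnitude response of the delay-invert-add process versus arrival angle.
response = np.abs(1.0 - np.exp(-1j * omega * tau * (1.0 + np.cos(theta))))
response /= response.max()      # normalise to the on-axis (theta = 0) value

cardioid = 0.5 * (1.0 + np.cos(theta))
print(np.max(np.abs(response - cardioid)))   # close to zero at this frequency
```

At higher frequencies ωτ grows and the pattern departs from the ideal cardioid, consistent with the difference between the 5 kHz and 15 kHz patterns of Figs. 4A and 4B.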
Preferably, the inverted and time delayed signal from the second membrane 210 is attenuated prior to being added in order to mitigate to some degree the distortion it causes to the signal from the first membrane 208. The degree of attenuation may be determined at a final test stage of the assembled microphone system of Fig. 3.
Additionally, or alternatively to attenuating the added signal, it is possible to utilize one or more additional microphone assemblies in the cascaded manner illustrated in Fig. 3 with the outputs of all of the microphone assemblies being fed to the signal processor 228. The amount of distortion would be reduced for each additional microphone assembly added to the string of assemblies.
Where, for example, a third microphone assembly is added to the microphone system 200 of Fig. 3, the third microphone assembly is preferably spaced by a selected or calculated distance d’ from said first microphone assembly 202, where d’ is greater than d.
It is possible to create an Ambisonic microphone unit based on the foregoing embodiments of the invention by using four or more microphone systems of the type shown in Fig. 3 or of similar type, each having a cardioid polar pattern and each being formed from two or more microphone assemblies.
It is desirable to make the microphone assemblies 202, 204 as small as possible. This requires that distance d is minimized. However, a constraint on minimizing distance d is the sampling time required for obtaining the outputs 224, 226 of the first and second microphone assemblies 202, 204 and the processing time required by the signal processor 228 to process said outputs 224, 226. The distance d must be of a size which provides sufficient time to acquire and process the outputs 224, 226, i.e. the time taken for the pressure wave to travel distance d must be such as to allow the two time-separated outputs 224, 226 to be acquired, to be fed to the signal processor 228 and processed thereby.
In the case where distance d is, say, 10mm and taking the velocity of sound at room temperature as 343m/s, there will be a delay of approximately 29.4μsecs between the first and second membranes 208, 210 of the first and second microphone assemblies 202, 204 detecting the same sound pressure wave. To enable a pressure wave travelling from the direction opposite to the desired direction to be cancelled, the microphone system 200 must be configured such that the steps of sampling the pressure wave at the second microphone assembly 204, inverting said sampled pressure wave and then adding said inverted and sampled pressure wave to the pressure wave sampled at the first microphone assembly 202 can be conducted within 29.4μsecs. This is achievable using a 48kHz pressure wave sampling rate at each of the first and second microphone assemblies 202, 204, which provides one sample every 20.8μsecs, thereby leaving about 9μsecs for the signal processing steps.
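The arithmetic behind these figures can be checked with a short back-of-envelope sketch (illustrative only, with the values assumed from the passage above; note that with c = 343 m/s the inter-membrane delay works out to about 29.2 μs, so the quoted 29.4 μs corresponds to a slightly lower speed of sound of about 340 m/s):

```python
# Assumed values from the passage above.
d = 0.010                 # capsule spacing, metres (10 mm)
c = 343.0                 # speed of sound at room temperature, m/s
fs = 48_000               # per-capsule sampling rate, Hz

inter_membrane_delay = d / c        # time for the wave to cross d, ~29.2 us
sample_period = 1.0 / fs            # one sample every ~20.8 us
processing_budget = inter_membrane_delay - sample_period   # ~8.3 us
```

The remaining budget of roughly 8–9 μs is the window available for the invert-delay-add processing at the signal processor 228, consistent with the figure stated above.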
It will be understood from the foregoing that the distance d is directly related to the selected sampling rate and signal processor speed. Consequently, when seeking to minimize the size of distance d, one first determines the preferred or required sampling rate, selects a preferred or required signal processor having a known processing speed and then determines the minimum size of distance d therefrom. The spacing between a membrane and its capacitor plate in the microphone assemblies 202, 204 is typically 1mm.
An Ambisonic microphone unit may comprise an array of 4, 8, 32, etc. unidirectional or cardioid microphone assemblies or systems placed equidistantly on a surface of an imaginary or real sphere of radius R. Sampling the sound waves at points on the surface of the sphere coincident with the microphone systems is essentially the same as sampling the solution to the wave equation for any number of planar waves hitting the spherical surface from any directions of incidence. Using known mathematics involving spherical Bessel functions, to a degree dependent on the number of microphones, information on what sound came from what direction can be obtained.
To create a unidirectional or cardioid microphone system using MEMS technology, it is necessary to use two or more MEMS microphone assemblies to create each unidirectional or cardioid microphone system as hereinbefore described.
In an embodiment of a four microphone Ambisonic microphone unit 300 as schematically depicted in Fig. 5, it is necessary to use eight MEMS microphone assemblies 304 to create the four MEMS microphone systems 302. Each microphone system 302 may be of a type as depicted, for example, in Fig. 3. The microphone systems 302 are arranged in an Ambisonic format such as a tetrahedral configuration as shown in Fig. 5.
It can be seen therefore that, for an N microphone Ambisonic microphone unit, at least 2*N MEMS microphone assemblies are required.
The resulting Ambisonic microphone unit 300 comprises a multiple of four first microphone systems 302 arranged equidistant from a centre point 301 of an imaginary or real sphere 303 and preferably arranged in a tetrahedral configuration. Each of the first microphone systems 302 comprises a first microphone assembly 304A and a second microphone assembly 304B, where said second microphone assembly 304B is spaced by the selected distance d from its respective first microphone assembly 304A, where d is less than the radius R of the imaginary or real sphere.
In one embodiment, each of the first microphone assemblies 304A may have associated with it a third microphone assembly (not shown) spaced by a selected distance d’ from its respective first microphone assembly 304A, where d’ is greater than d but less than the radius R of the imaginary or real sphere.
It is, however, possible to form an Ambisonic microphone unit using a reduced number of microphone assemblies, namely to reduce the number of microphone assemblies from 2*N microphone assemblies to as few as N + 1 microphone assemblies as depicted in Fig. 6. This involves creating an Ambisonic microphone unit 400 by placing, for example, four primary microphone assemblies 404A (Fig. 1) on a surface of the imaginary or real sphere having a radius R and placing a single secondary microphone assembly 404B at the centre point of the imaginary or real sphere. The secondary microphone assembly 404B is used as the common, second microphone assembly for each of the four microphone systems formed by connecting an output of the secondary microphone assembly 404B to the respective signal processors of the four primary microphone assemblies 404A. As each of the primary and secondary microphone assemblies 404A,B is omnidirectional, its orientation with respect to other ones of the primary and secondary microphone assemblies 404A,B is not critical. The four primary microphone assemblies 404A are preferably arranged in a tetrahedral configuration.
The resulting Ambisonic microphone unit 400 comprises a multiple of four first microphone assemblies 404A arranged equidistantly from a centre point of the imaginary or real sphere having the radius R. In this configuration of an Ambisonic microphone unit 400, the single, shared secondary microphone assembly 404B is positioned at the centre of the imaginary or real sphere such that the selected distance d from each primary microphone assembly 404A to the single, shared secondary microphone assembly 404B is equal to the radius R of the sphere.
In one embodiment, each of the primary microphone assemblies 404A may have associated with it a third microphone assembly (not shown) spaced by a selected distance d’ from its respective primary microphone assembly, where d’ is less than the radius R of the imaginary or real sphere.
In the embodiments of Figs. 5 and 6, it is possible that an Ambisonic sound signal output of the microphone units is an “A” format signal, a “B” format signal, or a “C” format signal.
It is, however, also possible to form an Ambisonic microphone unit generally in accordance with Figs. 5 and 6 where the microphone assemblies or systems each comprise an analogue condenser microphone assembly or system of respectively similar configurations as shown in Figs. 1 and 3.
In embodiments where the microphone assemblies or systems (Fig. 3) are arranged in a tetrahedral array, and preferably a B format tetrahedral array, pairs of MEMS microphone assemblies are preferably provided to give eight channels. The use of small MEMS microphone assemblies enables the size of the array to be miniaturized. The pairs of MEMS microphone assemblies may be spatially offset within a pair and/or between the pairs. More preferably, each pair of MEMS microphone assemblies is arranged with one spaced a small distance behind the other. Each pair of MEMS microphone assemblies may be sufficiently displaced to provide a single cardioid pattern beam formed from the two omnidirectional MEMS assemblies. As such, the signal from the first MEMS assembly may be delayed in time and then combined with the signal from the second MEMS assembly to cancel out signals arriving from behind the pair of MEMS assemblies to provide a controlled cardioid polar pattern. The eight-channel arrangement so formed provides for better manipulation of the cardioid pickup pattern by the Ambisonic microphone unit array, enabling much more accurate and tight beam forming.
Where the Ambisonic microphone unit’s array of microphone assemblies or systems comprises omnidirectional microphone assemblies which accept sound from all directions, electrical signals of the microphone assemblies contain information about sounds coming from all directions. Processing of these sounds allows the selection of a sound signal coming from a given direction. Thus, a microphone array can comprise many known arrangements which enable selection of sound coming from a given direction by using known algorithms to process one or many channel signals of a captured surround sound field.
An Ambisonic microphone unit for an Ambisonic microphone module or apparatus in accordance with the invention may therefore be formed from or comprise: a combination or array of microphone assemblies 10 as shown in Fig. 1; and/or a combination or array of microphone systems 200 shown in Fig. 3; and/or a plurality of the microphone units 300, 400 as shown in either of Figs. 5 or 6; and/or a combination of orthogonal bi-polar transducer elements with an omnidirectional, pressure sensitive capsule.
For the latter case, the output of the omnidirectional, pressure sensitive capsule is referred to as the 'W' signal, and provides information about the overall amplitude of sound impinging on the Ambisonic microphone apparatus. Bi-polar or figure-of-eight transducer elements forming an array for an Ambisonic unit of the Ambisonic microphone apparatus of the invention can provide the directional information, that is, their outputs can be used to determine the direction from which each element of sound arrives. Preferably, one of these elements points front-back providing the 'X' signal, another points left-right ('Y'), and a third up-down ('Z'). These four signals, W, X, Y, Z, convey everything that needs to be known about the amplitude and direction of the acoustic signals arriving at the microphone unit array. The four signals together are known as B-format signals and, if recorded on four or eight discrete tracks or channels, can provide a record of the original sound, captured with total three-dimensional accuracy. A decoder embodied in a signal processor of a camera unit, server or a user device can be configured to convert the microphone apparatus's output signals into a form suitable to drive one or more speakers.
By combining the W, X, Y and Z signals in various ways, it is possible to recreate the effect of any conventional microphone polar pattern from omnidirectional, through cardioid, hyper-cardioid and figure-of-eight, pointed in any direction. This works in exactly the same way as a conventional stereo middle-and-side microphone, only in three dimensions instead of just one (left-right). With the right combinations of W, X, Y and Z signals, it is therefore possible to replicate the signals that would have been obtained from, say, a stereo pair of crossed cardioids.
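The combining of W, X, Y and Z just described can be illustrated with a short Python sketch (an illustration only, with an assumed function name, and ignoring the conventional −3 dB scaling sometimes applied to W): a first-order virtual microphone aimed at a chosen azimuth and elevation, with a pattern parameter p blending omnidirectional (p = 1) through cardioid (p = 0.5) to figure-of-eight (p = 0):

```python
import math

def virtual_mic(w, x, y, z, azimuth, elevation, p=0.5):
    """Combine one B-format sample set (w, x, y, z) into a single
    virtual-microphone sample aimed at (azimuth, elevation), radians.
    p blends omni (1.0) through cardioid (0.5) to figure-of-eight (0.0)."""
    # Unit vector of the virtual microphone's look direction.
    dx = math.cos(azimuth) * math.cos(elevation)
    dy = math.sin(azimuth) * math.cos(elevation)
    dz = math.sin(elevation)
    return p * w + (1 - p) * (x * dx + y * dy + z * dz)

# A plane wave from straight ahead gives (ignoring W scaling) w=1, x=1.
on_axis = virtual_mic(1.0, 1.0, 0.0, 0.0, azimuth=0.0, elevation=0.0)
rear = virtual_mic(1.0, 1.0, 0.0, 0.0, azimuth=math.pi, elevation=0.0)
```

With p = 0.5 the on-axis response is unity and the rear response is zero: the cardioid null, produced purely by signal combination rather than capsule geometry.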
The microphone array for an Ambisonic microphone unit for the Ambisonic microphone apparatus may be an A format, a B format or a C format Ambisonic signal array. The microphone array may comprise a Nimbus-Halliday microphone, a Soundfield microphone or three figure of eight microphones in an orthonormal arrangement respectively along X, Y and Z directions as illustrated in Figs. 7A and 7B. Fig. 7A shows an array 500 having a support 502 with three figure of eight microphones 500x, 500y, 500z, where the X direction microphone 500x is aligned in a horizontal direction as viewed. Fig. 7B shows an Ambisonic microphone array 550 having a support 552 with three figure of eight microphones 550x, 550y, 550z, where the X direction microphone 550x is aligned in a direction inclined to the horizontal as viewed. Processing the W, X, Y, Z Ambisonic signals can provide one or more virtual microphones.
Given a fixed physical relationship in space between microphone elements, assemblies or systems for an Ambisonic microphone unit for the Ambisonic apparatus of the invention, digital signal processing (DSP) of the signals from each of the individual microphone elements, assemblies or systems can create one or more virtual microphones isolating sound from a determined direction within the surround sound field. Different algorithms permit the creation of virtual microphones with extremely complex virtual polar patterns and even the possibility to steer the individual lobes of the virtual microphones’ patterns so as to home in on, or to reject, particular sound sources, i.e. directions, of sound. In the case where the array consists of omnidirectional microphone elements, assemblies or systems which accept sound from all directions, the electrical signals of the microphone elements, assemblies or systems contain the information about the sounds coming from all directions. Joint processing of these sounds allows the selection of a sound signal coming from a given direction.
From the foregoing, it can be seen that an Ambisonic microphone unit can be considered as comprising an array of acoustic sensors (microphone elements, assemblies, systems, etc.) mounted on a surface of a real or imaginary sphere which each sample the acoustic pressure across the surface of the sphere at any given time from sound sources at remote distances from the centre point of the sphere, the distances to the sound sources being much greater than the radius R of the sphere. Each sensor in the array can be thought of as sampling the instantaneous solution of the Fourier-Bessel exact sound field description, consisting of an infinite number of spherical harmonics. In theory, decomposition of the sound field into harmonics leads to a description of all sound sources incident on the sphere at that point in time. In practice, only a finite number of spherical harmonics can be processed which leads to some compromise in the sound field decomposition.
For practical applications, it is assumed that the distance of the sound sources from the sphere means that all acoustic waves incident on the sphere comprise plane waves. A B format Ambisonic decomposition of the sound field, to a first order, produces the four signals as described above, namely W, X, Y, Z. As already indicated, W is the acoustic pressure at a given point in 3D space, whilst X, Y, Z are the vectors of velocity at that point in orthogonal space. For a tetrahedral microphone array using 4 cardioid microphones, it is well established that:
W' = LFU + RFD + RBU + LBD
X' = LFU + RFD - RBU - LBD
Y' = LFU - RFD - RBU + LBD
Z' = LFU - RFD + RBU - LBD where L = left, R = right, B = back, F = forward, U = up, D = down denotes each sensor in the array.
Considering the above equations as an A-format to B-format conversion matrix, some filters and some gain factors need to be applied in order to obtain more accurate W, X, Y, Z signals. However, it has also been established that only two filters are necessary: one for the zero order W component and one for the 1st order X, Y, Z components. There are known mathematical methods for determining the zero order W component and the 1st order X, Y, Z components, but they can also be obtained through testing the Ambisonic microphone unit or apparatus in an anechoic chamber.
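The unfiltered A-format to B-format combination given by the equations above can be written out directly (a sketch only; the per-order correction filters and gain factors mentioned are deliberately omitted, and the function name is an assumption):

```python
def a_to_b(lfu, rfd, rbu, lbd):
    """Unfiltered first-order B-format estimate (W', X', Y', Z')
    from the four tetrahedral capsule signals LFU, RFD, RBU, LBD."""
    w = lfu + rfd + rbu + lbd   # zero order: overall pressure
    x = lfu + rfd - rbu - lbd   # 1st order: front-back velocity
    y = lfu - rfd - rbu + lbd   # 1st order: left-right velocity
    z = lfu - rfd + rbu - lbd   # 1st order: up-down velocity
    return w, x, y, z
```

As a check of the signs: equal pressure on all four capsules (a wave with no net direction at this order) yields only a W' component, while a signal on the left-front-up capsule alone contributes positively to all of front (X'), left (Y') and up (Z').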
Live or recorded Ambisonic audio signals, preferably B format signals, can be used to create a virtual microphone virtually located at the centre of the real or imaginary sphere that can be directed outwardly anywhere in 3D space, i.e. panoramic space, as depicted in Fig. 8 where the tetrahedral object 600 represents a tetrahedral Ambisonic microphone unit 601 for an Ambisonic microphone apparatus in accordance with the invention having cardioid acoustic sensors 602 (microphone elements, assemblies or systems as hereinbefore described) at positions coincident with the ends of the legs 603 of the tetrahedral object 600. The four acoustic sensors 602 comprise LFU, RFD, LBD, RBU acoustic sensors using the notation of the equations above. The length of a leg 603 to the centre point 604 of the tetrahedral object 600 equals the radius R of the real or imaginary sphere 606. The conical object 607 represents the sound field of the virtual microphone 608 which, in this instance, is aligned vertically with the Z axis. However, it can be aligned with any direction in the panoramic sound field. Furthermore, the characteristics of the virtual microphone 608 can be controlled by a signal processor 610 to set the virtual microphone’s polar pattern from omni-directional through figure of eight to super cardioid or hyper cardioid. Any number of independent virtual microphones 608 can be created simultaneously based on available signal processor resources.
In a preferred embodiment of the Ambisonic microphone unit 601 for an Ambisonic microphone apparatus in accordance with the invention as schematically depicted in Fig. 9, the signal processor 610 is preferably configured to create two virtual microphones 608A,B to provide a user with a stereo signal. These can be tracked together through 3D space to give the user a sense of immersion in the live or recorded 3D sound space. This immersion experience can be further enhanced by applying a binaural renderer including Head Related Transfer Functions (HRTFs) to the sound/audio signals from the two virtual microphones 608A,B for better 3D sound perception. The binaural renderer including HRTFs may be embodied by machine readable instructions stored in a non-transitory memory 612 of the signal processor 610. The signal processor 610 may also be configured to store in the memory 612 machine readable instructions which, when executed by the signal processor 610, configure it to implement the methods herein described.
The first Ambisonic microphone unit 601 is therefore configured to provide two virtual microphones 608A,B, each having a selected sound field size orientated in a fixed direction within said panoramic sound field and arranged such that a first one of the virtual microphones
608A provides a left-side sound feed for a user and a second one of the virtual microphones 608B provides a right-side sound feed for a user. The left-side and right-side sound feeds can be outputted on suitable speaker devices 614 such as stereo speaker devices. Furthermore, as the user will always be at the centre of the tetrahedral object 600, the two virtual microphones 608A,B can be fed to right and left ears of, for example, a headphones device or VR goggles device to represent an audio immersion into a live or recorded 3D sound field such as a B format sound field.
The first and second virtual microphones 608A,B are configured to capture the same portion, overlapping portions or adjacent portions of the panoramic sound field in a forward visual direction of the user relative to the user’s gaze direction. The selected sound field sizes for each of the first and second virtual microphones 608A,B may be adjustable, but, in one embodiment, they each have a selected sound field size of about 30° with the first virtual microphone 608A being orientated at about 30° above the X axis (horizontal) and at about +30° with respect to the Y axis and with the sound field for the second virtual microphone 608B being orientated at about 30° above the X axis (horizontal) and at about -30° with respect to the Y axis.
A downside of a VR immersion experience provided by the microphone unit 601 is that a user must point the one or more virtual microphone(s) 608 in a direction of interest, i.e. in a direction of a visual field of view which may be limited by, for example, VR goggles. Acoustic information away from this “direction of interest” may be lost or severely attenuated.
Using the microphone unit 601 as depicted by Fig. 9 provides the user with about a 60° sound field just above the horizontal plane in a direction of gaze of the user. Any sound sources (sound events) within the 60° sound field will be detected and played to the user through, for example, VR goggles, a headphones device or speaker devices 614, but sound events occurring outside the 60° sound field may not be detected.
Take, by way of example, a scenario in which the user (a first person) is facing north and thus has a 60° sound field generally in the northerly direction, and in which the user is listening to a second person situated somewhere north of the user (i.e. within the northerly directed 60° sound field). The two virtual microphones 608A,B will detect the second person’s conversation as the two virtual microphones 608A,B will detect any/all audio originating within the northerly directed 60° sound field. However, consider now that a vehicle located south-west of the user slams into a lamp post, creating an audio event at least equal if not greater in amplitude than that of the second person’s conversation. Since the microphone unit 601 has only two virtual microphones 608A,B, each having a selected sound field size orientated in a fixed forward direction, the sound of the car crash event may be missed.
Taking the above scenario and adding a third person to the south-west of the user who shouts out the name of the second person talking to the user, the natural reaction of both the second person and the user would be to turn in the direction of the third person. However, using the microphone unit 601 as described would lead to the microphone unit 601 not detecting the third person’s call with the result that the second person would turn to face the third person, but the user would be unaware of the presence of the third person thus degrading the immersive experience.
This problem could be addressed by combining the first microphone unit 601 with a second microphone unit as schematically depicted in Fig. 10 to provide a microphone apparatus 700 in accordance with the invention. The second microphone unit 701 could be formed identically to the first microphone unit 601, but with a different configuration of virtual microphones as depicted, by way of example, in Fig. 11. The second microphone unit 701 preferably comprises an Ambisonic microphone unit configured to provide at least one virtual microphone 702 having a selected sound field size where said virtual microphone 702 is movable under control of the signal processor 610 within said panoramic sound field. Preferably, the selected sound field size of the at least one virtual microphone 702 is narrower than the selected sound field size of the at least one virtual microphone 608 of the first microphone unit 601. As shown in Fig. 11, preferably the second microphone unit 701 is configured to provide a plurality of virtual microphones 702, each having a selected sound field size movable within said panoramic sound field. The sound fields of the plurality of virtual microphones 702 may be independently movable within the panoramic sound field, but are preferably movable together as a unit within said panoramic sound field.
The one or more virtual microphones 702 are controlled by the signal processor 610 to scan the panoramic sound field to identify sound events occurring outside the forward sound field of the one or more virtual microphones 608 of the first microphone unit 601. For reasons of efficiency, the signal processor 610 preferably controls the one or more virtual microphones 702 to not scan the sound field covered by the one or more virtual microphones 608 of the first microphone unit 601.
One of the primary functions of the one or more virtual microphones 702 is to provide audio for detected sound events occurring outside the forward sound field of the one or more virtual microphones 608 of the first microphone unit 601, to enable said audio to be inserted into an audio stream provided to a user based on the audio signals from the one or more virtual microphones 608 of the first microphone unit 601. The arrangement may be such that, for sound events occurring outside the forward sound field of the one or more virtual microphones 608, audio for such sound events is only inserted into the audio stream provided to the user if any of such detected sound events is above a selected or calculated sound level threshold.
The signal processor 610 may be configured to adjust the scan speed and/or the scan direction and/or the scan regions of the one or more virtual microphones 702 of the second microphone unit 701.
It is preferred that each of the acoustic sensors (microphone elements, assemblies or systems) of the second microphone unit 701 has a hyper-cardioid polar pattern and/or comprises MEMS microphone elements, assemblies or systems. The second microphone unit 701 is preferably arranged in a tetrahedral configuration.
The second (tetrahedral) microphone unit 701 is independently controllable from the first microphone unit 601. However, in situations where the scanning function of the second microphone unit 701 is not required, the second microphone unit 701 may be reconfigured by the signal processor 610 to a same or similar configuration as the first microphone unit 601 and the first and second microphone units 601, 701 operated in combination to provide a higher resolution eight microphone configuration.
As shown in Fig. 11, where the tetrahedral object 703 represents the second microphone unit 701, it is preferred to configure the second microphone unit 701 to have four virtual microphones 702, each with a hyper-cardioid polar pattern, i.e. a smaller cone of interest, where said four virtual microphones 702 are arranged in a fixed relationship preferably at 90° to each other. The set of four virtual microphones 702 is configured to rotate in azimuth and altitude to scan the entire panoramic (360° × 360°) sound field, but preferably excluding the sound field of the one or more virtual microphones 608 of the first microphone unit 601. The speed of scanning may be optimized for reduced latency in event detection.
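One way such a scan could be organised is sketched below (purely illustrative; the step counts, the 30° forward half-angle and the generator name are assumptions, not taken from the disclosure). Aim points covering the sphere are generated in azimuth and elevation, and directions falling within the first unit's forward field are skipped:

```python
import math

FORWARD_HALF_ANGLE = math.radians(30)   # assumed forward field to exclude

def scan_directions(az_steps=12, el_steps=6):
    """Yield (azimuth, elevation) aim points, in radians, covering the
    sphere, omitting those within the forward field (az=0, el=0)."""
    for i in range(az_steps):
        az = 2 * math.pi * i / az_steps - math.pi
        for j in range(el_steps):
            el = math.pi * (j + 0.5) / el_steps - math.pi / 2
            # Angle between this aim point and the forward axis.
            cos_gap = math.cos(az) * math.cos(el)
            if math.acos(max(-1.0, min(1.0, cos_gap))) > FORWARD_HALF_ANGLE:
                yield az, el

points = list(scan_directions())
```

With these assumed step counts, only the two aim points nearest the forward axis fall inside the excluded field; the remaining 70 directions form the scan pattern, which could then be traversed at a rate chosen for event-detection latency.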
When a sound event not in the sound field of the one or more virtual microphones 608 of the first microphone unit 601 is detected, it is processed to determine if its sound level is greater than a selected threshold. In the event that a detected sound event is above the threshold, an audio signal for that sound event may be injected into a binaural audio stream based on sound signals detected by the one or more virtual microphones 608 of the first microphone unit 601.
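The detect-and-inject step just described can be sketched as follows (illustrative only; the threshold value, the mix-in gain and the function names are assumptions): the scanning virtual microphone's block RMS level is compared with a threshold, and only above-threshold audio is superimposed on the forward feed:

```python
THRESHOLD_RMS = 0.1   # assumed sound-level threshold
INJECT_GAIN = 0.5     # assumed gain for injected out-of-field audio

def rms(block):
    """Root-mean-square level of one block of samples."""
    return (sum(s * s for s in block) / len(block)) ** 0.5

def mix_events(forward_block, scan_block):
    """Return the forward audio block, with the scanning microphone's
    block superimposed only when its level exceeds the threshold."""
    if rms(scan_block) <= THRESHOLD_RMS:
        return list(forward_block)
    return [f + INJECT_GAIN * s
            for f, s in zip(forward_block, scan_block)]

quiet_scan = [0.01] * 64   # below-threshold out-of-field sound: ignored
loud_scan = [1.0] * 64     # above-threshold event: injected
forward = [0.2] * 64       # forward-field audio stream
```

In the apparatus described, the same gating would be applied per ear of the binaural stream, so an out-of-field event is audible without disturbing the forward sound field when no event is present.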
For example, for a security environment having an otherwise audibly and visually quiet panoramic field, when an intrusion event occurs the second microphone unit 701 detects the sound of any events occurring far outside an image view of a camera, which then enables the signal processor 610 to control the camera and the first microphone unit 601 to be reorientated and to then record both image and audio signals for the detected event.
For panoramic video conferencing, a remote participant may be concentrating on audio and video outputs for a primary speaker when a second speaker makes a contribution/interruption. Where the second speaker is outside the image view of the video conferencing camera, the second microphone unit 701 of the apparatus 700 in accordance with the invention detects the sound event and may inject audio of the second speaker into the audio feed of the first speaker. In fact, a number of options are possible including allowing the listener to turn the audio/video cone of interest towards the second speaker, or to ignore the second speaker, or to allow the audio injection without changing the cone of interest (i.e. superimpose the audio of the interrupter into the current audio for the primary speaker in the visual field of view).
The Ambisonic microphone apparatus 700 is preferably configured to be worn by a user.
In another microphone apparatus in accordance with the invention depicted by Fig. 12, the apparatus 800 comprises only the first microphone unit 601 controlled by the signal processor 610, but where the microphone apparatus includes an input 616 for receiving audio from a remote device or system 618. Consequently, the second microphone unit is not required in this embodiment.
For example, an “event injection” could be to add a virtual tour guide to a real-time or recorded panoramic audio-visual immersive experience using a VR apparatus or the like augmented by the modified first microphone unit 601 with input 616. The user, depending on their location within the real-time or recorded panoramic visual/audio field, could request or automatically experience a processor generated visual guide to further describe what the user is looking at/listening to.
Consider, for example, a language learning 3D video. The user might navigate through a busy market where everyone in the 3D video space is talking in a foreign language. The user might move about listening to different conversations. At any point, the user can request assistance of a virtual translator for a particular conversation whereby the signal processor 610 inserts an audio stream comprising a translation of the conversation or some commentary on the conversation, said inserted audio stream being inserted into the primary audio stream generated by the first microphone unit 601.
The invention therefore also provides a method of processing sound signals, comprising the steps of: processing a first sound signal from at least one virtual microphone of a first Ambisonic microphone unit having a selected sound field size orientated in a fixed direction within a panoramic sound field; outputting said processed first sound signal to a user on a speaker device; and inserting an out of field sound signal into said processed first sound signal.
It should be understood that the elements shown in the figures may be implemented in various forms of hardware, software or combinations thereof. Preferably, these elements are implemented in a combination of hardware and software on one or more appropriately programmed general-purpose devices, which may include a processor, memory and input/output interfaces.
The present description illustrates the principles of the present invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the invention and are included within its spirit and scope.
Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.
In the claims hereof, any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function. The invention as defined by such claims resides in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.
While the invention has been illustrated and described in detail in the drawings and foregoing description, the same is to be considered as illustrative and not restrictive in character, it being understood that only exemplary embodiments have been shown and described and do not limit the scope of the invention in any manner. It can be appreciated that any of the features described herein may be used with any embodiment. The illustrative embodiments are not exclusive of each other or of other embodiments not recited herein. Accordingly, the invention also provides embodiments that comprise combinations of one or more of the illustrative embodiments described above. Modifications and variations of the invention as herein set forth can be made without departing from the spirit and scope thereof, and, therefore, only such limitations should be imposed as are indicated by the appended claims.
In the claims which follow and in the preceding description of the invention, except where the context requires otherwise due to express language or necessary implication, the word “comprise” or variations such as “comprises” or “comprising” is used in an inclusive sense, i.e. to specify the presence of the stated features but not to preclude the presence or addition of further features in various embodiments of the invention.
It is to be understood that, if any prior art publication is referred to herein, such reference does not constitute an admission that the publication forms a part of the common general knowledge in the art.

Claims (23)

1. An Ambisonic microphone apparatus comprising:
a first Ambisonic microphone unit configured to provide at least one virtual microphone having a selected sound field size orientated in a fixed direction within a panoramic sound field; and
a second Ambisonic microphone unit configured to provide at least one virtual microphone having a selected sound field size movable within said panoramic sound field.
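By way of illustration only, a virtual microphone as recited in claim 1 may be derived in software from a first-order Ambisonic (B-format) recording. The sketch below is not part of the claims: it assumes a FuMa-style W, X, Y, Z channel convention and NumPy, and the function name and pattern parameter `p` are illustrative. `p` selects the polar pattern and hence the selected sound field size (0.5 gives a cardioid, smaller values a narrower hyper-cardioid), while the azimuth and elevation fix the orientation within the panoramic sound field.

```python
import numpy as np

def virtual_mic(bformat, azimuth, elevation, p=0.5):
    """Steer a first-order virtual microphone within the panoramic sound
    field.  bformat is a (W, X, Y, Z) tuple of equal-length sample arrays
    (FuMa convention, W attenuated by 1/sqrt(2)).  Angles in radians:
    azimuth 0 = front, elevation 0 = horizontal.  p selects the polar
    pattern (1.0 omni, 0.5 cardioid, ~0.25 hyper-cardioid)."""
    w, x, y, z = bformat
    # Unit look-direction vector for the chosen orientation.
    dx = np.cos(azimuth) * np.cos(elevation)
    dy = np.sin(azimuth) * np.cos(elevation)
    dz = np.sin(elevation)
    # First-order pattern: blend of omni (W) and figure-of-eight (X, Y, Z).
    return p * np.sqrt(2.0) * w + (1.0 - p) * (dx * x + dy * y + dz * z)
```

Two such evaluations at fixed orientations would yield the left- and right-side feeds of claim 2, while re-evaluating with a time-varying azimuth yields a movable virtual microphone as provided by the second Ambisonic microphone unit.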
2. The Ambisonic microphone apparatus of claim 1, wherein the first Ambisonic microphone unit is configured to provide two virtual microphones, each having a selected sound field size orientated in a fixed direction within said panoramic sound field and arranged such that a first one of the virtual microphones provides a left-side sound feed for a user and a second one of the virtual microphones provides a right-side sound feed for a user.
3. The Ambisonic microphone apparatus of claim 2, wherein the microphone apparatus is configured to be worn by a user and the first and second virtual microphones are configured to capture the same portion or respective portions of the panoramic sound field in a forward direction of the user.
4. The Ambisonic microphone apparatus of claim 2 or claim 3, wherein the selected sound field size for each of the first and second virtual microphones is adjustable.
5. The Ambisonic microphone apparatus of any one of claims 2 to 4, wherein each of the first and second virtual microphones has a selected sound field size of 30°.
6. The Ambisonic microphone apparatus of any one of claims 2 to 5, wherein the sound field for the first virtual microphone is orientated at 30° above horizontal and at +30° with respect to vertical and/or the sound field for the second virtual microphone is orientated at 30° above horizontal and at -30° with respect to vertical.
7. The Ambisonic microphone apparatus of any one of claims 2 to 6, wherein a binaural renderer including Head Related Transfer Functions (HRTFs) is applied to sound signals from the two virtual microphones of the first Ambisonic microphone unit.
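As an illustrative sketch of the binaural rendering of claim 7 (not part of the claims; the data-structure layout and names are assumptions), each fixed virtual-microphone feed may be convolved with a pair of time-domain Head Related Impulse Responses (HRIRs, the time-domain form of HRTFs) measured for its direction, and the per-ear contributions summed:

```python
import numpy as np

def binaural_render(feeds, hrirs):
    """Binaural renderer: feeds maps a direction label to a mono virtual
    microphone signal; hrirs maps the same label to a (left-ear HRIR,
    right-ear HRIR) pair.  Each feed is convolved with its HRIR pair
    and the contributions are summed per ear."""
    n = max(len(s) for s in feeds.values()) + \
        max(len(h) for pair in hrirs.values() for h in pair) - 1
    out_l, out_r = np.zeros(n), np.zeros(n)
    for label, signal in feeds.items():
        hl, hr = hrirs[label]
        yl, yr = np.convolve(signal, hl), np.convolve(signal, hr)
        out_l[:len(yl)] += yl
        out_r[:len(yr)] += yr
    return out_l, out_r
```

In practice the HRIR pairs would be taken from a measured HRTF set for the two fixed orientations of claim 6.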
8. The Ambisonic microphone apparatus of any one of claims 1 to 7, wherein each of a plurality of microphone assemblies comprising said first Ambisonic microphone unit has a cardioid polar pattern and/or comprises a MEMS microphone assembly.
9. The Ambisonic microphone apparatus of claim 8, wherein the plurality of microphone assemblies comprising said first Ambisonic microphone unit is arranged in a tetrahedral configuration.
10. The Ambisonic microphone apparatus of any one of claims 1 to 9, wherein the selected sound field size of the at least one virtual microphone of the second Ambisonic microphone unit is narrower than the selected sound field size of the at least one virtual microphone of the first Ambisonic microphone unit.
11. The Ambisonic microphone apparatus of any one of claims 1 to 10, wherein the second Ambisonic microphone unit is configured to provide a plurality of virtual microphones, each having a selected sound field size movable within said panoramic sound field.
12. The Ambisonic microphone apparatus of claim 11, wherein the sound fields of the plurality of virtual microphones of the second Ambisonic microphone unit are independently movable within the panoramic sound field or are movable together within said panoramic sound field.
13. The Ambisonic microphone apparatus of any one of claims 1 to 12, wherein the at least one virtual microphone of the second Ambisonic microphone unit is controlled by a processor to scan the panoramic sound field to identify sound events occurring outside the sound field of the at least one virtual microphone of the first Ambisonic microphone unit.
14. The Ambisonic microphone apparatus of claim 13, wherein the at least one virtual microphone of the second Ambisonic microphone unit is controlled by the processor to not scan the sound field of the at least one virtual microphone of the first Ambisonic microphone unit.
15. The Ambisonic microphone apparatus of claim 13 or claim 14, wherein the at least one virtual microphone of the second Ambisonic microphone unit is configured to insert detected sound events into the sound stream provided to a user by the at least one virtual microphone of the first Ambisonic microphone unit.
16. The Ambisonic microphone apparatus of claim 15, wherein the at least one virtual microphone of the second Ambisonic microphone unit is configured to insert detected sound events into the sound stream provided to a user by the at least one virtual microphone of the first Ambisonic microphone unit only where a detected sound event is above a selected sound level threshold.
17. The Ambisonic microphone apparatus of any one of claims 13 to 16, wherein the processor is configured to adjust the scan speed and/or scan direction of the at least one virtual microphone of the second Ambisonic microphone unit.
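The scanning behaviour of claims 13, 14, 16 and 17 can be sketched as follows (an illustrative assumption, not the claimed implementation: a horizontal-plane sweep with a narrow pattern, skipping the sector already covered by the fixed feed of the first unit, and flagging azimuths whose block RMS exceeds the selected sound level threshold):

```python
import numpy as np

def scan_for_events(bformat, threshold, exclude=(np.deg2rad(-30), np.deg2rad(30)),
                    steps=36):
    """Sweep the movable virtual microphone around the horizontal plane
    and report azimuths (radians) where a sound event exceeds the
    selected sound level threshold, skipping the excluded sector
    covered by the fixed virtual microphone (claim 14)."""
    w, x, y, z = bformat  # z unused: horizontal-plane scan only
    events = []
    p = 0.25  # hyper-cardioid-like narrow pattern for the scanner
    for az in np.linspace(-np.pi, np.pi, steps, endpoint=False):
        if exclude[0] <= az <= exclude[1]:
            continue  # do not scan the fixed sound field
        sig = p * np.sqrt(2.0) * w + (1.0 - p) * (np.cos(az) * x + np.sin(az) * y)
        if np.sqrt(np.mean(sig ** 2)) > threshold:
            events.append(az)
    return events
```

The scan speed and scan direction of claim 17 would correspond, in this sketch, to the step count and the sign of the sweep increment.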
18. The Ambisonic microphone apparatus of any one of claims 1 to 17, wherein each of a plurality of microphone assemblies comprising said second Ambisonic microphone unit has a hyper-cardioid polar pattern and/or comprises a MEMS microphone assembly.
19. The Ambisonic microphone apparatus of claim 18, wherein the plurality of microphone assemblies comprising said second Ambisonic microphone unit is arranged in a tetrahedral configuration.
20. A method of processing sound signals, comprising the steps of: processing a first sound signal from at least one virtual microphone of a first Ambisonic microphone unit having a selected sound field size orientated in a fixed direction within a panoramic sound field;
outputting said processed first sound signal to a user on a speaker device;
processing a second sound signal from at least one virtual microphone of a second Ambisonic microphone unit having a selected sound field size movable within said panoramic sound field; and
inserting said processed second sound signal into said processed first sound signal.
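The inserting step of the method above, together with the thresholding of claim 16, can be sketched as a level-gated additive mix (the function name, sample-offset interface and additive mixing are illustrative assumptions):

```python
import numpy as np

def insert_event(main_stream, event, start, level_threshold=0.1, gain=1.0):
    """Insert a detected out-of-field sound event into the sound stream
    provided by the fixed virtual microphone: mix the event in at the
    given sample offset, but only where its RMS exceeds the selected
    sound level threshold (claim 16)."""
    if np.sqrt(np.mean(event ** 2)) <= level_threshold:
        return main_stream  # below threshold: stream passes unchanged
    out = main_stream.copy()
    end = min(len(out), start + len(event))
    out[start:end] += gain * event[:end - start]
    return out
```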
21. A non-transitory computer readable medium comprising machine readable instructions which, when executed by a processor of a microphone apparatus according to any one of claims 1 to 19, implement the steps of:
processing a first sound signal from at least one virtual microphone of a first Ambisonic microphone unit having a selected sound field size orientated in a fixed direction within a panoramic sound field;
outputting said processed first sound signal to a user on a speaker device;
processing a second sound signal from at least one virtual microphone of a second Ambisonic microphone unit having a selected sound field size movable within said panoramic sound field; and
inserting said processed second sound signal into said processed first sound signal.
22. A sound system comprising:
an Ambisonic microphone unit configured to provide at least one virtual microphone having a selected sound field size orientated in a fixed direction within a panoramic sound field, said at least one virtual microphone providing a first sound signal;
a speaker for outputting said first sound signal; and
a signal processor device for injecting an out of field sound signal into said first sound signal.
23. A non-transitory computer readable medium comprising machine readable instructions which, when executed by a processor of a sound system according to claim 22, implement the steps of:
processing a first sound signal from at least one virtual microphone of a first
Ambisonic microphone unit having a selected sound field size orientated in a fixed direction within a panoramic sound field;
outputting said processed first sound signal to a user on a speaker device; and
inserting an out of field sound signal into said processed first sound signal.
24. A method of processing sound signals, comprising the steps of:
processing a first sound signal from at least one virtual microphone of a first Ambisonic microphone unit having a selected sound field size orientated in a fixed direction within a panoramic sound field;
outputting said processed first sound signal to a user on a speaker device;
inserting an out of field sound signal into said processed first sound signal.
GB1811458.7A 2018-07-12 2018-07-12 An ambisonic microphone apparatus Withdrawn GB2575492A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1811458.7A GB2575492A (en) 2018-07-12 2018-07-12 An ambisonic microphone apparatus


Publications (2)

Publication Number Publication Date
GB201811458D0 GB201811458D0 (en) 2018-08-29
GB2575492A true GB2575492A (en) 2020-01-15

Family

ID=63273181



Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022157252A1 (en) * 2021-01-21 2022-07-28 Kaetel Systems Gmbh Microphone, method for recording an acoustic signal, playback device for an acoustic signal, or method for playing back an acoustic signal

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114513698B (en) * 2020-11-16 2023-08-22 中国联合网络通信集团有限公司 Panoramic sound playing system and method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150271621A1 (en) * 2014-03-21 2015-09-24 Qualcomm Incorporated Inserting audio channels into descriptions of soundfields
US20160227340A1 (en) * 2015-02-03 2016-08-04 Qualcomm Incorporated Coding higher-order ambisonic audio data with motion stabilization
US20180352360A1 (en) * 2017-05-31 2018-12-06 Qualcomm Incorporated System and method for mixing and adjusting multi-input ambisonics




Similar Documents

Publication Publication Date Title
US10356514B2 (en) Spatial encoding directional microphone array
KR100964353B1 (en) Method for processing audio data and sound acquisition device therefor
US9294838B2 (en) Sound capture system
US10659873B2 (en) Spatial encoding directional microphone array
US20120288114A1 (en) Audio camera using microphone arrays for real time capture of audio images and method for jointly processing the audio images with video images
KR20130116271A (en) Three-dimensional sound capturing and reproducing with multi-microphones
Kearney et al. Distance perception in interactive virtual acoustic environments using first and higher order ambisonic sound fields
WO2017176338A1 (en) Cylindrical microphone array for efficient recording of 3d sound fields
Merimaa Applications of a 3-D microphone array
GB2575492A (en) An ambisonic microphone apparatus
KR20220038478A (en) Apparatus, method or computer program for processing a sound field representation in a spatial transformation domain
KR20060121807A (en) System and method for determining a representation of an acoustic field
JP2013110633A (en) Transoral system
JP2012109643A (en) Sound reproduction system, sound reproduction device and sound reproduction method
Bai et al. Localization and separation of acoustic sources by using a 2.5-dimensional circular microphone array
Savioja et al. Introduction to the issue on spatial audio
Shabtai et al. Spherical array beamforming for binaural sound reproduction
Emura Sound field estimation using two spherical microphone arrays
Galindo Microphone array beamforming for spatial audio object capture.
Hamdan et al. Weighted orthogonal vector rejection method for loudspeaker-based binaural audio reproduction
US20240022855A1 (en) Stereo enhancement system and stereo enhancement method
Bai et al. An integrated analysis-synthesis array system for spatial sound fields
KR102534802B1 (en) Multi-channel binaural recording and dynamic playback
AU2005100255A4 (en) 3.0 Microphone for Surround-Recording
GB2575491A (en) A microphone system

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)