EP2757811A1 - Modal beamforming - Google Patents

Modal beamforming

Info

Publication number
EP2757811A1
Authority
EP
European Patent Office
Prior art keywords
eigenbeam
white noise
regularization
function
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP13152209.6A
Other languages
German (de)
French (fr)
Other versions
EP2757811B1 (en)
Inventor
Markus Christoph
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harman Becker Automotive Systems GmbH
Original Assignee
Harman Becker Automotive Systems GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harman Becker Automotive Systems GmbH filed Critical Harman Becker Automotive Systems GmbH
Priority to EP13152209.6A
Publication of EP2757811A1
Application granted
Publication of EP2757811B1
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • H04R3/005Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2201/00Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
    • H04R2201/40Details of arrangements for obtaining desired directional characteristic by combining a number of identical transducers covered by H04R1/40 but not provided for in any of its subgroups
    • H04R2201/4012D or 3D arrays of transducers

Definitions

  • Matrixing unit MU in the modal beamformer of FIG. 3 decomposes the sound field picked up by microphones Mic1, Mic2, ... MicQ into the different eigenbeam outputs, i.e., the spherical harmonics Y+1 0,0(θ,ϕ), Y+1 1,0(θ,ϕ), ... Y+σ m,n(θ,ϕ), corresponding to the zero-order, first-order, and second-order spherical harmonics.
  • This can also be seen as a transformation, where the sound field is transformed from the time or frequency domain into the "modal domain".
  • The system can also work with the real and imaginary parts of the spherical harmonics; weighting unit WU may be implemented accordingly.
  • Steering unit SU allows for steering the look direction by the angles θDes and ϕDes.
  • Weighting unit WU compensates for the frequency-dependent sensitivity of the modes (eigenbeams), i.e., it applies modal weighting over frequency so that the modal composition is adjusted, e.g., equalized.
  • Equalizing is used to compensate for deficiencies of the microphone array, e.g., self-noise of the microphones, location errors of the microphones at the surface of the sphere, and other electrical and mechanical drawbacks.
  • In common systems, the order of a modal beamformer has to be reduced toward low frequencies, leading to a gradually decreasing directivity with decreasing frequency.
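Why the order must shrink toward low frequencies can be made concrete: for an open sphere the radial mode strength is proportional to the spherical Bessel function jn(ka), which decays like (ka)^n, so the equalizer gain needed for a higher-order mode explodes as ka → 0. A minimal numpy sketch with closed-form j0, j1, j2 (the open-sphere proportionality is a simplification made here for illustration; the rigid-sphere case adds a scattering term):

```python
import numpy as np

# Closed-form spherical Bessel functions j_n; for an open sphere the
# radial mode strength is ~ j_n(ka) (a simplification for this sketch).
def j0(x): return np.sin(x) / x
def j1(x): return np.sin(x) / x**2 - np.cos(x) / x
def j2(x): return (3 / x**3 - 1 / x) * np.sin(x) - 3 * np.cos(x) / x**2

# Equalizer gain needed to flatten the order-2 mode at several ka values:
# the gain grows without bound as ka shrinks, which is why the usable
# beamformer order must be reduced toward low frequencies.
for ka in (0.1, 1.0, 4.0):
    eq_gain_order2 = 1 / abs(j2(ka))
    print(ka, round(eq_gain_order2, 1))
```

At ka = 0.1 the order-2 mode already requires a gain above three orders of magnitude, which would amplify sensor self-noise accordingly.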
  • The ambisonic components up to Mth order can be calculated from the Q microphone signals as
  • B = W^(-1) · (Y^T · Y)^(-1) · Y^T · p_a
  • B = diag(W_m)^(-1) · Y^+ · p_a
  • where Y^+ = (Y^T · Y)^(-1) · Y^T is the pseudoinverse of the matrix of sampled spherical harmonics, and diag(EQ_m(ka)) = diag(W_m)^(-1) is the diagonal matrix of the radial equalizing functions EQ_m(ka), in which 0 ≤ m ≤ M.
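A toy numpy illustration of the modal extraction B = diag(W_m)^(-1) · Y^+ · p_a (not code from the patent): the first-order real eigenbeam set, the sampling grid, and the per-mode radial weights W_m are arbitrary stand-ins, and the synthetic pressure field is built so that the true components are known.

```python
import numpy as np

# Hypothetical sampling grid on the sphere (10 points) and an unnormalized
# real first-order eigenbeam set: omni plus three dipoles.
theta = np.arccos(np.linspace(-0.9, 0.9, 10))   # polar angles
phi = np.linspace(0, 2 * np.pi, 10, endpoint=False)  # azimuths
Y = np.stack([np.ones(10),
              np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)], axis=-1)

W_m = np.array([2.0, 0.5, 0.5, 0.5])     # per-mode radial weights (assumed)
p_a = Y @ (W_m * np.array([1.0, 0.2, -0.1, 0.4]))   # synthetic pressure field

Y_pinv = np.linalg.pinv(Y)               # Y+ = (Y^T Y)^-1 Y^T
B = np.diag(1.0 / W_m) @ Y_pinv @ p_a    # radially equalized components
print(np.round(B, 6))
```

Because the synthetic field lies in the column space of Y, the extraction recovers the known components [1.0, 0.2, -0.1, 0.4] exactly (up to floating-point error).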
  • An arrangement for extracting the N ambisonic components B from the wave field p_a is illustrated in FIG. 4.
  • The related sound field is defined solely by the pressure distribution p_a(θ_q,ϕ_q) on the sphere's surface, which can be easily measured by sound pressure sensors (microphones).
  • The sound field can be modeled by inner sources, i.e., sources inside the measurement sphere, and outer sources, i.e., sources outside the measurement sphere; the outer sources serve to model the scattered field occurring at the surface of a scattering sphere.
  • The Nabla operator is expressed in spherical coordinates.
  • To assess the robustness of the beamformer, a parameter called susceptibility K(ω), or its reciprocal, the white noise gain WNG(ω), may be used.
  • The white noise gain WNG(ω) captures most effects and problems caused by microphone noise, changes in the transfer function, and variations of the microphone positions, so that it is representative of the sensitivity of the beamformer.
  • A white noise gain WNG(ω) > 0 dB characterizes sufficient suppression of uncorrelated errors and is thus indicative of robust system behavior, while a white noise gain WNG(ω) < 0 dB indicates an amplification of the noise and is therefore indicative of increasingly unstable system behavior.
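The white noise gain can be computed with the textbook array-processing formula WNG = |w^H d|^2 / (w^H w), where w are the beamformer weights and d the steering vector toward the look direction; this general definition is an assumption for illustration, not a formula quoted from the patent. The sketch contrasts robust delay-and-sum weights with a noise-amplifying weight vector:

```python
import numpy as np

def wng_db(w, d):
    """White noise gain in dB: look-direction signal gain over the
    amplification of uncorrelated sensor self-noise (textbook form)."""
    return 10 * np.log10(np.abs(np.vdot(w, d)) ** 2 / np.real(np.vdot(w, w)))

Q = 8
d = np.ones(Q)            # toy steering vector toward the look direction
w_das = d / Q             # delay-and-sum weights: maximally robust
print(wng_db(w_das, d))   # 10*log10(Q): positive -> robust

# A "super-directive"-style weight vector with large, sign-alternating
# entries has w^H w >> |w^H d|^2 and hence negative WNG (noise amplification).
w_sd = np.array([5.0, -4.9] * (Q // 2)) + 1.0 / Q
print(wng_db(w_sd, d) < 0)
```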
  • The array gain G(ω) is the ratio of the energy of sound coming from the look direction of the beamformer to the energy of omnidirectionally incoming sound; it is thus a measure of the improvement in the acoustic signal-to-noise ratio (SNR) provided by the directivity of the modal beamformer.
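The array gain can likewise be sketched with the standard definition G = |w^H d|^2 / (w^H Γ w), where Γ is the diffuse-field (omnidirectional) noise coherence matrix; the line-array geometry, spacing, and frequency below are arbitrary toy choices, not taken from the patent:

```python
import numpy as np

def array_gain_db(w, d, Gamma):
    """Array gain in dB: energy from the look direction over the energy
    of diffuse (omnidirectionally incoming) sound (textbook definition)."""
    num = np.abs(np.vdot(w, d)) ** 2
    den = np.real(w.conj() @ Gamma @ w)
    return 10 * np.log10(num / den)

# Toy line array of Q omnidirectional mics; diffuse-field coherence
# Gamma_ij = sin(k*r_ij)/(k*r_ij), written via np.sinc (which includes pi).
Q, spacing, k = 4, 0.1, 2 * np.pi * 1000 / 343   # 1 kHz, 10 cm spacing
pos = np.arange(Q) * spacing
Gamma = np.sinc(k * np.abs(pos[:, None] - pos[None, :]) / np.pi)

d = np.ones(Q)           # broadside look direction
w = d / Q                # delay-and-sum weights
print(array_gain_db(w, d, Gamma))
```

For this toy setup the gain lands between 0 and 10 dB, i.e., the beamformer improves the SNR against diffuse noise without any claim of optimality.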
  • In a first step, the parameters required for the calculation are set to starting values or constant values, as the case may be.
  • Regularization provides the ability to achieve a robust system by adjusting the regularization parameter over frequency. This is a trade-off between higher robustness, i.e., a higher white noise gain WNGdB(ω), and less directivity in look direction Ψ(θ0,ϕ0,ω), i.e., a decreasing array gain GdB(ω).
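One common way to build such a regularized equalization function, shown here purely as an illustration of the robustness/directivity trade-off (the patent's exact filter form is not reproduced), is a Tikhonov-style inverse EQ = conj(b) / (|b|^2 + λ), which caps the gain where the mode strength b becomes small:

```python
import numpy as np

def eq_regularized(b, lam):
    """Tikhonov-style regularized inverse of the mode strength b:
    approaches 1/b where |b| is large, stays bounded where b -> 0."""
    return np.conj(b) / (np.abs(b) ** 2 + lam)

b = np.array([1e-3, 1e-2, 0.1, 0.5])   # toy mode strengths over frequency
naive = 1 / b                           # unregularized: ~1000x gain at low b
reg = eq_regularized(b, lam=1e-3)       # bounded gain -> better WNG, at the
                                        # cost of directivity where b is small
print(naive.max(), np.abs(reg).max())
```

The unregularized gain is unbounded as b → 0, while the regularized gain never exceeds 1/(2·sqrt(λ)), which is exactly the knob the adaptation process below turns.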
  • The adaptation process begins with the maximum directivity GdBMax(ω), which is then decreased by increasing the regularization parameter until the desired white noise gain threshold WNGdBMin is no longer undercut.
  • Steps 4, 5, and 6 serve to calculate the white noise gain WNGdB(ω).
  • In step 4, the regularization filter Tm(ω), or Tm(ka), is calculated as outlined above using the regularization parameter.
  • In step 5, the transfer function EQm(ω) is calculated as outlined above using the current version of the transfer function Tm(ω) of the regularization filter or the current version of the regularization parameter.
  • In step 6, the white noise gain WNGdB(ω) is calculated as outlined above using the transfer function EQm(ω) and the current version of the transfer function Tm(ω) of the regularization filter (regularization function). Steps 4 and 5 may be taken simultaneously or in opposite order.
  • In step 10, the directivity Ψ(θ0,ϕ0,ω) of the modal beamformer is calculated for sound coming from the look direction using the transfer function EQm(ω) provided in step 5.
  • In step 12, the current white noise gain WNGdB(ω) is compared with the predetermined white noise gain threshold WNGdBMin(ω), and it is checked whether the regularization parameter has reached its maximum.
  • The adaptation process for the current angular frequency ω is completed when the current equalizing function EQm(ω) has been limited to the given threshold or the current regularization parameter has reached its maximum.
  • In step 14, the current angular frequency ω is checked to see whether it has reached its maximum value ωMax. If ω < ωMax, the process jumps back to step 2 using the current angular frequency ω. Otherwise, i.e., if the equalizing filter has been adapted for the complete set of frequencies, the filter coefficients are output in step 15.
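The per-frequency adaptation loop described in the steps above can be sketched as follows; the mode strengths, the white-noise-gain proxy, and the multiplicative update are simplified stand-ins chosen for illustration, not the patent's formulas:

```python
import numpy as np

def wng_db(lam, b):
    """Crude white-noise-gain proxy: the more the regularized equalizer
    gains amplify uncorrelated noise, the lower (more negative) this is."""
    eq = b / (b ** 2 + lam)                 # regularized equalizer gains
    return -10 * np.log10(np.sum(eq ** 2))

def adapt(b, wng_db_min=-10.0, lam_max=1.0, step=1.2):
    """Raise the regularization parameter step by step until the white
    noise gain no longer undercuts wng_db_min, or the parameter saturates
    (in the spirit of the flow chart of FIG. 8)."""
    lam = 1e-8                              # start near maximum directivity
    while wng_db(lam, b) < wng_db_min and lam < lam_max:
        lam *= step                         # trade directivity for robustness
    return lam

b = np.array([1.0, 0.3, 0.05, 0.004])       # toy mode strengths, one frequency
lam = adapt(b)
print(lam, wng_db(lam, b))
```

Running this over the whole frequency grid, one frequency at a time, yields the regularization-parameter-over-frequency curve of the kind shown in FIG. 9.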
  • In the present example, the directivity characteristic of the beamformer is a 4th-order cardioid and the minimum white noise gain WNGdB(ω) used in the adaptation process is -10 dB.
  • FIG. 9 illustrates the regularization parameter over frequency for a common 4th-order modal beamformer.
  • By regularization, i.e., limiting the maximum directivity index for frequencies up to, for instance, 750 Hz, white noise gain values above a minimum lower threshold WNGdBMin of -10 dB may be maintained, while the exemplary beamformer still exhibits the desired directivity of a 4th-order cardioid.
  • FIG. 10 illustrates the corresponding white noise gain WNG for the above-mentioned 4th-order beamformer, which supports the findings in connection with the diagram of FIG. 9.
  • The corresponding directivity index DI and the array gain GdB(ω), as shown in FIG. 11, illustrate that the maximum array gain GdB(ω) remains more or less below 10 dB, depending on the frequency.
  • FIG. 16 depicts the resulting directivity Ψ(θ0,ϕ0,ω) of the beamformer outlined above in look direction as amplitudes over frequency.

Abstract

A method and system for generating an auditory scene that comprises: receiving eigenbeam outputs, the eigenbeam outputs having been generated by decomposing a plurality of audio signals, each audio signal having been generated by a different microphone of a microphone array, wherein each eigenbeam output corresponds to a different eigenbeam for the microphone array; generating the auditory scene based on the eigenbeam outputs and their corresponding eigenbeams, wherein generating the auditory scene comprises applying a weighting value to each eigenbeam output to form a steered eigenbeam output; and combining the weighted eigenbeams to generate the auditory scene, wherein generating the auditory scene further comprises applying a regularized equalizer filter to each eigenbeam output or steered eigenbeam output, the regularized equalizer filter(s) being configured to compensate for acoustic deficiencies of the microphone array and having a regularized equalization function.

Description

    FIELD OF TECHNOLOGY
  • The embodiments disclosed herein refer to sound capture systems and methods, particularly to sound capture methods that employ modal beamforming.
  • BACKGROUND
  • Beamforming sound capture systems comprise at least (a) an array of two or more microphones and (b) a beamformer that combines audio signals generated by the microphones to form an auditory scene representative of at least a portion of an acoustic sound field. Due to the underlying geometry, it is natural to represent the sound field captured on the surface of a sphere with respect to spherical harmonics. In this context, spherical harmonics are also known as acoustic modes (or eigenbeams) and the appending signal-processing techniques as modal beamforming.
  • Two spherical microphone array configurations are commonly employed: the sphere may exist physically, or may merely be conceptual. In the first configuration, the microphones are arranged around a rigid sphere made of, for example, wood or hard plastic. In the second configuration, the microphones are arranged in free-field around an "open" sphere, referred to as an open-sphere configuration. Although the rigid-sphere configuration provides a more robust numerical formulation, the open-sphere configuration might be more desirable in practice at low frequencies where large spheres are realized.
  • Beamforming techniques allow for controlling the characteristics of the microphone array in order to achieve a desired directivity. One of the most general formulations is the filter-and-sum beamformer, which has been generalized by the concept of modal subspace decomposition. This approach finds optimum finite impulse response (FIR) filter coefficients for each microphone by solving an eigenvalue problem and projecting the desired beam pattern onto the set of eigenbeam patterns found.
  • Beamforming sound capture systems enable picking up acoustic signals dependent on their direction of propagation. The directional pattern of the microphone array can be varied over a wide range due to the degrees of freedom offered by the plurality of microphones and the processing of the associated beamformer. This enables, for example, steering the look direction, adapting the pattern according to the actual acoustic situation, and/or zooming in to or out from an acoustic source. All this can be done by controlling the beamformer, which is typically implemented via software, such that no mechanical alteration of the microphone array is needed. However, common beamformers fail to be directive at very low frequencies. Therefore, modal beamformers having less frequency-dependent directivity are desired.
  • SUMMARY
  • A method for generating an auditory scene comprises: receiving eigenbeam outputs generated by decomposing a plurality of audio signals, each audio signal having been generated by a different microphone of a microphone array, wherein each eigenbeam output corresponds to a different eigenbeam for the microphone array; generating the auditory scene based on the eigenbeam outputs and their corresponding eigenbeams, wherein generating the auditory scene comprises applying a weighting value to each eigenbeam output to form a steered eigenbeam output; and combining the weighted eigenbeams to generate the auditory scene, wherein generating the auditory scene further comprises applying a regularized equalizer filter to each eigenbeam output or steered eigenbeam output, the regularized equalizer filter(s) being configured to compensate for acoustic deficiencies of the microphone array and having a regularized equalization function.
  • A modal beamformer system for generating an auditory scene comprises: a steering unit that is configured to receive eigenbeam outputs, the eigenbeam outputs having been generated by decomposing a plurality of audio signals, each audio signal having been generated by a different microphone of a microphone array, wherein each eigenbeam output corresponds to a different eigenbeam for the microphone array and the microphones are arranged on a rigid or open sphere; a weighting unit that is configured to generate the auditory scene based on the eigenbeam outputs and their corresponding eigenbeams, wherein generating the auditory scene comprises applying a weighting value to each eigenbeam output to form a steered eigenbeam output; and a summing element configured to combine the weighted eigenbeams to generate the auditory scene, wherein the weighting unit or the summing element are further configured to apply a regularized equalizer filter to each eigenbeam output or steered eigenbeam output, the regularized equalizer filter(s) being configured to compensate for acoustic deficiencies of the microphone array and having a regularized equalization function.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The figures identified below are illustrative of some embodiments of the invention. The figures are not intended to be limiting of the invention recited in the appended claims. The embodiments, both as to their organization and manner of operation, together with further objects and advantages thereof, may best be understood with reference to the following description, taken in connection with the accompanying drawings, in which:
    • FIG. 1 is a schematic representation of a generalized structure of a sound capture system that employs modal beamforming;
    • FIG. 2 is a schematic representation of a possible microphone array for the sound capture system of FIG. 1 ;
    • FIG. 3 is a schematic representation of a more detailed structure of a sound capture system that employs modal beamforming;
    • FIG. 4 is a schematic representation of an arrangement for extracting ambisonic components with which an arbitrary sound field can be coded and/or decoded;
    • FIG. 5 is a schematic representation of an arrangement for measuring a sound pressure field;
    • FIG. 6 is a schematic diagram illustrating the radial function of a spherical microphone array;
    • FIG. 7 is a schematic diagram illustrating the magnitude frequency response of the equalizer filter corresponding to the radial function illustrated in FIG. 6;
    • FIG. 8 is a flow chart illustrating the process of calculating the equalizer filter referred to above in connection with FIG. 7;
    • FIG. 9 is a schematic diagram illustrating the regularization parameter over frequency for an improved 4th-order modal beamformer with a given minimal white noise gain of -10 [dB];
    • FIG. 10 is a schematic diagram corresponding to the flow chart of FIG. 8 and the diagram of FIG. 9, and illustrating the white noise gain for a 4th-order modal beamformer utilizing a regularized equalizing filter;
    • FIG. 11 is a schematic diagram corresponding to the flow chart of FIG. 8 and the diagram of FIG. 9, and illustrating the directivity index for a 4th-order modal beamformer utilizing a regularized equalizing filter;
    • FIG. 12 is a schematic diagram illustrating the magnitude frequency response of the improved regularized equalizing filter;
    • FIG. 13 is a schematic diagram illustrating the corresponding phase response of the improved filter of FIG. 12;
    • FIG. 14 is a schematic diagram illustrating the magnitude frequency response of an improved, regularized equalizing filter;
    • FIG. 15 is a schematic diagram illustrating the corresponding phase frequency response of the improved filter of FIG. 14; and
    • FIG. 16 is a schematic diagram illustrating the cylindrical view of the directional pattern of the improved 4th-order modal beamformer over frequency.
    DESCRIPTION
  • FIG. 1 is a block diagram illustrating the basic structure of a beamforming sound capture system as described in more detail, for instance, in WO 03/061336 . The sound capture system comprises a plurality Q of microphones Mic1, Mic2, ... MicQ configured to form a microphone array, a matrixing unit MU (also known as modal decomposer or eigenbeam former), and a modal beamformer BF. In the system of FIG. 1, modal beamformer BF comprises a steering unit SU, a weighting unit WU, and a summing element SE, each of which will be discussed in further detail later in this specification. Each microphone Mic1, Mic2, ... MicQ generates a time-varying analog or digital audio signal S1(θ1,ϕ1,ka), S2(θ2,ϕ2,ka) ... SQ(θQ,ϕQ,ka) corresponding to the sound incident at the location of that microphone.
  • Matrixing unit MU decomposes (according to Y^+ = (Y^T Y)^(-1) Y^T) the audio signals S1(θ1,ϕ1,ka), S2(θ2,ϕ2,ka) ... SQ(θQ,ϕQ,ka) generated by the different microphones Mic1, Mic2, ... MicQ to generate a set of spherical harmonics Y+1 0,0(θ,ϕ), Y+1 1,0(θ,ϕ), ... Y+σ m,n(θ,ϕ), also known as eigenbeams or modal outputs, where each spherical harmonic corresponds to a different mode for the microphone array. The spherical harmonics are then processed by beamformer BF to generate an auditory scene that is represented in the present example by output signal OUT (= Ψ(θDes,ϕDes)). In this specification, the term auditory scene is used generically to refer to any desired output from a sound capture system, such as the system of FIG. 1. The definition of the particular auditory scene will vary from application to application. For example, the output generated by beamformer BF may correspond to one or more output signals, e.g., one for each speaker used to generate the resultant auditory scene. Moreover, depending on the application, beamformer BF may simultaneously generate beampatterns for two or more different auditory scenes, each of which can be independently steered to any direction in space. In certain implementations of the sound capture system, microphones Mic1, Mic2, ... MicQ may be mounted on the surface of an acoustically rigid sphere or may be arranged on a virtual (open) sphere to form the microphone array. Alternatively, weighting unit WU may be arranged upstream of steering unit SU so that the non-steered eigenbeams are weighted (not shown).
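The decomposition performed by matrixing unit MU can be sketched numerically. The following is a minimal illustration, not the patent's implementation: a real first-order eigenbeam set (omni plus three dipoles, unnormalized) is sampled at a hypothetical random microphone grid, and the pseudoinverse Y^+ = (Y^T Y)^(-1) Y^T recovers the modal coefficients from the microphone pressures.

```python
import numpy as np

# Real first-order spherical harmonics (unnormalized, for illustration only);
# theta = polar angle from the z-axis, phi = azimuth.
def eigenbeams(theta, phi):
    return np.stack([
        np.ones_like(theta),            # order 0 (omni)
        np.sin(theta) * np.cos(phi),    # order 1, x-dipole
        np.sin(theta) * np.sin(phi),    # order 1, y-dipole
        np.cos(theta),                  # order 1, z-dipole
    ], axis=-1)

# Hypothetical microphone grid on the sphere (Q = 12 positions).
rng = np.random.default_rng(0)
theta_q = np.arccos(rng.uniform(-1, 1, 12))   # polar angles
phi_q = rng.uniform(0, 2 * np.pi, 12)         # azimuths

Y = eigenbeams(theta_q, phi_q)                # Q x N matrix of sampled eigenbeams

# Synthesize microphone signals from known modal coefficients a_true,
# then recover them via the pseudoinverse (the matrixing operation).
a_true = np.array([1.0, 0.5, -0.3, 0.8])
p = Y @ a_true                                # pressure samples at the mics
a_est = np.linalg.pinv(Y) @ p                 # modal decomposition

print(np.allclose(a_est, a_true))
```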
  • FIG. 2 shows a schematic diagram of a possible microphone array MA for the sound capture system of FIG. 1. In particular, microphone array MA comprises the Q microphones Mic1, Mic2, ... MicQ of FIG. 1 mounted on the surface of an acoustically rigid sphere RS in a "truncated icosahedron" pattern. Each microphone Mic1, Mic2, ... MicQ in microphone array MA generates one of the audio signals S1(θ1,ϕ1,ka), S2(θ2,ϕ2,ka) ... SQ(θQ,ϕQ,ka) that is transmitted to matrixing unit MU of FIG. 1 via some suitable (e.g., wired or wireless) connection (not shown in FIG. 2). The continuous spherical sensor may be replaced by a discrete spherical array, in particular when the subsequent processing is digital-signal processing.
  • Referring again to FIG. 1, beamformer BF exploits the geometry of the spherical array of FIG. 2 and relies on the spherical harmonic decomposition of the incoming sound field by matrixing unit MU to construct a desired spatial response. In beamformer BF, steering unit SU generates (according to Y+σ m,n(θDes,ϕDes)) steered spherical harmonics Y+1 0,0(θDes,ϕDes), Y+1 1,0(θDes,ϕDes), ... Y+σ m,n(θDes,ϕDes) from the spherical harmonics Y+1 0,0(θ,ϕ), Y+1 1,0(θ,ϕ), ... Y+σ m,n(θ,ϕ), which are further processed by weighting unit WU and summing element SE. Beamformer BF can provide continuous steering of the beampattern in 3-D space by changing a few scalar multipliers, while the filters determining the beampattern itself remain constant. The shape of the beampattern is invariant with respect to the steering direction. Beamformer BF needs only one filter per spherical harmonic (in the weighting unit WU), rather than one per microphone as in known beamforming concepts, which significantly reduces the computational cost.
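Steering by "a few scalar multipliers" can be sketched as follows, under the simplifying assumption that a far-field unit source excites the modes in proportion to the (real, first-order, unnormalized) eigenbeams evaluated at its direction of arrival; all names and angles here are illustrative, not taken from the patent.

```python
import numpy as np

def eigenbeams(theta, phi):
    # Unnormalized real first-order eigenbeam set (illustrative assumption).
    return np.array([1.0,
                     np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def steered_output(a, theta_des, phi_des):
    # Steering = a few scalar multipliers derived from the look direction,
    # applied to the already-decomposed eigenbeam signals and summed.
    return eigenbeams(theta_des, phi_des) @ a

# Modal signals of a far-field source at (60 deg, 45 deg), under the
# idealized plane-wave excitation assumption stated above.
src = (np.deg2rad(60), np.deg2rad(45))
a = eigenbeams(*src)

on_target = steered_output(a, *src)                               # look at source
off_target = steered_output(a, np.deg2rad(60), np.deg2rad(225))   # look away
print(on_target > off_target)
```

Changing only the two steering angles moves the beam; the per-mode filters themselves would remain untouched, which is the computational advantage the text describes.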
  • The sound capture system of FIG. 1 with the spherical array geometry of FIG. 2 enables accurate control over the beampattern in 3-D space. In addition to pencil-like beams, the sound capture system can also provide multi-direction beampatterns or toroidal beampatterns giving uniform directivity in one plane. These properties can be useful for applications such as general multichannel speech pick-up, video conferencing, and direction of arrival (DOA) estimation. It can also be used as an analysis tool for room acoustics to measure, e.g., directional properties of the sound field. The sound capture system of FIG. 1 offers another advantage: it supports decomposition of the sound field into mutually orthogonal components, the eigenbeams (i.e., spherical harmonics) that can also be used to reproduce the sound field. The eigenbeams are also suitable for wave field synthesis (WFS) methods that enable spatially accurate sound reproduction in a fairly large volume, allowing for reproduction of the sound field that is present around the recording sphere. This allows for all kinds of general real-time spatial audio.
  • A circuit that provides the beamforming functionality is shown in detail in FIG. 3. The modal beamformer circuit of FIG. 3 receives the Q audio signals S_1(θ_1,φ_1,ka), S_2(θ_2,φ_2,ka), ... S_Q(θ_Q,φ_Q,ka) provided by microphones Mic1, Mic2, ... MicQ, transforms the audio signals into the spherical harmonics Y^{+1}_{0,0}(θ,φ), Y^{+1}_{1,0}(θ,φ), ... Y^σ_{m,n}(θ,φ), and steers the spherical harmonics. The circuit of FIG. 3 may be realized by hardware (and software) components that (together) build matrixing unit MU and the modal beamformer, which includes steering unit SU, modal weighting unit WU, and summing element SE. Matrixing unit MU and steering unit SU include coefficient elements CE that multiply the respective input signals with given coefficients and adders AD that sum up the coefficient-weighted input signals, so that the audio signals are decomposed into the eigenbeams, i.e., the spherical harmonics Y^{+1}_{0,0}(θ,φ), Y^{+1}_{1,0}(θ,φ), ... Y^σ_{m,n}(θ,φ), which are then processed to provide the steered spherical harmonics Y^{+1}_{0,0}(θ_Des,φ_Des), Y^{+1}_{1,0}(θ_Des,φ_Des), ... Y^σ_{m,n}(θ_Des,φ_Des). Modal weighting unit WU includes delay elements DE, coefficient elements CE, and adders AD, which are connected to form FIR filters for weighting. The output signals of these FIR filters are summed up by summing element SE.
  • Matrixing unit MU in the modal beamformer of FIG. 3 is responsible for decomposing the sound field, which is picked up by microphones Mic1, Mic2, ... MicQ, into the different eigenbeam outputs, i.e., the spherical harmonics Y^{+1}_{0,0}(θ,φ), Y^{+1}_{1,0}(θ,φ), ... Y^σ_{m,n}(θ,φ), corresponding to the zero-order, first-order, and second-order spherical harmonics. This can also be seen as a transformation in which the sound field is transformed from the time or frequency domain into the "modal domain". To simplify a time-domain implementation, one can also work with the real and imaginary parts of the spherical harmonics. This results in real-valued coefficients, which are more suitable for a time-domain implementation. If the sensitivity equals the imaginary part of a spherical harmonic, then the beampattern of the corresponding array factor will also be the imaginary part of this spherical harmonic. Steering unit SU allows the look direction to be steered by the angles θ_Des and φ_Des. Weighting unit WU compensates for the frequency-dependent sensitivity of the modes (eigenbeams), i.e., it performs modal weighting over frequency, so that the modal composition is adjusted, e.g., equalized. Equalizing is used to compensate for deficiencies of the microphone array, e.g., self-noise of the microphones, location errors of the microphones at the surface of the sphere, and other electrical and mechanical drawbacks. Summation node SE performs the actual beamforming for the sound capture system by summing up the weighted harmonics to yield the beamformer output OUT = ψ(θ_Des,φ_Des), i.e., the auditory scene.
  • Due to self-noise amplification, the order of a modal beamformer has to be reduced toward low frequencies, leading to a gradually decreasing directivity pattern with decreasing frequency. Regularization of the radial filter is configured such that, for example, the white noise gain will not fall below a given limit (e.g., WNG_dBMin = -10 [dB] (±3 [dB])), so that the robustness, i.e., the self-noise amplification, remains within a tolerable range and a constant transfer function in look direction over frequency, such as 0 [dB], is reached. By doing this, an optimum balance between robustness and directivity results, leading to a modal beamformer with enhanced properties: its directivity is enhanced while the transfer function in look direction is kept at a frequency-independent constant value and the robustness is kept above a minimum threshold. Regularization may be achieved by adapting the weighting coefficients of the FIR filters in weighting unit WU to an optimum.
  • But before going into detail on the regularization process, some general issues are discussed, in particular issues with regard to the measurement of the acoustic wave field via a rigid spherical microphone array. In general, sound pressure values p_a(θ_q, φ_q) at the positions θ_q, φ_q of the Q microphones located at radius a (1 ≤ q ≤ Q) can be described by way of the Fourier-Bessel series truncated to the Mth order as follows:

    p_a(θ_q, φ_q) = Σ_{m=0}^{M} W_m(ka) Σ_{0≤n≤m, σ=±1} B^σ_{m,n} Y^σ_{m,n}(θ_q, φ_q),

    in which: p_a(θ_q, φ_q) is the sound pressure measured by the qth microphone located at position (θ_q, φ_q) at the surface of a sphere having a radius a; W_m(ka) is the radial function that describes the acoustic wave field in the vicinity of the sphere center, i.e., at a certain distance from the center; and B^σ_{m,n} is the complex, mth order, nth degree ambisonic component that completely describes wave fields up to the Mth order.
  • The above equation can be rewritten in matrix form as:

    p_a [Q×1] = Y [Q×N] W [N×N] B [N×1],
    W [N×N] = diag( W_0(ka), W_1(ka), W_1(ka), W_1(ka), ..., W_M(ka) ),

    in which diag denotes a diagonal matrix and each radial function W_m(ka) appears (2m+1) times on the diagonal.
  • By rearranging the previous equation, the ambisonic components up to the Mth order can be calculated from the Q microphone signals:

    B = W^{-1} (Y^T Y)^{-1} Y^T p_a,
    Y^+ = (Y^T Y)^{-1} Y^T = pseudoinverse of Y,
    B = diag( W_m^{-1} ) Y^+ p_a,

    in which diag( W_m^{-1} ) = diag( EQ_m(ka) ) = diag( 1/W_0(ka), 1/W_1(ka), ..., 1/W_M(ka) ) is the diagonal matrix of the radial equalizing functions EQ_m(ka), with 0 ≤ m ≤ M and each 1/W_m(ka) again appearing (2m+1) times.
  • An arrangement for extracting the N ambisonic components B from the wave field p_a is illustrated in FIG. 4. The room and, thus, the spherical harmonics Y^{+1}_{0,0}(θ,φ), Y^{+1}_{1,0}(θ,φ), ... Y^σ_{m,n}(θ,φ) are sampled by way of matrix Y^+ at the positions (θ_q, φ_q) with the Q microphones, in which

    1 ≤ q ≤ Q  and  M = √Q - 1,

    so that the N = (M+1)² ambisonic components of Mth order can be calculated from the samples.
  • Combining the Q microphone signals (1 ≤ q ≤ Q), i.e., S_1(θ_1,φ_1,ka), S_2(θ_2,φ_2,ka), ... S_Q(θ_Q,φ_Q,ka), by way of matrix Y^+ into N output signals, which correspond to signals that would have been obtained when a wave field is sampled with N microphones having a certain directivity, can be seen as a transformation from the time domain into the spatial domain. The spherical harmonic signals generated thereby are then weighted by the radial equalizing functions EQ_m(ka) to provide frequency-independent, normalized-to-1 ambisonic components B^σ_{m,n}, i.e., the ambisonic signals B.
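  • The decomposition above — sampling the spherical harmonics at the Q microphone positions and applying the pseudoinverse Y^+ = (Y^T Y)^{-1} Y^T — can be sketched in a few lines. The following is an illustrative sketch only, not the patented implementation: the random microphone positions, the use of SciPy's `sph_harm`, and the restriction to real parts (as suggested for a time-domain implementation) are assumptions for demonstration.

```python
import numpy as np
from scipy.special import sph_harm

# Hypothetical sketch: decompose Q microphone pressures into N = (M + 1)^2
# eigenbeam (ambisonic) signals via the pseudoinverse Y+ = (Y^T Y)^-1 Y^T.
Q = 32                      # number of microphones
M = int(np.sqrt(Q)) - 1     # maximum decomposable order, M = sqrt(Q) - 1 = 4
N = (M + 1) ** 2            # number of spherical harmonics, N = 25

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, Q)   # azimuth of each (assumed) mic position
phi = rng.uniform(0, np.pi, Q)         # inclination of each (assumed) mic position

# Sampled spherical-harmonic matrix Y (Q x N); real parts only, as the text
# suggests for a real-valued time-domain implementation.
Y = np.zeros((Q, N))
col = 0
for m in range(M + 1):
    for n in range(-m, m + 1):
        Y[:, col] = np.real(sph_harm(n, m, theta, phi))
        col += 1

Y_pinv = np.linalg.pinv(Y)             # Y+ = (Y^T Y)^-1 Y^T (least squares)
p_a = rng.standard_normal(Q)           # one snapshot of the Q pressure signals
B = Y_pinv @ p_a                       # N eigenbeam outputs (before radial EQ)
print(B.shape)
```

In a complete system, B would subsequently be weighted by the radial equalizing functions EQ_m(ka) described below.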
  • Referring now to FIG. 5, the derivation of the radial function W_m(ka) of a rigid closed sphere with microphones arranged on the sphere's surface can be described as follows: at the surface of a rigid closed sphere, the velocity v_a is zero, i.e., v_a(θ_q,φ_q,ka) = 0.
  • Therefore, the related sound field is defined solely by the pressure distribution p_a(θ_q, φ_q) on the sphere's surface, which can easily be measured by sound pressure sensors (microphones). Mathematically, the underlying, physically logical condition that v_a(θ_q,φ_q,ka) = 0 holds at the surface of a rigid body can be met when inner sources (i.e., sources inside the measurement sphere) and outer sources (i.e., sources outside the measurement sphere) are superposed, as illustrated in FIG. 5. For instance, the inner sources serve to model the scattered field occurring at the surface of the rigid sphere. Based on the general form of the Bessel series,

    P(r,ω) = S(jω) ( Σ_{m=0}^{∞} j^m j_m(kr) Σ_{0≤n≤m, σ=±1} B^σ_{m,n} Y^σ_{m,n}(θ,φ)
                   + Σ_{m=0}^{∞} j^m h^{(2)}_m(kr) Σ_{0≤n≤m, σ=±1} A^σ_{m,n} Y^σ_{m,n}(θ,φ) ),

    in which: B^σ_{m,n} are the weighting coefficients (ambisonic coefficients) that relate to the spherical Bessel function of the 1st kind j_m(kr) and that describe the pervasive wave field (plane wave); A^σ_{m,n} are the weighting coefficients that relate to the spherical Hankel function of the 2nd kind h^{(2)}_m(kr) and that describe the outgoing spherical wave field (spherical wave), eventually representing the scattered wave field at the surface of the solid sphere; P(r,ω) is the sound pressure spectrum at the position r = (r,θ,φ); S(jω) is the input signal in the spectral domain; j is the imaginary unit for complex numbers with j = √-1; j_m(kr) is the spherical Bessel function of the 1st kind, mth order; and h^{(2)}_m(kr) is the spherical Hankel function of the 2nd kind, mth order. Based on the assumption that the outer sources provide incoming plane waves (indicated by the index "Inc"), the wave field generated by the outer sources that moves toward the center and thus toward the sphere's surface can be described as follows:

    p_Inc(θ_q, φ_q, ka) = Σ_{m=0}^{∞} j^m j_m(ka) Σ_{0≤n≤m, σ=±1} B^σ_{m,n} Y^σ_{m,n}(θ_q, φ_q),

    in which p_Inc(θ_q, φ_q, ka) is the sound pressure received by the qth microphone at position (θ_q, φ_q) and generated by the outer sources.
  • Furthermore, it is required that the velocity at the sphere's surface, i.e., at r = a, is zero:

    V_Inc(θ_q,φ_q,ka) + V_Scat(θ_q,φ_q,ka) = 0, or
    V_Scat(θ_q,φ_q,ka) = -V_Inc(θ_q,φ_q,ka), in which

    V_Inc(θ_q,φ_q,ka) = velocity at the qth microphone at position (θ_q,φ_q) caused by the plane wave from the outer sources, and
    V_Scat(θ_q,φ_q,ka) = velocity at the qth microphone at position (θ_q,φ_q) caused by the spherical wave from the inner sources.
  • Differentiating the previous equation with respect to r or a leads to

    v_Scat(θ_q,φ_q,ka) = -k Σ_{m=0}^{∞} j^m j'_m(ka) Σ_{0≤n≤m, σ=±1} B^σ_{m,n} Y^σ_{m,n}(θ_q,φ_q),

    in which j'_m(ka) is the 1st derivative of the spherical Bessel function of the 1st kind, mth order.
  • Applying the Euler equation to this leads to:

    v(θ,φ,kr) = (1/(jρck)) ∇ p(θ,φ,kr),

    ∇ = (1/r²) ∂/∂r (r² ∂/∂r) + (1/(r² sin φ)) ∂/∂φ (sin φ ∂/∂φ) + (1/(r² sin² φ)) ∂²/∂θ²,

    in which: ∇ is the nabla operator expressed in spherical coordinates; ρ is the density of the medium air in kg/m³ (ρ_0 = 1.292 kg/m³); and c is the speed of sound in m/s (c_0 = 343 m/s).
  • The Euler equation links the sound velocity v(θ_q,φ_q,ka) to the sound pressure p(θ_q,φ_q,ka), and both can be derived by weighting spherical harmonics according to the Fourier-Bessel series:

    v(θ,φ,kr) = Σ_{m=0}^{∞} Σ_{0≤n≤m, σ=±1} v^σ_{m,n}(kr) Y^σ_{m,n}(θ_q,φ_q),

    in which v^σ_{m,n}(kr) are the sound velocity coefficients of the corresponding spherical harmonics, so that the following relationship between sound velocity v(θ_q,φ_q,ka) and sound pressure p(θ_q,φ_q,ka) at the surface of a rigid sphere applies:

    p(θ,φ,kr) = jρc Σ_{m=0}^{∞} ( h^{(2)}_m(kr) / h^{(2)'}_m(kr) ) Σ_{0≤n≤m, σ=±1} v^σ_{m,n}(kr) Y^σ_{m,n}(θ_q,φ_q),

    in which h^{(2)'}_m(kr) is the 1st derivative of the spherical Hankel function of the 2nd kind, mth order.
  • The sound velocity coefficients v^σ_{m,n} of an incoming plane wave can be calculated from the ambisonic coefficients B^σ_{m,n} as follows:

    v^σ_{m,n}(ka) = -k j^m j'_m(ka) B^σ_{m,n}.
  • From the two previous equations, a simplified relationship can be provided for the sound pressure p_Scat(θ_q,φ_q,ka) that results from the sound field of the spherical waves radiated by the inner sound sources and that can be measured on the sphere's surface (r = a) at the positions (θ_q,φ_q) where the Q pressure sensors (microphones) are arranged, thereby neglecting the constants jρck and 4π:

    p_Scat(θ_q,φ_q,ka) = -Σ_{m=0}^{∞} j^m ( j'_m(ka) h^{(2)}_m(ka) / h^{(2)'}_m(ka) ) Σ_{0≤n≤m, σ=±1} B^σ_{m,n} Y^σ_{m,n}(θ_q,φ_q).
  • Superimposing the wave fields, i.e., the sound pressures of the inner and outer sources, leads to the sound pressure occurring at the surface of a rigid sphere having a radius a:

    p(θ_q,φ_q,ka) = p_Inc(θ_q,φ_q,ka) + p_Scat(θ_q,φ_q,ka),
    p(θ_q,φ_q,ka) = Σ_{m=0}^{∞} j^m ( j_m(ka) - j'_m(ka) h^{(2)}_m(ka) / h^{(2)'}_m(ka) ) Σ_{0≤n≤m, σ=±1} B^σ_{m,n} Y^σ_{m,n}(θ_q,φ_q),

    which can be simplified by way of the Wronskian relation

    j'_m(kr) h^{(2)}_m(kr) - j_m(kr) h^{(2)'}_m(kr) = j/(kr)²

    to yield the radial function

    W_m(ka) = j^{m-1} / ( (ka)² h^{(2)'}_m(ka) ),

    so that:

    p(θ_q,φ_q,ka) = Σ_{m=0}^{∞} ( j^{m-1} / ( (ka)² h^{(2)'}_m(ka) ) ) Σ_{0≤n≤m, σ=±1} B^σ_{m,n} Y^σ_{m,n}(θ_q,φ_q).
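  • The closed-form radial function W_m(ka) above can be evaluated numerically. The following sketch (our own helper name `radial_function`; not part of the patent) assumes SciPy's spherical Bessel routines and builds the spherical Hankel function of the 2nd kind as h^{(2)}_m(x) = j_m(x) - j·y_m(x):

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def radial_function(m, ka):
    """W_m(ka) = j^(m-1) / ((ka)^2 * h2m'(ka)) for a rigid sphere, where
    h2m(x) = j_m(x) - 1j*y_m(x) is the spherical Hankel function of the
    2nd kind; its derivative is taken term by term."""
    h2m_prime = (spherical_jn(m, ka, derivative=True)
                 - 1j * spherical_yn(m, ka, derivative=True))
    return 1j ** (m - 1) / (ka ** 2 * h2m_prime)

a = 0.09            # sphere radius in m (as in the measurement example)
c = 343.0           # speed of sound in m/s
f = np.array([100.0, 1000.0, 6700.0])
ka = 2 * np.pi * f / c * a
for m in range(5):  # orders 0..4
    print(m, np.abs(radial_function(m, ka)))
```

For higher orders m, |W_m(ka)| collapses toward low frequencies, which is exactly why the plain inversion EQ_m(ka) = 1/W_m(ka) becomes unstable there and regularization is needed.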
  • An accordingly calculated magnitude frequency response of the radial functions W_m(ka) = 1/EQ_m(ka) for a sphere radius of a = 0.09 m in a spectral range of 50 Hz to 6700 Hz, for orders m up to M = 10, is shown in FIG. 6. The corresponding radial equalizing function EQ_m(ka) for orders m up to M = 4 is depicted in FIG. 7. The equations outlined above provide a least-squares solution that offers the smallest mean squared error, but they cannot be used per se in connection with small or very small W_m(ka) values. This is, however, the case at higher orders m and/or lower frequencies f, so that instabilities may occur due to amplified noise of the sensors or measurement system, positioning errors of the microphones, or irregularities in the frequency characteristic, which may deteriorate the results.
  • By introducing a regularization functionality, which means limiting the radial equalizing function EQ_m(ka) by way of a regularization function T_m(ka), e.g., to a maximum gain, these drawbacks can be overcome, whereby filters known as Tikhonov filters may be used. The following applies:

    EQ_m(ka) = T_m(ka) / W_m(ka),
    T_m(ka) = |W_m(ka)|² / ( |W_m(ka)|² + ε² ),

    in which T_m(ka) is the regularization function with m = 0 ... M, and ε ≥ 0 is the regularization parameter.
  • If ε = 0, the system works as a least-squares beamformer (the ideal case as shown above, i.e., without any regularization), which leads to the solution with the highest directivity but also the least robustness. If ε = ∞, the system works as a delay-and-sum beamformer, which delivers the maximum possible robustness but the least directivity. The radial equalizing functions EQ_m(ka) can be further simplified to read as:

    EQ_m(ka) = T_m(ka) / W_m(ka) = W_m*(ka) / ( |W_m(ka)|² + ε² ),

    in which W_m*(ka) is the complex conjugate of W_m(ka).
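  • The effect of the Tikhonov-style regularization on the equalizer gain can be illustrated with a single (hypothetical) radial value:

```python
import numpy as np

def tikhonov_eq(Wm, eps):
    """Regularized radial equalizer EQ_m = conj(W_m) / (|W_m|^2 + eps^2).
    eps = 0 -> plain inversion 1/W_m (least-squares beamformer);
    large eps -> gain tends to 0 (delay-and-sum-like, maximally robust)."""
    return np.conj(Wm) / (np.abs(Wm) ** 2 + eps ** 2)

# A small radial-function value, as occurs at low frequency / high order
# (illustrative number, not taken from the patent's figures):
Wm = 1e-3 + 1e-3j
print(abs(tikhonov_eq(Wm, 0.0)))    # ~707: full, noise-amplifying inversion
print(abs(tikhonov_eq(Wm, 0.1)))    # ~0.141: gain limited by regularization
```

As ε grows, the equalizer gain falls monotonically, trading directivity for robustness exactly as described above.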
  • Thus, with regularization parameters ε(ka) or ε(ω), one can control the modal beamformer to exhibit a certain robustness with respect to the inherent noise that is amplified with W_m(ka), in particular at lower frequencies.
  • In order to calculate appropriate values for the regularization parameters ε(ka) or ε(ω), a parameter called susceptibility K(ω) or its reciprocal white noise gain WNG(ω) may be used. For instance, white noise gain WNG(ω) addresses most effects and problems caused by microphone noise, changes in the transfer function, and variations of the microphone positions, so that it is representative of the sensitivity of the beamformer. A white noise gain WNG(ω) > 0 [dB] characterizes a sufficient suppression of uncorrelated errors and is thus indicative of a robust system behavior, while a white noise gain WNG(ω) < 0 [dB] is indicative of an amplification of the noise and is therefore indicative of an increasingly unstable system behavior.
  • The white noise gain WNG(ω) represents the ratio of the energy of the useful signal provided by the microphone array to the energy of the noise signal provided by the microphone array and can be expressed as:

    WNG(ω) = |d(θ_0,φ_0,ω)|² / ( (1/Q²) Σ_{q=1}^{Q} |H_q(θ_q,φ_q,ω)|² ),

    in which: d(θ_0,φ_0,ω) is the output signal of the beamformer having a look direction of (θ_0,φ_0), and H_q(θ_q,φ_q,ω) is the noise signal of the microphones caused by inherent noise.
  • The useful signal d(θ_0,φ_0,ω) output by the microphone array, i.e., the output signal of the beamformer having the required look direction, can be described as follows:

    d(θ_0,φ_0,ω) = (1/N) Σ_{m=0}^{M} Σ_{0≤n≤m, σ=±1} T_m(ω) Y^σ_{m,n}(θ_0,φ_0),

    in which: N = (M+1)² is the number of spherical harmonics; Y^σ_{m,n}(θ_0,φ_0) is the spherical harmonic of mth order and nth degree in the look direction (θ_0,φ_0) of the beamformer; and T_m(ω) is the Tikhonov regularization filter.
  • The noise signal of the qth microphone of the microphone array over frequency, caused by the inherent noise of the microphone, is represented by H_q(θ_q,φ_q,ω), which is:

    H_q(θ_q,φ_q,ω) = (1/N²) Σ_{m=0}^{M} Σ_{0≤n≤m, σ=±1} EQ_m(ω) Y^σ_{m,n}(θ_q,φ_q).
  • The frequency-dependent white noise gain WNG_dB(ω) in [dB] is:

    WNG_dB(ω) = 10 log_10( WNG(ω) ).
  • Thereby, the maximum white noise gain WNG_dBMax for a modal beamformer is as follows:

    WNG_dBMax = 10 log_10( Q ),

    which is, e.g., ≈ 15 [dB] for Q = 32.
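  • The quoted ceiling follows directly from the formula; a one-line check for the example array:

```python
import math

Q = 32  # number of microphones in the example array
wng_db_max = 10 * math.log10(Q)  # maximum white noise gain in dB
print(round(wng_db_max, 2))      # 15.05, i.e., "approximately 15 dB"
```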
  • Furthermore, it has been found that the best results are achieved when the array gain G(ω) is at its maximum and the white noise gain WNG_dB(ω) is above a given minimum value, for instance, WNG_dB(ω) > -10 [dB]. The array gain G(ω) can be calculated according to:

    G(ω) = 4π |ψ(θ_0,φ_0,ω)|² / ( ∫_{φ=0}^{2π} ∫_{θ=0}^{π} |ψ(θ,φ,ω)|² sin(θ) dθ dφ ),

    in which ψ(θ,φ,ω) is the directivity of the microphone array.
  • In words, the array gain G(ω) is the ratio of the energy of sound coming from the look direction of the beamformer to the energy of omnidirectionally incoming sound.
  • The directivity ψ(θ_0,φ_0,ω) for sound incoming from the look direction can be described as:

    ψ(θ_0,φ_0,ω) = (1/N) Σ_{m=0}^{M} Σ_{0≤n≤m, σ=±1} C^σ_{m,n}(θ,φ,ω) Y^σ_{m,n}(θ_0,φ_0,ω),

    in which

    C^σ_{m,n}(θ,φ,ω) = diag( T_m(ω) / W_m(ω) ) Y^+ p_a

    are the weighting factors of the modal beamformer, while the directivity ψ(θ,φ,ω) for omnidirectionally incoming sound can be described as:

    ψ(θ,φ,ω) = (4π/Q) (1/N) Σ_{m=0}^{M} Σ_{0≤n≤m, σ=±1} C^σ_{m,n}(θ,φ,ω) Y^σ_{m,n}(θ,φ,ω).
  • Then, the frequency-dependent array gain G_dB(ω) in [dB] is:

    G_dB(ω) = 10 log_10( G(ω) ).
  • The array gain G(ω) is a measure of the improvement in the acoustic signal-to-noise ratio (SNR), based on the directivity of the modal beamformer for sound coming from the look direction of the beamformer. The achievable maximum array gain G_dBMax(ω) in [dB] is:

    G_dBMax(ω) = 20 log_10( M + 1 ).
  • For instance, when M = 4, the achievable maximum array gain G_dBMax(ω) is approximately 14 [dB].
  • Referring now to FIG. 8, an exemplary iterative process of adapting the parameters of a modal beamformer is described in detail. In an initializing step 1, parameters required for the calculation are set to a starting value or a constant value, as the case may be. The following parameters may be set, for instance:
    • WNG parameters
      • Minimum white noise gain threshold WNG_dBMin, which is not undercut by the regularized modal beamformer; for instance, WNG_dBMin = -10 [dB].
      • Offset ΔWNG_dB in [dB], by which the minimum white noise gain threshold WNG_dBMin may at maximum be overcut or undercut during the adaptation process; for instance, ΔWNG_dB = 0.5 [dB].
    • Regularization parameter ε(ω)
      • Maximum regularization parameter ε_Max, which is the upper limit for the regularization parameter ε(ω); for instance, ε_Max = 1.
      • Step size Δε by which the regularization parameter ε(ω) is increased or decreased.
    • Frequency ω
      • Start value of the (angular) frequency for the adaptation process; for instance, ω = 2π·1 [Hz].
      • Step size Δω by which the (angular) frequency is increased or decreased when the adaptation is completed at a certain frequency; for instance, Δω = 2π·1 [Hz].
      • Maximum (angular) frequency at which an adaptation is performed; for instance, ω_Max = π·f_s [Hz], f_s being the sampling frequency.
  • Then the adaptation process is started in step 2. In step 3, the regularization parameter is set to, e.g., ε(ω) = 0 for the current frequency ω under investigation. Regularization provides the ability to achieve a robust system by adjusting the regularization parameter ε(ω); this is a trade-off between higher robustness, i.e., a higher white noise gain WNG_dB(ω), and less directivity in the look direction ψ(θ_0,φ_0,ω), i.e., a decreasing array gain G_dB(ω). If the regularization parameter is set to ε(ω) = 0, the adaptation process begins with the maximum directivity G_dBMax(ω), which is then decreased by increasing the regularization parameter ε(ω) until the desired white noise gain threshold WNG_dBMin is no longer undercut.
  • Steps 4, 5, and 6 serve to calculate the white noise gain WNG_dB(ω). In step 4, the regularization filter T_m(ω) (or T_m(ka)) is calculated as outlined above using regularization parameter ε(ω). In step 5, the transfer function EQ_m(ω) is calculated as outlined above using the current version of the transfer function T_m(ω) of the regularization filter, i.e., the current version of the regularization parameter ε(ω). In step 6, the white noise gain WNG_dB(ω) is calculated as outlined above using the transfer function EQ_m(ω) and the current version of the transfer function T_m(ω) of the regularization filter (regularization function). Steps 4 and 5 may be performed simultaneously or in the opposite order.
  • In the following step 7, the current white noise gain WNG_dB(ω) is compared with the predetermined threshold WNG_dBMin, so that according to step 8 or 9:

    ε(ω) = ε(ω) - Δε, if WNG_dB(ω) > WNG_dBMin,
    ε(ω) = ε(ω) + Δε, otherwise.
  • In step 10, the directivity ψ(θ_0,φ_0,ω) of the modal beamformer is calculated for sound coming from the look direction using the transfer function EQ_m(ω) provided in step 5.
  • In step 11, the transfer function of the equalizing filter, i.e., the equalizing function EQ_m(ω), is scaled according to:

    EQ_m(ω) = EQ_m(ω) / ψ(θ_0,φ_0,ω).
  • In step 12, the current white noise gain WNG_dB(ω) is compared with the predetermined white noise gain threshold WNG_dBMin, and it is checked whether the regularization parameter ε(ω) has reached its maximum, according to (|WNG_dBMin - WNG_dB(ω)| > ΔWNG_dB) and (ε(ω) ≤ ε_Max). If both requirements are met, i.e., if (|WNG_dBMin - WNG_dB(ω)| > ΔWNG_dB) & (ε(ω) ≤ ε_Max), the adaptation process is not yet finished, resulting in jumping back to step 3 and starting again with an updated regularization parameter ε(ω).
  • Otherwise, i.e., if the adaptation process for the current angular frequency ω has been completed so that the current equalizing function EQm(ω) has been limited to the given threshold or if the current regularization parameter has reached its maximum, the angular frequency ω is incremented according to ω = ω + Δω in step 13, which is followed by step 14.
  • In step 14, the current angular frequency ω is checked to see if it has reached its maximum value ωMax. If ω < ωMax, the process jumps back to step 2 using the current angular frequency ω. Otherwise, i.e., if the equalizing filter has been adapted for the complete set of frequencies, the filter coefficients are outputted in step 15.
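  • The per-frequency threshold logic of the steps above can be sketched as follows. This is a hypothetical, simplified sketch: the two-mode `wng_db` model (radial values 1.0 and 1e-3, standing in for a strong low order and a heavily attenuated higher order at low frequency) replaces the full WNG computation of steps 4 through 6, and the scaling of step 11 is omitted; only the update and stop conditions of steps 3, 7-9, and 12 are shown.

```python
import math

def wng_db(eps, wm=(1.0, 1e-3)):
    """Single-frequency white noise gain (in dB) of the regularized
    equalizers EQ_m = w_m / (w_m^2 + eps^2), with T_m = w_m^2 / (w_m^2 + eps^2),
    for a toy set of real-valued radial values wm (an assumption)."""
    d = sum(w * w / (w * w + eps * eps) for w in wm)     # look-direction signal
    h = sum((w / (w * w + eps * eps)) ** 2 for w in wm)  # amplified self noise
    return 10 * math.log10(d * d / h)

WNG_DB_MIN = -10.0   # minimum white noise gain threshold
DELTA_WNG_DB = 0.5   # allowed deviation around the threshold
EPS_MAX = 1.0        # upper limit for the regularization parameter
DELTA_EPS = 1e-5     # step size for the regularization parameter

eps = 0.0            # step 3: start unregularized (maximum directivity)
while (abs(WNG_DB_MIN - wng_db(eps)) > DELTA_WNG_DB) and (eps <= EPS_MAX):
    if wng_db(eps) > WNG_DB_MIN:   # robust enough: back off the regularization
        eps -= DELTA_EPS
    else:                          # too noisy: regularize more
        eps += DELTA_EPS

print(eps > 0.0, abs(WNG_DB_MIN - wng_db(eps)) <= DELTA_WNG_DB)
```

In the full process, the converged ε(ω) would be used to fix EQ_m(ω) for this frequency before stepping ω onward (steps 13 and 14).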
  • Referring to FIGS. 9 through 16, measurements made with an exemplary arrangement in combination with an exemplary adaptation method are described in detail. The arrangement includes a sphere having a radius of a = 0.09 [m] with microphone positions based on a truncated icosahedron, which is derived from two Platonic solids, i.e., the icosahedron and the dodecahedron. The number of microphones arranged on the sphere is Q = 32. The directivity characteristic of the beamformer is a 4th-order cardioid, and the minimum white noise gain WNG_dB(ω) used in the adaptation process is -10 [dB].
  • FIG. 9 illustrates the regularization parameter ε(ω) over frequency for a common 4th-order modal beamformer. As can be seen from FIG. 9, with regularization, i.e., by limiting the maximum directivity index for frequencies up to, for instance, 750 [Hz], values above the minimum lower threshold WNG_dBMin of -10 [dB] can be maintained. Above 750 [Hz], the exemplary beamformer exhibits the desired directivity of a 4th-order cardioid. FIG. 10 illustrates the corresponding white noise gain WNG for the above-mentioned 4th-order beamformer, which supports the findings in connection with the diagram of FIG. 9. The corresponding directivity index DI and array gain G_dB(ω), as shown in FIG. 11, illustrate that the maximum array gain G_dB(ω) remains below approximately 10 [dB], depending on the frequency.
  • However, applying the adapted regularization filters T_m(ω) described herein causes a monotonic decrease of the array gain G_dB(ω) down to 7.5 [dB] at 20 [Hz], as shown in FIG. 11. The magnitude frequency responses of the regularization filters T_m(ω) applied thereby are shown in FIG. 12, and the corresponding frequency-independent phase characteristic is illustrated in FIG. 13.
  • Further applying the optimized radial equalizing filters EQ_m(ω) yields an improved regularized equalizing filter whose magnitude frequency response is depicted in FIG. 14 and whose phase frequency response is depicted in FIG. 15. The directivity of the corresponding improved beamformer is a 4th-order cardioid at frequencies above 650 [Hz], a 3rd-order cardioid between 300 [Hz] and 650 [Hz], a 2nd-order cardioid between 70 [Hz] and 300 [Hz], and a 1st-order cardioid below 70 [Hz]. FIG. 16 depicts the resulting directivity of the beamformer outlined above in the look direction, ψ(θ_0,φ_0,ω), as amplitudes over frequency.
  • While exemplary embodiments are described above, it is not intended that these embodiments describe all possible forms of the invention. The words used in the specification are words of description rather than limitation, and it is understood that various changes may be made without departing from the spirit and scope of the invention. Additionally, the features of various implementing embodiments may be combined to form further embodiments of the invention.

Claims (15)

  1. A method for generating an auditory scene, comprising:
    receiving eigenbeam outputs, the eigenbeam outputs having been generated by decomposing a plurality of audio signals, each audio signal having been generated by a different microphone of a microphone array, wherein each eigenbeam output corresponds to a different eigenbeam for the microphone array, and the microphones are arranged on a rigid or open sphere; and
    generating the auditory scene based on the eigenbeam outputs and their corresponding eigenbeams, wherein generating the auditory scene comprises applying a weighting value to each eigenbeam output to form a steered eigenbeam output; and
    combining the weighted eigenbeams to generate the auditory scene, wherein
    generating the auditory scene further comprises applying a regularized equalizer filter to each eigenbeam output or steered eigenbeam output, the regularized equalizer filter(s) being configured to compensate for acoustic deficiencies of the microphone array and having a regularized equalization function.
  2. The method of claim 1 wherein the regularized equalization function is a radial equalization function that comprises the quotient of a regularization function limiting the radial equalization function and a radial function describing an acoustic wave field in the vicinity of the surface of the rigid sphere or the center of the open sphere.
  3. The method of claim 2 wherein the regularization function is the quotient of the absolute value of the square of the radial function and the sum of the absolute value of the square of the radial function and a regularization parameter, the regularization parameter being set to a value greater than 0 and smaller than a maximum value that is smaller than infinity.
  4. The method of claim 3 wherein the maximum value of the regularization parameter is 1.
  5. The method of claim 3 or 4 wherein the regularization parameter depends on a susceptibility parameter that is the reciprocal of a white noise gain parameter, the white noise gain parameter being greater than a minimum white noise gain parameter that is not undercut by the equalizer filter.
  6. The method of claim 5 wherein the minimum white noise gain parameter is -10 [dB].
  7. The method of any one of claims 3 through 6 wherein the regularization parameter is adapted in an iterative process.
  8. The method of claim 7 wherein, for a given frequency, the iterative process comprises:
    setting at least the minimum white noise gain parameter and the regularization parameters to a starting value or a constant value; and
    calculating the white noise gain, the regularization function, and the radial equalization function; and
    comparing the calculated white noise gain parameter with the set minimum white noise gain parameter; and
    calculating the directivity for sound coming from the look direction using the radial equalization function; and
    scaling the radial equalization function; and
    comparing the calculated white noise gain with the set minimum white noise gain and checking if the regularization parameter has reached its maximum; if both requirements are met, the adaptation process is not yet finished, resulting in jumping back and starting again with an updated regularization parameter; otherwise the process for the current frequency has been completed and the frequency is incremented; and
    checking if the current frequency has reached its maximum value; if the frequency has not reached its maximum, the process jumps back and starts again with another frequency; otherwise the filter coefficients are outputted.
  9. The method of claim 8 wherein the iterative process comprises an offset white noise gain parameter by which the minimum white noise gain parameter is overcut or undercut at maximum during adaptation.
  10. A modal beamformer system for generating an auditory scene, comprising:
    a steering unit that is configured to receive eigenbeam outputs, the eigenbeam outputs having been generated by decomposing a plurality of audio signals, each audio signal having been generated by a different microphone of a microphone array, wherein each eigenbeam output corresponds to a different eigenbeam for the microphone array, and the microphones are arranged on a rigid or open sphere; and
    a weighting unit that is configured to generate the auditory scene based on the eigenbeam outputs and their corresponding eigenbeams, wherein generating the auditory scene comprises applying a weighting value to each eigenbeam output to form a steered eigenbeam output; and
    a summing element configured to combine the weighted eigenbeams to generate the auditory scene, wherein
    the weighting unit or the summing element are further configured to apply a regularized equalizer filter to each eigenbeam output or steered eigenbeam output, the regularized equalizer filter(s) being configured to compensate for acoustic deficiencies of the microphone array and having a regularized equalization function.
  11. The system of claim 10 wherein the regularized equalization function is a radial equalization function that comprises the quotient of a regularization function limiting the radial equalization function and a radial function describing an acoustic wave field in the vicinity of the sphere.
  12. The system of claim 11 wherein the regularization function is the quotient of the absolute value of the square of the radial function and the sum of the absolute value of the square of the radial function and a regularization parameter, the regularization parameter being set to a value greater than 0 and smaller than a maximum value that is smaller than infinity.
  13. The system of claim 12 wherein the maximum value of the regularization parameter is 1.
  14. The system of claim 12 or 13 wherein the regularization parameter depends on a susceptibility parameter that is the reciprocal of a white noise gain parameter, the white noise gain parameter being greater than a minimum white noise gain parameter that is not undercut by the equalizer filter.
  15. The system of claim 14 wherein the minimum white noise gain parameter is -10 [dB].
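A minimal numerical sketch of the system of claims 10-15, collapsed to the axisymmetric case: each order-n eigenbeam output is steered with a Legendre-polynomial weight, equalized with the regularized filter of claims 11 and 12, and summed. The interface, the steering model, and the fixed regularization parameter are illustrative assumptions; a real implementation would run per frequency bin over the full set of spherical-harmonic (n, m) channels.

```python
import numpy as np

def modal_beamformer_output(eigenbeams, b_n, look_angle, lam=1e-2):
    """Steer, equalize, and sum order-limited eigenbeam outputs
    (sketch of claims 10-15; axisymmetric case only)."""
    x = np.cos(look_angle)
    out = 0.0 + 0.0j
    for n, (e_n, bn) in enumerate(zip(eigenbeams, b_n)):
        # Steering weight toward look_angle (regular beam pattern).
        steer = (2 * n + 1) / (4 * np.pi) * np.polynomial.legendre.Legendre.basis(n)(x)
        # Regularized radial equalizer, claims 11-12.
        eq = np.conj(bn) / (np.abs(bn) ** 2 + lam)
        out += steer * eq * e_n              # weight and sum, claim 10
    return out

# Two-order example, look direction on the array axis:
y = modal_beamformer_output([1.0 + 0j, 0.5 + 0j], [1.0 + 0j, 0.5 + 0j], 0.0)
```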
EP13152209.6A 2013-01-22 2013-01-22 Modal beamforming Active EP2757811B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP13152209.6A EP2757811B1 (en) 2013-01-22 2013-01-22 Modal beamforming

Publications (2)

Publication Number Publication Date
EP2757811A1 true EP2757811A1 (en) 2014-07-23
EP2757811B1 EP2757811B1 (en) 2017-11-01

Family

ID=47605386

Family Applications (1)

Application Number Title Priority Date Filing Date
EP13152209.6A Active EP2757811B1 (en) 2013-01-22 2013-01-22 Modal beamforming

Country Status (1)

Country Link
EP (1) EP2757811B1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003061336A1 (en) 2002-01-11 2003-07-24 Mh Acoustics, Llc Audio system based on at least second-order eigenbeams
US20120093344A1 (en) * 2009-04-09 2012-04-19 Ntnu Technology Transfer As Optimal modal beamformer for sensor arrays

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110139200A * 2018-02-09 2019-08-16 Oticon A/S Hearing device comprising a beamformer filtering unit for reducing feedback
CN110139200B * 2018-02-09 2022-05-31 Oticon A/S Hearing device comprising a beamformer filtering unit for reducing feedback
CN111929665A * 2020-09-01 2020-11-13 Institute of Acoustics, Chinese Academy of Sciences Target depth identification method and system based on wave number spectrum main lobe position
CN111929665B * 2020-09-01 2024-02-09 Institute of Acoustics, Chinese Academy of Sciences Target depth identification method and system based on wave number spectrum main lobe position

Similar Documents

Publication Publication Date Title
US11032663B2 (en) System and method for virtual navigation of sound fields through interpolation of signals from an array of microphone assemblies
EP2905975B1 (en) Sound capture system
EP1856948B1 (en) Position-independent microphone system
KR101415026B1 (en) Method and apparatus for acquiring the multi-channel sound with a microphone array
Jin et al. Design, optimization and evaluation of a dual-radius spherical microphone array
Moreau et al. 3d sound field recording with higher order ambisonics–objective measurements and validation of a 4th order spherical microphone
Fisher et al. Near-field spherical microphone array processing with radial filtering
JP6030660B2 (en) Method and apparatus for processing a spherical microphone array signal on a hard sphere used to generate an ambisonic representation of a sound field
KR101555416B1 (en) Apparatus and method for spatially selective sound acquisition by acoustic triangulation
US10659873B2 (en) Spatial encoding directional microphone array
CN110557710B (en) Low complexity multi-channel intelligent loudspeaker with voice control
KR100856246B1 (en) Apparatus And Method For Beamforming Reflective Of Character Of Actual Noise Environment
WO2017218399A1 (en) Spatial encoding directional microphone array
JP2013543987A (en) System, method, apparatus and computer readable medium for far-field multi-source tracking and separation
Epain et al. Independent component analysis using spherical microphone arrays
Delikaris-Manias et al. Signal-dependent spatial filtering based on weighted-orthogonal beamformers in the spherical harmonic domain
Zhao et al. Design of robust differential microphone arrays with the Jacobi–Anger expansion
Jackson et al. Sound field planarity characterized by superdirective beamforming
WO2021018830A1 (en) Apparatus, method or computer program for processing a sound field representation in a spatial transform domain
EP2757811B1 (en) Modal beamforming
Zaunschirm et al. Measurement-based modal beamforming using planar circular microphone arrays
Corey et al. Motion-tolerant beamforming with deformable microphone arrays
Shabtai et al. Spherical array beamforming for binaural sound reproduction
Rasumow et al. The impact of the white noise gain (WNG) of a virtual artificial head on the appraisal of binaural sound reproduction
Mendoza et al. An Adaptive Algorithm for Speaker Localization in Real Environments using Smartphones

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20130122

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

R17P Request for examination filed (corrected)

Effective date: 20150108

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

17Q First examination report despatched

Effective date: 20151116

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

INTG Intention to grant announced

Effective date: 20170526

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

Ref country code: AT

Ref legal event code: REF

Ref document number: 943133

Country of ref document: AT

Kind code of ref document: T

Effective date: 20171115

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602013028624

Country of ref document: DE

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20171101

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 943133

Country of ref document: AT

Kind code of ref document: T

Effective date: 20171101

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171101

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171101

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171101

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171101

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180201

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171101

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171101

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180201

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180301

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180202

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171101

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171101

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171101

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171101

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171101

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171101

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171101

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171101

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602013028624

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171101

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171101

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171101

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171101

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20180802

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180131

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180122

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20180928

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20180131

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180131

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180131

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180131

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171101

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180122

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171101

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180122

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171101

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171101

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20130122

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171101

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171101

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20221220

Year of fee payment: 11

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230526

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20231219

Year of fee payment: 12