WO1994024835A1 - Method of reproducing sound


Info

Publication number: WO1994024835A1
Authority: WIPO (PCT)
Prior art keywords: sound, recording, signals, sensors, matrix
Application number: PCT/GB1994/000799
Other languages: French (fr)
Original Assignee: Adaptive Audio Limited; Nelson, Philip, Arthur
Application filed by Adaptive Audio Limited and Nelson, Philip, Arthur
Publication of WO1994024835A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S 3/00: Systems employing more than two channels, e.g. quadraphonic
    • H04S 3/02: Systems employing more than two channels, e.g. quadraphonic, of the matrix type, i.e. in which input signals are combined algebraically, e.g. after having been phase shifted with respect to each other


Abstract

A method of reproducing sound comprises creating a sound recording by recording the sound received by individual sensors of a compact cluster of at least three spaced-apart sound sensors which are located in a localised region of the recording space sound field which is desired to be subsequently reproduced, and subsequently reconstructing a representation of the original sound field in a localised region (S, Figure 2) of the listening space corresponding to said localised region of the recording space, by arranging at least three sound sources (Sources 1 to 4, Figure 10) in a spaced-apart distribution which surrounds the centre of the listening space localised region, the reproduction being aimed primarily at reproducing the direction of propagation of the sound waves in the localised region of the recording space, the vector of signals input to the sound sources being produced by subjecting the vector of recorded outputs of the sound sensors to a matrix (H(z)) of linear filters which have been derived using a least squares technique.

Description

METHOD OF REPRODUCING SOUND
This invention relates to a method and apparatus for reproducing sound.
Recent studies of the active control of acoustic fields have used analytical methods and multi-channel signal processing techniques that can be usefully applied to problems in sound reproduction. The perfect reproduction of an acoustic field in both space and time is an unrealistic objective in practice. We have shown that reproduction of a sound field over a restricted spatial region can be achieved, to a close approximation to the original, by first recording the acoustic signals at a finite number of positions in the original sound field. The signals are processed via a matrix of linear filters in order to produce the inputs to a number of sources used for reproduction. An analysis in the frequency domain shows that such a strategy could be useful, but its practicability at high frequencies appears to be limited by the need to provide adequate spatial sampling of the original field.
The present invention in one aspect is aimed at ensuring that the direction of propagation of the waves in the original field is well approximated in the reproduced field. This approach appears to be a more practicable alternative, and offers the promise of successful operation over a wide frequency bandwidth. We discuss hereinafter the realisability of the optimal filter matrix, and a practical, adaptive filter design technique is presented.
According to one aspect of the invention, a method of reproducing sound comprises creating a sound recording by recording the sound received by individual sensors of a compact cluster of at least three spaced-apart sound sensors which are located in a localised region of the recording space sound field which is desired to be subsequently reproduced, and subsequently reconstructing a representation of the original sound field in a localised region of the listening space corresponding to said localised region of the recording space, by arranging at least three sound sources in a spaced-apart distribution which surrounds the centre of the listening space localised region, the reproduction being aimed primarily at reproducing the direction of propagation of the sound waves in the localised region of the recording space, the vector of signals input to the sound sources being produced by subjecting the vector of recorded outputs of the sound sensors to a matrix of linear filters which have been derived using a least squares technique.
The sound sensors are desirably spaced apart by no more than one half, and preferably one third, of an acoustic wavelength at the highest frequency of interest.
The filter matrix is preferably designed by minimising the mean square error between desired signals and reproduced signals, the desired signals being simply taken as delayed versions of the original recording.
The filter matrix H(z) is preferably designed using the LMS algorithm in the form h(n + 1) = γ h(n) + α Rᵀ(n) e(n), where h is the composite tap weight vector
Rᵀ(n) is a matrix of signals obtained by filtering the recorded signals through the transfer function matrix C(z)
γ is a leak coefficient
α is a convergence coefficient
e is an error vector equal to the difference between the desired signals and the reproduced signals.
Preferably at least four sound sensors are employed in the recording space, and the sound sensors are preferably arranged in a rectangular array.
Conveniently four sound sensors only are employed, the four sensors being arranged at the corners of a square.
Preferably at least four sound sources are employed in the listening space and the sound sources are preferably circumferentially equally spaced-apart on a circle centred on said centre of the listening space localised region. The sound sources are preferably substantially farther from the centre of said listening space localised region than the maximum distance of a sound sensor from the centre of said recording space localised region.
In one embodiment sixteen sound sensors are arranged in a 4 × 4 square array to produce a sound recording, and twelve sound sources are arranged on a circle in the listening space.
In a second, simpler, embodiment four sound sensors are arranged at the corners of a square, and four sound sources are arranged at the corners of a large square which is correspondingly orientated in the listening space.
The use of regular geometric distributions of both sound sensors and sound sources assists in deriving the matrix of linear filters.
The invention and the background thereto will now be further described, by way of example only, with reference to the accompanying drawings.
What is shown in the drawings is as follows:
Figure 1 An illustration of the possibilities for the perfect reproduction of sound. Recordings are made of u(y, t) and p(y, t) on a surface S enclosing a volume V. The field is later reproduced in an identical volume V' by using a continuous layer of monopole and dipole sources on a surface S' that is geometrically identical to S.
Figure 2 Reproduction of a plane wave sound field. The strengths of the sources are optimally adjusted to minimise the error between the recorded signals and those reproduced at equivalent locations in the listening space.
Figure 3 The sound reproduction problem in block diagram form. The vector u is a vector of recorded signals, v is a vector of signals input to the sources used for reproduction and d̂ is a vector of signals reproduced in the sound field. The vector d defines the vector of signals that are desired to be reproduced and e = d − d̂ is a vector of error signals. The matrix C defines the transfer functions between v and d̂, and the matrix H defines a matrix of filters which are used to operate on the recorded signals u in order to determine the source input signals v. The matrix A is used to define the desired signals d in terms of the recorded signals u.
Figure 4 The geometry of reproducing sources studied by Kirkeby and Nelson [8]. The (x1, x2) coordinate positions of the reproducing sources are shown. The recording transducer array was a 0.5 m × 0.5 m square centred on the origin and contained 8 × 8 transducers spaced on a uniform grid.
Figure 5 The normalised minimum error in the reproduction of the sound field as a function of frequency and angle of incidence of the plane wave in the recording space when the field is reproduced using the source arrangement of Figure 4.
Figure 6 The geometrical arrangement of reproducing sources and recording transducers used for investigating the effectiveness with which the direction of propagation of the recorded plane wave can be reproduced. The recording transducer array was a 0.045 m × 0.045 m square centred on the origin and contained 4 × 4 transducers spaced on a uniform grid.
Figure 7 The output of the sources in the reproduction system illustrated in Figure 6 as a function of the angle θ of the incident plane wave. Results are shown in the form of |ʋm(ω)| as a polar plot on a linear scale for all the sources in the array and for a single source in the array. At a) 100 Hz with ε = 0, b) at 100 Hz with ε = 0.001, c) at 1 kHz with ε = 0.001, d) at 10 kHz with ε = 0.001.
Figure 8 The two source/three sensor geometry used in the study of the stability of the optimal filters. The sources and sensors are all situated on the x1 axis in the coordinate positions shown. The positions of the system poles in the z-plane are also shown. Note that for every pole inside the unit circle, there is a pole outside.
Figure 9 The reversal of operation of the elements of the matrices H(ω) and C(ω) which leads to the definition of the filtered reference signals rlmk(ω) and the filtered output signals slmk(ω). Note that each reproduced signal d̂l(ω) consists of contributions due to all K recorded signals.
Figure 10 The geometrical arrangement of reproducing sources and recording transducers used for the design of a causal, static realisation of the optimal filter matrix H. Four sources were used in the coordinate positions shown together with four sensors spaced 0.1 m apart on a square grid.
Figure 11 The impulse responses of four of the optimal filters designed using the geometry of Figure 10. The impulse responses are shown corresponding to a) H11(z), b) H12(z), c) H13(z), d) H14(z).
Figure 12 The power spectral density of the sequence ʋ1(n) input to source 1 of Figure 10 when plane waves carrying a white noise sequence are recorded by the four sensors shown in Figure 10 and processed using the optimal filter matrix H. The power spectral density is shown as a polar plot as a function of θ on a linear scale at a) 30 Hz, b) 180 Hz, c) 800 Hz, d) 1730 Hz.
Figure 13 The "reversed transfer function" form of the block diagram when the elements of H are implemented as recursive filters. All the filters are given identical recursive parts as shown in (a) which enables the block diagram to be redrawn as in (b). 1. INTRODUCTION
Research into the potential of active techniques for the control of acoustic fields has undergone a rapid expansion during the last two decades. This growth has paralleled the expansion in the capability of modern electronic devices for the digital processing of acoustic signals. The study of the subject has embraced both the "physical" aspects of the problem (which, perhaps surprisingly, were only partly understood at the beginning of the 1970's) and also the "technological" aspects of the problem. The latter have involved the development and study of novel digital signal processing techniques required specifically for the active control of sound. The fusion of the two subject disciplines of "classical" acoustics and "modern" digital signal processing has produced some exciting developments. Much of the work in this field that had been undertaken by the start of the 1990's is summarised in reference [1], which also presents a unified introduction to the two contributing subject disciplines. Reference [1] does not, however, deal with recent advances in what may be termed the active control of "structure-borne" sound. That is, the control of wave fields in elastic solids and their interaction with fluid borne sound fields. Much of the recent work in this area will be summarised in reference [2] and is also dealt with in reference [3].
The present invention stems from some work in the active control of acoustic fields, but with a rather different objective in mind than that traditionally associated with the subject. Most work to date has understandably been focused on the active suppression of unwanted acoustic noise, where the "desired" sound field is simply one whose amplitude is considerably lower than that associated with the unwanted sound. In this work we broaden the scope of the subject to include the production of a sound field which has predefined spatial and temporal characteristics. The application of interest is thus the accurate reproduction of a given sound field rather than its suppression.
Naturally, there is already a vast literature that deals with the reproduction of sound, and the subject continues to be of great technological interest in modern times, with phenomenal strides having been made in the accuracy with which acoustic signals can be recorded, stored and reproduced. Again, most of these recent advances have arisen through the application of digital techniques and have come to fruition during the period in which the active control of unwanted noise has become a practical proposition. However, most of the work in the field of sound reproduction has been directed towards the technological problem of accurate reproduction of recorded signals. Surprisingly little attention has been devoted to assessing the extent to which an acoustic field (rather than just an acoustic signal) can be faithfully reproduced.
2. THE PERFECT REPRODUCTION OF SOUND FIELDS
It is worth pointing out at the outset of these discussions that the sound field within a given spatial volume can in principle be reproduced perfectly in both space and time, given a complete description of the acoustic pressure and pressure gradient on the hypothetical surface that bounds the spatial volume. This reasoning follows from the Kirchhoff-Helmholtz integral equation which enables the sound field within a given volume V to be uniquely described by these acoustic properties on the bounding surface S. Thus an acoustic pressure field p(x, t) which satisfies the homogeneous wave equation
∇²p(x, t) − (1/co²) ∂²p(x, t)/∂t² = 0 (1)
in a medium with a sound speed co, can be described by the integral equation
[Equation (2): the Kirchhoff-Helmholtz integral expressing p(x, t) within V in terms of surface integrals over S of the particle velocity u(y, t) and the pressure p(y, t)]
In this expression, po is the density of the medium, x is the position vector of the field point contained within the volume V, the vector y defines the position on the surface S that encloses V, the distance R = |x − y| and n is the unit normal vector that points into the volume V from the surface S. A full description of the derivation of this relationship is given by Pierce [4]. Although not obvious from the form of the integral equation given above, it is well known that the two surface integrals in the equation have a clearly defined physical interpretation. The first term can be considered to be the contribution to the sound field in V that is radiated by a continuous distribution of monopole sources located on the surface S. The strength of the monopoles is determined by the particle velocity distribution u(y, t) on the surface. Similarly, the second integral can be interpreted as the sound field produced by a continuous layer of dipole sources on the surface S, their strength being determined by the pressure fluctuation p(y, t). (A description of the physical reasoning that leads to these conclusions is presented in reference [1].)
One can conclude from this well established principle of classical acoustics that, given a complete knowledge of u(y, t) and p(y, t) on a surface S that encloses V, one could perfectly reproduce p(x, t) inside V by activating an appropriate distribution of monopole and dipole sources on S. The possibility for reproducing a sound field in this way is illustrated in Figure 1. Thus one records u(y, t) and p(y, t) on S surrounding the volume V of interest. Given these recordings, one can activate at a later time, and in a different space, a continuous source layer on a surface S' that is geometrically identical to the surface S. This will result in the reproduction within V' of the sound field that previously existed within V. Note that, as illustrated in Figure 1, in reproducing sound within V', no field is reproduced outside V'. This (obviously necessary) condition also follows from the Kirchhoff-Helmholtz integral theorem, which shows that for field points x outside V, equation (2) holds with p(x, t) equal to zero. Finally, of course, one has to assume that both po and co are identical in V and V'.
However, variations in density and sound speed between V and V' are probably the least of the difficulties involved in implementing such a scheme. The recording of signals over a continuous surface and their subsequent use in activating a continuous source layer is certainly not a current technological possibility. Nevertheless, accepting that both recording and reproduction must be accomplished with discrete transducers, one is led to speculate upon how closely this scheme could be realised in practice. Previous work on active noise control has gone at least some way to answering this question. This is reviewed in reference [1] (see Chapter 9, Section 9.14). Considerable work on the discretization of continuous source layers has been undertaken by Soviet authors (see, for example, Zavadskaya et al [5], Konyaev et al [6], and Konyaev and Fedoryuk [7]). Although not entirely conclusive, the work of these authors, together with the analysis presented in reference [1], suggests that the linear separation between discrete monopole/dipole source elements used to approximate a planar continuous source layer should not be greater than λ/2, where λ is the acoustic wavelength at the frequency of interest. Applying this argument to the reproduction of a field inside a spherical volume whose diameter is D suggests that one would require approximately 4πD²/λ² discrete source elements. Thus for a sphere 10 m in diameter and a frequency of 10 kHz (λ ≈ 3.44 × 10⁻² m in air at 20 °C), in excess of 10⁶ sources would be required! However, for a sphere of 1 m in diameter and a frequency of 1 kHz, this number drops to around 10². To adopt this philosophy, even for modest volumes and frequencies, represents a task of considerable complexity.
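These estimates follow directly from the 4πD²/λ² rule quoted above; the short check below is illustrative only and is not part of the patent text.

```python
import math

def source_count(diameter_m, frequency_hz, c0=344.0):
    """Approximate number of discrete source elements needed to cover a sphere
    of diameter D at half-wavelength spacing: roughly 4*pi*D^2 / lambda^2."""
    wavelength = c0 / frequency_hz
    return 4.0 * math.pi * diameter_m ** 2 / wavelength ** 2

print(f"{source_count(10.0, 10e3):.2e}")   # ~1e6 sources: 10 m sphere at 10 kHz
print(f"{source_count(1.0, 1e3):.0f}")     # ~1e2 sources: 1 m sphere at 1 kHz
```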
3. REPRODUCTION OF A SOUND FIELD OVER A RESTRICTED SPATIAL REGION
The discussion of the last section suggests that the perfect reproduction of a sound field over a large spatial volume is not a currently realistic aim, even with the rapidly advancing technology at our disposal. The question then arises as to how existing capabilities might be best utilised to improve, in some sense, existing sound reproduction techniques. Here attention will be initially restricted to the objective of providing a single listener in a given "listening space" (see Figure 1) with an incident acoustic field that matches, as closely as possible in space and time, that sound field which would have been incident upon the listener in the "recording space". In simple terms this is the age-old objective of reproducing a restricted region of the concert hall sound field in a restricted region of the living room. The region in question is, of course, that which surrounds the listener.
An obvious starting point for an appraisal of this possibility is to undertake an analysis in the frequency domain. In fact, the approach taken here is exactly that which has already proved so useful in defining performance limits in the study of the active control of sound [1]. Here the definition is sought of the "optimal" outputs of a number of discrete acoustic sources which give, in a least squares sense, the "best fit" (in amplitude and phase) to a desired single frequency sound field. Whilst there are limitations to the extent to which conclusions arrived at in the frequency domain can be extended to the time domain, this type of analysis invariably leads to a useful assessment of the "best that can be done".
First it will be assumed that the sound field in the "recording space" consists of a single plane wave at an angular frequency ω. Second, it is assumed that an array of discrete transducers is used to record this sound field. For the sake of simplicity it will be assumed that the transducer array and the plane wave are restricted to the horizontal plane as illustrated in Figure 2. The optimisation problem and its subsequent interpretation in terms of a signal processing problem is best described with reference to Figure 3.
It is assumed that the K transducers detecting the harmonic plane wave in the recording space produce harmonic signals described by the complex numbers uk(ω) which comprise the complex vector u(ω). The objective is to reproduce these signals as closely as possible at the equivalent locations in the listening space. M sources are used to reproduce the field and their "input" signals are described by the complex numbers ʋm(ω) which comprise the complex vector v(ω). These sources produce signals d̂l(ω) at L locations in the listening space, these signals comprising the vector d̂(ω).
Here it will be assumed that the L locations in the listening space are geometrically equivalent to the K locations of the recording transducers in the recording space such that K = L and that d(ω) = u(ω). Thus the desired signal vector is exactly the recorded signal vector. In general, it is useful to define the desired signals d(ω) in terms of the recorded signals u(ω) through the more general relationship d(ω) = A(ω) u(ω). Here of course it is assumed simply that A(ω) = I, the identity matrix.
One can now find the signal vector v(ω) which minimises the sum of squared errors between the desired and reproduced signals. The quadratic cost function that is to be minimised is given by
J(ω) = eᴴ(ω) e(ω) + β vᴴ(ω) v(ω), (3) where the complex error vector e(ω) = d(ω) − d̂(ω). The cost function thus consists of the sum of the squared errors eᴴ(ω) e(ω) plus the sum of squared source input voltages vᴴ(ω) v(ω) multiplied by the factor β. The term β thus quantifies the relative weighting in the cost function given to the "effort" used in minimising the sum of squared errors. Equation (3) can be expanded to give
J(ω) = vᴴ(ω)[Cᴴ(ω) C(ω) + βI] v(ω) − vᴴ(ω) Cᴴ(ω) d(ω) − dᴴ(ω) C(ω) v(ω) + dᴴ(ω) d(ω) (4)
Since [Cᴴ(ω) C(ω) + βI] must be a positive definite matrix (i.e. vᴴ(ω)[Cᴴ(ω) C(ω) + βI] v(ω) > 0 for all v(ω) ≠ 0), then this function must have the unique minimum defined by [1]
vo(ω) = [Cᴴ(ω) C(ω) + βI]⁻¹ Cᴴ(ω) d(ω) (5)
Jo(ω) = dᴴ(ω)[I − C(ω)[Cᴴ(ω) C(ω) + βI]⁻¹ Cᴴ(ω)] d(ω) (6)
where vo(ω) is the optimal vector of source input signals and Jo(ω) is the minimum value of the cost function.
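As a minimal numerical sketch of equations (5) and (6) (not part of the patent text; the matrix C and vector d below are random placeholders standing in for a real transfer matrix and desired field), the regularised least squares optimum can be computed directly:

```python
import numpy as np

def optimal_source_inputs(C, d, beta=0.0):
    """Single-frequency least squares optimum of equations (5) and (6):
    v_o = [C^H C + beta*I]^-1 C^H d, together with the minimum cost J_o."""
    M = C.shape[1]
    A = C.conj().T @ C + beta * np.eye(M)
    v_o = np.linalg.solve(A, C.conj().T @ d)
    e = d - C @ v_o                                   # residual reproduction error
    J_o = float(np.real(e.conj().T @ e + beta * v_o.conj().T @ v_o))
    return v_o, J_o

# Illustrative use: L = 64 reproduction points, M = 4 sources (cf. Figures 4 and 5)
rng = np.random.default_rng(0)
C = rng.standard_normal((64, 4)) + 1j * rng.standard_normal((64, 4))
d = rng.standard_normal(64) + 1j * rng.standard_normal(64)
v_o, J_o = optimal_source_inputs(C, d, beta=1e-3)
print(v_o.shape, J_o)
```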
This analysis has been used by Kirkeby and Nelson [8] to investigate the effectiveness of a number of geometrical arrangements of recording and reproducing transducers. One such specific arrangement is illustrated in Figure 4. This consists of an array of four (point monopole) sources spaced on a 90° arc. The recorded signals u(ω) are assumed to be those produced by a harmonic plane wave travelling at an angle θ to the x1- axis of the coordinate system. The complex pressure produced by such a wave can be written as
p(x1, x2, ω) = exp[−j(ω/co)(x1 cos θ + x2 sin θ)] (7)
where ω/co is the wavenumber and the wave is assumed to have unit amplitude. Thus it is assumed that the recorded signals (and thus the desired signals) are given by
uk(ω) = exp[−j(ω/co)(x1k cos θ + x2k sin θ)] (8)
where the position of the k'th recording sensor is defined by the coordinates (x1k, x2k). In reproducing the sound field, we assume that the elements of the matrix C (ω) of frequency response functions are given by
Clm(ω) = (po/4πRlm) exp(−jωRlm/co) (9)
where Rlm is the distance between the l'th point at which reproduction is sought and the m'th source used for reproduction. It is thus assumed that the reproduced signals are exactly the sound pressure fluctuations that would be produced by point monopole sources having volume accelerations equal to ʋm(ω), the source input signals. Furthermore, it is implicitly assumed that the listening space is anechoic.
4. RESULTS OF THE FREQUENCY DOMAIN ANALYSIS
Some results of using the solution given by equations (5) and (6) with β = 0 and with the arrangement shown in Figure 4 are illustrated in Figure 5. This shows the value of (Jo/L)½ where L = K is the total number of recorded signals (64 in this case) as a function of frequency and the angle of incidence θ of the plane wave. First note that for angles of incidence within the range 45° to 135°, the normalised error always remains reasonably low. This range of incidence angles of course lies within the angle subtended at the origin of the coordinate system by the array of sources. The normalised error is also obviously smallest when the angle of incidence of the plane wave coincides with the angle subtended by each of the individual sources. There is also a general trend of increasing error with increasing frequency and, at high frequencies especially, as one would expect, the normalised error rapidly approaches unity outside the range of incidence angles subtended by the sources.
5. REPRODUCTION OF THE PROPAGATION DIRECTION OF RECORDED WAVE FIELDS
The analysis of the last section has demonstrated that there are distinct limitations to the degree to which a sound field can be accurately reproduced even over a relatively small spatial region. A more modest objective, that can be investigated within the same analytical framework, is that of attempting to ensure that the directional properties of the sound field at a point (or small region of space) are preserved in the reproduced field. Thus, simply speaking, one wishes to record the field with a number of sensors close to the point of interest and process those signals such that the direction of propagation of the waves is, as far as possible, reproduced at an equivalent point in the listening space. This objective is central to the operation of "surround sound" or "ambisonic" [10] systems. Here it will be shown that the least squares solution given above automatically ensures that directional information will be well reproduced.
Consider the geometry illustrated in Figure 6. This shows a reproduction system in accordance with the present invention which uses 12 loudspeakers to surround a central array of 16 sensors spaced uniformly on a grid that is only 0.045 m square. Assume that these sensors record signals due to a harmonic plane wave at an angle θ, exactly as described in Section 3. The source inputs v(ω) necessary to ensure that the cost function J(ω) is minimised can again be calculated by using the solution given by equation (5). The results are shown in Figure 7 which shows the modulus of the signals ʋm(ω) for just one source and for all the sources as a function of the angle of incidence θ of the recorded plane wave. Results are presented at frequencies of 100 Hz, 1 kHz and 10 kHz. The important feature of these results is that whatever the frequency or angle of incidence of the recorded wave, it is always the source closest to this angle of incidence that produces the dominant output. For waves whose angle of incidence falls exactly between two sources, the two sources have roughly equal outputs, these being greater than those of any of the other sources. The least squares solution therefore always ensures that the recorded sound will at least be radiated from the correct direction in the reproduced field. A small value of β (given by ε trace[Cᴴ(ω) C(ω)] with ε = 0.001) was used in order to improve the conditioning of the solution. As shown by the results illustrated at 100 Hz, the solution "blows up" at low frequencies with β = 0. There is clearly scope for further investigation of the number of sensors and sources necessary in such a system to ensure the most accurate reproduction of directional information with minimum processing power.
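This behaviour is straightforward to reproduce numerically. The sketch below follows the geometry of Figure 6 (sixteen sensors on a 0.045 m square grid, twelve surrounding monopole sources) and the regularised solution of equation (5); the 2 m radius of the source circle is an illustrative assumption, since it is not stated in the text, and the density ρo is taken as 1.21 kg/m³.

```python
import numpy as np

c0, rho0 = 344.0, 1.21            # sound speed from the text; density assumed
f = 1000.0                        # one of the frequencies shown in Figure 7
k = 2 * np.pi * f / c0
eps = 1e-3                        # regularisation parameter epsilon from the text

# 16 sensors on a 4 x 4 grid forming a 0.045 m square centred on the origin (Figure 6)
g = np.linspace(-0.0225, 0.0225, 4)
sensors = np.array([(x1, x2) for x1 in g for x2 in g])

# 12 sources equally spaced on a surrounding circle (radius of 2 m assumed)
phi = 2 * np.pi * np.arange(12) / 12
sources = 2.0 * np.column_stack((np.cos(phi), np.sin(phi)))

# Monopole transfer matrix C_lm(omega) of equation (9)
R = np.linalg.norm(sensors[:, None, :] - sources[None, :, :], axis=2)
C = (rho0 / (4 * np.pi * R)) * np.exp(-1j * k * R)

def optimal_inputs(theta):
    """Recorded plane wave signals (equation (8)) and the regularised least
    squares source inputs of equation (5) for a wave travelling at angle theta."""
    u = np.exp(-1j * k * (sensors[:, 0] * np.cos(theta) + sensors[:, 1] * np.sin(theta)))
    beta = eps * np.trace(C.conj().T @ C).real
    A = C.conj().T @ C + beta * np.eye(C.shape[1])
    return np.linalg.solve(A, C.conj().T @ u)          # d(omega) = u(omega) here

# The source nearest the direction the wave arrives from should dominate (Figure 7)
for theta_deg in (0, 45, 90):
    v = optimal_inputs(np.radians(theta_deg))
    print(theta_deg, int(np.argmax(np.abs(v))), np.round(np.abs(v), 2))
```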
6. FREQUENCY DOMAIN CHARACTERISTICS OF THE OPTIMAL FILTERS
The frequency domain derivation of the source input signals necessary for optimal reproduction of the sound field can also be interpreted as a technique for designing a matrix of linear filters which is used to operate on the recorded signals in order to produce the source input signals. This can most easily be understood with reference to Figure 3: the filter matrix H operates on the recorded signal vector u in order to produce the source input signal vector v. Here the realisability of the filters in this matrix will be considered, again by using an analysis in the frequency domain. However, it will prove convenient to assume that the filters operate in discrete time on sampled input signals. The frequency domain cost function to be minimised can be written as
J(e^jωT) = eᴴ(e^jωT) e(e^jωT) + β vᴴ(e^jωT) v(e^jωT) (10)
where e(e^jωT) and v(e^jωT) are vectors containing the Fourier transforms of the sampled error signals and sampled source input signals. It follows that the minimum value of this cost function (see equation (5)) is produced by the source input vector
vo(e^jωT) = [Cᴴ(e^jωT) C(e^jωT) + βI]⁻¹ Cᴴ(e^jωT) d(e^jωT) (11)
This therefore relates vo(e^jωT) to the desired signal vector d(e^jωT). However, according to Figure 3, the vector d(e^jωT) is related to the recorded signal vector u(e^jωT) by d(e^jωT) = A(e^jωT) u(e^jωT) where the filter matrix A(e^jωT) can be chosen at will. It therefore follows that
vo(e^jωT) = [Cᴴ(e^jωT) C(e^jωT) + βI]⁻¹ Cᴴ(e^jωT) A(e^jωT) u(e^jωT) (12)
If it is now assumed that the optimal source input signals vo(e^jωT) are produced by operating on u(e^jωT) with a matrix of "optimal filters" Ho(e^jωT) such that
vo(e^jωT) = Ho(e^jωT) u(e^jωT) (13)
then it follows that the optimal filter matrix is given by
Ho(e^jωT) = [Cᴴ(e^jωT) C(e^jωT) + βI]⁻¹ Cᴴ(e^jωT) A(e^jωT) (14)
For the purposes of appraising the realisability of the filters in this matrix the substitution z = e^jωT will be made, where z is the z transform variable and T is the sampling interval. It will also be assumed that the transfer function Clm(z) relating the signal at the l'th location in the reproduced field to the m'th source input has the form
Clm(z) = (po/4πRlm) z^−Δlm (15)
where Δlm is the number of samples of delay produced by the acoustic propagation from the m'th source to the l'th field location; the transfer function is again simply that which relates the pressure at the field location to the volume acceleration of the source. For the purposes of this analysis it will also be assumed that Δlm is always an integer number of samples delay.
A particular geometry consisting of 2 sources and 3 sensors is studied in detail in Appendix 1. It is demonstrated there that the causality of the optimal filters can be ensured by choosing the matrix A(z) to consist of a diagonal matrix of "modelling delays" of Δ samples duration such that A(z) = I z^−Δ. The m,k'th element of the matrix H(z) takes the general form
Hmk(z) = [z^−Δdet / (a0 + a1 z^−1 + ⋯ + aN z^−N)] fmk(z) z^−Δ (16)
Note that the term in the square brackets is common to all the elements of H and is given by 1/det[Cᴴ(z) C(z) + βI]. It is demonstrated in Appendix 1 that the inverse of this determinant can be expressed in this form, where Δdet is the largest positive exponent of z that results from expanding the determinant. The order N of the denominator polynomial in equation (16) is given by M × K. Evaluation of the adjoint of the matrix [Cᴴ(z) C(z) + βI] produces elements fmk(z) of this adjoint matrix which have the general form
fmk(z) = b1 z^Δ1 + b2 z^Δ2 + ⋯ (17)
If Δadj denotes the maximum positive value of any of the Δi appearing in any of the elements fmk(z) of the adjoint matrix, then it is clear that all the filters comprising H(z) can be made causal by choosing the modelling delay Δ such that Δ > (Δadj − Δdet).
The stability, however, of all these filters is determined by the denominator polynomial in equation (16). Thus all the zeros of this polynomial (the poles of the system) must lie within the unit circle in the complex z-plane. However, the particular form of the determinant of the matrix [Cᴴ(z) C(z) + βI] suggests that any system designed in the frequency domain, which uses more sensors for recording than sources for reproduction, will not yield a stable system in the time domain. In the particular case of 3 sensors and 2 sources that is examined in detail in Appendix 1, note that equation (A1.6) can be written in the form
[Equation (18): the denominator polynomial, a symmetric function of z and 1/z containing terms in z^±d1, z^±d2 and z^±d3]
where the delays d1, d2 and d3 are defined by equations (A1.7). This particular form of the denominator polynomial has zeros (and thus poles of the system) which are arranged in pairs, with each zero inside the unit circle in the z-plane being associated with a zero outside the unit circle. Thus for any zero of equation (18) in the z-plane at z = r0 e^jθ0, there will also be a corresponding zero at z = (1/r0) e^jθ0, i.e., at the conjugate reciprocal location in the z-plane. That this must be so follows directly from the form of equation (18) which still holds if z is replaced by (1/z*). The two source/three sensor geometry is illustrated in Figure 8 together with a plot of the z-plane showing the corresponding zeros of equation (18). These zeros are thus the poles of all the filters Hmk(z); the existence of poles outside the unit circle implies that all the elements of H(z) will be unstable. This also appears to be the case for any system which involves inversion of the matrix Cᴴ(z) C(z), since this product seems always to result in a determinant having the general form of equation (18). It also appears, however, that a system which uses the same number of sources and sensors can be made stable, depending upon the geometry chosen. For example, in the case of a 2 source-2 sensor system it can readily be shown that the filter matrix H(z) can be made stable (and causal) when the optimal value chosen is simply given by Ho(z) = C⁻¹(z) A(z), again depending on the choice of geometrical arrangement. Although a thorough investigation of the realisability of the optimal filters has yet to be undertaken, preliminary investigations also suggest that "square" systems consisting of 4 sources and 4 sensors can also be made stable. However, the general rules governing the choice of geometry have yet to be established. In cases where the frequency domain analysis suggests that the filters required are unrealisable, it is still always possible to seek a "least squares" solution to the problem in the time domain. This involves finding the filters that are constrained to be causal and stable and which minimise the mean square error between the desired and reproduced signals. This approach is discussed in the next section.
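The reciprocal pairing of the poles is easy to check numerically. The sketch below is not from the patent: the source and sensor positions and the sample rate are illustrative stand-ins for the Figure 8 geometry, whose exact coordinates are not reproduced in this text. It builds det[Cᴴ(z)C(z)] for a pure-delay two source, three sensor model and prints the moduli of its roots, which appear in reciprocal pairs r and 1/r.

```python
import numpy as np
from collections import defaultdict

c0, rho0, fs = 344.0, 1.21, 2000.0      # fs chosen only to keep the integer delays small

# Illustrative 2 source / 3 sensor arrangement on the x1 axis (in the spirit of Figure 8)
sources = np.array([-2.0, 2.0])
sensors = np.array([-0.3, 0.0, 0.4])

R = np.abs(sensors[:, None] - sources[None, :])       # distances R_lm
a = rho0 / (4 * np.pi * R)                            # monopole amplitudes of equation (15)
Dlm = np.rint(R * fs / c0).astype(int)                # integer delays Delta_lm

def lmul(p, q):
    """Multiply two Laurent polynomials stored as {exponent: coefficient} dicts."""
    out = defaultdict(float)
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            out[e1 + e2] += c1 * c2
    return out

# Elements of C^H(z) C(z): P_mn(z) = sum_l a_lm a_ln z^(Delta_lm - Delta_ln)
P = [[defaultdict(float) for _ in range(2)] for _ in range(2)]
for m in range(2):
    for n in range(2):
        for l in range(3):
            P[m][n][int(Dlm[l, m] - Dlm[l, n])] += a[l, m] * a[l, n]

det = defaultdict(float)
for e, c in lmul(P[0][0], P[1][1]).items():
    det[e] += c
for e, c in lmul(P[0][1], P[1][0]).items():
    det[e] -= c

# Shift by the most negative exponent to obtain an ordinary polynomial and find its roots
e_min, e_max = int(min(det)), int(max(det))
coeffs = [det.get(e, 0.0) for e in range(e_max, e_min - 1, -1)]    # highest power first
moduli = np.sort(np.abs(np.roots(coeffs)))
print(moduli)            # for every root inside the unit circle there is one outside
```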
7. PRACTICAL FILTER DESIGN METHODS; FIR FILTERS
Whilst the analyses of the previous sections have succeeded in throwing some light on the nature of the filters required for the processing of the recorded signals, filters designed on a purely analytical basis will not make use of the full capability of modern signal processing techniques. The very considerable drawback associated with the direct application of the theory outlined above is, of course, that it assumes both ideal sources for reproduction and an ideal (anechoic) response of the listening space in which the sound is reproduced. Both of these factors can, in practical applications, be compensated for by using a simple on-line filter design procedure. Thus, the effective inversion of the "on-axis" frequency response functions of the loudspeakers used for reproduction can be accomplished relatively easily [11]. The effective inversion of the response of the space in which the sound is reproduced can also be accomplished, at least on a pointwise basis [12] but it is perhaps debatable whether in the majority of applications this is a worthwhile procedure. It is well known that human hearing exhibits a well-defined "precedence effect" [13] and localization of sources will very much be determined by the earliest arriving sound. In some cases therefore, it may be of benefit simply to disregard the response of the listening space and focus effort on obtaining accurate reproduction of the recorded signals by using the direct field radiated by the sources used for reproduction.
In any event, it is in principle relatively easy to deduce the matrix H of optimal filters by using the recorded signals and by making measurements of the reproduced field, the latter being undertaken either under anechoic conditions or in the listening space to be used. It is firstly assumed that the matrix H consists of FIR filters. Thus, although the analysis of the previous section has demonstrated that H has an intrinsically recursive structure, it is assumed that a sufficient number of coefficients are used in each of the elements of H to ensure that their impulse responses are of the requisite duration.
The analysis presented below follows that in reference [1]. First the "filtered reference signals" are defined. These are the signals generated by passing the k'th recorded signal uk(ω) through the transfer function Clm(ω) which comprises the l,m'th element of the matrix C(ω). This signal is denoted rlmk(ω). The generation of the filtered reference signal can be explained with reference to the block diagram of Figure 3. Since the system is linear, the operation of the elements of the transfer functions H(ω) and C(ω) can be reversed. This is illustrated in Figure 9. In discrete time, the sampled signal reproduced at the l'th location in the sound field can be written as
Figure imgf000020_0001
where the signal slmk(n) is defined by
slmk(n) = Σi hmk(i) rlmk(n - i)     (i = 0, 1, ..., I - 1)
and hmk(i) is the i'th coefficient of the FIR filter processing the k'th recorded signal to produce the m'th source input signal (see Figure 8). Each of the FIR filters is assumed to have an impulse response of I samples in duration. Thus the signal
slmk(n) can also be written as

slmk(n) = hmkT rlmk(n)
where the vectors hmk and rlmk(n) are defined by
hmk = [hmk(0)  hmk(1)  ...  hmk(I - 1)]T

rlmk(n) = [rlmk(n)  rlmk(n - 1)  ...  rlmk(n - I + 1)]T
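These discrete-time relationships can be made concrete with a short Python/NumPy sketch (the array sizes, impulse responses and filter coefficients below are arbitrary illustrative assumptions, not values from the text): it forms the filtered reference signals rlmk(n) and then the reproduced signals by convolution.

import numpy as np

rng = np.random.default_rng(0)
L_loc, M_src, K_sens, I_taps, N = 3, 2, 2, 16, 512      # arbitrary sizes for illustration

u = rng.standard_normal((K_sens, N))                     # recorded signals u_k(n)
c = rng.standard_normal((L_loc, M_src, 8)) * 0.1         # impulse responses of C_lm(z) (assumed known)
h = rng.standard_normal((M_src, K_sens, I_taps)) * 0.1   # FIR filter coefficients h_mk(i)

# Filtered reference signals r_lmk(n): the k'th recorded signal passed through C_lm(z)
r = np.zeros((L_loc, M_src, K_sens, N))
for l in range(L_loc):
    for m in range(M_src):
        for k in range(K_sens):
            r[l, m, k] = np.convolve(u[k], c[l, m])[:N]

# Reproduced signal at the l'th location: sum over m and k of s_lmk(n) = sum_i h_mk(i) r_lmk(n - i)
y_hat = np.zeros((L_loc, N))
for l in range(L_loc):
    for m in range(M_src):
        for k in range(K_sens):
            y_hat[l] += np.convolve(r[l, m, k], h[m, k])[:N]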
The following composite vectors are now defined
Figure imgf000021_0006
Figure imgf000021_0005
Figure imgf000021_0001
together with the matrix
Figure imgf000021_0002
These definitions are used in Appendix 2 to find the solution for the optimal set of coefficients in the composite vector h that minimises the time averaged sum of squared errors between the desired and reproduced signals. The cost function minimised is given by
J = E[eT(n) e(n)] + β E[ʋT(n) ʋ(n)]     (28)
where the error vector
e(n) = d(n) - R(n) h
and the second term in the cost function weights the effort associated with the source input signals. It is demonstrated in Appendix 2, that if all the recorded signals comprising the vector u(n) are assumed to be mutually uncorrelated white noise sequences with a mean square value of σ2, then equation (28) reduces to the form
J = E[eT(n) e(n)] + β σ2 hT h
The positive definiteness of the matrix {E[RT(n) R(n)] + β σ2 I} ensures the existence of a unique minimum of this function. This is defined by the optimal composite tap weight vector and associated minimum value of J given by
ho = {E[RT(n) R(n)] + β σ2 I}-1 E[RT(n) d(n)]     (30)

Jmin = E[dT(n) d(n)] - E[dT(n) R(n)] ho     (31)
Equation (30) therefore defines the optimal values of all the coefficients in the filters that comprise the matrix H. One way to determine these coefficients is obviously by direct inversion of the matrix in equation (30). However this matrix is clearly of high order, being of dimension I × M × K. Another approach is to use the LMS algorithm, extended for use with multiple errors by Elliott and Nelson [14,15]. It is demonstrated in Appendix 2 that the algorithm can be written in the form
h(n + 1) = γ h(n) + α RT(n) e(n)     (32)
where α is a convergence coefficient and γ is a "leak coefficient" whose value is directly related to the penalisation of effort associated with the parameter β.
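A minimal sketch of an adaptive loop implementing this update in Python is given below; the step size, leak value and the way R(n) is assembled from the filtered reference signals are my reading of the algorithm rather than a definitive implementation. It continues the notation of the earlier sketch, in which r has shape (L, M, K, N).

import numpy as np

def multiple_error_lms(r, d, I_taps, alpha=1e-3, gamma=0.9999):
    # Leaky multiple-error LMS: h <- gamma * h + alpha * R(n)^T e(n)
    # r : filtered reference signals, shape (L, M, K, N)
    # d : desired signals, shape (L, N)
    # Returns the composite tap weight vector h of length M*K*I_taps.
    L, M, K, N = r.shape
    h = np.zeros(M * K * I_taps)
    for n in range(I_taps, N):
        # Row l of R(n) stacks the I most recent samples of r_lmk for every (m, k) pair
        R_n = np.stack([
            np.concatenate([r[l, m, k, n - I_taps + 1:n + 1][::-1]
                            for m in range(M) for k in range(K)])
            for l in range(L)
        ])
        e_n = d[:, n] - R_n @ h          # error vector e(n) = d(n) - R(n) h
        h = gamma * h + alpha * (R_n.T @ e_n)
    return h

With r generated as in the earlier sketch and d taken, for example, as delayed versions of the recorded signals, the returned vector approaches the regularised least squares solution, provided alpha is chosen small enough for stable convergence.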
8. THE APPLICATION OF AN FIR FILTER MATRIX
The above on-line filter design technique has been used successfully in the practical implementation of a system for reproducing signals recorded at two points in space by using two sources for reproduction [16]. Full details of this "cross-talk cancellation system" are given in reference [16] together with measurements of the spatial effectiveness of the technique. Some other examples of the application of this filter design method are also presented in reference [17]. As a further illustration of the use of this least squares technique in the time domain, it has been used in a computer simulation of a system in accordance with the invention to design a causal, stable realisation of the filter matrix H used to operate on the signals recorded by four sensors in order to provide the inputs to four sources used to "reconstruct optimally" the direction of arrival of the waves in the region in which the recordings were made. Note that the four sensors are placed in a square array of dimension 0.1 m, as illustrated in Figure 10. The effective sample rate used was 34 kHz. This enabled the matrix C(z) to be approximated to good accuracy by transfer functions of the form of equation (15), with Δlm given by the closest integer number of samples in the delay Rlm/c0, where c0 = 344 m/s. The delays Δlm were thus all in the range between 270 and 290 samples and the matrix A(z) was assumed to be z^-Δ I with the modelling delay Δ set equal to 350 samples. Each of the filters in H(z) was assumed to have 128 coefficients.
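The delay matrix used in this simulation can be reproduced approximately by the short calculation below. Only the 0.1 m square array, the 34 kHz sample rate and c0 = 344 m/s are taken from the text; the source positions, and the explicit sample-rate factor in the delay, are assumptions made for illustration.

import numpy as np

fs, c0 = 34000.0, 344.0                       # sample rate and sound speed from the text
half = 0.05                                    # 0.1 m square array of reproduction locations
sensors = np.array([[-half, -half], [half, -half], [half, half], [-half, half]])

# Hypothetical source positions on a circle around the array (radius chosen so that
# the delays fall in the 270-290 sample range quoted in the text).
radius = 2.8
angles = np.deg2rad([45.0, 135.0, 225.0, 315.0])
sources = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)

R_lm = np.linalg.norm(sensors[:, None, :] - sources[None, :, :], axis=2)   # distances
delta_lm = np.rint(fs * R_lm / c0).astype(int)  # integer sample delays of the C_lm(z) elements
print(delta_lm)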
Figure 11 shows the impulse responses of the filters H11(z), H12(z), H13(z) and H14(z); i.e., the filters that operate on the four recorded signals u1(n) to u4(n) and whose outputs are added together to produce the signal ʋ1(n) input to source 1.
Having designed these filters by using the algorithm in equation (32), their effectiveness in producing the appropriate value of ʋ1(n) was evaluated by assuming that the recorded signals u1(n) to u4(n) were produced by plane waves falling on the sensor array at an angle θ (Figure 2). The waves were assumed to produce a white noise sequence, with a power spectral density of unity, the same sequence being recorded by each sensor but all differing by delays that are a function of only θ. The power spectral density Sʋ1ʋ1(ω,θ) of the sequence ʋ1(n) could then be calculated from
Sʋ1ʋ1(ω,θ) = |H11(e^jω) e^-jωΔ1(θ) + H12(e^jω) e^-jωΔ2(θ) + H13(e^jω) e^-jωΔ3(θ) + H14(e^jω) e^-jωΔ4(θ)|2     (33)
where Δ1(θ) to Δ4(θ) are the delays (in integer numbers of samples) produced in the white noise sequence recorded by the sensors when the incident plane waves arrive at an angle θ. Figure 12 shows Sʋ1ʋ1(ω,θ), the power spectral density of the input signal to source 1, as a function of both frequency and the angle of incidence θ of the recorded waves. Clearly at very low frequencies (30 Hz), the source produces an output irrespective of θ, which one might anticipate when the distance between the sensors is very small compared to the wavelength of the incident field. At frequencies up to about 1500 Hz the source only produces an output for waves falling in the range of angles of incidence which can effectively be reproduced by the source. Above this frequency, the effect of inadequate spatial sampling of the field becomes apparent and the source will produce an output for waves having angles of incidence that the source cannot hope to reproduce. These results again emphasise the requirement to comply with the sampling theorem by having the recording sensors spaced apart by less than one half (and preferably one third) of an acoustic wavelength at the highest frequency of interest. Nevertheless, the results show considerable promise and the technique clearly offers scope for refinement.
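A sketch of this calculation (my reading of equation (33)) is given below; it assumes the filters h1[k] that produce the source 1 input from the k'th recorded signal have already been designed as FIR filters, and it reuses the sensor coordinates of the earlier geometry sketch.

import numpy as np

def psd_vs_angle(h1, sensors, theta, freqs, fs=34000.0, c0=344.0):
    # Power spectral density of the input to source 1 for a unit-variance white
    # plane wave arriving at angle theta (a sketch of equation (33)).
    direction = np.array([np.cos(theta), np.sin(theta)])
    # Integer sample delays produced at each sensor by the incident plane wave
    delays = np.rint(fs * (sensors @ direction) / c0)
    S = np.zeros(len(freqs))
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f / fs
        # H_1k evaluated on the unit circle via the DTFT of the FIR coefficients
        H1k = np.array([np.sum(h1[k] * np.exp(-1j * w * np.arange(len(h1[k]))))
                        for k in range(len(h1))])
        S[i] = np.abs(np.sum(H1k * np.exp(-1j * w * delays))) ** 2
    return S

Evaluating this over a grid of frequencies and incidence angles produces a surface of the kind plotted in Figure 12, and should exhibit the spatial aliasing behaviour described above once the sensor spacing exceeds half a wavelength.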
9. PRACTICAL FILTER DESIGN METHODS; IIR FILTERS
Whilst the adaptive design of FIR filters is clearly a successful approach to the problem, since the filters required are intrinsically recursive, one is also led to consider the use of adaptive recursive filters. These offer considerable scope for improvements in the efficiency with which the filters can be implemented. There are, however, difficulties involved in their design. There are essentially two classes of adaptive recursive filter: "output error" and "equation error" types (see the review by Shynk [18]). The application of these classes of filter is considered in Appendix 3 and Appendix 4 respectively. In the case of "output error" adaptive filters one simply replaces each of the elements of H with a recursive filter. It is shown in Appendix 3 that, by making some fairly gross assumptions, an algorithm can be derived that is directly analogous to that given by equation (32). This amounts to the multi-channel generalisation of the simple algorithm presented by Feintuch [19], which was first generalised for use with multiple errors by Elliott and Nelson [20]. However, the use of filters with this architecture does not guarantee either the existence of a unique minimum or a stable convergence. A more attractive alternative is that described in Appendix 4, in which the "equation error" approach is used together with the filter architecture illustrated in Figure 13. In this case all the filters are assumed to have common poles, consistent with the analysis of Section 4. In addition, the quadratic cost function minimised has a unique minimum, although there may be problems with bias in the solutions reached, especially if high levels of noise are present.
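The common-pole architecture can be illustrated with the following sketch, in which each element of H is the ratio of an FIR forward part Amk(z) and a single recursive part B(z) shared by all elements; the coefficient values are placeholders, not designed filters.

import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
M, K, N = 4, 4, 1024
u = rng.standard_normal((K, N))                 # recorded signals u_k(n)

A = rng.standard_normal((M, K, 32)) * 0.05      # forward FIR parts A_mk(z) (placeholders)
b_common = np.array([1.0, -0.5, 0.1])           # shared recursive part: common denominator polynomial

v = np.zeros((M, N))                            # source input signals v_m(n)
for m in range(M):
    forward = sum(lfilter(A[m, k], [1.0], u[k]) for k in range(K))
    v[m] = lfilter([1.0], b_common, forward)    # apply the common recursive part once per source

Because the recursive part is common, it is applied only once per source signal rather than once per source/sensor pair, which is one source of the implementation efficiency referred to above.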
As shown in Appendix 4, one first defines the coefficients of each of the filters Amk and B illustrated in Figure 13 by
Figure imgf000025_0001
Figure imgf000025_0002
A composite vector of coefficients is then defined (analogous to h defined by equation (24)) such that
Figure imgf000025_0003
Similarly, by analogy with the definition of rl(n) given by equation (25), one defines the composite vector
Figure imgf000025_0004
and the matrix Q(n) (by analogy with R(n)) such that
Figure imgf000025_0005
The composite vector of coefficients g is then found that minimises the cost function
J = E[eeT(n) ee(n)] + β σ2 gT g     (39)
where ee(n) is the "equation error" vector defined by
ee(n) = d(n) - Q(n) g     (40)
As argued in Appendix 4, an algorithm which can be used to minimise the cost function defined by equation (39) is given by
g(n + 1) = γ g(n) + α QT(n) ee(n)     (41)
which is clearly analogous to equation (32). Initial simulations of this algorithm have been undertaken and demonstrate that the algorithm can be made to converge, but its advantages in efficiency of filter implementation have yet to be clearly demonstrated.
APPENDIX 1
THE OPTIMAL FILTER MATRIX IN THE 2-SOURCE/ 3-LOCATION CASE
The structure of Ho(z) can now be examined with reference to a specific example. Assume that 2 sources are used to reproduce the field at 3 locations. The matrix C(z) then has the form
Figure imgf000026_0002
and the matrix to be inverted is given by
Figure imgf000026_0003
The optimal filter matrix can be written as
Figure imgf000027_0004
where the symbols "det" and "adj" refer to the determinant and adjoint of the matrix respectively. The numerator of this expression is a matrix of FIR filters. These filters can be made causal by choosing A(z) to consist of a diagonal matrix of "modelling delays" [16] having the transfer function z^-Δ. Thus if we choose A(z) to be z^-Δ I, such that the desired signals d(z) are simply delayed versions of the recorded signals u(z), the numerator matrix reduces to the form
Figure imgf000027_0003
where, for example,
Figure imgf000027_0002
Thus, irrespective of the values of the delays Δlm, provided the modelling delay Δ is chosen to be sufficiently large, the filters in the numerator matrix can be made causal, since a term of the form z^-(Δ - Δ11) will represent a delay in discrete time provided Δ > Δ11.
The denominator of equation (A1.3), given by the determinant of [CH(z) C(z) + β I], also has an influence on the choice of modelling delay Δ. The realisability of the filters in Ho(z) is also dictated by the form of this determinant. The zeros of the determinant will give the poles of the filters in Ho(z). Provided these poles lie inside the unit circle, the filters will be stable. In the specific case considered here,
Figure imgf000027_0001
Figure imgf000028_0004
where the delays in this expression are given by
Figure imgf000028_0003
Note that the determinant inevitably contains terms such as z^d1 which represent a forward shift in time (or, if d1 is negative for a given geometry, z^-d1 will represent a forward shift). The determinant can, however, be reduced to a polynomial in only the backward shift operator z^-1 through multiplication by a term z^-Δdet, where Δdet is equal to the largest positive value of d1, d2 or d3. Thus the reciprocal of the determinant can be written as
Figure imgf000028_0001
where the coefficients in this expression are given by
Figure imgf000028_0002
Thus the appearance of the term z^-Δdet in the numerator of this expression reduces the value of the modelling delay Δ required to ensure that the matrix of filters remains causal. The stability of the system is now determined by finding the roots of the denominator polynomial in equation (A1.8). Note that this is determined entirely by the geometry of the system used for recording and reproduction.
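Since stability requires all of these roots to lie strictly inside the unit circle, the check can be automated once the coefficients of equation (A1.9) have been evaluated for a candidate geometry. The helper below is generic; the example coefficient vector is an arbitrary placeholder rather than the actual determinant of any particular arrangement.

import numpy as np

def is_stable(denominator_coeffs):
    # True if all zeros of the polynomial (coefficients given in descending powers of z,
    # i.e. the denominator of (A1.8) cleared of negative powers) lie inside the unit circle.
    roots = np.roots(denominator_coeffs)
    return bool(np.all(np.abs(roots) < 1.0))

# Example: a hypothetical denominator 1 + 0.3 z^-4 + 0.3 z^-7, cleared to z^7 + 0.3 z^3 + 0.3
coeffs = np.zeros(8)
coeffs[0], coeffs[4], coeffs[7] = 1.0, 0.3, 0.3
print(is_stable(coeffs))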
APPENDIX 2
ADAPTIVE FIR FILTERS
Equation (21) of the main text can be rewritten
Figure imgf000029_0006
by using the definitions of the composite vectors h and rl(n) given in equations (24) and (25). Furthermore, if we define the composite vector
Figure imgf000029_0005
as
Figure imgf000029_0001
one can write
Figure imgf000029_0002
where the matrix R(n) is defined by
Figure imgf000029_0003
The optimal value of the composite tap weight vector h is now sought which minimises the time averaged sum of the squared error signals and the squared source input signals, the latter term being included in order to penalise the "effort" associated with the source input signals at the optimal solution. The following cost function is minimised:
J = E[eT(n) e(n)] + β E[ʋT(n) ʋ(n)]     (A2.5)
Note that both the contributions to the cost function can be written in terms of the composite tap weight vector h. Thus using equation (A2.3) shows that the vector of sampled error signals can be written as
e(n) = d(n) - R(n) h     (A2.6)
In addition, the m'th sampled source input signal can be written as
ʋm(n) = Σk hmkT uk(n)     (A2.7)
where the vectors uk(n) are the recorded signal sequences defined by
uk(n) = [uk(n)  uk(n - 1)  ...  uk(n - I + 1)]T     (A2.8)
Defining the composite vectors
Figure imgf000030_0004
Figure imgf000030_0003
leads to the expression
Figure imgf000030_0002
If all the recorded signals uk(n) are modelled as uncorrelated white noise sequences, all having a mean squared value of σ2, then E[w(n) wT(n)] = σ2 I.
Under these conditions
E[ʋT(n) ʋ(n)] = σ2 hT h     (A2.12)
where h is the composite tap weight vector defined by equation (24) of the main text. Thus the cost function for minimisation can be written as
J = E[eT(n) e(n)] + β σ2 hT h     (A2.13)
which on substitution of equation (A2.6) reduces to
J = hT {E[RT(n) R(n)] + β σ2 I} h - 2 hT E[RT(n) d(n)] + E[dT(n) d(n)]     (A2.14)
The minimum of this cost function is defined by
ho = {E[RT(n) R(n)] + β σ2 I}-1 E[RT(n) d(n)]     (A2.15)

Jmin = E[dT(n) d(n)] - E[dT(n) R(n)] ho     (A2.16)
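For modest filter lengths this solution can be computed directly by replacing the expectations with time averages over the available data, along the lines of the sketch below (the function signature and the explicit regularisation term β σ2 I are my rendering of the expressions above, not code taken from the reference).

import numpy as np

def optimal_taps(R_list, d_list, beta, sigma2):
    # Solve h_opt = (E[R^T R] + beta*sigma2*I)^(-1) E[R^T d] using time averages.
    # R_list : sequence of R(n) matrices, each of shape (L, M*K*I)
    # d_list : sequence of desired vectors d(n), each of shape (L,)
    dim = R_list[0].shape[1]
    A = np.zeros((dim, dim))
    b = np.zeros(dim)
    for R_n, d_n in zip(R_list, d_list):
        A += R_n.T @ R_n
        b += R_n.T @ d_n
    N = len(R_list)
    A /= N
    b /= N
    return np.linalg.solve(A + beta * sigma2 * np.eye(dim), b)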
An efficient means of converging adaptively to the minimum of this cost function is given by the Multiple Error LMS algorithm [15]. This follows from application of the method of steepest descent. First note that the gradient of the cost function with respect to h can be written as
Figure imgf000031_0004
which can be expressed in terms of the error vector by using equation (A2.6). The classical assumption used in the derivation of the LMS algorithm is now made [22]: the composite tap weight vector is updated every sample by an amount proportional to the negative of the instantaneous value of the gradient vector. This leads to the tap weight update equation
Figure imgf000031_0002
where μ is a convergence coefficient. This can also be written as h(n + 1) = γ h(n) + α RT(n) e(n), (A2.20) where α = 2μ and γ = (1 - μ σ2 β). This equation is now in the form of the "leaky" LMS algorithm [22], where the factor γ (< 1) ensures that the algorithm continuously searches for the "least effort" solution by slightly reducing the value of all the tap weights at each iteration. As pointed out in reference [15], it is interesting to observe that this is a direct consequence of including the "effort" term in the cost function.
APPENDIX 3
"OUTPUT ERROR" ADAPTIVE IIR FILTERS
Since the analysis presented in Section 4 has demonstrated the intrinsically recursive nature of the optimal filters necessary to process the recorded signals, it is worthwhile to consider briefly the possibilities for using adaptive IIR filters as elements of the matrix H. Thus it can be assumed that equation (20) of the main text, which describes the input-output relationship of the mk'th element of H, can be written in the form
Figure imgf000032_0001
where, as illustrated in Figure 11, the coefficients amk(i) and bmk(j) characterise the forward and recursive parts of the filter respectively. One is tempted to try to deduce values of these filter coefficients by again proceeding with the methodology of the previous section. Thus one can write the signal slmk(n) as the inner product
Figure imgf000032_0002
by using the definition of the vectors given by
Figure imgf000032_0003
Figure imgf000033_0005
and then defining the composite vectors f and pl (n) by direct analogy with equations (24) and (25) respectively. Thus
Figure imgf000033_0004
Figure imgf000033_0003
This in turn leads to the definition of P(n) by analogy with equation (A2.4). This is given by
Figure imgf000033_0002
One could again choose to minimise a cost function of the form of (A2.5). In penalising "effort", however, it is not possible to proceed justifiably to the analogous form of equation (A2.13), which neatly expresses the effort in terms of the sum of the squares of all the coefficients of the FIR filters comprising H. This is the case since the analogous forms of equations (A2.7) and (A2.8) will include the filter output sequences, and equation (A2.12) (with h replaced by f) would only follow if the filter outputs could be assumed to be white and uncorrelated with their inputs, even in the case of white recorded signals. Nevertheless, one may regard σ2 fT f as some crude approximation to the sum of squared values of ʋm(n), in which case the analogous cost function to be minimised could be written as
Figure imgf000033_0001
This has the appearance of a quadratic dependence on the composite coefficient vector f, which contains all the coefficients of all the M × K recursive filters. Unfortunately, however, the existence of a unique minimum and the "quadratic shape" of the function are not ensured, due to the nature of E[PT(n) P(n)], which now includes cross- and auto-correlations between the filtered reference signals rlmk(n) and the output signals slmk(n). Despite this, one can again make some crude approximations to the instantaneous estimate of the gradient of the function and derive an algorithm which is directly analogous to equation (A2.20). First note that the instantaneous estimate of the gradient of J with respect to the composite vector f can be written
Figure imgf000034_0001
where the gradient vectors ∂el(n)/∂f consist of contributions from the sub-vectors ∂el(n)/∂fmk. Since
Figure imgf000034_0002
then it follows that
Figure imgf000034_0003
where the sub-vector on the right side of this equation is given by
Figure imgf000034_0004
It follows from equation (A3.1) that
Figure imgf000034_0005
Figure imgf000035_0004
These equations constitute recursive relationships for the gradients of slmk(n) with respect to amk(i) and bmk(j). A number of approximations are now possible, including the use of these relationships in deriving a coefficient vector update equation (see the discussion presented in [22] regarding the scalar case). The simplest assumption, however, is that adopted by Feintuch [19] in the scalar case and extended to the multi-channel case by Elliott and Nelson [20]. This simply ignores the second terms on the right side of equations (A3.13) and (A3.14), such that equation (A3.12) becomes
Figure imgf000035_0003
It then follows that ∂el(n)/∂f = -pl(n), where, as mentioned above, pl(n) is defined by analogy with equation (25) of the main text. Equation (A3.9) can thus be written
Figure imgf000035_0001
If this instantaneous estimate of the gradient of the cost function is now used, together with the gradient of the effort term, then the coefficient update equation that is exactly analogous to equation (A2.20) is given by
Figure imgf000035_0002
In view of the indeterminate form of the function whose "minimum" this algorithm is attempting to find, there is no guarantee of convergence of the algorithm and a high chance of instability as poles associated with the recursive filters migrate outside the unit circle during the adaptation process. Nevertheless, there is some evidence in the application of the scalar version of this algorithm to active noise control [23] that it can be successful in producing substantial reductions in mean square error.

APPENDIX 4
"EQUATION ERROR" ADAPTIVE IIR FILTERS
Another approach to the adaptive design of IIR filters is to use an "equation error" approach [18]. A description of the application of this technique to the sound reproduction problem in the single channel case is given by Nakaji and Nelson [24]. It turns out that this approach also appears to be well suited to the multi-channel sound reproduction problem. The analysis of Section 6 has demonstrated that the intrinsic structure of the filters necessary to process the recorded signals is that of recursive filters, but all the filters have the same denominator polynomial; i.e. the filters have common poles. If it is therefore assumed that the filter matrix H consists of recursive filters having forward paths Amk(ω) which are purely FIR filters, together with recursive parts characterised by the frequency response function B(ω) which is common to all filters, then the "reversed transfer function" block diagram of Figure 9 can be redrawn in the two equivalent representations shown in Figure 13. It is the block diagram representation of Figure 13b that enables the equation error approach to be taken. First note that one can write the sampled value of the signal
Figure imgf000036_0004
defined in Figure 12 as
Figure imgf000036_0001
where the vector amk is defined by
Figure imgf000036_0002
and rlmk(n) is as defined previously in equation (23) of the main text. Defining the composite vector a by
Figure imgf000036_0003
then enables the expression given by equation (A4.1) to be written as
Figure imgf000037_0007
where rl(n) is the composite vector defined by equation (25) of the main text.
The signal d̂l(n) can then be written as
Figure imgf000037_0006
where b(j) are the coefficients of the recursive filters common to all elements of H.
The equation error approach to adaptive IIR filtering then proceeds by replacing d̂l(n - j) on the right side of equation (A4.5) by dl(n - j); i.e., the estimate of past values of the desired signal is replaced by past values of the desired signal itself. The following signal is now defined
Figure imgf000037_0002
This can also be written as
Figure imgf000037_0001
by defining the vectors
Figure imgf000037_0003
Figure imgf000037_0004
Equation (A4.7) can be further reduced to the form
Figure imgf000037_0005
where the composite vectors ql(n) and g are defined by
Figure imgf000038_0001
Figure imgf000038_0002
Furthermore one can define the vector of L signals
Figure imgf000038_0008
by the composite vector
Figure imgf000038_0003
such that
Figure imgf000038_0004
where the matrix Q(n) is defined by
Figure imgf000038_0005
The cost function for minimisation can be written as
Figure imgf000038_0006
where the "equation error" vector ee(n) is given by
Figure imgf000038_0007
One can again proceed to derive an algorithm for adaptively finding the minimum of the cost function by following exactly analogous steps to those presented in Appendix 2. Again, however, as in the case of output error adaptive filters, the "effort term" in the cost function cannot be reduced with full justification to that given in equation (A2.12). One has again to assume that the sum of squared filter coefficients, including those in the recursive parts, is an approximate measure of "effort". With this assumption, the cost function for minimisation reduces to
Figure imgf000039_0002
In this case, however, unlike that of the output error formulation, E[QT(n) Q(n)] will be a positive definite matrix and a unique minimum to the function will exist [18]. It is also possible to make the same assumptions regarding the evaluation of the gradient vector ∂J/∂g as were made in the FIR case. This leads directly to the coefficient update equation
g(n + 1) = γ g(n) + α QT(n) ee(n)
However, there is still the possibility of instability during adaptation and it may be necessary to monitor the poles associated with the recursive part of the filter. Note, however, that there is only one set of poles to be monitored, and that represents a significant advantage in this multi-channel case. The final drawback with this approach is that it may lead to significant bias in the optimal solution, especially in the presence of additive noise [18]. Nevertheless, the approach seems an attractive possibility for dealing with the problem at hand.
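A compact sketch of one iteration of this update is given below; the way the regression matrix Q(n) is assembled from the filtered reference signals and the past desired samples is my reading of the definitions above, not a definitive implementation.

import numpy as np

def equation_error_update(g, Q_n, d_n, alpha=1e-3, gamma=0.9999):
    # One iteration of the equation error algorithm: g <- gamma*g + alpha*Q(n)^T ee(n)
    ee_n = d_n - Q_n @ g                 # equation error vector ee(n) = d(n) - Q(n) g
    return gamma * g + alpha * (Q_n.T @ ee_n), ee_n

def make_q_row(r_l_n, d_l_past):
    # Row of Q(n) for location l: filtered references stacked with the J past desired samples
    return np.concatenate([r_l_n, d_l_past])

Because every quantity entering Q(n) is measured rather than estimated, the update converges on the unique minimum discussed above, although the poles implied by the recursive part of g should still be checked periodically.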
REFERENCES
1. P. A. NELSON and S.J. ELLIOTT 1992 Active Control of Sound, London;
Academic Press.
2. C.R. FULLER, S.J. ELLIOTT and P.A. NELSON Active Control of Vibration, London; Academic Press (to appear).
3. S.J. ELLIOTT 1993 Proceedings of the Institute of Acoustics Spring Conference.
Active control of structure-borne noise.
4. A.D. PIERCE 1981 Acoustics: An Introduction to its Physical Principles and Applications. New York: McGraw-Hill.
5. M.P. ZAVADSKAYA 1976 Soviet Physics Acoustics 21, 451-454. Approximation of wave potentials in the active suppression of sound fields by the Malyuzhinets method.
6. S.I. KONYAEV, V.I. LEBEDEV and M.V. FEDORYUK 1977 Soviet Physics Acoustics 23, 373-374. Discrete approximation of a spherical Huygens surface.
7. S.I. KONYAEV and M.V. FEDORYUK 1987 Soviet Physics Acoustics 33, 622-625. Spherical Huygens surfaces and their discrete approximation.
8. O. KIRKEBY and P.A. NELSON 1993 Proceedings of the Second International Conference on Recent Advances in the Active Control of Sound and Vibration (Blacksburg, Virginia). Reconstructing a plane wave over a continuous area using monopole sources.
9. C.E. SHANNON 1949 Proceedings of IRE 37, 10-21. Communication in the presence of noise.
10. M.A. GERZON 1985 Journal of the Audio Engineering Society 33, 859-871.
Ambisonics in multichannel broadcasting and video.
11. R. WILSON 1989 Paper presented at the 86th Convention of the Audio Engineering Society, Hamburg. Equalisation of loudspeaker drive units considering both on- and off-axis responses.
12. K.D. FARNSWORTH, P.A. NELSON and S.J. ELLIOTT 1985 Proceedings of the Institute of Acoustics Autumn Conference, Reproduced Sound, Windermere. Equalisation of room acoustic responses over spatially distributed regions.
13. B.C.J. MOORE 1982 An Introduction to the Psychology of Hearing (2nd edition) London; Academic Press.
14. S.J. ELLIOTT and P.A. NELSON 1985 Electronics Letters 21, 979-981.
Algorithm for multichannel LMS adaptive filtering.
15. S.J. ELLIOTT, I.M. STOTHERS and P.A. NELSON 1987 IEEE Transactions on Acoustics Speech and Signal Processing ASSP-35, 1423-1434. A multiple error LMS algorithm and its application to the active control of sound and vibration.
16. P.A. NELSON, H. HAMADA and S.J. ELLIOTT 1992 IEEE Transactions on Signal Processing 40, 1621-1632. Adaptive inverse filters for stereophonic sound reproduction.
17. P.A. NELSON, F. ORDUNA-BUSTAMANTE and H. HAMADA 1992 Proceedings of the Audio Engineering Society U.K. Conference on Digital Signal Processing, London, 154-174. Multichannel signal processing techniques in the reproduction of sound.
18. J.J. SHYNK 1989 IEEE ASSP Magazine, April 4-21. Adaptive IIR filtering.
19. P.L. FEINTUCH 1976 Proceedings of IEEE 64, 1622. An adaptive recursive LMS filter.
20. S.J. ELLIOTT and P.A. NELSON 1988 ISVR Memorandum No. 681. An adaptive algorithm for IIR filters used in multi-channel active sound control problems.
21. F. ORDUNA-BUSTAMANTE, P.A. NELSON and H. HAMADA 1993 Proceedings of 2nd International Conference on Recent Advances in Active Control of Sound and Vibration, Virginia.
22. B. WIDROW and S.D. STEARNS 1985 Adaptive Signal Processing, Englewood Cliffs, New Jersey; Prentice Hall.
23. L.J. ERIKSSON, M.C. ALLIE, C.D. BREMIGAN and R.A. GREINER 1987 IEEE Transactions on Acoustics, Speech and Signal Processing ASSP-35, 433-437. The selection and application of an IIR adaptive filter for use in active sound attenuation.
24. Y. NAKAJI and P.A. NELSON 1992 ISVR Technical Memorandum No. 713.
Equation error adaptive IIR filters for single channel response equalisation.

Claims

1. A method of reproducing sound comprising creating a sound recording by recording the sound received by individual sensors of a compact cluster of at least three spaced-apart sound sensors which are located in a localised region of the recording space sound field which is desired to be subsequently reproduced, and subsequently reconstructing a representation of the original sound field in a localised region of the listening space corresponding to said localised region of the recording space, by arranging at least three sound sources in a spaced-apart distribution which surrounds the centre of the listening space localised region, the reproduction being aimed primarily at reproducing the direction of propagation of the sound waves in the localised region of the recording space, the vector of signals input to the sound sources being produced by subjecting the vector of recorded outputs of the sound sensors to a matrix (H(z)) of linear filters which have been derived using a least squares technique.
2. A method as claimed in claim 1 in which the sound sensors are spaced apart by no more than one half of an acoustic wavelength at the highest frequency of interest.
3. A method as claimed in claim 2 in which the sound sensors are spaced apart by less than one third of an acoustic wavelength at said highest frequency.
4. A method as claimed in any of the preceding claims in which the filter matrix is designed by minimising the mean square error between desired signals and reproduced signals, the desired signals being simply taken as delayed versions of the original recording.
5. A method as claimed in any of the preceding claims in which the filter matrix H(z) is designed using the LMS algorithm in the form of
h(n + 1) = γ h(n) + α RT(n) e(n)
where h is the composite tap weight vector, RT(n) is a matrix of signals obtained by filtering the recorded signals through the transfer function matrix C(z),
γ is a leak coefficient,
α is a convergence coefficient, and
e is an error vector equal to the difference between the desired signals and the reproduced signals.
6. A method as claimed in any of the preceding claims in which at least four sound sensors are employed in the recording space.
7. A method as claimed in claim 6 in which the sound sensors are arranged in a rectangular array.
8. A method as claimed in claim 7 in which four sound sensors only are employed, the four sensors being arranged at the corners of a square.
9. A method as claimed in claim 8 in which at least four sound sources (sources 1, 2, 3, 4) are employed in the listening space and the sound sources are substantially circumferentially equally spaced-apart on a circle centred on said centre of the listening space localised region.
10. A method as claimed in any of the preceding claims in which the sound sources are substantially farther from the centre of said listening space localised region than the maximum distance of a sound sensor from the centre of said recording space localised region.
11. Sound reproducing apparatus constructed and adapted to reproduce sound in accordance with the method of any of the preceding claims by utilising a sound recording that has been made by recording sound received by individual sensors of a compact cluster of at least three spaced-apart sound sensors which were located in a localised region of the recording space sound field, the reproducing apparatus comprising three sound sources for positioning in a spaced-apart distribution in a listening space, and a matrix of linear filters adapted to derive from the vector of recorded outputs of the sound sensors a vector of signals to be input to the sound sources, the matrix of filters being designed to reproduce in use the direction of propagation of the sound waves in the localised region of the recording space.
PCT/GB1994/000799 1993-04-17 1994-04-15 Method of reproducing sound WO1994024835A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB9307986.1 1993-04-17
GB939307986A GB9307986D0 (en) 1993-04-17 1993-04-17 Method of reproducing sound

Publications (1)

Publication Number Publication Date
WO1994024835A1 true WO1994024835A1 (en) 1994-10-27

Family

ID=10734036

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB1994/000799 WO1994024835A1 (en) 1993-04-17 1994-04-15 Method of reproducing sound

Country Status (2)

Country Link
GB (1) GB9307986D0 (en)
WO (1) WO1994024835A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4152542A (en) * 1971-10-06 1979-05-01 Cooper Duane P Multichannel matrix logic and encoding systems
US4741038A (en) * 1986-09-26 1988-04-26 American Telephone And Telegraph Company, At&T Bell Laboratories Sound location arrangement
WO1990000851A1 (en) * 1988-07-08 1990-01-25 Adaptive Control Limited Improvements in or relating to sound reproduction systems

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001078451A1 (en) * 2000-04-10 2001-10-18 Harman International Industries, Incorporated Creating virtual surround using dipole and monopole pressure fields
CN1643982B (en) * 2002-02-28 2012-06-06 雷米·布鲁诺 Method and device for control of a unit for reproduction of an acoustic field
WO2003073791A3 (en) * 2002-02-28 2004-04-08 Remy Bruno Method and device for control of a unit for reproduction of an acoustic field
FR2836571A1 (en) * 2002-02-28 2003-08-29 Remy Henri Denis Bruno Multiple speaker sound reproduction system use filtering applied to signals feeding respective loudspeakers according to spatial position
US7394904B2 (en) 2002-02-28 2008-07-01 Bruno Remy Method and device for control of a unit for reproduction of an acoustic field
WO2003073791A2 (en) * 2002-02-28 2003-09-04 Bruno Remy Method and device for control of a unit for reproduction of an acoustic field
JP2005519502A (en) * 2002-02-28 2005-06-30 レミ・ブリュノ Method and apparatus for controlling a unit for sound field reproduction
US7215787B2 (en) 2002-04-17 2007-05-08 Dirac Research Ab Digital audio precompensation
WO2005013643A1 (en) * 2003-07-31 2005-02-10 Trinnov Audio System and method for determining a representation of an acoustic field
US7856106B2 (en) 2003-07-31 2010-12-21 Trinnov Audio System and method for determining a representation of an acoustic field
FR2858403A1 (en) * 2003-07-31 2005-02-04 Remy Henri Denis Bruno SYSTEM AND METHOD FOR DETERMINING REPRESENTATION OF AN ACOUSTIC FIELD
CN101359043B (en) * 2008-09-05 2011-04-27 清华大学 Determining method for sound field rebuilding plane in acoustics video camera system
EP2257083A1 (en) 2009-05-28 2010-12-01 Dirac Research AB Sound field control in multiple listening regions
US8213637B2 (en) 2009-05-28 2012-07-03 Dirac Research Ab Sound field control in multiple listening regions
US9749769B2 (en) 2014-07-30 2017-08-29 Sony Corporation Method, device and system
CN112669805A (en) * 2020-12-14 2021-04-16 重庆邮电大学 Active noise control system of compressor in gas station based on equation error algorithm
CN112669805B (en) * 2020-12-14 2022-07-01 重庆邮电大学 Active noise control system of compressor in gas station based on equation error algorithm
CN117292698A (en) * 2023-11-22 2023-12-26 科大讯飞(苏州)科技有限公司 Processing method and device for vehicle-mounted audio data and electronic equipment

Also Published As

Publication number Publication date
GB9307986D0 (en) 1993-06-02

Similar Documents

Publication Publication Date Title
Nelson Active control of acoustic fields and the reproduction of sound
US5500900A (en) Methods and apparatus for producing directional sound
JP3264489B2 (en) Sound reproduction device
US5949894A (en) Adaptive audio systems and sound reproduction systems
US6444892B1 (en) Sound system and method for creating a sound event based on a modeled sound field
Gauthier et al. Adaptive wave field synthesis with independent radiation mode control for active sound field reproduction: Theory
CN108141691B (en) Adaptive reverberation cancellation system
Tylka et al. Performance of linear extrapolation methods for virtual sound field navigation
Santillán Spatially extended sound equalization in rectangular rooms
Maeno et al. Mode domain spatial active noise control using sparse signal representation
Lee et al. Fast generation of sound zones using variable span trade-off filters in the DFT-domain
WO1994024835A1 (en) Method of reproducing sound
Lecomte et al. Cancellation of room reflections over an extended area using Ambisonics
JP2001285998A (en) Out-of-head sound image localization device
Hoffman et al. Robust time-domain processing of broadband microphone array data
EP0649589B1 (en) Adaptive audio systems and sound reproduction systems
Gauthier et al. Experiments of multichannel least-square methods for sound field reproduction inside aircraft mock-up: Objective evaluations
Zaunschirm et al. An interactive virtual icosahedral loudspeaker array
Ahrens et al. The theory of wave field synthesis revisited
Støfringsdal et al. Conversion of discretely sampled sound field data to auralization formats
Pulsipher et al. Reduction of nonstationary acoustic noise in speech using LMS adaptive noise cancelling
Berkovitz Digital equalization of audio signals
Petrausch et al. Simulation and visualization of room compensation for wave field synthesis with the functional transformation method
Sun et al. Secondary channel estimation in spatial active noise control systems using a single moving higher order microphone
Donley et al. On the comparison of two room compensation/dereverberation methods employing active acoustic boundary absorption

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): GB JP US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 1994912626

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1994912626

Country of ref document: EP

122 Ep: pct application non-entry in european phase