EP0760197B1 - Three-dimensional virtual audio display employing reduced complexity imaging filters - Google Patents

Three-dimensional virtual audio display employing reduced complexity imaging filters

Info

Publication number
EP0760197B1
EP0760197B1 (application EP95918832A)
Authority
EP
European Patent Office
Prior art keywords
function
head
transfer function
related transfer
frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP95918832A
Other languages
German (de)
English (en)
Other versions
EP0760197A1 (fr)
EP0760197A4 (fr)
Inventor
Jonathan S. Abel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aureal Semiconductor Inc
Original Assignee
Aureal Semiconductor Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US08/303,705 (US5659619A)
Application filed by Aureal Semiconductor Inc filed Critical Aureal Semiconductor Inc
Publication of EP0760197A1
Publication of EP0760197A4
Application granted
Publication of EP0760197B1
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S1/00 Two-channel systems
    • H04S1/002 Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
    • H04S1/005 For headphones
    • H04S2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/01 Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]

Definitions

  • This invention relates generally to three-dimensional or "virtual" audio. More particularly, this invention relates to a method and apparatus for reducing the complexity of imaging filters employed in virtual audio displays. In accordance with the teachings of the invention, such reduction in complexity may be achieved without substantially affecting the psychoacoustic localization characteristics of the resulting three-dimensional audio presentation.
  • Sounds arriving at a listener's ears exhibit propagation effects which depend on the relative positions of the sound source and listener. Listening environment effects may also be present. These effects, including differences in signal intensity and time of arrival, impart to the listener a sense of the sound source location. If included, environmental effects, such as early and late sound reflections, may also impart to the listener a sense of an acoustical environment.
  • By processing a sound so as to simulate the appropriate propagation effects, a listener can be made to perceive the sound as originating from a specified point in three-dimensional space, that is, a "virtual" position. See, for example, "Headphone simulation of free-field listening" by Wightman and Kistler, J. Acoust. Soc. Am., Vol. 85, No. 2, 1989.
  • Such processing is performed by imaging filters characterized by head-related transfer functions (HRTFs).
  • Each HRTF is designed to reproduce the propagation effects and acoustic cues responsible for psychoacoustic localization at a particular position or region in three-dimensional space or a direction in three-dimensional space. See, for example, " Localization in Virtual Acoustic Displays" by Elizabeth M. Wenzel, Presence, Vol. 1, No. 1, Summer 1992 .
  • the present document will refer only to a single HRTF operating on a single audio channel. In practice, pairs of HRTFs are employed in order to provide the proper signals to the ears of the listener.
  • Typically, HRTFs are indexed by spatial direction only, the range component being taken into account independently.
  • Some HRTFs define spatial position by including both range and direction and are indexed by position. Although particular examples herein may refer to HRTFs defining direction, the present invention applies to HRTFs representing either direction or position.
  • HRTFs are typically derived by experimental measurements or by modifying experimentally derived HRTFs.
  • A table of HRTF parameter sets is stored, each HRTF parameter set being associated with a particular point or region in three-dimensional space.
  • Typically, HRTF parameters for only a few spatial positions are stored.
  • HRTF parameters for other spatial positions are generated by interpolating among appropriate sets of HRTF parameters stored in the table.
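The table-lookup-and-interpolation step can be sketched as follows. This is an illustrative, hypothetical example: the stored azimuths, the three-tap filters, and the simple linear interpolation rule are assumptions for the sketch, not details taken from the patent.

```python
import bisect

# Hypothetical table: azimuth (degrees) -> FIR filter taps for that direction.
HRTF_TABLE = {
    0.0:  [1.00, 0.50, 0.25],
    30.0: [0.90, 0.60, 0.30],
    60.0: [0.70, 0.80, 0.40],
}

def interpolate_hrtf(azimuth):
    """Linearly interpolate FIR taps between the two nearest stored directions."""
    angles = sorted(HRTF_TABLE)
    if azimuth <= angles[0]:
        return list(HRTF_TABLE[angles[0]])
    if azimuth >= angles[-1]:
        return list(HRTF_TABLE[angles[-1]])
    hi = bisect.bisect_right(angles, azimuth)
    a0, a1 = angles[hi - 1], angles[hi]
    w = (azimuth - a0) / (a1 - a0)
    return [(1 - w) * t0 + w * t1
            for t0, t1 in zip(HRTF_TABLE[a0], HRTF_TABLE[a1])]
```

In a real display the table would hold many more positions and longer filters, and interpolation would typically be performed between left/right HRTF pairs.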
  • the acoustic environment may also be taken into account. In practice, this may be accomplished by modifying the HRTF or by subjecting the audio signal to additional filtering simulating the desired acoustic environment.
  • the embodiments disclosed refer to the HRTFs, however, the invention applies more generally to all transfer functions for use in virtual audio displays, including HRTFs, transfer functions representing acoustic environmental effects and transfer functions representing both head-related transforms and acoustic environmental effects.
  • A typical prior art arrangement is shown in Figure 1.
  • a three-dimensional spatial location or position signal 10 is applied to an HRTF parameter table and interpolation function 11, resulting in a set of interpolated HRTF parameters 12 responsive to the three-dimensional position identified by signal 10.
  • An input audio signal 14 is applied to an imaging filter 15 whose transfer function is determined by the applied interpolated HRTF parameters.
  • the filter 15 provides a "spatialized" audio output suitable for application to one channel of a headphone 17.
  • HRTFs may create psychoacoustically localized audio with other types of audio transducers, including loudspeakers.
  • the invention is not limited to use with any particular type of audio transducer.
  • the HRTF parameters define the FIR filter taps which comprise the impulse response associated with the HRTF.
  • the invention is not limited to use with FIR filters.
  • the main drawback to the prior art approach shown in Figure 1 is the computational cost of relatively long or complex HRTFs.
  • the prior art employs several techniques to reduce the length or complexity of HRTFs.
  • An HRTF, as shown in Figure 2a, comprises a time delay component D and an impulse response component g(t).
  • Accordingly, imaging filters may be implemented as a time delay function z^(-D) followed by an impulse response function g(t), as shown in Figure 2b.
  • Figure 3a shows a prior art arrangement in which pairs of unprocessed or "raw" HRTF parameters 100 are applied to a time-alignment processor 101, providing at its outputs time-aligned HRTFs 102 and time-delay values 103 for later use (not shown).
  • Processor 101 cross-correlates pairs of raw HRTFs to determine their time difference of arrival; these time differences are the delay values 103. Because both the time delay values 103 and the filter terms are retained for later use, there is no loss of psychoacoustic localization.
  • Each time-aligned HRTF 102 is then processed by a minimum-phase converter 104 to remove residual time delay and to further shorten the time-aligned HRTFs.
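The cross-correlation step performed by processor 101 can be sketched as follows. This is a minimal illustration under stated assumptions: the brute-force lag search and the trimming convention used to align the pair are choices made for the sketch, and minimum-phase conversion (block 104) is not shown.

```python
def xcorr_delay(left, right):
    """Estimate the time difference of arrival (in samples) between two
    impulse responses as the lag maximizing their cross-correlation."""
    n = len(left)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-n + 1, n):
        acc = 0.0
        for i in range(n):
            j = i - lag
            if 0 <= j < len(right):
                acc += left[i] * right[j]
        if acc > best_val:
            best_val, best_lag = acc, lag
    return best_lag

def time_align(left, right):
    """Trim the common leading delay so both responses start together;
    the delay itself would be retained separately, as in Figure 3a."""
    d = xcorr_delay(left, right)
    if d > 0:
        return left[d:], right
    if d < 0:
        return left, right[-d:]
    return left, right
```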
  • Figure 3b shows two left-right pairs (R1/L1 and R2/L2) of exemplary raw HRTFs resulting from raw HRTF parameters 100.
  • Figure 3c shows corresponding time-aligned HRTFs 102.
  • Figure 3d shows the corresponding output minimum-phase HRTFs 105.
  • the impulse response lengths of the time-aligned HRTFs 102 are shortened with respect to the raw HRTFs 100 and the minimum-phase HRTFs 105 are shortened with respect to the time-aligned HRTFs 102.
  • Other techniques reduce the filter complexity (its length, in the case of an FIR filter) directly.
  • One such technique is to reduce the sampling rate by downsampling the HRTF, as shown in Figure 4a. Since many localization cues, particularly those important to elevation, involve high-frequency components, reducing the sampling rate may unacceptably degrade the performance of the audio display.
  • Another technique is to apply a windowing function to the HRTF by multiplying the HRTF by a windowing function in the time domain or by convolving the HRTF with a corresponding weighting function in the frequency domain.
  • This process is most easily understood by considering the multiplication of the HRTF by a window in the time domain: the window width is selected to be narrower than the HRTF, resulting in a shortened HRTF.
  • Such windowing results in a frequency-domain smoothing with a fixed weighting function.
  • This known windowing technique degrades psychoacoustic localization characteristics, particularly with respect to spatial positions or directions having complex or long impulse responses.
  • US 5,105,462 describes methods and apparatus for creating the illusion of distinct sound sources distributed throughout a three-dimensional space containing the listener.
  • Each channel of a left/right stereo signal is separately processed and then combined for playback.
  • the sound processing involves dividing each monaural or single channel signal into two signals and then adjusting the differential phase and amplitude of the two channel signals on a frequency dependent basis in accordance with an empirically derived transfer function that has a specific phase and amplitude adjustment for each predetermined frequency interval over the audio spectrum.
  • Each transfer function is empirically derived to relate to a different sound source location and by providing a number of different transfer functions and selecting them accordingly the sound source can be made to appear to move.
  • According to the invention there is provided a three-dimensional virtual audio display method comprising: generating a set of head-related transfer function parameters in response to a spatial location or direction signal, wherein said set of head-related transfer function parameters is selected from, or interpolated among, head-related transfer function parameters derived by smoothing frequency components of a known head-related transfer function over a bandwidth which is a non-constant function of frequency and noting the parameters of the resulting compressed transfer function; and filtering an audio signal in response to said set of head-related transfer function parameters.
  • the smoothing according to the present invention is best explained by considering its action in the frequency domain: the frequency components of known transfer functions are smoothed over bandwidths which are a non-constant function of frequency.
  • the parameters of the resulting transfer functions referred to herein as "compressed" transfer functions, are used to filter the audio signal for the virtual audio display.
  • the compressed head-related transfer function parameters may be prederived or may be derived in real time.
  • the smoothing bandwidth is a function of the width of the ear's critical bands (i.e., a function of "critical bandwidth").
  • the function may be such that the smoothing bandwidth is proportional to critical bandwidth.
  • the ear's critical bands increase in width with increasing frequency, thus the smoothing bandwidth also increases with frequency.
  • the length of the filter (the number of filter taps) is inversely related to the smoothing bandwidth expressed as a multiple of critical bandwidth.
  • the resulting less complex or shortened HRTFs have less degradation of perceptual impact and psychoacoustic localization than HRTFs made less complex or shortened by prior art windowing techniques such as described above.
  • An example HRTF ("raw HRTF") and shortened versions produced by a prior art windowing method ("prior art HRTF") and by the method according to the present invention ("compressed HRTF") are shown in Figures 5a (time domain) and 5b (frequency domain).
  • the raw HRTF is an example of a known HRTF that has not been processed to reduce its complexity or length.
  • In Figure 5a, the HRTF time-domain impulse response amplitudes are plotted along a time axis of 0 to 3 milliseconds.
  • In Figure 5b, the frequency-domain transfer function power of each HRTF is plotted along a log frequency axis extending from 1 kHz to 20 kHz.
  • the present invention may be implemented in at least two ways.
  • In the first implementation, an HRTF is smoothed by convolving the HRTF with a frequency-dependent weighting function in the frequency domain.
  • This weighting function differs from the frequency domain dual of the prior art time-domain windowing function in that the weighting function varies as a function of frequency instead of being invariant.
  • a time-domain dual of the frequency dependent weighting function may be applied to the HRTF impulse response in the time domain.
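The smoothing just described, averaging each spectral component over a bandwidth that grows with frequency, can be sketched as follows. The rectangular weighting window and the use of Zwicker and Terhardt's critical-bandwidth approximation are illustrative choices for the sketch, not details prescribed by the patent.

```python
def smooth_spectrum(power, freqs, bandwidth):
    """Smooth a power spectrum with a rectangular weighting window whose
    width at each frequency f is bandwidth(f), a non-constant function of f."""
    out = []
    for f in freqs:
        half = bandwidth(f) / 2.0
        vals = [p for fj, p in zip(freqs, power) if abs(fj - f) <= half]
        out.append(sum(vals) / len(vals))
    return out

def critical_band_multiple(f, multiple=1.0):
    """Zwicker & Terhardt's approximation of the ear's critical bandwidth
    in Hz; the smoothing bandwidth is taken as a multiple of it."""
    return multiple * (25.0 + 75.0 * (1.0 + 1.4 * (f / 1000.0) ** 2) ** 0.69)
```

With `bandwidth=critical_band_multiple`, low-frequency detail is preserved while high-frequency structure is smoothed over progressively wider bands, shortening the corresponding impulse response.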
  • In the second implementation, the HRTF's frequency axis is warped, or mapped, into a non-linear frequency domain and the frequency-warped HRTF is either multiplied by a conventional window function in the time domain (after transformation to the time domain) or convolved with the non-varying frequency response of the conventional window function in the frequency domain. Inverse frequency warping is subsequently applied to the windowed result.
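The warping-based variant can be sketched as follows: warp the spectrum onto a uniform Bark axis, apply a constant-width smoothing there, and map back. Zwicker's Hz-to-Bark formula, the uniform warped grid, and the moving-average window are assumed details of this sketch.

```python
import math

def hz_to_bark(f):
    """Zwicker's Hz-to-Bark mapping (one possible warping function)."""
    return 13.0 * math.atan(0.00076 * f) + 3.5 * math.atan((f / 7500.0) ** 2)

def lin_interp(x, xs, ys):
    """Piecewise-linear interpolation of (xs, ys) at x; xs must be ascending."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])

def warp_smooth(power, freqs, n_warped=64, half_width=1):
    """Warp a spectrum onto a uniform Bark axis, apply a constant-width
    moving average there, then map back to the original frequency axis."""
    barks = [hz_to_bark(f) for f in freqs]
    grid = [barks[0] + i * (barks[-1] - barks[0]) / (n_warped - 1)
            for i in range(n_warped)]
    warped = [lin_interp(b, barks, power) for b in grid]
    smoothed = []
    for i in range(n_warped):
        lo, hi = max(0, i - half_width), min(n_warped, i + half_width + 1)
        smoothed.append(sum(warped[lo:hi]) / (hi - lo))
    return [lin_interp(b, grid, smoothed) for b in barks]
```

Because the Bark scale compresses high frequencies, a constant window on the warped axis corresponds to a bandwidth that widens with frequency on the original axis, which is the point of the equivalence described above.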
  • the present invention may be implemented using any type of imaging filter, including, but not limited to, analog filters, hybrid analog/digital filters, and digital filters. Such filters may be implemented in hardware, software or hybrid hardware/software arrangements, including, for example, digital signal processing. When implemented digitally or partially digitally, FIR, IIR (infinite-impulse-response) and hybrid FIR/IIR filters may be employed.
  • the present invention may also be implemented by a principal component filter architecture.
  • Other aspects of the virtual audio display may be implemented using any combination of analog, digital, hybrid analog/digital, hardware, software, and hybrid hardware/software techniques, including, for example, digital signal processing.
  • For an FIR imaging filter, the HRTF parameters are the filter taps which comprise the impulse response associated with the HRTF.
  • For an IIR imaging filter, the HRTF parameters are the poles and zeroes or other characteristics defining the IIR filter.
  • For a principal component filter architecture, the HRTF parameters are the position-dependent weights.
  • three-dimensional virtual audio display apparatus comprising: means for smoothing frequency components of a known head related transfer function over a bandwidth which is a non-constant function of frequency; means for noting the parameters of the transfer function of a resulting compressed transfer function; means for generating a set of head-related transfer function parameters in response to a spatial location or direction signal, said set of head-related transfer function parameters being selected from, or interpolated among, said parameters of the transfer function of the resulting compressed transfer function; and means for filtering an audio signal in response to said set of head-related transfer function parameters.
  • In Figure 6a, an optional nonlinear scaling function 51 is applied to an input HRTF 50.
  • a smoothing function 54 is then applied to the HRTF 52.
  • an inverse scaling function 56 is then applied to the smoothed HRTF 54.
  • a compressed HRTF 57 is provided at the output.
  • The nonlinear scaling 51 and inverse scaling 56 control whether the smoothing mean is taken with respect to signal amplitude or power and whether it is an arithmetic mean, a geometric mean or another mean function.
  • the smoothing processor 54 convolves the HRTF with a frequency-dependent weighting function.
  • the width of the weighting function increases with frequency; preferably, the weighting function length is a multiple of critical bandwidth: the shorter the required HRTF impulse response length, the greater the multiple.
  • HRTFs typically lack low-frequency content (below about 300 Hz) and high-frequency content (above about 16 kHz). In order to provide the shortest possible (and, hence, least complex) HRTFs, it is desirable to extend HRTF frequency response to or even beyond the normal lower and upper extremes of human hearing. However, if this is done, the width of the weighting function in the extended low-frequency and high-frequency audio-band regions should be wider relative to the ear's critical bands than the multiple of critical bandwidth used through the main, unextended portion of the audio band in which HRTFs typically have content.
  • In the extended low-frequency region, a smoothing bandwidth wider than the above-mentioned multiple of critical bandwidth preferably is used.
  • In the extended high-frequency region, a smoothing bandwidth wider than the above-mentioned multiple of critical bandwidth preferably is also used, because human hearing is poor at such high frequencies and most localization cues are concentrated below them.
  • the weighting bandwidth at the low-frequency and high-frequency extremes of the audio band preferably may be widened beyond the bandwidths predicted by the equations set forth herein.
  • a constant smoothing bandwidth of about 250 Hz is used for frequencies below 1 kHz, and a third-octave bandwidth is used above 1 kHz.
  • One-third octave bandwidth approximates critical bandwidth; at 1 kHz the one-third octave bandwidth is about 250 Hz.
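The schedule just described, a constant 250 Hz smoothing bandwidth below 1 kHz and a one-third-octave bandwidth above, can be written out directly. The band-edge-difference definition of one-third-octave bandwidth used here is the standard one and is an assumption of this sketch rather than a formula quoted from the patent.

```python
def smoothing_bandwidth(f):
    """Example schedule from the text: constant 250 Hz below 1 kHz,
    one-third-octave bandwidth above 1 kHz."""
    if f < 1000.0:
        return 250.0
    # One-third-octave band edges lie at f * 2**(1/6) and f * 2**(-1/6),
    # so the bandwidth is f * (2**(1/6) - 2**(-1/6)) ~= 0.2316 * f.
    return f * (2.0 ** (1.0 / 6.0) - 2.0 ** (-1.0 / 6.0))
```

At 1 kHz the one-third-octave bandwidth evaluates to about 232 Hz, consistent with the statement above that it is about 250 Hz, so the schedule is roughly continuous at the crossover.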
  • the smoothing bandwidth is wider than the critical bandwidth.
  • power noted at low frequencies (say, in the range 300 to 500 Hz) is extrapolated to DC to fill in data not accurately determined using conventional HRTF measurement techniques.
  • Weighting functions having different critical bandwidth multiples may be applied to respective HRTFs so that not all HRTFs are compressed to the same extent; this may be necessary to assure that the resulting compressed HRTFs are generally of the same complexity or length (certain of the raw HRTFs will be of greater complexity or length, depending on the spatial location they represent, and may therefore require greater or lesser compression).
  • HRTFs representing certain directions or spatial positions may be compressed less than others in order to maintain the perception of better overall spatial localization while still obtaining some overall lessening in computational complexity.
  • the amount of HRTF compression may be varied as a function of the relative psychoacoustic importance of the HRTF.
  • There is a family of weighting functions W_f,θ(n), each defined on an interval 0 to N, which have a width that is a function of their center frequency f and, optionally, also a function of the HRTF position θ.
  • the summation of each weighting function is 1 (Equation 3).
  • Figure 8 shows three members of a family of Gaussian-shaped weighting functions with their amplitude response plotted against frequency. Only three of the family of weighting functions are shown for simplicity.
  • The weighting functions need not have a Gaussian shape. Other shapes, including rectangular (chosen for simplicity), may be employed. Also, the weighting functions need not be symmetrical about their center frequency.
  • It is preferred to implement the nonlinear scaling 51 of Figure 6a as a magnitude-squared operation and the inverse scaling 56 as a square root. It may be desirable to apply certain pre-processing or post-processing such as minimum phase conversion. Alternatively, the arithmetic mean of the smoothing 54 becomes a geometric mean when the nonlinear scaling 51 provides a logarithm function and the inverse scaling 56 an exponentiation function. Such a mean is useful in preserving spectral nulls thought to be important for elevation perception.
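The difference between arithmetic and geometric (log-domain) averaging around a spectral null can be shown with a toy three-point smoother. The window length and the magnitude values are arbitrary choices for illustration.

```python
import math

def smooth3(vals, mean="arithmetic"):
    """Three-point smoothing with either an arithmetic or a geometric mean.
    The geometric mean corresponds to log scaling (51) followed by averaging
    and exponentiation (56); it preserves deep spectral nulls much better."""
    out = []
    for i in range(len(vals)):
        window = vals[max(0, i - 1): i + 2]
        if mean == "geometric":
            out.append(math.exp(sum(math.log(v) for v in window) / len(window)))
        else:
            out.append(sum(window) / len(window))
    return out
```

For a spectrum with a near-zero bin flanked by unit bins, the arithmetic mean fills the null up to about 2/3, while the geometric mean leaves it orders of magnitude deeper, which is why log-domain smoothing helps preserve elevation cues.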
  • Figures 6b and 6c show an exemplary input HRTF frequency spectrum and input impulse response, respectively, in the frequency domain and the time domain.
  • Figures 6d and 6e show the compressed output HRTF 57 in the respective domains.
  • the degree to which the HRTF spectrum is smoothed and its impulse response is shortened will depend on the multiple of critical bandwidth chosen for the smoothing 54.
  • the compressed HRTF characteristics will also depend on the window shape and other factors discussed above.
  • the frequency axis of the input HRTF is altered by a frequency warping function 121 so that a constant-bandwidth smoothing 125 acting on the warped frequency spectrum implements the equivalent of smoothing 54 of Figure 6a .
  • the smoothed HRTF is processed by an inverse warping 129 to provide the output compressed HRTF.
  • nonlinear scaling 51 and inverse scaling 56 optionally may be applied to the input and output HRTFs.
  • the frequency warping function 121 in conjunction with constant bandwidth smoothing serves the purpose of the frequency-varying smoothing bandwidth of the Figure 6a embodiment.
  • a warping function mapping frequency to Bark may be used to implement critical-band smoothing.
  • Smoothing 125 may be implemented as a time-domain window function multiplication or as a frequency-domain weighting function convolution similar to the embodiment of Figure 6a except that the weighting function width is constant with frequency.
  • it may be desirable to apply certain pre-processing or post-processing such as minimum phase conversion.
  • The order in which the frequency warping function 121 and the scaling function 51 are applied may be reversed. Although these functions are not linear, they commute because the frequency warping 121 operates on the frequency axis while the scaling 51 affects only the values of the frequency bins. Consequently, the inverse scaling function 56 and the inverse warping function 129 may also be reversed.
  • the output HRTF may be taken after block 125, in which case inverse scaling and inverse warping may be provided in the apparatus or functions which receive the compressed HRTF parameters.
  • Figures 7b and 7c show an exemplary input HRTF impulse response and frequency spectrum, respectively.
  • Figure 7d shows the frequency spectrum of the HRTF mapped into Bark.
  • Figure 7e shows the spectrum of the HRTF after smoothing 125. After undergoing inverse frequency warping, the resulting compressed HRTF has a spectrum as shown in Figure 7f and an impulse response as shown in Figure 7g . It will be noted that the resulting HRTF characteristics are the same as those of the embodiment of Figure 6a .
  • the imaging filter may also be embodied as a principal component filter in the manner of Figure 9 .
  • a position signal 30 is applied to a weight table and interpolation function 31 which is functionally similar to block 11 of Figure 1 .
  • The parameters provided by block 31 (the interpolated weights), together with the directional matrix and the principal component filters, are functionally equivalent to HRTF parameters controlling an imaging filter.
  • The imaging filter 15' of this embodiment filters the input signal 33 through a set of parallel fixed filters 34, the principal component filters PC 0 through PC N , whose outputs are mixed via a position-dependent weighting to form an approximation to the desired imaging filter.
  • The accuracy of the approximation increases with the number of principal component filters used. More computational resources, in the form of additional principal component filters, are needed to achieve a given degree of approximation to a set of raw HRTFs than to versions compressed in accordance with this embodiment of the present invention.
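The parallel fixed-filter architecture can be sketched as follows. The key property, assumed and demonstrated here rather than quoted from the patent, is linearity: mixing the outputs of fixed basis filters with position-dependent weights is equivalent to filtering once with the weighted sum of the basis impulse responses.

```python
def fir_filter(taps, signal):
    """Direct-form FIR convolution of taps with signal."""
    out = []
    for n in range(len(signal) + len(taps) - 1):
        acc = 0.0
        for k, t in enumerate(taps):
            if 0 <= n - k < len(signal):
                acc += t * signal[n - k]
        out.append(acc)
    return out

def pc_imaging_filter(basis, weights, signal):
    """Principal-component architecture: run the signal through fixed
    basis filters PC_0..PC_N and mix their outputs with weights."""
    outs = [fir_filter(pc, signal) for pc in basis]
    return [sum(w * o[n] for w, o in zip(weights, outs))
            for n in range(len(outs[0]))]
```

In a real system the basis filters would be derived (e.g., by principal component analysis of an HRTF set) and only the small weight vector would change with position, which is the source of the computational saving.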
  • a three-dimensional spatial location or position signal 70 is applied to an equalized HRTF parameter table and interpolation function 71, resulting in a set of interpolated equalized HRTF parameters 72 responsive to the three-dimensional position identified by signal 70.
  • An input audio signal 73 is applied to an equalizing filter 74 and an imaging filter 75 whose transfer function is determined by the applied interpolated equalized HRTF parameters.
  • the equalizing filter 74 may be located after the imaging filter 75.
  • the filter 75 provides a spatialized audio output suitable for application to one channel of a headphone 77.
  • the sets of equalized head-related transfer function parameters in the table 71 are prederived by splitting a group of known head-related transfer functions into a fixed head-related transfer function common to all head-related transfer functions in the group and a variable, position-dependent head-related transfer function associated with each of the known head-related transfer functions, the combination of the fixed and each variable head-related transfer function being substantially equal to the respective original known head-related transfer function.
  • the equalizing filter 74 thus represents the fixed head-related transfer function common to all head-related transfer functions in the table. In this manner the HRTFs and imaging filter are reduced in complexity.
  • The equalization filter characteristics are chosen to minimize the complexity of the imaging filters. This minimizes the size of the equalized HRTF table, reduces the computational resources required for HRTF interpolation and imaging filtering, and reduces the memory resources required for tabulated HRTFs. In the case of FIR imaging filters, it is desired to minimize filter length.
  • the equalization filter may approximate the average HRTF, as this choice makes the position-dependent portion spectrally flat (and short in time) on average.
  • the equalization filter may represent the diffuse field sound component of the group of known transfer functions. When the equalization filter is formed as a weighted average of HRTFs, the weighting should give more importance to longer or more complex HRTFs.
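The split into a fixed equalization and position-dependent residuals can be sketched as follows. Using the per-bin average magnitude response as the fixed part follows the suggestion above; representing the split as a per-bin division of magnitude responses is an assumed convention of this sketch (a real implementation would also handle phase and near-zero bins).

```python
def split_equalization(hrtf_mags):
    """Split a group of HRTF magnitude responses into a fixed equalization
    (the per-bin average magnitude) and position-dependent residuals whose
    product with the fixed part reproduces each original response.
    Assumes all magnitudes are strictly positive."""
    n = len(hrtf_mags[0])
    fixed = [sum(h[k] for h in hrtf_mags) / len(hrtf_mags) for k in range(n)]
    residuals = [[h[k] / fixed[k] for k in range(n)] for h in hrtf_mags]
    return fixed, residuals
```

The residuals are spectrally flat on average, so the position-dependent imaging filters can be shorter, while the fixed part is implemented once in the equalizing filter 74.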
  • Different fixed equalization may be provided for left and right channels (either before or after the position variable HRTFs) or a single equalization may be applied to the monaural source signal (either as a single filter before the monaural signal is split into left and right components or as two filters applied to each of the left and right components).
  • the optimal left-ear and right-ear equalization filters are often nearly identical.
  • the audio source signal may be filtered using a single equalization filter, with its output passed to both position-dependent HRTF filters.
  • Implementing the equalization filter and the imaging filter as different filter types may result in computational savings: for example, one may be implemented as an IIR filter and the other as an FIR filter. Because it is a fixed filter, typically with a fairly smooth response, the equalizing filter may best be implemented as a low-order IIR filter. It could also readily be implemented as an analog filter.
  • The arrangement of Figure 10 may be modified to employ as imaging filter 75 a principal component imaging filter 15' of the type described in connection with the embodiment of Figure 9.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)
  • Color Television Image Signal Generators (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
  • Holography (AREA)

Abstract

Compressed head-related transfer function (HRTF) parameters (130) are prederived, or derived in real time, for use in filtering an audio signal for a virtual audio display. Viewed in the frequency domain, the frequency components of known transfer functions are smoothed (125) over bandwidths which are a function of the width of the ear's critical bands. In a first variant, an HRTF is smoothed (125) by convolving the HRTF (120) with a frequency-dependent weighting function in the frequency domain. In a second variant, the frequency axis of the HRTF is warped, or mapped, into a non-linear frequency domain.

Claims (38)

  1. Procédé d'affichage audio virtuel tridimensionnel consistant à :
    générer un ensemble de paramètres de fonction de transfert asservie aux mouvements de la tête en réponse à un signal de situation ou de direction dans l'espace, dans lequel ledit ensemble de paramètres de fonction de transfert asservie aux mouvements de la tête sont sélectionnés, ou interpolés, parmi les paramètres de fonction de transfert asservie aux mouvements de la tête dérivés en lissant (54, 125) les composants de fréquence (50) d'une fonction de transfert asservie aux mouvements de la tête sur une largeur de bande qui est une fonction non constante de la fréquence, et en notant les paramètres de fonction de transfert asservie aux mouvements de la tête d'une fonction de transfert asservie aux mouvements de la tête comprimée résultante (57, 130) ; et
    filtrer un signal audio en réponse audit ensemble de paramètres de fonction de transfert asservie aux mouvements de la tête.
  2. Procédé d'affichage audio selon la revendication 1, dans lequel la largeur de bande est fonction de la largeur de la bande critique de l'ouïe.
  3. The audio display method of claim 2, wherein the smoothing (54) comprises, for each frequency component in at least part of the audio band of the display, applying an averaging function to the frequency components within the bandwidth containing that frequency component.
  4. The audio display method of claim 3, wherein the averaging function is a function of the amplitude of the frequency components.
  5. The audio display method of claim 3, wherein the averaging function is a function of the power of the frequency components.
  6. The audio display method of claim 4 or claim 5, wherein said averaging function determines the median.
  7. The audio display method of claim 4 or claim 5, wherein said averaging function determines the weighted arithmetic mean.
  8. The audio display method of claim 4 or claim 5, wherein said averaging function determines the weighted geometric mean.
  9. The audio display method of claim 4 or claim 5, wherein said averaging function determines a trimmed mean.
  10. The audio display method of claim 2, wherein the smoothing comprises convolving the head-related transfer function with a frequency-related weighting function, said weighting function having a rectangular shape.
  11. The audio display method of claim 1, wherein the bandwidth is proportional to the width of the critical band of hearing.
  12. The audio display method of claim 11, wherein said head-related transfer function parameters are extended at low and high frequencies and wherein said bandwidth is wider than a bandwidth proportional to the width of the critical band of hearing in said low- and high-frequency regions.
  13. The audio display method of claim 1, wherein the smoothing comprises convolving the head-related transfer function with a frequency-related weighting function whose width is a function of the width of the critical band of hearing.
  14. The audio display method of claim 13, wherein the weighting function has a bandwidth that is one or more multiples of the width of the critical band of hearing.
  15. The audio display method of claim 14, wherein said head-related transfer function parameters are extended at low and high frequencies and wherein said bandwidth is wider than a bandwidth proportional to the width of the critical band of hearing in said low- and high-frequency regions.
  16. The audio display method of claim 13, wherein said weighting function has a shape of higher-order continuity than a rectangular window.
  17. The audio display method of claim 1, wherein smoothing the frequency components comprises smoothing said frequency components in the frequency domain.
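Claims 11 through 16 size the smoothing window from the critical band of hearing. As one illustration, Zwicker's published approximation CB(f) ≈ 25 + 75·(1 + 1.4·(f/1000)²)^0.69 Hz can set the window width, and a Hann shape provides the higher-order continuity of claim 16; the helper names and the choice of a Hann window are assumptions, not the patent's prescription.

```python
import numpy as np

def critical_bandwidth_hz(f_hz):
    """Zwicker's approximation to the critical bandwidth of hearing, in Hz."""
    return 25.0 + 75.0 * (1.0 + 1.4 * (f_hz / 1000.0) ** 2) ** 0.69

def weighting_window(f_hz, bin_spacing_hz, multiple=1.0):
    """Hann-shaped weighting function spanning `multiple` critical bands
    at centre frequency f_hz, expressed in FFT bins (cf. claims 13-16)."""
    width_bins = max(1, int(round(multiple * critical_bandwidth_hz(f_hz)
                                  / bin_spacing_hz)))
    n = 2 * width_bins + 1                # odd length, centred on f_hz
    w = np.hanning(n + 2)[1:-1]           # drop the zero endpoints
    return w / w.sum()                    # normalise to unit gain
```

At 1 kHz the approximation gives a critical bandwidth of roughly 160 Hz, so with 43 Hz bin spacing (44.1 kHz / 1024 bins) the window spans about nine bins.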
  18. The audio display method of claim 17, wherein said smoothing comprises convolving said known transfer function H(f) with the frequency response of a weighting function W_f(n) in the frequency domain according to the relation

      S(f) = [1 / (2·b_f + 1)] · Σ_{n = −b_f}^{+b_f} W_f(n) · H(f − n)

    where at least the smoothing bandwidth b_f and, optionally, the shape of the weighting function W_f are functions of frequency.
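Read as pseudocode, the claim-18 relation is a moving weighted average whose half-width b_f changes with the frequency bin. A minimal sketch with rectangular weighting W_f(n) = 1 and clamping at the band edges (both simplifying assumptions):

```python
import numpy as np

def smooth_hrtf(H, half_width):
    """Smooth transfer-function magnitudes per the claim-18 relation
    S(f) = 1/(2*b_f + 1) * sum_{n=-b_f}^{b_f} W_f(n) * H(f - n),
    here with W_f(n) = 1 and the window clamped at the spectrum edges."""
    H = np.asarray(H, dtype=float)
    S = np.empty_like(H)
    for f in range(len(H)):
        b = half_width(f)                       # smoothing half-width b_f
        lo, hi = max(0, f - b), min(len(H), f + b + 1)
        S[f] = H[lo:hi].mean()                  # 1/(2b+1) * sum, clamped
    return S
```

A non-constant bandwidth is obtained simply by passing a frequency-dependent rule, e.g. `smooth_hrtf(mag, lambda f: 1 + f // 64)`.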
  19. The audio display method of claim 1, wherein smoothing the frequency components comprises applying a frequency warping function (121) to said known head-related transfer function, transforming the frequency-warped transfer function into the time domain, and time-domain windowing the impulse response of the frequency-warped transfer function.
  20. The audio display method of claim 1, wherein smoothing the frequency components comprises applying a frequency warping function (121) to said known head-related transfer function and convolving, in the frequency domain, the frequency-warped transfer function with the frequency response of a constant weighting function.
  21. The audio display method of claim 19 or claim 20, wherein said frequency warping function maps the transfer function onto the Bark scale.
  22. The audio display method of claim 19 or claim 20, further comprising applying a nonlinear scaling (51) to said known head-related transfer function before said multiplication or said convolution and applying an inverse scaling (56) to the windowed or convolved transfer function.
  23. The audio display method of claim 1, wherein said filtering is principal-component filtering (15').
  24. The audio display method of claim 1, wherein said head-related transfer function parameters are equalized transfer function parameters and said filtering comprises fixed equalization filtering and filtering in response to said equalized transfer function parameters.
  25. The audio display method of claim 1, wherein said set of head-related transfer functions is derived by smoothing the frequency components of known head-related transfer functions over different bandwidths depending on the spatial location or directions associated with the transfer function.
  26. The audio display method of claim 1, wherein said set of head-related transfer functions is derived by smoothing the frequency components of known head-related transfer functions over different bandwidths depending on the complexity of the transfer function.
  27. The audio display method of claim 1, wherein said set of head-related transfer functions is derived by smoothing the frequency components of known head-related transfer functions over different bandwidths depending on the spatial location or direction associated with the transfer function and on the complexity of the transfer function.
  28. The audio display method of claim 26 or 27, wherein the bandwidth increases with increasing complexity of the transfer function.
  29. The audio display method of claim 1 or claim 28, wherein the bandwidth is chosen such that the complexity of the most complex resulting compressed head-related transfer function does not exceed a predetermined complexity.
  30. The audio display method of claim 1, wherein said set of head-related transfer functions is derived by smoothing the frequency components of known head-related transfer functions over different bandwidths depending on the relative psychoacoustic importance of the transfer function.
  31. The audio display method of claim 1, wherein said set of head-related transfer functions is derived by smoothing the frequency components of known head-related transfer functions over different bandwidths depending on the spatial location or direction associated with the transfer function and on the relative psychoacoustic importance of the transfer function.
  32. The method of claim 1, wherein a set of head-related transfer function parameters is generated in response to the spatial location or direction signal (70); wherein fixed equalization filtering parameters and said set of equalized head-related transfer function parameters are selected or interpolated from parameters derived by separating a group of head-related transfer functions into a fixed head-related transfer function common to all the head-related transfer functions in the group and a variable head-related transfer function associated with each of the known head-related transfer functions, the combination of the fixed head-related transfer function and each variable transfer function being substantially equal to the respective original known head-related transfer function; wherein the frequency components of each of the variable head-related transfer functions are smoothed over a bandwidth that is a non-constant function of frequency; wherein the parameters of said fixed head-related transfer function are noted to characterize said fixed equalization filtering, and the parameters of each resulting variable head-related transfer function are noted for use as equalized transfer function parameters; and wherein the audio signal is filtered with fixed equalization filtering (74) and in response (75) to said equalized head-related transfer function parameters.
  33. The audio display method of claim 32, wherein deriving said fixed equalization filtering parameters and said set of equalized transfer function parameters further comprises:
    smoothing the frequency components of the fixed transfer function over a bandwidth that is a non-constant function of frequency.
  34. The audio display method of claim 32, wherein said group of known head-related transfer functions is separated into a fixed transfer function and a plurality of variable transfer functions by selecting a fixed transfer function that results in the least complex variable transfer functions.
  35. The audio display method of claim 32, wherein the group of head-related transfer functions is separated into a fixed head-related transfer function and a plurality of variable head-related transfer functions by selecting a fixed head-related transfer function representing the diffuse-field sound component of the group of known head-related transfer functions.
  36. The audio display method of claim 32, wherein said group of known head-related transfer functions are head-related transfer functions representing a particular direction or range of directions in space.
  37. The audio display method of claim 32, wherein the sets of head-related transfer function parameters generated in response to a spatial location or direction signal are generated by principal-component filtering.
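For the Bark mapping of claim 21, the standard Zwicker/Fastl conversion from Hz to critical-band rate is z = 13·arctan(0.00076·f) + 3.5·arctan((f/7500)²) Bark. One illustrative way to warp a transfer function onto a uniform Bark grid, where a constant-width smoother then behaves like a critical-band smoother in Hz, is simple interpolation; this is a sketch, not necessarily the patent's exact procedure, and `warp_to_bark` is a hypothetical helper.

```python
import numpy as np

def hz_to_bark(f_hz):
    """Zwicker/Fastl critical-band rate (Bark) for a frequency in Hz."""
    f = np.asarray(f_hz, dtype=float)
    return 13.0 * np.arctan(0.00076 * f) + 3.5 * np.arctan((f / 7500.0) ** 2)

def warp_to_bark(H, freqs_hz, n_points=64):
    """Resample a magnitude response |H(f)| onto a uniform Bark grid,
    so that constant-width smoothing there approximates critical-band
    smoothing on the linear-frequency axis (cf. claims 20-21)."""
    z = hz_to_bark(freqs_hz)
    z_uniform = np.linspace(z[0], z[-1], n_points)
    return z_uniform, np.interp(z_uniform, z, H)
```

After smoothing on the uniform Bark grid, the result is interpolated back to the linear-frequency axis (the inverse warp).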
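The fixed/variable separation of claims 32 and 35 can be sketched by taking an RMS (diffuse-field-style) average of the magnitude responses across the group as the fixed part and dividing it out of each measured response; this particular choice of average and the function name are assumptions, not the patent's prescribed method.

```python
import numpy as np

def split_fixed_variable(hrtf_mags):
    """Separate a group of HRTF magnitude responses into one fixed
    transfer function common to the group (here an RMS average, in the
    spirit of the diffuse-field component of claim 35) and per-direction
    variable parts whose product with the fixed part reproduces each
    original response (cf. claim 32)."""
    H = np.asarray(hrtf_mags, dtype=float)   # shape (n_directions, n_bins)
    fixed = np.sqrt((H ** 2).mean(axis=0))   # power average across directions
    variable = H / fixed                     # fixed * variable == original
    return fixed, variable
```

Only the variable parts then need direction-dependent smoothing and storage; the fixed part becomes a single equalization filter shared by all directions.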
  38. A three-dimensional virtual audio display apparatus comprising:
    means for smoothing the frequency components (54, 125) of a known head-related transfer function over a bandwidth that is a non-constant function of frequency;
    means for noting the transfer function parameters of a resulting compressed transfer function;
    means for generating a set of head-related transfer function parameters (71) in response to a spatial location or direction signal, said set of head-related transfer function parameters being selected, or interpolated, from said transfer function parameters of the resulting compressed transfer function; and
    means for filtering (74, 75) an audio signal in response to said set of head-related transfer function parameters.
EP95918832A 1994-05-11 1995-05-03 Three-dimensional virtual audio display employing reduced-complexity imaging filters Expired - Lifetime EP0760197B1 (fr)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US24186794A 1994-05-11 1994-05-11
US241867 1994-05-11
US303705 1994-09-09
US08/303,705 US5659619A (en) 1994-05-11 1994-09-09 Three-dimensional virtual audio display employing reduced complexity imaging filters
PCT/US1995/004839 WO1995031881A1 (fr) 1994-05-11 1995-05-03 Three-dimensional virtual audio display employing reduced-complexity imaging filters

Publications (3)

Publication Number Publication Date
EP0760197A1 EP0760197A1 (fr) 1997-03-05
EP0760197A4 EP0760197A4 (fr) 2004-08-11
EP0760197B1 true EP0760197B1 (fr) 2009-01-28

Family

ID=26934650

Family Applications (1)

Application Number Title Priority Date Filing Date
EP95918832A Expired - Lifetime EP0760197B1 (fr) 1994-05-11 1995-05-03 Three-dimensional virtual audio display employing reduced-complexity imaging filters

Country Status (5)

Country Link
EP (1) EP0760197B1 (fr)
JP (1) JPH11503882A (fr)
AU (1) AU703379B2 (fr)
CA (1) CA2189126C (fr)
WO (1) WO1995031881A1 (fr)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997025834A2 (fr) * 1996-01-04 1997-07-17 Virtual Listening Systems, Inc. Method and apparatus for processing a multi-channel signal intended for headphones
US6009179A (en) * 1997-01-24 1999-12-28 Sony Corporation Method and apparatus for electronically embedding directional cues in two channels of sound
JPH1188994A (ja) * 1997-09-04 1999-03-30 Matsushita Electric Ind Co Ltd Sound image localization apparatus and sound image control method
US6307941B1 (en) 1997-07-15 2001-10-23 Desper Products, Inc. System and method for localization of virtual sound
US6067361A (en) * 1997-07-16 2000-05-23 Sony Corporation Method and apparatus for two channels of sound having directional cues
US6125115A (en) * 1998-02-12 2000-09-26 Qsound Labs, Inc. Teleconferencing method and apparatus with three-dimensional sound positioning
EP1072089B1 (fr) 1998-03-25 2011-03-09 Dolby Laboratories Licensing Corp. Method and apparatus for processing audio signals
AUPP271598A0 (en) * 1998-03-31 1998-04-23 Lake Dsp Pty Limited Headtracked processing for headtracked playback of audio signals
AU6400699A (en) * 1998-09-25 2000-04-17 Creative Technology Ltd Method and apparatus for three-dimensional audio display
FI108504B (fi) * 1999-04-30 2002-01-31 Nokia Corp Management of speech groups in a telecommunications system
GB2351213B (en) * 1999-05-29 2003-08-27 Central Research Lab Ltd A method of modifying one or more original head related transfer functions
JP4867121B2 (ja) * 2001-09-28 2012-02-01 Sony Corporation Audio signal processing method and audio reproduction system
EP1905002B1 (fr) 2005-05-26 2013-05-22 LG Electronics Inc. Method and apparatus for decoding an audio signal
JP4988717B2 (ja) 2005-05-26 2012-08-01 LG Electronics Inc. Method and apparatus for decoding an audio signal
CN101263741B (zh) 2005-09-13 2013-10-30 Koninklijke Philips Electronics N.V. Method and device for generating and processing parameters representing HRTFs
US8208641B2 (en) * 2006-01-19 2012-06-26 Lg Electronics Inc. Method and apparatus for processing a media signal
KR100863479B1 (ko) 2006-02-07 2008-10-16 LG Electronics Inc. Encoding/decoding apparatus and method
JP2007221445A (ja) * 2006-02-16 2007-08-30 Sharp Corp Surround system
FR2899424A1 (fr) 2006-03-28 2007-10-05 France Telecom Binaural synthesis method taking into account a room effect
JP5227946B2 (ja) * 2006-03-28 2013-07-03 Telefonaktiebolaget LM Ericsson (publ) Filter adaptive frequency resolution
ES2905764T3 (es) 2006-07-04 2022-04-12 Dolby Int Ab Filter system comprising a filter converter and a filter compressor, and method for operating the filter system
JP5960851B2 (ja) 2012-03-23 2016-08-02 Dolby Laboratories Licensing Corporation Method and system for generating head-related transfer functions by linear mixing of head-related transfer functions
US9263055B2 (en) 2013-04-10 2016-02-16 Google Inc. Systems and methods for three-dimensional audio CAPTCHA

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5105462A (en) * 1989-08-28 1992-04-14 Qsound Ltd. Sound imaging method and apparatus
US5440639A (en) * 1992-10-14 1995-08-08 Yamaha Corporation Sound localization control apparatus
US5404406A (en) * 1992-11-30 1995-04-04 Victor Company Of Japan, Ltd. Method for controlling localization of sound image
US5438623A (en) * 1993-10-04 1995-08-01 The United States Of America As Represented By The Administrator Of National Aeronautics And Space Administration Multi-channel spatialization system for audio signals

Also Published As

Publication number Publication date
CA2189126A1 (fr) 1995-11-23
AU703379B2 (en) 1999-03-25
JPH11503882A (ja) 1999-03-30
WO1995031881A1 (fr) 1995-11-23
EP0760197A1 (fr) 1997-03-05
EP0760197A4 (fr) 2004-08-11
CA2189126C (fr) 2001-05-01
AU2460395A (en) 1995-12-05

Similar Documents

Publication Publication Date Title
US5659619A (en) Three-dimensional virtual audio display employing reduced complexity imaging filters
US6072877A (en) Three-dimensional virtual audio display employing reduced complexity imaging filters
EP0760197B1 (fr) Three-dimensional virtual audio display employing reduced-complexity imaging filters
US8515104B2 (en) Binaural filters for monophonic compatibility and loudspeaker compatibility
EP2002692B1 (fr) Rendu de données audios de canal central
EP2258120B1 (fr) Procédés et dispositifs pour fournir des signaux ambiophoniques
US6449368B1 (en) Multidirectional audio decoding
US6668061B1 (en) Crosstalk canceler
US11611828B2 (en) Systems and methods for improving audio virtualization
US20030002693A1 (en) Audio signal processing
EP2629552A1 (fr) Système de traitement d'ambiance audio
US20110026718A1 (en) Virtualizer with cross-talk cancellation and reverb
EP2134108B1 (fr) Dispositif de traitement sonore, appareil de haut-parleur et procédé de traitement sonore
US6178245B1 (en) Audio signal generator to emulate three-dimensional audio signals
US9848274B2 (en) Sound spatialization with room effect
DE112006002548T5 (de) Apparatus and method for reproducing virtual two-channel sound
KR100684029B1 (ko) Method for generating harmonics using Fourier transform and apparatus therefor, method for generating harmonics by down-sampling and apparatus therefor, and sound correction method and apparatus therefor
AU732016B2 (en) Three-dimensional virtual audio display employing reduced complexity imaging filters
US11202152B2 (en) Acoustic beamforming
Tamulionis et al. Listener movement prediction based realistic real-time binaural rendering
JPH0775439B2 (ja) Three-dimensional sound field reproduction apparatus

Legal Events

Code   Description
PUAI   Public reference made under article 153(3) EPC to a published international application that has entered the European phase (original code: 0009012)
17P    Request for examination filed (effective date: 19961203)
AK     Designated contracting states (kind code of ref document: A1; designated states: AT BE CH DE DK ES FR GB GR IE IT LI LU MC NL PT SE)
A4     Supplementary search report drawn up and despatched (effective date: 20040624)
GRAP   Despatch of communication of intention to grant a patent (original code: EPIDOSNIGR1)
RIC1   Information provided on IPC code assigned before grant (H04S 1/00 (2006.01) ALI 20071017 BHEP; H04S 3/00 (2006.01) AFI 20071017 BHEP)
GRAS   Grant fee paid (original code: EPIDOSNIGR3)
GRAA   (Expected) grant (original code: 0009210)
AK     Designated contracting states (kind code of ref document: B1; designated states: AT BE CH DE DK ES FR GB GR IE IT LI LU MC NL PT SE)
REG    Reference to a national code: GB (FG4D), CH (EP), IE (FG4D)
REF    Corresponds to: ref document number 69535912, country DE, date 20090319, kind code P
NLV1   NL: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act
PG25   Lapsed in a contracting state, failure to submit a translation of the description or to pay the fee within the prescribed time limit: NL (20090128), ES (20090509), SE (20090428), PT (20090629), AT (20090128), BE (20090128), DK (20090128), GR (20090429), IT (20090128)
PG25   Lapsed in a contracting state, non-payment of due fees: MC (20090531), LI (20090531), CH (20090531), IE (20090503), FR (20090602), DE (20091201), LU (20090503)
PLBE   No opposition filed within time limit (original code: 0009261)
STAA   Status: no opposition filed within time limit
26N    No opposition filed (effective date: 20091029)
REG    Reference to a national code: CH (PL); FR (ST, effective date 20100129)
PGFP   Annual fee paid to national office: GB, payment date 20140527, year of fee payment 20
REG    Reference to a national code: GB (PE20, expiry date 20150502)
PG25   Lapsed in a contracting state, expiration of protection: GB (20150502)