EP1873754A1 - Audiokodierer, Audiodekodierer und Audioprozessor mit einer dynamisch variablen Warp-Charakteristik - Google Patents

Info

Publication number
EP1873754A1
Authority
EP
European Patent Office
Prior art keywords
filter
audio
coding
warping
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP06013604A
Other languages
English (en)
French (fr)
Other versions
EP1873754B1 (de)
Inventor
Stefan Wabnik
Gerald Schuller
Jürgen HERRE
Bernhard Grill
Markus Multrus
Stefan Bayer
Ulrich Krämer
Jens Hirschfeld
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Original Assignee
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to EP06013604A priority Critical patent/EP1873754B1/de
Application filed by Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV filed Critical Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Priority to AT06013604T priority patent/ATE408217T1/de
Priority to DE602006002739T priority patent/DE602006002739D1/de
Priority to EP08014723A priority patent/EP1990799A1/de
Priority to ES07725316.9T priority patent/ES2559307T3/es
Priority to CN2007800302813A priority patent/CN101501759B/zh
Priority to JP2009516921A priority patent/JP5205373B2/ja
Priority to MYPI20085310A priority patent/MY142675A/en
Priority to US12/305,936 priority patent/US8682652B2/en
Priority to PCT/EP2007/004401 priority patent/WO2008000316A1/en
Priority to AU2007264175A priority patent/AU2007264175B2/en
Priority to EP07725316.9A priority patent/EP2038879B1/de
Priority to KR1020087032110A priority patent/KR101145578B1/ko
Priority to RU2009103010/09A priority patent/RU2418322C2/ru
Priority to BRPI0712625-5A priority patent/BRPI0712625B1/pt
Priority to CA2656423A priority patent/CA2656423C/en
Priority to PL07725316T priority patent/PL2038879T3/pl
Priority to MX2008016163A priority patent/MX2008016163A/es
Priority to TW096122715A priority patent/TWI348683B/zh
Priority to ARP070102797A priority patent/AR061696A1/es
Publication of EP1873754A1 publication Critical patent/EP1873754A1/de
Priority to HK08103465A priority patent/HK1109817A1/xx
Application granted granted Critical
Publication of EP1873754B1 publication Critical patent/EP1873754B1/de
Priority to IL195983A priority patent/IL195983A/en
Priority to NO20090400A priority patent/NO340436B1/no
Priority to HK09108366.0A priority patent/HK1128811A1/zh
Priority to AU2011200461A priority patent/AU2011200461B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 - Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04 - Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/16 - Vocoder architecture
    • G10L19/18 - Vocoders using multiple modes
    • G10L19/20 - Vocoders using multiple modes using sound class specific coding, hybrid encoders or object based coding
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 - Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04 - Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/26 - Pre-filtering or post-filtering
    • G10L19/265 - Pre-filtering, e.g. high frequency emphasis prior to encoding

Definitions

  • the present invention relates to audio processing using warped filters and, particularly, to multi-purpose audio coding.
  • general audio coders like MPEG-1 Layer 3 or MPEG-2/4 Advanced Audio Coding (AAC) usually do not perform as well for speech signals at very low data rates as dedicated LPC-based speech coders, due to the lack of exploitation of a speech source model.
  • LPC-based speech coders usually do not achieve convincing results when applied to general music signals because of their inability to flexibly shape the spectral envelope of the coding distortion according to a masking threshold curve. It is the object of the present invention to provide a concept that combines the advantages of both LPC-based coding and perceptual audio coding into a single framework and thus describes unified audio coding that is efficient for both general audio and speech signals.
  • perceptual audio coders use a filterbank-based approach to efficiently code audio signals and shape the quantization distortion according to an estimate of the masking curve.
  • Figure 9 shows the basic block diagram of a monophonic perceptual coding system.
  • An analysis filterbank is used to map the time domain samples into subsampled spectral components.
  • the system is also referred to as a subband coder (small number of subbands, e.g. 32) or a filterbank-based coder (large number of frequency lines, e.g. 512).
  • a perceptual ("psycho-acoustic") model is used to estimate the actual time dependent masking threshold.
  • the spectral (“subband” or “frequency domain”) components are quantized and coded in such a way that the quantization noise is hidden under the actual transmitted signal and is not perceptible after decoding. This is achieved by varying the granularity of quantization of the spectral values over time and frequency.
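  • purely as an illustration of this principle (not taken from the patent text), the following Python sketch varies the quantization step size per scale factor band according to a hypothetical masking threshold, so that bands with a high threshold are quantized more coarsely; the band layout, threshold values and step-size rule are assumptions.

      import numpy as np

      def quantize_spectrum(spectrum, band_edges, masking_threshold_db):
          """Quantize spectral values with a per-band step size chosen from a
          (hypothetical) masking threshold: coarser quantization where the
          threshold is high, finer quantization where it is low."""
          quantized = np.empty_like(spectrum, dtype=np.int32)
          steps = []
          for b, (lo, hi) in enumerate(zip(band_edges[:-1], band_edges[1:])):
              # Step size proportional to the allowed noise amplitude in this band.
              step = 10.0 ** (masking_threshold_db[b] / 20.0)
              steps.append(step)
              quantized[lo:hi] = np.round(spectrum[lo:hi] / step).astype(np.int32)
          return quantized, steps

      # Example with an arbitrary 3-band layout over 12 spectral lines.
      spec = np.random.randn(12)
      q, steps = quantize_spectrum(spec, band_edges=[0, 4, 8, 12],
                                   masking_threshold_db=np.array([-20.0, -30.0, -40.0]))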
  • an alternative concept is a perceptual audio coder which separates the aspects of irrelevance reduction (i.e. noise shaping according to perceptual criteria) and redundancy reduction (i.e. obtaining a mathematically more compact representation of information) by using a so-called pre-filter rather than a variable quantization of the spectral coefficients over frequency.
  • the principle is illustrated in the following figure.
  • the input signal is analyzed by a perceptual model to compute an estimate of the masking threshold curve over frequency.
  • the masking threshold is converted into a set of pre-filter coefficients such that the magnitude of its frequency response is inversely proportional to the masking threshold.
  • the pre-filter operation applies this set of coefficients to the input signal which produces an output signal wherein all frequency components are represented according to their perceptual importance ("perceptual whitening").
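  • a minimal sketch of how such a set of pre-filter coefficients could be derived, assuming the masking threshold is given on a uniform frequency grid from 0 to Nyquist: the threshold is treated as the target power spectrum of an all-pole model 1/A(z), so that the FIR filter A(z) obtained from a Levinson-Durbin recursion has a magnitude roughly inversely proportional to the threshold. This is a generic LPC-style construction for illustration, not the procedure prescribed by the patent.

      import numpy as np

      def prefilter_from_threshold(threshold_db, order=16):
          """Hypothetical sketch: FIR pre-filter coefficients whose magnitude is
          roughly inversely proportional to a masking threshold sampled on a
          uniform frequency grid from 0 to Nyquist (inclusive)."""
          # Threshold as a power spectrum; the all-pole model 1/A(z) follows it,
          # hence A(z) used as an FIR realizes the inverse ("whitening") shape.
          power = 10.0 ** (np.asarray(threshold_db) / 10.0)
          full = np.concatenate([power, power[-2:0:-1]])    # even symmetry
          r = np.fft.ifft(full).real[:order + 1]            # autocorrelation lags
          # Levinson-Durbin recursion.
          a = np.zeros(order + 1)
          a[0] = 1.0
          err = r[0]
          for i in range(1, order + 1):
              acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
              k = -acc / err
              new_a = a.copy()
              new_a[1:i] = a[1:i] + k * a[i - 1:0:-1]
              new_a[i] = k
              a = new_a
              err *= (1.0 - k * k)
          return a  # pre-filter coefficients; a post-filter would use 1/A(z)

      coeffs = prefilter_from_threshold(np.linspace(-10.0, -40.0, 65), order=12)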
  • This signal is subsequently coded by any kind of audio coder which produces a "white” quantization distortion, i.e. does not apply any perceptual noise shaping.
  • the transmission / storage of the audio signal includes both the coder's bit-stream and a coded version of the pre-filtering coefficients.
  • the coder bit-stream is decoded into an intermediate audio signal which is then subjected to a post-filtering operation according to the transmitted filter coefficients.
  • since the post-filter performs the inverse filtering process relative to the pre-filter, it applies a spectral weighting to its input signal according to the masking curve. In this way, the spectrally flat ("white") coding noise appears perceptually shaped at the decoder output, as intended.
  • in order to enable appropriate spectral noise shaping by using pre-/post-filtering techniques, it is important to adapt the frequency resolution of the pre-/post-filter to that of the human auditory system. Ideally, the frequency resolution would follow well-known perceptual frequency scales, such as the BARK or ERB frequency scale [Zwi]. This is especially desirable in order to minimize the order of the pre-/post-filter model and thus the associated computational complexity and side information transmission rate.
  • the adaptation of the pre-/post-filter frequency resolution can be achieved by the well-known frequency warping concept [KHL97].
  • the unit delays within a filter structure are replaced by (first or higher order) allpass filters which leads to a non-uniform deformation ("warping") of the frequency response of the filter.
  • other known applications of warped filtering include, e.g., the modeling of room impulse responses [HKS00] and the parametric modeling of a noise component in the audio signal (under the equivalent name Laguerre / Kautz filtering) [SOB03].
  • LPC: Linear Predictive Coding
  • MPE: Multi-Pulse Excitation
  • RPE: Regular Pulse Excitation
  • CELP: Code-Excited Linear Prediction
  • Linear Predictive Coding attempts to produce an estimate of the current sample value of a sequence based on the observation of a certain number of past values as a linear combination of the past observations.
  • the encoder LPC filter "whitens" the input signal in its spectral envelope, i.e. its frequency response is a model of the inverse of the signal's spectral envelope.
  • the frequency response of the decoder LPC filter is a model of the signal's spectral envelope.
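  • as a generic textbook illustration of this prediction/whitening relation (not the patent's specific implementation), the sketch below applies an LPC analysis filter A(z) to obtain the prediction residual and the corresponding all-pole synthesis filter 1/A(z) to restore the spectral envelope; the example coefficients are arbitrary.

      import numpy as np
      from scipy.signal import lfilter

      def lpc_analysis_residual(x, a):
          """Apply the LPC analysis (whitening) filter A(z) = 1 + a1*z^-1 + ...:
          each output sample is the error of predicting the current sample from
          a linear combination of past samples."""
          return lfilter(a, [1.0], x)

      def lpc_synthesis(residual, a):
          """Inverse operation: the all-pole filter 1/A(z) re-imposes the
          spectral envelope modelled by the LPC coefficients."""
          return lfilter([1.0], a, residual)

      # Round trip with arbitrary (stable) example coefficients.
      a = np.array([1.0, -1.2, 0.5])        # hypothetical 2nd-order LPC
      x = np.random.randn(1000)
      res = lpc_analysis_residual(x, a)
      y = lpc_synthesis(res, a)
      assert np.allclose(x, y)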
  • in auto-regressive (AR) linear predictive analysis, narrow band speech coders (i.e. speech coders with a sampling rate of 8kHz) typically use an LPC filter with an order between 8 and 12. Due to the nature of the LPC filter, a uniform frequency resolution is effective across the full frequency range, which does not correspond to a perceptual frequency scale.
  • [TML94] proposes a speech coder that models the speech spectral envelope by cepstral coefficients c(m) which are updated sample by sample according to the time-varying input signal.
  • the frequency scale of the model is adapted to approximate the perceptual MEL scale [Zwi] by using a first order all-pass filter instead of the usual unit delay.
  • a fixed value of 0.31 for the warping coefficient is used at the coder sampling rate of 8kHz.
  • the approach has been developed further to include a CELP coding core for representing the excitation signal in [KTK95], again using a fixed value of 0.31 for the warping coefficient at the coder sampling rate of 8kHz.
  • warped LPC and CELP coding are known, e.g. [HLM99] for which a warping factor of 0.723 is used at a sampling rate of 44.1kHz.
  • general audio coders are optimized to perfectly hide the quantization noise below the masking threshold, i.e., are optimally adapted to perform an irrelevance reduction. To this end, they have a functionality for accounting for the non-uniform frequency resolution of the human hearing mechanism.
  • however, due to the fact that they are general audio encoders, they cannot specifically make use of any a-priori knowledge on a specific kind of signal pattern, which is precisely what enables the very low bitrates known from, e.g., speech coders.
  • an audio encoder for encoding an audio signal, comprising a pre-filter for generating a pre-filtered audio signal, the pre- filter having a variable warping characteristic, the warping characteristic being controllable in response to a time-varying control signal, the control signal indicating a small or no warping characteristic or a comparatively high warping characteristic; a controller for providing the time-varying control signal, the time-varying control signal depending on the audio signal; and a controllable encoding processor for processing the pre-filtered audio signal to obtain an encoded audio signal, wherein the encoding processor is adapted to process the pre-filtered audio signal in accordance with a first coding algorithm adapted to a specific signal pattern, or in accordance with a second different encoding algorithm suitable for encoding a general audio signal.
  • the encoding processor is adapted to be controlled by the controller so that an audio signal portion being filtered using the comparatively high warping characteristic is processed using the second encoding algorithm to obtain the encoded signal and an audio signal being filtered using the small or no warping characteristic is processed using the first encoding algorithm.
  • an audio decoder for decoding an encoded audio signal, the encoded audio signal having a first portion encoded in accordance with a first coding algorithm adapted to a specific signal pattern, and having a second portion encoded in accordance with a different second coding algorithm suitable for encoding a general audio signal, comprising: a detector for detecting a coding algorithm underlying the first portion or the second portion; a decoding processor for decoding, in response to the detector, the first portion using the first coding algorithm to obtain a first decoded time portion and for decoding the second portion using the second coding algorithm to obtain a second decoded time portion; and a post-filter having a variable warping characteristic being controllable between a first state having a small or no warping characteristic and a second state having a comparatively high warping characteristic.
  • the post-filter is controlled such that the first decoded time portion is filtered using the small or no warping characteristic and the second decoded time portion is filtered using a comparatively high warping characteristic.
  • an audio processor for processing an audio signal, comprising: a filter for generating a filtered audio signal, the filter having a variable warping characteristic, the warping characteristic being controllable in response to a time-varying control signal, the control signal indicating a small or no warping characteristic or a comparatively high warping characteristic; and a controller for providing the time-varying control signal, the time-varying control signal depending on the audio signal.
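  • a minimal control-flow sketch of such an encoder (the speech/pattern detector, both coding kernels and the warping values are placeholders, since the text leaves the concrete decision logic open): the controller inspects each portion, sets the warping factor of the pre-filter accordingly and routes the pre-filtered portion to the matching coding kernel.

      HIGH_WARP = 0.35   # assumed "full-scale" warping factor (see description)
      NO_WARP = 0.0

      def encode_portion(portion, is_speech, prefilter, speech_kernel, generic_kernel):
          """Route one time portion through the variable-warp pre-filter and the
          selected coding kernel; returns the coded payload plus side information."""
          if is_speech(portion):
              warp = NO_WARP                                      # small or no warping
              prefiltered, filt_coeffs = prefilter(portion, warp) # acts as LPC analysis
              payload = speech_kernel(prefiltered)                # first coding algorithm
              mode = 0
          else:
              warp = HIGH_WARP                                    # comparatively high warping
              prefiltered, filt_coeffs = prefilter(portion, warp) # perceptual whitening
              payload = generic_kernel(prefiltered)               # second coding algorithm
              mode = 1
          side_info = {"coding_mode": mode, "warp": warp, "coeffs": filt_coeffs}
          return payload, side_info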
  • the present invention is based on the finding that a pre-filter having a variable warping characteristic on the audio encoder side is the key feature for integrating two different coding algorithms into a single encoder framework. These two coding algorithms are different from each other.
  • the first coding algorithm is adapted to a specific signal pattern such as speech signals - but other specific patterns such as harmonic patterns, pitched patterns or transient patterns are an option as well - while the second coding algorithm is suitable for encoding a general audio signal.
  • the pre-filter on the encoder-side or the post-filter on the decoder-side make it possible to integrate the signal specific coding module and the general coding module within a single encoder/decoder framework.
  • the input for the general audio encoder module or the signal specific encoder module can be warped to a higher or lower degree or not at all. This depends on the specific signal and the implementation of the encoder modules. Thus, the interrelation of which warp filter characteristic belongs to which coding module can be signaled. In several cases the result might be that the stronger warping characteristic belongs to the general audio coder and the lighter or no warping characteristic belongs to the signal specific module. This assignment can - in some embodiments - be fixedly set or can be the result of dynamically signaling the encoder module for a certain signal portion.
  • since the coding algorithm adapted for specific signal patterns normally does not heavily rely on using the masking threshold for irrelevance reduction, this coding algorithm does not necessarily need any warping pre-processing, or only a "soft" warping pre-processing.
  • the first coding algorithm adapted for a specific signal pattern advantageously uses a-priori knowledge on the specific signal pattern but does not rely that much on the masking threshold and, therefore, does not need to approach the non-uniform frequency resolution of the human listening mechanism.
  • the non-uniform frequency resolution of the human listening mechanism is reflected by scale factor bands having different bandwidths along the frequency scale. This non-uniform frequency scale is also known as the BARK or ERB scale.
  • the second coding algorithm can only produce an acceptable output bitrate together with an acceptable audio quality when measures are taken which account for the non-uniform frequency resolution of the human listening mechanism, so that optimum benefit can be drawn from the masking threshold.
  • the inventive pre-filter only warps to a strong degree when there is a signal portion not having the specific signal pattern, while for a signal portion having the specific signal pattern, no warping at all or only a small warping characteristic is applied.
  • the pre-filter can perform different tasks using the same filter.
  • the pre-filter works as an LPC analysis filter so that the first encoding algorithm is only related to the encoding of the residual signal or the LPC excitation signal.
  • the pre-filter is controlled to have a strong warping characteristic and, preferably, to perform LPC filtering based on the psycho-acoustic masking threshold so that the pre-filtered output signal is filtered by the frequency-warped filter and is such that psychoacoustically more important spectral portions are amplified with respect to psychoacoustically less important spectral portions.
  • a straight-forward quantizer can be used, or, generally stated, quantization during encoding can take place without having to distribute the coding noise non-uniformly over the frequency range in the output of the warped filter.
  • the noise shaping of the quantization noise will automatically take place through the post-filtering action of the time-varying warped filter on the decoder-side, which is - with respect to the warping characteristic - identical to the encoder-side pre-filter. Since this post-filter is inverse to the pre-filter, it automatically produces the noise shaping needed to obtain a maximum irrelevance reduction while maintaining a high audio quality.
  • Preferred embodiments of the present invention provide a uniform method that allows coding of both general audio signals and speech signals with a coding performance that - at least - matches the performance of the best known coding schemes for both types of signals. It is based on the following considerations:
  • this dilemma is solved by a coding system that includes an encoder filter that can smoothly fade its characteristic between a fully warped operation, as is generally preferable for coding of music signals, and a non-warped operation, as is generally preferable for coding of speech signals.
  • the proposed inventive approach includes a linear filter with a time-varying warping factor. This filter is controlled by an extra input that receives the desired warping factor and modifies the filter operation accordingly.
  • the inverse decoder filtering mechanism is similarly equipped, i.e. it uses a linear decoder filter with a time-varying warping factor which can act as a perceptual post-filter as well as an LPC synthesis filter.
  • the corresponding decoder works accordingly: It receives the transmitted information, decodes the speech and generic audio parts according to the coding mode information, combines them into a single intermediate signal (e.g. by adding them), and filters this intermediate signal using the coding mode / warping factor and filter coefficients to form the final output signal.
  • the Fig. 1 audio encoder is operative for encoding an audio signal input at line 10.
  • the audio signal is input into a pre-filter 12 for generating a pre-filtered audio signal appearing at line 14.
  • the pre-filter has a variable warping characteristic, the warping characteristic being controllable in response to a time-varying control signal on line 16.
  • the control signal indicates a small or no warping characteristic or a comparatively high warping characteristic.
  • the time-varying warp control signal can be a signal having two different states such as "1" for a strong warp or a "0" for no warping.
  • the intended goal for applying warping is to obtain a frequency resolution of the pre-filter similar to the BARK scale. However, different states of the control signal / warping characteristic setting are also possible.
  • the inventive audio encoder includes a controller 18 for providing the time-varying control signal, wherein the time varying control signal depends on the audio signal as shown by line 20 in Fig. 1.
  • the inventive audio encoder includes a controllable encoding processor 22 for processing the pre-filtered audio signal to obtain an encoded audio signal output at line 24.
  • the encoding processor 22 is adapted to process the pre-filtered audio signal in accordance with a first coding algorithm adapted to a specific signal pattern, or in accordance with a second, different encoding algorithm suitable for encoding a general audio signal.
  • the encoding processor 22 is adapted to be controlled by the controller 18 preferably via a separate encoder control signal on line 26 so that an audio signal portion being filtered using the comparatively high warping factor is processed using the second encoding algorithm to obtain the encoded signal for this audio signal portion, so that an audio signal portion being filtered using no or only a small warping characteristic is processed using the first encoding algorithm.
  • in some situations when processing an audio signal, no or only a small warp is performed by the filter for a signal portion being filtered in accordance with the first coding algorithm, while, when a strong and preferably perceptually full-scale warp is applied by the pre-filter, the time portion is processed using the second coding algorithm for general audio signals, which is preferably based on hiding quantization noise below a psycho-acoustic masking threshold.
  • the invention also covers the case that for a further portion of the audio signal, which has the signal-specific pattern, a high warping characteristic is applied while for an even further portion not having the specific signal pattern, a low or no warping characteristic is used.
  • the encoder module control can also be fixedly set depending on the transmitted warping factor or the warping factor can be derived from a transmitted coder module indication.
  • both information items can be transmitted as side information, i.e., the coder module and the warping factor.
  • Fig. 2 illustrates an inventive decoder for decoding an encoded audio signal input at line 30.
  • the encoded audio signal has a first portion encoded in accordance with a first coding algorithm adapted to a specific signal pattern, and has a second portion encoded in accordance with a different second coding algorithm suitable for encoding a general audio signal.
  • the inventive decoder comprises a detector 32 for detecting a coding algorithm underlying the first or the second portion. This detection can take place by extracting side information from the encoded audio signal as illustrated by broken line 34, and/or can take place by examining the bit-stream coming into a decoding processor 36 as illustrated by broken line 38.
  • the decoding processor 36 is for decoding in response to the detector as illustrated by control line 40 so that for both the first and second portions the correct coding algorithm is selected.
  • the decoding processor is operative to use the first coding algorithm for decoding the first time portion and to use the second coding algorithm for decoding the second time portion so that the first and the second decoded time portions are output on line 42.
  • Line 42 carries the input into a post-filter 44 having a variable warping characteristic.
  • the post-filter 44 is controllable using a time-varying warp control signal on line 46 so that this post-filter has only small or no warping characteristic in a first state and has a high warping characteristic in a second state.
  • the post-filter 44 is controlled such that the first time portion decoded using the first coding algorithm is filtered using the small or no warping characteristic and the second time portion of the decoded audio signal is filtered using the comparatively strong warping characteristic so that an audio decoder output signal is obtained at line 48.
  • the first coding algorithm determines the encoder-related steps to be taken in the encoding processor 22 and the corresponding decoder-related steps to be implemented in decoding processor 36. Furthermore, the second coding algorithm determines the encoder-related second coding algorithm steps to be used in the encoding processor and corresponding second coding algorithm-related decoding steps to be used in decoding processor 36.
  • pre-filter 12 and the post-filter 44 are, in general, inverse to each other.
  • the warping characteristics of those filters are controlled such that the post-filter has the same warping characteristic as the pre-filter or at least a similar warping characteristic within a 10 percent tolerance range.
  • the post-filter also does not have to be a warped filter.
  • the pre-filter 12 as well as the post-filter 44 can implement any other pre-filter or post-filter operations required in connection with the first coding algorithm or the second coding algorithm as will be outlined later on.
  • Fig. 3a illustrates an example of an encoded audio signal as obtained on line 24 of Fig. 1 and as can be found on line 30 of Fig. 2.
  • the encoded audio signal includes a first time portion in encoded form, which has been generated by the first coding algorithm as outlined at 50 and corresponding side information 52 for the first portion.
  • the bit-stream includes a second time portion in encoded form as shown at 54 and side information 56 for the second time portion.
  • the order of the items in Fig. 3a may vary.
  • the side information does not necessarily have to be multiplexed between the main information 50 and 54. Those signals can even come from separate sources as dictated by external requirements or implementations.
  • Fig. 3b illustrates side information for the explicit signaling embodiment of the present invention for explicitly signaling the warping factor and encoder mode, which can be used in 52 and 56 of Fig. 3a. This is indicated below the Fig. 3b side information stream.
  • the side information may include a coding mode indication explicitly signaling the first or the second coding algorithm underlying the portion to which the side information belongs.
  • a warping factor can be signaled. Signaling of the warping factor is not necessary, when the whole system can only use two different warping characteristics, i.e., no warping characteristic as the first possibility and a perceptually full-scale warping characteristic as the second possibility. In this case, a warping factor can be fixed and does not necessarily have to be transmitted.
  • the warping factor can have more than these two extreme values so that an explicit signaling of the warping factor such as by absolute values or differentially coded values is used.
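  • to make the signaling options concrete, the following hypothetical (non-normative) side-information layout packs a one-bit coding-mode flag and an optionally transmitted, uniformly quantized warping factor; the field widths and the quantization rule are assumptions for illustration only.

      def pack_side_info(coding_mode, warp=None, warp_bits=5, warp_max=0.8):
          """Pack a 1-bit coding mode and, optionally, a uniformly quantized
          warping factor into an integer bit field (mode in the upper bits).
          Purely illustrative; the patent leaves the exact syntax open."""
          bits = coding_mode & 1
          if warp is not None:
              q = round((warp / warp_max) * ((1 << warp_bits) - 1))
              bits = (bits << warp_bits) | q
          return bits

      def unpack_side_info(bits, has_warp, warp_bits=5, warp_max=0.8):
          if has_warp:
              q = bits & ((1 << warp_bits) - 1)
              warp = q / ((1 << warp_bits) - 1) * warp_max
              coding_mode = (bits >> warp_bits) & 1
              return coding_mode, warp
          return bits & 1, None

      packed = pack_side_info(coding_mode=1, warp=0.35)
      mode, warp = unpack_side_info(packed, has_warp=True)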
  • the pre-filter is not only warped but also implements tasks dictated by the first coding algorithm and the second coding algorithm, which leads to a more efficient functionality of the first and the second coding algorithms.
  • the pre-filter also performs the functionality of the LPC analysis filter and the post-filter on the decoder-side performs the functionality of an LPC synthesis filter.
  • the pre-filter is preferably an LPC filter, which pre-filters the audio signal so that, after pre-filtering, psychoacoustically more important portions are amplified with respect to psychoacoustically less important portions.
  • the post-filter is implemented as a filter for regenerating a situation similar to a situation before pre-filtering, i.e. an inverse filter which amplifies less important portions with respect to more important portions so that the signal after post-filtering is - apart from coding errors - similar to the original audio signal input into the encoder.
  • the filter coefficients for the above described pre-filter are preferably also transmitted via side information from the encoder to the decoder.
  • the pre-filter as well as the post-filter will be implemented as a warped FIR filter, a structure of which is illustrated in Fig. 4, or as a warped IIR digital filter.
  • the Fig. 4 filter is described in detail in [KHL97].
  • Examples for warped IIR filters are also shown in [KHL97]. All those digital filters have in common that they have warped delay elements 60 and weighting coefficients or weighting elements indicated by β0, β1, β2, ....
  • a filter structure is transformed to a warped filter when a delay element in an unwarped filter structure (not shown here) is replaced by an all-pass filter, such as a first-order all-pass filter D(z), as illustrated on both sides of the filter structures in Fig. 4.
  • the filter structure to the right of Fig. 4 can easily be implemented within the pre-filter as well as within the post-filter, wherein the warping factor is controlled by the parameter λ, while the filter characteristic, i.e., the filter coefficients for the LPC analysis/synthesis or for the pre-filtering or post-filtering that amplifies/damps psycho-acoustically more important portions, is controlled by setting the weighting parameters β0, β1, β2, ... to appropriate values.
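  • a simplified warped-FIR sketch along these lines (an assumed direct implementation; the dedicated structures in [KHL97] differ in detail): each unit delay is replaced by a first-order all-pass D(z) = (z^-1 - λ) / (1 - λ·z^-1), and the tap signals of the all-pass chain are weighted with β0, β1, β2, ...; for λ = 0 the structure degenerates to an ordinary FIR filter.

      import numpy as np
      from scipy.signal import lfilter

      def warped_fir(x, beta, lam):
          """Warped FIR filtering: the signal passes through a chain of
          first-order allpass sections D(z) = (-lam + z^-1) / (1 - lam*z^-1)
          (replacing the unit delays of an ordinary FIR filter) and the tap
          signals are combined with the weights beta[0], beta[1], ..."""
          x = np.asarray(x, dtype=float)
          y = beta[0] * x
          tap = x
          for b in beta[1:]:
              tap = lfilter([-lam, 1.0], [1.0, -lam], tap)   # one allpass stage
              y += b * tap
          return y

      # With lam = 0 each allpass degenerates to a pure delay, i.e. ordinary FIR.
      sig = np.random.randn(256)
      assert np.allclose(warped_fir(sig, [0.5, 0.3, 0.2], 0.0),
                         lfilter([0.5, 0.3, 0.2], [1.0], sig))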
  • Fig. 5 illustrates the dependence of the frequency-warping characteristic on the warping factor λ for λ values between -0.8 and +0.8. No warping at all is obtained when λ is set to 0.0.
  • a psycho-acoustically full-scale warp is obtained by setting λ between 0.3 and 0.4.
  • the optimum warping factor depends on the chosen sampling rate and has a value between about 0.3 and 0.4 for sampling rates between 32 and 48 kHz.
  • the non-uniform frequency resolution then obtained by using the warped filter is similar to the BARK or ERB scale. Substantially stronger warping characteristics can be implemented, but those are only useful in certain situations, namely when the controller determines that such higher warping factors are useful.
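  • the frequency mapping underlying Fig. 5 can be evaluated from the phase response of the first-order all-pass; the sketch below uses the standard closed-form expression for D(z) = (z^-1 - λ)/(1 - λ·z^-1) (not a formula quoted from the patent) and shows that λ = 0 yields the identity mapping while positive λ stretches the low-frequency region, i.e. increases the frequency resolution there.

      import numpy as np

      def warped_frequency(omega, lam):
          """Map a normalized angular frequency omega (0..pi) to the warped
          frequency produced by a first-order allpass with coefficient lam:
          omega_w = omega + 2*arctan(lam*sin(omega) / (1 - lam*cos(omega)))."""
          return omega + 2.0 * np.arctan(lam * np.sin(omega) /
                                         (1.0 - lam * np.cos(omega)))

      w = np.linspace(0.0, np.pi, 9)
      print(warped_frequency(w, 0.0) / np.pi)    # identity: 0.000 ... 1.000
      print(warped_frequency(w, 0.35) / np.pi)   # low frequencies pushed upwards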
  • the pre-filter on the encoder-side will preferably have positive warping factors λ to increase the frequency resolution in the low frequency range and to decrease the frequency resolution in the high frequency range.
  • correspondingly, the post-filter on the decoder-side will also use positive warping factors.
  • a preferred inventive time-varying warping filter is shown in Fig. 6 at 70 as a part of the audio processor.
  • the inventive filter is, preferably, a linear filter, which is implemented as a pre-filter or a post-filter for filtering to amplify or damp psycho-acoustically more/less important portions or which is implemented as an LPC analysis/synthesis filter depending on the control signal of the system.
  • the warped filter is a linear filter and does not change the frequency of a component such as a sine wave input into the filter.
  • Assume, for example, that the filter before warping is a low pass filter; the Fig. 5 diagram then has to be interpreted as set out below.
  • the filter would apply - for a warping factor equal to 0.0 - the phase and amplitude weighting defined by the filter impulse response of this unwarped filter.
  • the sine wave having a normalized frequency of 0.6 will be filtered such that the output is weighted by the phase and amplitude weighting which the unwarped filter has for a normalized frequency of 0.97 in Fig. 5. Since this filter is a linear filter, the frequency of the sine wave is not changed.
  • the filter coefficients βi are derived from the masking threshold. These filter coefficients can be pre- or post-filter coefficients, or LPC analysis/synthesis filter coefficients, or any other filter coefficients useful in connection with any first or second coding algorithms.
  • an audio processor in accordance with the present invention includes, in addition to the filter having a variable warping characteristic, the controller 18 of Fig. 1, or the controller implemented as the coding algorithm detector 32 of Fig. 2, or a general audio input signal analyzer looking for a specific signal pattern in the audio input 10/42, so that a warping characteristic fitting the specific signal pattern can be set and a time-adapted variable warping of the audio input - be it an encoded or a decoded audio input - can be obtained.
  • the pre-filter coefficients and the post-filter coefficients are identical.
  • the output of the audio processor illustrated in Fig. 6 which consists of the filter 70 and the controller 74 can then be stored for any purposes or can be processed by encoding processor 22, or by an audio reproduction device when the audio processor is on the decoder-side, or can be processed by any other signal processing algorithms.
  • Figs. 7 and 8 show preferred embodiments of the inventive encoder (Fig. 7) and the inventive decoder (Fig. 8).
  • the functionalities of the devices are similar to the Fig. 1, Fig. 2 devices.
  • Fig. 7 illustrates the embodiment, wherein the first coding algorithm is a speech-coder like coding algorithm, wherein the specific signal pattern is a speech pattern in the audio input 10.
  • the second coding algorithm 22b is a generic audio coder such as the straight-forward filterbank-based audio coder as illustrated and discussed in connection with Fig. 9, or the pre-filter/post-filter audio coding algorithm as illustrated in Fig. 10.
  • the first coding algorithm corresponds to the Fig. 11 speech coding system, which, in addition to the LPC analysis and synthesis filters 1100 and 1102, also includes a residual/excitation coder 1104 and a corresponding excitation decoder 1106.
  • the time-varying warped filter 12 in Fig. 7 has the same functionality as the LPC filter 1100, and the LPC analysis implemented in block 1108 in Fig. 11 is implemented in controller 18.
  • the residual/excitation coder 1104 corresponds to the residual/excitation coder kernel 22a in Fig. 7.
  • the excitation decoder 1106 corresponds to the residual/excitation decoder 36a in Fig. 8, and the time-varying warped filter 44 has the functionality of the inverse LPC filter 1102 for a first time portion being coded in accordance with the first coding algorithm.
  • the LPC filter coefficients generated by LPC analysis block 1108 correspond to the filter coefficients shown at 90 in Fig. 7 for the first time portion and the LPC filter coefficients input into block 1102 in Fig. 11 correspond to the filter coefficients on line 92 of Fig. 8.
  • the Fig. 7 encoder includes an encoder output interface 94, which can be implemented as a bit-stream multiplexer, but which can also be implemented as any other device producing a data stream suitable for transmission and/or storage.
  • the Fig. 8 decoder includes an input interface 96, which can be implemented as a bit-stream demultiplexer for de-multiplexing the specific time portion information as discussed in connection with Fig. 3a and for also extracting the required side-information as illustrated in Fig. 3b.
  • both encoding kernels 22a, 22b have a common input 96 and are controlled by the controller 18 via lines 97a and 97b. This control makes sure that, at a certain time instant, only one of the two encoder kernels 22a, 22b outputs main and side information to the output interface.
  • alternatively, both encoding kernels could work fully in parallel, and the encoder controller 18 would make sure that only the output of the encoding kernel indicated by the coding mode information is input into the bit-stream, while the output of the other encoder is discarded.
  • both decoders can operate in parallel and outputs thereof can be added.
  • this embodiment processes, e.g., a speech portion of a signal, such as a certain frequency range or - generally - a signal portion, by the first coding algorithm and the remainder of the signal by the second, general coding algorithm. The outputs of both coders are then transmitted from the encoder to the decoder side.
  • the decoder-side combination makes sure that the signal is rejoined before being post-filtered.
  • any kind of specific controls can be implemented as long as they make sure that the output encoded audio signal 24 has a sequence of first and second portions as illustrated in Fig. 3 or a correct combination of signal portions such as a speech portion and a general audio portion.
  • the coding mode information is used for decoding each time portion using the correct decoding algorithm, so that a time-staggered pattern of first portions and second portions is obtained at the outputs of the decoder kernels 36a and 36b, which are then multiplexed into a single time domain signal, as illustrated schematically by the adder symbol 36c. At the output of element 36c, there is thus a time-domain audio signal which only has to be post-filtered so that the decoded audio signal is obtained.
  • both the encoder in Fig. 7 and the decoder in Fig. 8 may include an interpolator 100 or 102 so that a smooth transition over a certain time portion, which includes at least two samples, but which preferably includes more than 50 samples and even more than 100 samples, is implementable. This makes sure that coding artifacts, which might be caused by rapid changes of the warping factor and the filter coefficients, are avoided. Since, however, the post-filter as well as the pre-filter fully operate in the time domain, there are no problems related to block-based specific implementations. Thus, one can change, when Fig.
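  • one possible realization of such a smooth transition (the linear fade and the transition length are assumptions; the patent only introduces the interpolators 100/102) is to interpolate the warping factor and the filter coefficients sample by sample over a short transition region, which the time-varying filter can apply directly since it operates in the time domain.

      import numpy as np

      def interpolate_filter_state(lam_old, lam_new, coeffs_old, coeffs_new, n_samples=128):
          """Return per-sample warping factors and coefficient sets that fade
          linearly from the old to the new state over n_samples samples, to be
          applied one sample at a time by the time-varying warped filter."""
          t = np.linspace(0.0, 1.0, n_samples)
          lam_track = (1.0 - t) * lam_old + t * lam_new
          coeff_track = (1.0 - t)[:, None] * np.asarray(coeffs_old)[None, :] \
                        + t[:, None] * np.asarray(coeffs_new)[None, :]
          return lam_track, coeff_track

      lams, coeffs = interpolate_filter_state(0.0, 0.35, [1.0, -0.9, 0.4], [1.0, -0.5, 0.1])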
  • the generic audio coder kernel 22b as illustrated in Fig. 7 may be identical to the coder 1000 in Fig. 10.
  • the pre-filter 12 will also perform the functionality of the pre-filter 1002 in Fig. 10.
  • the perceptual model 1004 in Fig. 10 will then be implemented within controller 18 of Fig. 7.
  • the filter coefficients generated by the perceptual model 1004 correspond to the filter coefficients on line 90 in Fig. 7 for a time portion, for which the second coding algorithm is on.
  • the decoder 1006 in Fig. 10 is implemented by the generic audio decoder kernel 36b in Fig. 8, and the post-filter 1008 is implemented by the time-varying warped filter 44 in Fig. 8.
  • the preferably coded filter coefficients generated by the perceptual model are received, on the decoder-side, on line 92, so that a line titled "filter coefficients" entering post-filter 1008 in Fig. 10 corresponds to line 92 in Fig. 8 for the second coding algorithm time portion.
  • the inventive encoder devices and the inventive decoder devices only use a single, but controllable filter and perform a discrimination on the input audio signal to find out whether the time portion of the audio signal has the specific pattern or is just a general audio signal.
  • a variety of different implementations can be used for determining, whether a portion of an audio signal is a portion having the specific signal pattern or whether this portion does not have this specific signal pattern, and, therefore, has to be processed using the general audio encoding algorithm.
  • the specific signal pattern is a speech signal
  • other signal-specific patterns can be determined and can be encoded using corresponding signal-specific first encoding algorithms, such as encoding algorithms for harmonic signals, for noise signals, for tonal signals, for pulse-train-like signals, etc.
  • Straightforward detectors are analysis-by-synthesis detectors, which, for example, try different encoding algorithms together with different warping factors to find the best warping factor together with the best filter coefficients and the best coding algorithm.
  • Such analysis-by-synthesis detectors are in some cases quite computationally expensive. This does not matter in a situation where there is a small number of encoders and a high number of decoders, since the decoder can be very simple in that case: only the encoder performs the complex computational task, while the decoder can simply use the transmitted side-information.
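  • a coarse sketch of such an analysis-by-synthesis decision (candidate sets, cost measure and all function names are placeholders): every combination of coding mode and warping factor is encoded, decoded again and scored, and the cheapest combination is signaled as side information; a real system would use a perceptual distortion measure and account for the bitrate.

      import itertools
      import numpy as np

      def select_mode_by_synthesis(portion, encoders, decoders, warp_candidates,
                                   cost=lambda x, y: np.mean((np.asarray(x) - np.asarray(y)) ** 2)):
          """Try every (coding mode, warping factor) pair, reconstruct the signal
          and keep the cheapest one. 'encoders'/'decoders' map mode -> callables
          taking (portion, warp); the cost here is plain MSE for illustration."""
          best = None
          for mode, warp in itertools.product(encoders, warp_candidates):
              coded = encoders[mode](portion, warp)
              decoded = decoders[mode](coded, warp)
              c = cost(portion, decoded)
              if best is None or c < best[0]:
                  best = (c, mode, warp, coded)
          return best  # (cost, coding_mode, warping_factor, payload)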
  • the inventive methods can be implemented in hardware or in software.
  • the implementation can be performed using a digital storage medium, in particular a disk or a CD having electronically readable control signals stored thereon, which can cooperate with a programmable computer system such that the inventive methods are performed.
  • the present invention is, therefore, a computer program product with a program code stored on a machine-readable carrier, the program code being configured for performing at least one of the inventive methods when the computer program product runs on a computer.
  • the inventive methods are, therefore, a computer program having a program code for performing the inventive methods, when the computer program runs on a computer.
EP06013604A 2006-06-30 2006-06-30 Audiokodierer, Audiodekodierer und Audioprozessor mit einer dynamisch variablen Warp-Charakteristik Active EP1873754B1 (de)

Priority Applications (25)

Application Number Priority Date Filing Date Title
AT06013604T ATE408217T1 (de) 2006-06-30 2006-06-30 Audiokodierer, audiodekodierer und audioprozessor mit einer dynamisch variablen warp-charakteristik
DE602006002739T DE602006002739D1 (de) 2006-06-30 2006-06-30 Audiokodierer, Audiodekodierer und Audioprozessor mit einer dynamisch variablen Warp-Charakteristik
EP08014723A EP1990799A1 (de) 2006-06-30 2006-06-30 Audiokodierer, Audiodekodierer und Audioprozessor mit einer dynamisch variablen Warp-Charakteristik
EP06013604A EP1873754B1 (de) 2006-06-30 2006-06-30 Audiokodierer, Audiodekodierer und Audioprozessor mit einer dynamisch variablen Warp-Charakteristik
MX2008016163A MX2008016163A (es) 2006-06-30 2007-05-16 Codificador de audio, decodificador de audio y procesador de audio con caracteristicas de warping variable de manera dinamica.
JP2009516921A JP5205373B2 (ja) 2006-06-30 2007-05-16 動的可変ワーピング特性を有するオーディオエンコーダ、オーディオデコーダ及びオーディオプロセッサ
MYPI20085310A MY142675A (en) 2006-06-30 2007-05-16 Audio encoder, audio decoder and audio processor having a dynamically variable warping characteristic
US12/305,936 US8682652B2 (en) 2006-06-30 2007-05-16 Audio encoder, audio decoder and audio processor having a dynamically variable warping characteristic
PCT/EP2007/004401 WO2008000316A1 (en) 2006-06-30 2007-05-16 Audio encoder, audio decoder and audio processor having a dynamically variable warping characteristic
AU2007264175A AU2007264175B2 (en) 2006-06-30 2007-05-16 Audio encoder, audio decoder and audio processor having a dynamically variable warping characteristic
EP07725316.9A EP2038879B1 (de) 2006-06-30 2007-05-16 Audiokodierer und audiodekodierer mit einer dynamisch variablen warping-charakteristik
KR1020087032110A KR101145578B1 (ko) 2006-06-30 2007-05-16 동적 가변 와핑 특성을 가지는 오디오 인코더, 오디오 디코더 및 오디오 프로세서
ES07725316.9T ES2559307T3 (es) 2006-06-30 2007-05-16 Codificador de audio y decodificador de audio que tiene una característica de deformación dinámicamente variable
BRPI0712625-5A BRPI0712625B1 (pt) 2006-06-30 2007-05-16 Codificador de áudio, decodificador de áudio, e processador de áudio tendo uma caractéristica de distorção ("warping") dinamicamente variável
CA2656423A CA2656423C (en) 2006-06-30 2007-05-16 Audio encoder, audio decoder and audio processor having a dynamically variable warping characteristic
PL07725316T PL2038879T3 (pl) 2006-06-30 2007-05-16 Koder audio i dekoder audio mające dynamicznie zmienną charakterystykę odkształcania
CN2007800302813A CN101501759B (zh) 2006-06-30 2007-05-16 具有动态可变规整特性的音频编码器、音频解码器和音频处理器
RU2009103010/09A RU2418322C2 (ru) 2006-06-30 2007-05-16 Аудиокодер, аудиодекодер и аудиопроцессор, имеющий динамически изменяющуюся характеристику перекоса
TW096122715A TWI348683B (en) 2006-06-30 2007-06-23 Audio encoder,audio decoder,audio processor having a dynamically variable warping characteristic,storage medium having stored thereon an encoded audio signal,audio encoding method,audio decoding method,audio processing method and program for executing th
ARP070102797A AR061696A1 (es) 2006-06-30 2007-06-25 Codificador de audio , decodificador de audio y procesador de audio que poseen una caracteristica de distorsion variable dinamicamente
HK08103465A HK1109817A1 (en) 2006-06-30 2008-03-27 Audio encoder, audio decoder and audio processor having a dynamically variable warping characteristic
IL195983A IL195983A (en) 2006-06-30 2008-12-16 Audio encoder, audio decoder and audio processor that have a dynamic variable distortion characteristic
NO20090400A NO340436B1 (no) 2006-06-30 2009-01-27 Audiokoder, audiodekoder og audioprosessor med en dynamisk, variabel forvrengningskarakteristikk
HK09108366.0A HK1128811A1 (zh) 2006-06-30 2009-09-11 具有動態可變規整特性的音頻編碼器和音頻解碼器
AU2011200461A AU2011200461B2 (en) 2006-06-30 2011-02-04 Audio encoder, audio decoder and audio processor having a dynamically variable warping characteristic

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP06013604A EP1873754B1 (de) 2006-06-30 2006-06-30 Audiokodierer, Audiodekodierer und Audioprozessor mit einer dynamisch variablen Warp-Charakteristik

Related Child Applications (1)

Application Number Title Priority Date Filing Date
EP08014723A Division EP1990799A1 (de) 2006-06-30 2006-06-30 Audiokodierer, Audiodekodierer und Audioprozessor mit einer dynamisch variablen Warp-Charakteristik

Publications (2)

Publication Number Publication Date
EP1873754A1 true EP1873754A1 (de) 2008-01-02
EP1873754B1 EP1873754B1 (de) 2008-09-10

Family

ID=37402718

Family Applications (2)

Application Number Title Priority Date Filing Date
EP06013604A Active EP1873754B1 (de) 2006-06-30 2006-06-30 Audiokodierer, Audiodekodierer und Audioprozessor mit einer dynamisch variablen Warp-Charakteristik
EP08014723A Withdrawn EP1990799A1 (de) 2006-06-30 2006-06-30 Audiokodierer, Audiodekodierer und Audioprozessor mit einer dynamisch variablen Warp-Charakteristik

Family Applications After (1)

Application Number Title Priority Date Filing Date
EP08014723A Withdrawn EP1990799A1 (de) 2006-06-30 2006-06-30 Audiokodierer, Audiodekodierer und Audioprozessor mit einer dynamisch variablen Warp-Charakteristik

Country Status (4)

Country Link
EP (2) EP1873754B1 (de)
AT (1) ATE408217T1 (de)
DE (1) DE602006002739D1 (de)
HK (1) HK1109817A1 (de)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107710324A (zh) * 2015-04-09 2018-02-16 弗劳恩霍夫应用研究促进协会 音频编码器和用于对音频信号进行编码的方法

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2445719C2 (ru) * 2010-04-21 2012-03-20 Государственное образовательное учреждение высшего профессионального образования Академия Федеральной службы охраны Российской Федерации (Академия ФСО России) Способ улучшения восприятия синтезированной речи при реализации процедуры анализа через синтез в вокодерах с линейным предсказанием
IL311020A (en) 2010-07-02 2024-04-01 Dolby Int Ab After–selective bass filter
CN105229736B (zh) 2013-01-29 2019-07-19 弗劳恩霍夫应用研究促进协会 用于选择第一编码算法与第二编码算法中的一个的装置及方法
EP2980795A1 (de) 2014-07-28 2016-02-03 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Audiokodierung und -decodierung mit Nutzung eines Frequenzdomänenprozessors, eines Zeitdomänenprozessors und eines Kreuzprozessors zur Initialisierung des Zeitdomänenprozessors
EP2980801A1 (de) 2014-07-28 2016-02-03 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Verfahren zur Schätzung des Rauschens in einem Audiosignal, Rauschschätzer, Audiocodierer, Audiodecodierer und System zur Übertragung von Audiosignalen
EP2980794A1 (de) 2014-07-28 2016-02-03 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Audiocodierer und -decodierer mit einem Frequenzdomänenprozessor und Zeitdomänenprozessor

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JOON-HYUK CHANG ET AL: "Speech enhancement using warped discrete cosine transform", SPEECH CODING, 2002, IEEE WORKSHOP PROCEEDINGS, OCT. 6-9, 2002, PISCATAWAY, NJ, USA, IEEE, 6 October 2002 (2002-10-06), pages 175-177, XP010647252, ISBN: 0-7803-7549-1 *
TANCEREL L ET AL: "Combined speech and audio coding by discrimination", SPEECH CODING, 2000, PROCEEDINGS, 2000 IEEE WORKSHOP, SEPTEMBER 17-20, 2000, PISCATAWAY, NJ, USA, IEEE, 17 September 2000 (2000-09-17), pages 154-156, XP010520073, ISBN: 0-7803-6416-3 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107710324A (zh) * 2015-04-09 2018-02-16 弗劳恩霍夫应用研究促进协会 音频编码器和用于对音频信号进行编码的方法
CN107710324B (zh) * 2015-04-09 2021-12-03 弗劳恩霍夫应用研究促进协会 音频编码器和用于对音频信号进行编码的方法

Also Published As

Publication number Publication date
DE602006002739D1 (de) 2008-10-23
HK1109817A1 (en) 2008-06-20
EP1990799A1 (de) 2008-11-12
ATE408217T1 (de) 2008-09-15
EP1873754B1 (de) 2008-09-10

Similar Documents

Publication Publication Date Title
US7873511B2 (en) Audio encoder, audio decoder and audio processor having a dynamically variable warping characteristic
US8682652B2 (en) Audio encoder, audio decoder and audio processor having a dynamically variable warping characteristic
EP2038879B1 (de) Audiokodierer und audiodekodierer mit einer dynamisch variablen warping-charakteristik
CA2691993C (en) Audio encoder for encoding an audio signal having an impulse-like portion and stationary portion, encoding methods, decoder, decoding method, and encoded audio signal
RU2485606C2 (ru) Схема кодирования/декодирования аудио сигналов с низким битрейтом с применением каскадных переключений
EP2589046B1 (de) Selektives Nachfilter
EP1873754B1 (de) Audiokodierer, Audiodekodierer und Audioprozessor mit einer dynamisch variablen Warp-Charakteristik
AU2016204672B2 (en) Audio encoder and decoder with multiple coding modes

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20070525

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK YU

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1109817

Country of ref document: HK

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REF Corresponds to:

Ref document number: 602006002739

Country of ref document: DE

Date of ref document: 20081023

Kind code of ref document: P

REG Reference to a national code

Ref country code: HK

Ref legal event code: GR

Ref document number: 1109817

Country of ref document: HK

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

NLV1 Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act
PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20081210

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20081221

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090210

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090110

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

26N No opposition filed

Effective date: 20090611

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090630

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20081210

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090630

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20081211

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090630

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090630

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090630

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20100630

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20100630

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090311

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 11

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 12

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 13

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230512

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230621

Year of fee payment: 18

Ref country code: DE

Payment date: 20230620

Year of fee payment: 18

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20230622

Year of fee payment: 18