EP1383112A2 - Method and device for speech coding with increased bandwidth, in particular with an increased quality of voiced speech frames - Google Patents

Method and device for speech coding with increased bandwidth, in particular with an increased quality of voiced speech frames

Info

Publication number
EP1383112A2
Authority
EP
European Patent Office
Prior art keywords
term
filter
excitation
short
long
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP03291748A
Other languages
English (en)
French (fr)
Other versions
EP1383112A3 (de)
Inventor
Michael Ansorge
Giuseppina Biunedo Lotito
Benito Carnero
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
STMicroelectronics NV
Original Assignee
STMicroelectronics NV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from EP02015920A (EP1383110A1)
Application filed by STMicroelectronics NV filed Critical STMicroelectronics NV
Priority to EP03291748A
Publication of EP1383112A2
Publication of EP1383112A3
Legal status: Withdrawn (current)

Links

Images

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/06Determination or coding of the spectral characteristics, e.g. of the short-term prediction coefficients
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/08Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
    • G10L19/083Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters the excitation function being an excitation gain
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/08Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
    • G10L19/12Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters the excitation function being a code excitation, e.g. in code excited linear prediction [CELP] vocoders

Definitions

  • The invention relates to wideband speech encoding/decoding, in particular but not exclusively for mobile telephony.
  • the bandwidth of the speech signal is between 50 and 7000 Hz.
  • Successive speech sequences, sampled at a predetermined sampling frequency, are processed in a CELP-type coding device using linear prediction with excitation by coded sequences (for example ACELP: "algebraic code-excited linear prediction"), well known to those skilled in the art and described in particular in ITU-T Recommendation G.729, version 3/96, entitled "Coding of speech at 8 kbit/s using conjugate-structure algebraic-code-excited linear prediction (CS-ACELP)".
  • The CELP-type prediction coder CD is based on the code-excited linear predictive coding model.
  • The coder operates on speech superframes corresponding, for example, to 20 ms of signal and each comprising 320 samples.
  • The extraction of the linear prediction parameters, i.e. the coefficients of the linear prediction filter, also called the short-term synthesis filter 1/A(z), is carried out for each speech superframe.
  • each superframe is subdivided into 5 ms frames comprising 80 samples.
  • The speech signal is analyzed to extract the parameters of the CELP prediction model, that is to say, in particular, a long-term digital excitation word v_i extracted from an adaptive codebook DLT, also called the "adaptive long-term dictionary", an associated long-term gain Ga, a short-term excitation word c_j extracted from a fixed codebook DCT, also called the "short-term dictionary", and an associated short-term gain Gc.
  • These parameters are used, in a decoder, to recover the excitation and the predictive filter parameters. The speech is then reconstructed by filtering this excitation stream through the short-term synthesis filter.
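  • As an illustration of this synthesis step, the following minimal Python sketch reconstructs one frame from the CELP parameters; the function name, frame layout and the use of scipy's lfilter are assumptions made for the example, not the patent's implementation.

        import numpy as np
        from scipy.signal import lfilter

        def decode_frame(v, c, Ga, Gc, a_coeffs, zi):
            """Reconstruct one speech frame from the CELP parameters (sketch).
            v, c     : adaptive (long-term) and fixed (short-term) codevectors
            Ga, Gc   : long-term and short-term gains
            a_coeffs : LPC coefficients [1, a1, ..., ap] of A(z)
            zi       : synthesis-filter memory carried over between frames
            """
            excitation = Ga * np.asarray(v) + Gc * np.asarray(c)      # total excitation
            speech, zi = lfilter([1.0], a_coeffs, excitation, zi=zi)  # 1/A(z) synthesis filter
            return speech, zi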
  • The short-term dictionary DCT is based on a fixed structure, for example of the stochastic type, or of the algebraic type using an interleaved permutation model of Dirac pulses.
  • The fixed codebook contains innovation excitations, also called algebraic or short-term excitations; each vector contains a number of non-zero pulses, for example four, each of which can have amplitude +1 or -1 at predetermined positions.
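  • Purely by way of illustration, such a codevector can be pictured as below; the pulse positions and signs used here are hypothetical and do not correspond to the position tracks of any particular codebook.

        import numpy as np

        def algebraic_codevector(positions, signs, frame_len=80):
            """Innovation vector with a few +/-1 pulses at given positions (sketch)."""
            c = np.zeros(frame_len)
            for pos, sign in zip(positions, signs):
                c[pos] = sign
            return c

        # Example: four pulses with amplitude +1 or -1 (hypothetical positions)
        c_j = algebraic_codevector(positions=[3, 21, 42, 66], signs=[+1, -1, +1, +1])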
  • The processing means of the encoder CD functionally include first extraction means MEXT1 intended to extract the long-term excitation word, and second extraction means MEXT2 intended to extract the short-term excitation word. These means are implemented, for example, in software within a processor.
  • These extraction means include a predictive filter FP having a transfer function equal to 1/A(z), as well as a perceptual weighting filter FPP with a transfer function W(z).
  • the perceptual weighting filter is applied to the signal to model the perception of the ear.
  • The extraction means include means MECM intended to perform a minimization of a mean square error.
  • The linear prediction synthesis filter FP models the spectral envelope of the signal. Linear predictive analysis is performed once per superframe, so as to determine the linear prediction filter coefficients. These are converted into line spectrum pairs (LSP: "Line Spectrum Pairs") and quantized by two-stage predictive vector quantization.
  • Each 20 ms speech superframe is divided into four frames of 5 ms each containing 80 samples.
  • The quantized LSP parameters are transmitted to the decoder once per superframe, while the long-term and short-term parameters are transmitted for each frame.
  • The coefficients of the linear prediction filter, quantized and unquantized, are used for the most recent frame of a superframe, while the other three frames of the same superframe use an interpolation of these coefficients.
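  • A minimal sketch of such an interpolation is given below, assuming simple linear weights across the four frames of a superframe; the actual weights and the interpolation domain are not specified above and are therefore illustrative assumptions.

        import numpy as np

        def interpolate_lsp(lsp_prev, lsp_curr, num_frames=4):
            """Interpolate LSP vectors over the frames of a superframe (sketch).
            The last frame uses the current LSPs directly; earlier frames mix the
            previous and current superframe LSPs with assumed linear weights."""
            lsp_prev, lsp_curr = np.asarray(lsp_prev), np.asarray(lsp_curr)
            weights = [(k + 1) / num_frames for k in range(num_frames)]  # 0.25 ... 1.0
            return [(1.0 - w) * lsp_prev + w * lsp_curr for w in weights]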
  • The open-loop pitch delay is estimated, for example every two frames, on the basis of the perceptually weighted speech signal. The following operations are then repeated for each frame:
  • the long-term target signal X_LT is calculated by filtering the sampled speech signal s(n) through the perceptual weighting filter FPP.
  • the impulse response of the weighted synthesis filter is calculated.
  • Closed-loop pitch analysis using a minimization of the mean square error is then carried out in order to determine the long-term excitation word v_i and the associated gain Ga, by means of the target signal and the impulse response, searching around the open-loop pitch delay value.
  • The long-term target signal is then updated by subtracting the filtered contribution y of the adaptive codebook DLT; this new short-term target signal X_ST is used when searching the fixed codebook DCT in order to determine the short-term excitation word c_j and the associated gain Gc.
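  • The following Python sketch illustrates a standard CELP-style closed-loop pitch search and the target update described above; the buffer layout, search bounds and correlation criterion are generic assumptions rather than the patent's exact procedure.

        import numpy as np

        def closed_loop_pitch_search(x_lt, past_exc, h, lag_min, lag_max, frame_len=80):
            """Select the adaptive codevector v and gain Ga, then update the target.
            x_lt     : long-term target signal for the frame
            past_exc : past excitation buffer (adaptive codebook memory)
            h        : impulse response of the weighted synthesis filter
            """
            best_crit, best = -np.inf, None
            for lag in range(lag_min, lag_max + 1):
                seg = past_exc[-lag:]                      # last 'lag' excitation samples
                v = np.resize(seg, frame_len)              # periodic extension if lag < frame_len
                y = np.convolve(v, h)[:frame_len]          # filtered adaptive contribution
                num, den = np.dot(x_lt, y), np.dot(y, y) + 1e-12
                if num * num / den > best_crit:            # mean-square-error criterion
                    best_crit, best = num * num / den, (v, num / den, y)
            v, Ga, y = best
            x_st = x_lt - Ga * y                           # new short-term target signal
            return v, Ga, x_st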
  • The performance of the CELP algorithm strongly depends on the richness of the short-term excitation dictionary DCT, for example an algebraic excitation dictionary. While the effectiveness of such an algorithm is unquestionable for narrowband signals (300-3400 Hz), problems arise for wideband signals.
  • The inventors have in fact observed that, even with a very rich dictionary, the speech encoding algorithm produced a reconstructed signal corrupted by various kinds of noise, and in particular a "whistling"-type noise which affects the voiced speech frames.
  • The invention also provides a gain-control type of solution, but one totally different from that described in particular in the articles by Taniguchi et al. and by Shoham.
  • The invention also aims to control the short-term and long-term distortions independently.
  • The invention therefore provides a wideband speech encoding method in which the speech is sampled so as to obtain successive speech frames each comprising a predetermined number of samples; for each speech frame, parameters of a code-excited linear prediction model are determined, these parameters comprising a long-term digital excitation word extracted from an adaptive codebook and an associated long-term gain, as well as a short-term excitation word extracted from a short-term dictionary and an associated short-term gain; and the adaptive codebook is updated from the extracted long-term excitation word and the extracted short-term excitation word.
  • The method includes an update of the state of the linear prediction filter with the short-term excitation word filtered by a filter of order greater than or equal to 1, for example a finite impulse response filter of order 1, whose coefficients depend on the value of the long-term gain, so as to attenuate the contribution of the short-term excitation when the long-term excitation gain is greater than a predetermined threshold, for example equal to 0.8.
  • The solution according to the invention thus consists in attenuating the contribution of the short-term excitation when the long-term excitation gain is large.
  • It is the unattenuated short-term excitation contribution that is stored in the adaptive dictionary for the update, so the reduction occurs only at the output. Preserving the stored short-term contribution is important, since the richness of the adaptive dictionary is thereby preserved for the lowest frequencies.
  • The first coefficient B0 of the filter is equal to 1 / (1 + α·min(Ga, 1)),
  • the second coefficient B1 of the filter is equal to α·min(Ga, 1) / (1 + α·min(Ga, 1)),
  • where α is a real number with an absolute value less than 1,
  • Ga is the long-term gain, and
  • min(Ga, 1) designates the minimum of Ga and 1.
  • The extraction of the long-term excitation word is performed using a first perceptual weighting filter comprising a first formant weighting filter, and the extraction of the short-term excitation word is performed using the first perceptual weighting filter cascaded with a second perceptual weighting filter comprising a second formant weighting filter.
  • The denominator of the transfer function of the first formant weighting filter is equal to the numerator of the transfer function of the second formant weighting filter.
  • The use of two different formant weighting filters makes it possible to control the short-term and long-term distortions independently.
  • The short-term weighting filter is cascaded with the long-term weighting filter.
  • Tying the denominator of the long-term weighting filter to the numerator of the short-term weighting filter allows these two filters to be controlled separately and also allows a clear simplification when the two filters are cascaded.
  • The first extraction means include a digital linear prediction filter.
  • The device comprises second updating means capable of updating the state of the linear prediction filter with the short-term excitation word filtered by a filter of order greater than or equal to 1, whose coefficients depend on the value of the long-term gain, so as to attenuate the contribution of the short-term excitation when the long-term excitation gain is above a predetermined threshold.
  • The first extraction means include a first perceptual weighting filter comprising a first formant weighting filter, the second extraction means include the first perceptual weighting filter cascaded with a second perceptual weighting filter comprising a second formant weighting filter, and the denominator of the transfer function of the first formant weighting filter is equal to the numerator of the second formant weighting filter.
  • The invention also relates to a terminal of a wireless communication system, such as a cellular mobile telephone, incorporating a device as defined above.
  • The encoding device, or encoder CD, according to the invention, as illustrated in FIG. 2, differs from that of the prior art illustrated in FIG. 1 in that the encoder CD further comprises second updating means MAJ2 able to update the state of the linear prediction filter FP and the state of the perceptual weighting filter FPP with the short-term excitation word c_j filtered by a filter FLT1 of order greater than or equal to 1, here for example a first-order finite impulse response filter.
  • The coefficients of this first-order filter depend on the value of the long-term gain Ga, so as to attenuate the contribution of the short-term excitation when the long-term excitation gain Ga is greater than a predetermined threshold, for example equal to 0.8.
  • The transfer function of the filter FLT1 is equal to B0 + B1·z^-1, where the first filter coefficient B0 can be determined by formula (I) below:
    B0 = 1 / (1 + 0.98·min(Ga, 1))     (I)
    while the second filter coefficient B1 can be determined by formula (II) below:
    B1 = 0.98·min(Ga, 1) / (1 + 0.98·min(Ga, 1))     (II)
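  • A minimal sketch of this attenuation, assuming the example value 0.98 and scipy's lfilter (not the patent's reference code), is given below; note that, as explained above, the unfiltered contribution Gc·c_j remains the one stored in the adaptive dictionary for the update.

        import numpy as np
        from scipy.signal import lfilter

        def flt1_coefficients(Ga, alpha=0.98):
            """Coefficients of the first-order FIR filter FLT1, formulas (I) and (II)."""
            g = min(Ga, 1.0)
            B0 = 1.0 / (1.0 + alpha * g)            # formula (I)
            B1 = alpha * g / (1.0 + alpha * g)      # formula (II)
            return B0, B1

        def attenuated_short_term_contribution(c_j, Gc, Ga):
            """Short-term contribution filtered by FLT1 = B0 + B1*z^-1 (sketch),
            used only to update the synthesis and weighting filter states."""
            B0, B1 = flt1_coefficients(Ga)
            return lfilter([B0, B1], [1.0], Gc * np.asarray(c_j))

        # For a strongly voiced frame (Ga >= 1): B0 ~= 0.505 and B1 ~= 0.495, so each
        # excitation pulse is spread over two samples of roughly half amplitude,
        # attenuating the high-frequency content of the short-term contribution.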
  • The filtering of the excitation must also be applied when updating the filter memory states in the decoder DCD, as shown diagrammatically in FIG. 2a.
  • The variant embodiment illustrated in FIG. 2 makes it possible to eliminate the whistling-type noise on voiced speech frames.
  • The perceptual weighting filter FPP exploits the masking properties of the human ear with respect to the spectral envelope of the speech signal, whose shape is a function of the resonances of the vocal tract. This filter assigns more importance to the error appearing in the spectral valleys than to the error at the formant peaks.
  • The same perceptual weighting filter FPP is used for the short-term search and for the long-term search.
  • The transfer function W(z) of this filter FPP is given by formula (III) below:
    W(z) = A(z/γ1) / A(z/γ2)     (III)
    in which 1/A(z) is the transfer function of the predictive filter FP, and γ1 and γ2 are the perceptual weighting coefficients, both coefficients being positive or zero and less than or equal to 1, with γ2 less than or equal to γ1.
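  • A small illustrative Python sketch of how such a filter can be realized (an assumption for the example, not the patent's implementation): A(z/γ) is obtained from A(z) by scaling each coefficient a_k by γ^k, and W(z) is then applied as a pole-zero filter; the γ values below are placeholders.

        import numpy as np
        from scipy.signal import lfilter

        def bandwidth_expand(a_coeffs, gamma):
            """Coefficients of A(z/gamma): a_k -> a_k * gamma**k."""
            a_coeffs = np.asarray(a_coeffs, dtype=float)      # [1, a1, ..., ap]
            return a_coeffs * gamma ** np.arange(len(a_coeffs))

        def perceptual_weighting(signal, a_coeffs, gamma1=0.9, gamma2=0.6):
            """Apply W(z) = A(z/gamma1) / A(z/gamma2) to a signal (sketch)."""
            num = bandwidth_expand(a_coeffs, gamma1)          # numerator  A(z/gamma1)
            den = bandwidth_expand(a_coeffs, gamma2)          # denominator A(z/gamma2)
            return lfilter(num, den, signal)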
  • The perceptual weighting filter consists of a formant weighting filter and a weighting of the slope (tilt) of the spectral envelope of the signal.
  • Such an embodiment is illustrated in FIG. 3, in which, compared with FIG. 2, the single filter FPP has been replaced by a first formant weighting filter FPP1 for the long-term search, cascaded with a second formant weighting filter FPP2 for the short-term search.
  • The filters appearing in the long-term search loop must also appear in the short-term search loop.
  • The transfer function W1(z) of the formant weighting filter FPP1 is given by formula (IV) below:
    W1(z) = A(z/γ11) / A(z/γ12)     (IV)
    while the transfer function W2(z) of the formant weighting filter FPP2 is given by formula (V) below:
    W2(z) = A(z/γ21) / A(z/γ22)     (V)
  • The coefficient γ12 is equal to the coefficient γ21. This allows a clear simplification when these two filters are cascaded.
  • The filter equivalent to the cascade of these two filters has a transfer function given by formula (VI) below:
    W1(z)·W2(z) = A(z/γ11) / A(z/γ22)     (VI)
    since, with γ12 = γ21, the denominator of W1(z) cancels the numerator of W2(z).
  • The synthesis filter FP (having the transfer function 1/A(z)), followed by the long-term weighting filter FPP1 and the weighting filter FPP2, is then equivalent to the filter whose transfer function is given by formula (VII) below:
    1 / A(z/γ22)     (VII)
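  • Expanding the cascade step by step for clarity (the final simplification additionally assumes that the numerator A(z/γ11) of the long-term weighting filter coincides with A(z), i.e. γ11 = 1; this assumption is made explicit here and is not stated in the text above):

        (1/A(z)) · W1(z) · W2(z) = A(z/γ11) / (A(z) · A(z/γ22)) = 1 / A(z/γ22)   when γ11 = 1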
  • The invention advantageously applies to mobile telephony, and in particular to all remote terminals belonging to a wireless communication system.
  • Such a terminal, for example a mobile telephone TP such as that illustrated in FIG. 4, conventionally comprises an antenna connected via a duplexer DUP to a reception chain CHR and a transmission chain CHT.
  • A baseband processor BB is connected to the reception chain CHR and to the transmission chain CHT via analog-to-digital converters ADC and digital-to-analog converters DAC, respectively.
  • For reception, the processor BB performs baseband processing, including channel decoding DCN followed by source decoding DCS.
  • For transmission, the processor performs source coding CCS followed by channel coding CCN.
  • When the mobile telephone incorporates an encoder according to the invention, the encoder is incorporated within the source coding means CCS, while the decoder is incorporated within the source decoding means DCS.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
EP03291748A 2002-07-17 2003-07-15 Method and device for speech coding with increased bandwidth, in particular with an increased quality of voiced speech frames Withdrawn EP1383112A3 (de)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP03291748A EP1383112A3 (de) 2002-07-17 2003-07-15 Method and device for speech coding with increased bandwidth, in particular with an increased quality of voiced speech frames

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
EP02015920 2002-07-17
EP02015920A EP1383110A1 (de) 2002-07-17 2002-07-17 Method and device for wideband speech coding, in particular with an improved quality of the voiced frames
EP03291748A EP1383112A3 (de) 2002-07-17 2003-07-15 Method and device for speech coding with increased bandwidth, in particular with an increased quality of voiced speech frames

Publications (2)

Publication Number Publication Date
EP1383112A2 true EP1383112A2 (de) 2004-01-21
EP1383112A3 EP1383112A3 (de) 2008-08-20

Family

ID=29781470

Family Applications (1)

Application Number Title Priority Date Filing Date
EP03291748A Withdrawn EP1383112A3 (de) 2002-07-17 2003-07-15 Method and device for speech coding with increased bandwidth, in particular with an increased quality of voiced speech frames

Country Status (1)

Country Link
EP (1) EP1383112A3 (de)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0593255A1 * 1992-10-12 1994-04-20 Nec Corporation Arrangement for demodulating speech signals which are transmitted discontinuously from a mobile unit
US6148282A (en) * 1997-01-02 2000-11-14 Texas Instruments Incorporated Multimodal code-excited linear prediction (CELP) coder and method using peakiness measure
WO2002023534A2 (en) * 2000-09-15 2002-03-21 Conexant Systems, Inc. Selection of coding parameters based on spectral content of a speech signal
US6385573B1 (en) * 1998-08-24 2002-05-07 Conexant Systems, Inc. Adaptive tilt compensation for synthesized speech residual

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
REDWAN SALAMI ET AL: "Design and Description of CS-ACELP: A Toll Quality 8 kb/s Speech Coder", IEEE TRANSACTIONS ON SPEECH AND AUDIO PROCESSING, IEEE SERVICE CENTER, NEW YORK, NY, US, vol. 6, no. 2, 1 March 1998 (1998-03-01), XP011054298, ISSN: 1063-6676 *

Also Published As

Publication number Publication date
EP1383112A3 (de) 2008-08-20

Similar Documents

Publication Publication Date Title
EP0608174B1 System for predictive coding/decoding of a digital speech signal by means of an adaptive transform with embedded codes
EP0782128B1 Method for analyzing an audio-frequency signal by linear prediction, and application to a method for coding and decoding an audio-frequency signal
EP2002428B1 Method for trained discrimination and attenuation of echoes of a digital signal in a decoder, and corresponding device
EP1320087B1 Synthesis of an excitation signal for use in a comfort noise generator
EP0749626B1 Method for speech coding by means of linear prediction and excitation by algebraic codes
EP1316087B1 Concealment of transmission errors in an audio signal
FR2596936A1 System for transmitting a voice signal
EP1267325B1 Method for detecting voice activity in a signal, and speech coder comprising a device for carrying out the method
EP1125283B1 Method for quantizing the parameters of a speech coder
WO2009004225A1 Post-processing for reducing the quantization noise of a coder during decoding
EP0428445B1 Method and device for coding predictive filters in very low bit rate vocoders
Kroon et al. Predictive coding of speech using analysis-by-synthesis techniques
WO2007107670A2 Method for post-processing a signal in an audio decoder
EP2652735B1 Improved coding of an enhancement stage in a hierarchical coder
EP1383109A1 Method and device for wideband speech coding
EP1383110A1 Method and device for wideband speech coding, in particular with an improved quality of the voiced frames
WO2023165946A1 Optimized coding and decoding of an audio signal using a neural-network-based autoencoder
FR2702590A1 Device for digital coding and decoding of speech, method for searching a pseudo-logarithmic dictionary of LTP delays, and LTP analysis method
EP1383112A2 Method and device for speech coding with increased bandwidth, in particular with an increased quality of voiced speech frames
JPH09508479A Burst excited linear prediction
EP1383111A2 Method and device for speech coding with extended bandwidth
EP1383113A1 Method and device for wideband speech coding suitable for controlling short-term and long-term distortions
EP1388846A2 Method and device for wideband coding of speech signals suitable for independently controlling long-term and short-term distortions
FR2783651A1 Device and method for filtering a speech signal, receiver and telephone communications system
WO2005114653A1 Method for quantizing a very low bit rate speech coder

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK

RIC1 Information provided on ipc code assigned before grant

Ipc: G10L 19/12 20060101ALN20080715BHEP

Ipc: G10L 19/06 20060101AFI20080715BHEP

AKX Designation fees paid
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20090203

REG Reference to a national code

Ref country code: DE

Ref legal event code: 8566