EP1246163B1 - Speech synthesis method and speech synthesizer - Google Patents

Speech synthesis method and speech synthesizer

Info

Publication number
EP1246163B1
EP1246163B1 (application EP02252159A)
Authority
EP
European Patent Office
Prior art keywords
formant
speech
pitch
waveforms
functions
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP02252159A
Other languages
German (de)
English (en)
French (fr)
Other versions
EP1246163A2 (en)
EP1246163A3 (en)
Inventor
Masami Akamine (c/o Intellectual Property Div.)
Takehiko Kagoshima (c/o Intellectual Property Div.)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Corp
Original Assignee
Toshiba Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Corp
Publication of EP1246163A2
Publication of EP1246163A3
Application granted
Publication of EP1246163B1
Anticipated expiration
Legal status: Expired - Lifetime

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 13/00 Speech synthesis; Text to speech systems
    • G10L 13/02 Methods for producing synthetic speech; Speech synthesisers
    • G10L 13/04 Details of speech synthesis systems, e.g. synthesiser structure or memory management
    • G10L 13/027 Concept to speech synthesisers; Generation of natural phrases from machine-based concepts
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/27 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00, characterised by the analysis technique

Definitions

  • The present invention relates to text-to-speech synthesis, and particularly to a speech synthesis method of generating synthesized speech from information such as a phoneme symbol string, pitch, and phoneme duration.
  • Text-to-speech synthesis is the artificial generation of speech from text.
  • A text-to-speech synthesis system comprises three stages: a linguistic processor, a prosody processor and a speech signal generator.
  • The input text is subjected to morphological and syntactic analysis in the linguistic processor, accent and intonation are then processed in the prosody processor, and information such as the phoneme symbol string, pitch pattern (the change pattern of voice pitch) and phoneme duration is output.
  • The speech signal generator, that is, the speech synthesizer, synthesizes a speech signal from information such as phoneme symbol strings, pitch patterns and phoneme durations.
  • In one conventional approach, synthesis units (basic characteristic parameter units such as phones, syllables, diphones and triphones) are stored in a storage and selectively read out.
  • The read-out synthesis units are concatenated, with their pitches and phoneme durations being controlled, whereby speech synthesis is performed.
  • PSOLA (Pitch-Synchronous Overlap-Add) is a typical technique of this kind.
  • An alternative method is formant synthesis.
  • This system was designed to emulate the way humans produce speech.
  • The formant synthesis system generates a speech signal by exciting a filter, which models the properties of the vocal tract, with a source signal obtained by modeling the signal generated by the vocal cords.
  • The phonemes (/a/, /i/, /u/, etc.) and voice variety (male voice, female voice, etc.) of the synthesized speech are determined by the combination of formant frequencies and bandwidths. The synthesis unit information is therefore given by formant frequencies and bandwidths rather than by waveforms. Since the formant synthesis system can control parameters relating to phoneme and voice variety, it has the advantage that variations in voice variety and the like can be flexibly controlled. However, it has the disadvantage that the modeling lacks precision.
  • Because only the formant frequency and bandwidth are used, the formant synthesis system cannot reproduce the finely detailed spectrum of a real speech signal, so the quality of the synthesized speech is unsatisfactory.
  • The present invention provides a speech synthesis method comprising the steps of:
  • The invention also provides a speech synthesizer supplied with a pitch pattern, phoneme duration and phoneme symbol string, comprising:
  • The present invention can be implemented either in hardware or in software on a general purpose computer, or in a combination of hardware and software. It can also be implemented by a single processing apparatus or by a distributed network of processing apparatuses.
  • Since the present invention can be implemented in software, it encompasses computer code provided to a general purpose computer on any suitable carrier medium.
  • The carrier medium can comprise any storage medium, such as a floppy disk, a CD-ROM, a magnetic device or a programmable memory device, or any transient medium, such as an electrical, optical or microwave signal.
  • FIG. 1 shows a configuration of a speech synthesizer realizing a speech synthesis method according to the first embodiment of the present invention.
  • The speech synthesizer receives a pitch pattern 306, phoneme duration 307 and phoneme symbol string 308, and outputs a synthesized speech signal 305.
  • The speech synthesizer comprises a voiced speech synthesizer 31 and an unvoiced speech synthesizer 32, and generates the synthesized speech signal 305 by adding the voiced speech signal 303 and unvoiced speech signal 304 output from these synthesizers, respectively.
  • The unvoiced speech synthesizer 32 generates the unvoiced speech signal 304 with reference to the phoneme duration 307 and phoneme symbol string 308, mainly when the phoneme is an unvoiced consonant or a voiced fricative.
  • The unvoiced speech synthesizer 32 can be realized by a conventional technique, such as exciting an LPC synthesis filter with white noise.
  • The voiced speech synthesizer 31 comprises a pitch mark generator 33, a pitch waveform generator 34 and a waveform superposing device 35.
  • The pitch mark generator 33 generates pitch marks 302, as shown in FIG. 2, with reference to the pitch pattern 306 and phoneme duration 307.
  • The pitch marks 302 indicate the positions at which the pitch waveforms 301 are superposed; the interval between adjacent pitch marks corresponds to the pitch period.
  • The pitch waveform generator 34 generates the pitch waveforms 301 corresponding to the pitch marks 302, as shown in FIG. 2, with reference to the pitch pattern 306, phoneme duration 307 and phoneme symbol string 308.
  • The waveform superposing device 35 generates the voiced speech signal 303 by superposing, at the positions of the pitch marks 302, the pitch waveforms corresponding to those pitch marks.
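The pitch mark placement and waveform superposition described above can be sketched as follows. This is an illustration, not the patent's implementation: the representation of the pitch pattern as a callable returning the local pitch period, and all function names, are assumptions.

```python
import numpy as np

def generate_pitch_marks(pitch_pattern, duration):
    """Place pitch marks so that each interval equals the local pitch period.

    pitch_pattern: callable t -> pitch period in seconds at time t (assumed form)
    duration: total voiced-segment duration in seconds
    """
    marks = [0.0]
    while True:
        t = marks[-1] + pitch_pattern(marks[-1])  # next mark one period later
        if t >= duration:
            break
        marks.append(t)
    return marks

def overlap_add(pitch_waveforms, marks, fs, length):
    """Superpose each pitch waveform starting at its pitch mark (overlap-add)."""
    out = np.zeros(length)
    for w, m in zip(pitch_waveforms, marks):
        start = int(round(m * fs))
        n = min(len(w), length - start)
        if n > 0:
            out[start:start + n] += w[:n]
    return out
```

With a constant 5 ms pitch period over 20 ms, four marks are generated at 0, 5, 10 and 15 ms, and the waveforms are summed at those sample positions.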
  • The pitch waveform generator 34 comprises a formant parameter storage 41, a parameter selector 42 and sine wave generators 43, 44 and 45, as shown in FIG. 3.
  • The formant parameters are stored in the formant parameter storage 41 in units of a synthesis unit.
  • FIG. 4 shows an example of the formant parameters of the phoneme /a/.
  • The phoneme /a/ comprises three frames, each including three formants.
  • The formant frequency, formant phase and windowing function are stored in the formant parameter storage 41 as the parameters expressing the characteristics of each formant.
  • The parameter selector 42 selects and reads the formant parameters 401 for one frame corresponding to a pitch mark 302 from the formant parameter storage 41, with reference to the pitch pattern 306, phoneme duration 307 and phoneme symbol string 308 input to the pitch waveform generator 34.
  • The parameters corresponding to formant number 1 are read out from the formant parameter storage 41 as formant frequency 402, formant phase 403 and windowing function 411.
  • The parameters corresponding to formant number 2 are read out as formant frequency 404, formant phase 405 and windowing function 412.
  • The parameters corresponding to formant number 3 are read out as formant frequency 406, formant phase 407 and windowing function 413.
  • The sine wave generator 43 generates a sine wave 408 according to the formant frequency 402 and formant phase 403.
  • The sine wave 408 is multiplied by the windowing function 411 to generate a formant waveform 414.
  • Likewise, the sine wave generator 44 outputs a sine wave 409 based on the formant frequency 404 and formant phase 405, and this sine wave is multiplied by the windowing function 412 to generate a formant waveform 415.
  • The sine wave generator 45 outputs a sine wave 410 based on the formant frequency 406 and formant phase 407, and this sine wave is multiplied by the windowing function 413 to generate a formant waveform 416.
  • Adding the formant waveforms 414, 415 and 416 generates the pitch waveform 301.
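The generation of one pitch waveform from per-formant parameters can be sketched in a few lines. This is a hedged illustration: the sample rate, the Hanning windows and the formant frequency/phase values are assumptions, not values taken from the patent.

```python
import numpy as np

def formant_waveform(freq_hz, phase, window, fs):
    """One formant waveform: a sine wave at the formant frequency and phase,
    multiplied by that formant's windowing function."""
    n = np.arange(len(window))
    sine = np.sin(2 * np.pi * freq_hz * n / fs + phase)
    return sine * window

def pitch_waveform(formants, fs):
    """Sum the formant waveforms to obtain one pitch waveform.

    formants: list of (frequency_hz, phase_rad, windowing_function) triples,
    one triple per formant of the current frame.
    """
    length = max(len(w) for _, _, w in formants)
    out = np.zeros(length)
    for freq, phase, window in formants:
        out[:len(window)] += formant_waveform(freq, phase, window, fs)
    return out

# Three formants with illustrative frequencies and Hanning windows.
fs = 16000
win = np.hanning(160)
pw = pitch_waveform([(700.0, 0.0, win), (1200.0, 0.5, win), (2600.0, 1.0, win)], fs)
```

Each formant contributes one windowed sinusoid; their sum is the pitch waveform that the waveform superposing device later overlap-adds at a pitch mark.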
  • Examples of the sine waves, windowing functions, formant waveforms and pitch waveform are shown in FIG. 6, and the power spectra of these waveforms are shown in FIG. 7.
  • In FIG. 6 the abscissa axes express time and the ordinate axes express amplitude; in FIG. 7 the abscissa axes express frequency and the ordinate axes express amplitude.
  • The sine wave has a line spectrum with a sharp peak, while the windowing function has a spectrum concentrated in the low frequency domain.
  • Windowing (multiplication) in the time domain corresponds to convolution in the frequency domain.
  • The spectrum of a formant waveform therefore has the shape obtained by shifting the spectrum of the windowing function to the frequency of the sine wave. Consequently, controlling the frequency or phase of the sine wave changes the center frequency or phase of the corresponding formant of the pitch waveform, and controlling the shape of the windowing function changes the spectral shape of that formant.
  • Since the center frequency, phase and spectral shape can be controlled independently for each formant, a highly flexible model is realized. Further, since the windowing function can express the fine structure of the spectrum, the synthesized speech can approximate the spectral structure of natural voice with high accuracy, producing a natural voice quality.
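The time-domain/frequency-domain duality stated above can be verified numerically: multiplying a window by a sine wave shifts the window's low-frequency spectrum so that it is centred at the sine frequency. The sample rate, length and frequency below are illustrative assumptions chosen so the sine falls on an exact FFT bin.

```python
import numpy as np

fs = 8000          # sample rate, Hz (illustrative)
n = 256            # analysis length; bin width = fs / n = 31.25 Hz
freq = 1000.0      # sine (formant) frequency, an exact multiple of the bin width
t = np.arange(n) / fs

window = np.hanning(n)                 # spectrum concentrated near 0 Hz
sine = np.sin(2 * np.pi * freq * t)    # line spectrum at `freq`
formant = window * sine                # time-domain product = frequency-domain convolution

spectrum = np.abs(np.fft.rfft(formant))
peak_hz = np.argmax(spectrum) * fs / n # where the shifted window spectrum is centred
```

The spectral peak of the windowed sine lands at the sine frequency, confirming that the formant's centre frequency is set by the sine wave while its spectral shape is set by the window.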
  • The pitch waveform generator 34 of the second embodiment of the present invention will now be described with reference to FIG. 8.
  • Like reference numerals designate structural elements corresponding to those in the first embodiment; only the differing portions are described.
  • In this embodiment the windowing functions are expanded in basis functions, and a set of weighting factors is stored in the storage 51 as the formant parameters instead of the windowing functions themselves.
  • A newly added windowing function generator 56 generates the windowing functions from the weighting factors.
  • Each windowing function is obtained as the sum of three basis functions weighted by the weighting factors.
  • A set of three factors is stored in the storage 51 as a set of windowing function weighting factors.
  • The parameter selector 42 outputs the formant frequencies 402, 404 and 406 and formant phases 403, 405 and 407 of the selected formant parameters 501 to the sine wave generators 43, 44 and 45, and outputs the sets of windowing function weighting factors 517, 518 and 519 to the windowing function generator 56.
  • The basis functions may be a DCT basis, or basis functions generated by subjecting the windowing functions to a Karhunen-Loeve (KL) expansion.
  • Here the basis order is set to 3, but it is not limited to 3. Expanding the windowing functions in basis functions reduces the memory capacity required for the formant parameter storage.
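A minimal sketch of the basis expansion, assuming a DCT-II basis of order 3 (the patent also allows a KL basis; the function names are mine): the storage then holds three weights per window instead of every window sample.

```python
import numpy as np

def dct_basis(order, length):
    """First `order` DCT-II basis vectors of the given length."""
    n = np.arange(length)
    return np.stack([np.cos(np.pi * k * (2 * n + 1) / (2 * length))
                     for k in range(order)])

def window_from_weights(weights, length):
    """Reconstruct a windowing function as a weighted sum of basis functions,
    as the windowing function generator 56 would do from the stored factors."""
    basis = dct_basis(len(weights), length)
    return weights @ basis

# Three stored weighting factors expand into a full-length window.
w = window_from_weights(np.array([0.5, 0.3, -0.2]), 64)
```

For a 64-sample window this replaces 64 stored samples with 3 coefficients, which is the memory saving the embodiment describes.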
  • The pitch waveform generator 34 of the third embodiment of the present invention will now be described with reference to FIG. 9.
  • Like reference numerals designate structural elements corresponding to those in the first embodiment; only the differing portions are described.
  • In this embodiment a parameter transformer 67 is newly added, and the formant parameters are varied according to the pitch pattern 306.
  • The parameter transformer 67 outputs formant frequencies 720, 722 and 724, formant phases 721, 723 and 725, and windowing functions 717, 718 and 719 by transforming the formant frequencies 402, 404 and 406, formant phases 403, 405 and 407, and windowing functions 411, 412 and 413 according to the pitch pattern 306. All of the parameters may be transformed, or only a part of them.
  • FIG. 10 shows an example of a control function used when the parameter transformer 67 controls the formant frequency according to the pitch period.
  • Such a control function may be set for every phoneme, every frame or every formant number.
  • The formant frequency can be controlled according to the pitch period by supplying such a control function to the parameter transformer 67.
  • A control function that controls the difference or ratio between the input and output formant frequencies may be used instead of one that controls the formant frequency itself.
  • FIG. 11 shows a control function that controls the power of a formant by multiplying the windowing function by a gain corresponding to the pitch period. By supplying such a control function to the parameter transformer 67 and changing the parameters according to the pitch period, the spectral change of speech with the pitch period can be modeled. As a result, high quality synthesized speech that does not depend on the pitch of the voice can be generated.
  • The formant parameters may also be changed according to the kind of preceding or following phoneme. This makes it possible to model the variation of the speech spectrum with the phoneme environment, and to improve speech quality.
  • The parameters may further be changed according to voice variety information 309 input to the parameter transformer 67 from an external device. In this case, synthesized speech of various voice qualities can be generated.
  • FIG. 12 shows an example of changing the voice quality by changing the formant frequencies. If all formant frequencies are converted by control function (a), the formants are shifted toward the high frequency domain, so a thin voice is generated; control function (b) generates a somewhat thin voice. If control function (d) is used, the formant frequencies shift toward the low frequency domain, so a deep voice is generated; control function (c) generates a somewhat deep voice.
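The parameter transformer's role can be sketched as a pure function of the pitch period. The piecewise-linear control functions below are stand-ins invented for illustration; the patent only shows their shapes in FIGS. 10 and 11, not their values.

```python
import numpy as np

def transform_parameters(formant_freq, window, pitch_period,
                         freq_control, gain_control):
    """Vary formant parameters according to the pitch period.

    freq_control: pitch period -> new formant frequency (FIG. 10 style)
    gain_control: pitch period -> gain multiplied into the window (FIG. 11 style)
    """
    new_freq = freq_control(pitch_period)
    new_window = gain_control(pitch_period) * window
    return new_freq, new_window

# Hypothetical control functions: shorter pitch periods (higher pitch) map to
# a slightly higher formant frequency and a larger formant gain.
freq_control = lambda p: np.interp(p, [0.004, 0.010], [800.0, 700.0])
gain_control = lambda p: np.interp(p, [0.004, 0.010], [1.0, 0.6])
```

A per-phoneme, per-frame or per-formant table of such functions would reproduce the selectivity described above.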
  • The pitch waveform generator 34 of the fourth embodiment of the present invention will now be described with reference to FIG. 13.
  • Like reference numerals designate structural elements corresponding to those in the first embodiment; only the differing portions are described.
  • In this embodiment a parameter smoothing device 77 is added so that the change of each formant parameter over time is smoothed.
  • The parameter smoothing device 77 outputs formant frequencies 820, 822 and 824, formant phases 821, 823 and 825, and windowing functions 817, 818 and 819 by smoothing the formant frequencies 402, 404 and 406, formant phases 403, 405 and 407, and windowing functions 411, 412 and 413, respectively. All of the parameters may be smoothed, or only a part of them.
  • FIG. 14 shows an example of formant frequency smoothing. One set of points represents the formant frequencies 402, 404 and 406 before smoothing; the smoothed formant frequencies 820, 822 and 824 are generated by smoothing the change between the corresponding formant frequencies of the current frame and the preceding or following frame.
  • When the formant corresponding to the formant frequency 404 disappears in an adjacent frame, as shown in FIG. 15A, the formant frequency 822 is generated by supplementing the missing formant, and the power of the corresponding windowing function 818 is attenuated as shown in FIG. 15B, to prevent a discontinuity in the formant power.
  • FIGS. 16A and 16B show an example of smoothing the windowing function position: the windowing function 817 is generated by smoothing so that the peak position of the windowing function 411 varies smoothly between frames. The shape and power of the windowing function may also be smoothed.
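The patent does not specify the smoothing rule, so the following is one plausible sketch: a weighted three-point average over a per-frame parameter track (for example one formant's frequency across frames), with the endpoints left unchanged.

```python
import numpy as np

def smooth_track(values, alpha=0.5):
    """Smooth a per-frame parameter track so that the change between the
    current frame and the preceding/following frames is reduced.

    alpha: weight kept on the current frame's value (assumed, not from the patent).
    """
    v = np.asarray(values, dtype=float)
    out = v.copy()
    # Each interior frame is pulled toward the mean of its neighbours.
    out[1:-1] = alpha * v[1:-1] + (1 - alpha) * 0.5 * (v[:-2] + v[2:])
    return out
```

Applied to a formant frequency track like [1000, 1400, 1000] Hz, the middle frame is pulled toward its neighbours, mirroring the behaviour illustrated in FIG. 14.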
  • The above embodiments are described for three formants, but the number of formants is not limited to three and may be changed for every frame.
  • The sine wave generator of the above embodiments outputs a sine wave, but a waveform whose power spectrum is close to a line spectrum may be used instead of an exact sine wave. For example, when the sine wave generator uses a lookup table to reduce computation cost, an exact sine wave is not obtained because of quantization error.
  • The spectrum of an individual formant waveform does not always coincide with a peak of the spectrum of the speech signal; it is the spectrum of the pitch waveform, the sum of the plural formant waveforms, that expresses the spectrum of the speech.
  • The above embodiments of the present invention provide a synthesizer for text-to-speech synthesis, but another embodiment of the present invention provides a decoder for speech coding.
  • In this case an encoder obtains, by analysis of the speech signal, the formant parameters such as formant frequency, formant phase and windowing function, together with the pitch period, encodes them, and transmits or stores the codes.
  • The decoder decodes the formant parameters and pitch periods, and reconstructs the speech signal in the same manner as the above synthesizer.
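As a rough sketch of what such an encoder/decoder pair might look like, the following quantizes and packs one frame of formant parameters. The bit allocations, the 0-8 kHz frequency range, and the three-weight window representation are all illustrative assumptions, not taken from the patent.

```python
import struct
import numpy as np

def encode_frame(formants):
    """Quantize and pack one frame of formant parameters.

    formants: list of (frequency_hz, phase_rad, weights) where `weights`
    are three windowing-function weighting factors in [-1, 1).
    """
    out = bytearray()
    for freq, phase, weights in formants:
        f_q = int(round(freq / 8000.0 * 65535))                      # 16-bit frequency
        p_q = int(round((phase % (2 * np.pi)) / (2 * np.pi) * 255))  # 8-bit phase
        w_q = [int(round((w + 1.0) / 2.0 * 255)) for w in weights]   # 8-bit weights
        out += struct.pack('>HB3B', f_q, p_q, *w_q)                  # 6 bytes per formant
    return bytes(out)

def decode_frame(data, n_formants):
    """Unpack and dequantize one frame produced by encode_frame."""
    formants = []
    for i in range(n_formants):
        f_q, p_q, w0, w1, w2 = struct.unpack_from('>HB3B', data, i * 6)
        freq = f_q / 65535 * 8000.0
        phase = p_q / 255 * 2 * np.pi
        weights = [w / 255 * 2.0 - 1.0 for w in (w0, w1, w2)]
        formants.append((freq, phase, weights))
    return formants
```

The decoder would feed the recovered parameters into the same sine-wave-plus-window synthesis used by the text-to-speech embodiments.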
  • FIG. 17A shows a flowchart of the speech synthesis process, FIG. 17B shows a flowchart of the voiced speech generation step of FIG. 17A, and FIG. 17C shows a flowchart of the pitch waveform generation step of FIG. 17B.
  • First, the pitch pattern 306, phoneme duration 307 and phoneme symbol string 308 are input (S11).
  • The voiced speech signal 303 is generated based on the pitch pattern 306, phoneme duration 307 and phoneme symbol string 308 (S12).
  • The unvoiced speech signal 304 is generated with reference to the phoneme duration 307 and phoneme symbol string 308 (S13).
  • The voiced speech signal and unvoiced speech signal are added to generate the synthesized speech signal 305 (S14).
  • In the voiced speech generation, the pitch marks 302 are generated with reference to the pitch pattern 306 and phoneme duration 307 (S21).
  • The pitch waveforms 301 corresponding to the pitch marks 302 are generated with reference to the pitch pattern 306, phoneme duration 307 and phoneme symbol string 308 (S22).
  • The pitch waveforms 301 are superposed at the positions indicated by the pitch marks 302 to generate the voiced speech (S23).
  • In the pitch waveform generation, the formant parameters 401 for one frame corresponding to the pitch mark 302 are selected from the formant parameter storage 41, with reference to the pitch pattern 306, phoneme duration 307 and phoneme symbol string 308 (S31).
  • Plural sine waves are generated according to the formant frequencies and formant phases of the selected formant parameters 401 (S32).
  • The formant waveforms 414, 415 and 416 are generated by multiplying the sine waves by the corresponding windowing functions (S33).
  • The formant waveforms are added to generate a pitch waveform (S34).
  • Since the formant frequency and formant shape are controlled independently for every formant, the spectral changes of speech due to pitch period variation and voice variety can be expressed, and highly flexible speech synthesis is realized. Because the shape of the windowing functions can express the detailed structure of the formant spectrum, high quality synthesized speech with a natural voice quality can be generated.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Electrophonic Musical Instruments (AREA)
EP02252159A 2001-03-26 2002-03-26 Speech synthesis method and speech synthesizer Expired - Lifetime EP1246163B1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2001087041 2001-03-26
JP2001087041 2001-03-26
JP2002077096 2002-03-19
JP2002077096A JP3732793B2 (ja) 2001-03-26 2002-03-19 Speech synthesis method, speech synthesis apparatus, and recording medium

Publications (3)

Publication Number Publication Date
EP1246163A2 EP1246163A2 (en) 2002-10-02
EP1246163A3 EP1246163A3 (en) 2003-08-13
EP1246163B1 true EP1246163B1 (en) 2005-08-10

Family

ID=26612017

Family Applications (1)

Application Number Title Priority Date Filing Date
EP02252159A Expired - Lifetime EP1246163B1 (en) 2001-03-26 2002-03-26 Speech synthesis method and speech synthesizer

Country Status (5)

Country Link
EP (1) EP1246163B1 (en)
JP (1) JP3732793B2 (ja)
KR (1) KR100457414B1 (ko)
CN (1) CN1185619C (zh)
DE (1) DE60205421T2 (de)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004025626A1 (en) * 2002-09-10 2004-03-25 Leslie Doherty Phoneme to speech converter
JP2004294816A (ja) * 2003-03-27 2004-10-21 Yamaha Corp Portable terminal device
JP4214842B2 (ja) 2003-06-13 2009-01-28 Sony Corp Speech synthesis apparatus and speech synthesis method
JP2005004105A (ja) * 2003-06-13 2005-01-06 Sony Corp Signal generation apparatus and signal generation method
JP2005234337A (ja) * 2004-02-20 2005-09-02 Yamaha Corp Speech synthesis apparatus, speech synthesis method, and speech synthesis program
JP4469883B2 (ja) 2007-08-17 2010-06-02 Toshiba Corp Speech synthesis method and apparatus
JP5275102B2 (ja) 2009-03-25 2013-08-28 Toshiba Corp Speech synthesis apparatus and speech synthesis method
JP5631915B2 (ja) 2012-03-29 2014-11-26 Toshiba Corp Speech synthesis apparatus, speech synthesis method, speech synthesis program, and learning apparatus
JP6499305B2 (ja) * 2015-09-16 2019-04-10 Toshiba Corp Speech synthesis apparatus, speech synthesis method, speech synthesis program, speech synthesis model learning apparatus, speech synthesis model learning method, and speech synthesis model learning program
JP6728843B2 (ja) * 2016-03-24 2020-07-22 Casio Computer Co Ltd Electronic musical instrument, musical tone generation apparatus, musical tone generation method, and program
CN108257613B (zh) * 2017-12-05 2021-12-10 Beijing Xiaochang Technology Co Ltd Method and apparatus for correcting pitch deviation of audio content
CN108597527B (zh) * 2018-04-19 2020-01-24 Beijing Microlive Vision Technology Co Ltd Multi-channel audio processing method, apparatus, computer-readable storage medium, and terminal
CN110189743B (zh) * 2019-05-06 2024-03-08 Ping An Technology (Shenzhen) Co Ltd Method, apparatus and storage medium for smoothing splice points in waveform concatenation

Also Published As

Publication number Publication date
DE60205421T2 (de) 2006-04-20
KR100457414B1 (ko) 2004-11-16
JP2002358090A (ja) 2002-12-13
KR20020076144A (ko) 2002-10-09
EP1246163A2 (en) 2002-10-02
CN1185619C (zh) 2005-01-19
DE60205421D1 (de) 2005-09-15
EP1246163A3 (en) 2003-08-13
JP3732793B2 (ja) 2006-01-11
CN1378199A (zh) 2002-11-06

Similar Documents

Publication Publication Date Title
US6304846B1 (en) Singing voice synthesis
KR940002854B1 (ko) 음성 합성시스팀의 음성단편 코딩 및 그의 피치조절 방법과 그의 유성음 합성장치
JP4705203B2 (ja) 声質変換装置、音高変換装置および声質変換方法
US8175881B2 (en) Method and apparatus using fused formant parameters to generate synthesized speech
EP1701336B1 (en) Sound processing apparatus and method, and program therefor
EP1246163B1 (en) Speech synthesis method and speech synthesizer
JPH031200A (ja) 規則型音声合成装置
EP3739571A1 (en) Speech synthesis method, speech synthesis device, and program
US7251601B2 (en) Speech synthesis method and speech synthesizer
Macon et al. Speech concatenation and synthesis using an overlap-add sinusoidal model
Degottex et al. Pitch transposition and breathiness modification using a glottal source model and its adapted vocal-tract filter
Macon et al. Concatenation-based midi-to-singing voice synthesis
JP2018077283A (ja) 音声合成方法
US7596497B2 (en) Speech synthesis apparatus and speech synthesis method
EP1543497B1 (en) Method of synthesis for a steady sound signal
Mandal et al. Epoch synchronous non-overlap-add (ESNOLA) method-based concatenative speech synthesis system for Bangla.
van Santen et al. Prediction and synthesis of prosodic effects on spectral balance of vowels
Karjalainen et al. Speech synthesis using warped linear prediction and neural networks
JPH09179576A (ja) 音声合成方法
JP2018077280A (ja) 音声合成方法
JP2018077281A (ja) 音声合成方法
JP2001312300A (ja) 音声合成装置
Rodet Sound analysis, processing and synthesis tools for music research and production
JP2765192B2 (ja) 電子楽器
JPH0836397A (ja) 音声合成装置

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20020417

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

AKX Designation fees paid

Designated state(s): DE FR GB

17Q First examination report despatched

Effective date: 20040503

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): DE FR GB

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REF Corresponds to:

Ref document number: 60205421

Country of ref document: DE

Date of ref document: 20050915

Kind code of ref document: P

ET Fr: translation filed
PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20060511

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20120319

Year of fee payment: 11

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20120321

Year of fee payment: 11

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20120411

Year of fee payment: 11

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20130326

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20131129

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 60205421

Country of ref document: DE

Effective date: 20131001

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20130326

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20131001

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20130402