EP1526508A1 - Method for selecting synthesis units (Verfahren zum Auswählen von Syntheseneinheiten) - Google Patents

Method for selecting synthesis units

Info

Publication number
EP1526508A1
EP1526508A1
Authority
EP
European Patent Office
Prior art keywords
pitch
segment
similarity
information
units
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP04105204A
Other languages
English (en)
French (fr)
Other versions
EP1526508B1 (de)
Inventor
François CAPMAN (Thales Intellectual Property)
Marc PADELLINI (Thales Intellectual Property)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Thales SA
Original Assignee
Thales SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thales SA filed Critical Thales SA
Publication of EP1526508A1 publication Critical patent/EP1526508A1/de
Application granted granted Critical
Publication of EP1526508B1 publication Critical patent/EP1526508B1/de
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 - Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/0018 - Speech coding using phonetic or linguistical decoding of the source; Reconstruction using text-to-speech synthesis
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 - Speech synthesis; Text to speech systems
    • G10L13/06 - Elementary speech units used in speech synthesisers; Concatenation rules

Definitions

  • The invention relates to a method for selecting synthesis units.
  • Indexing techniques for natural speech units have recently enabled the development of particularly powerful text-to-speech synthesis systems. These techniques are now being studied in the context of very low bit rate speech coding, together with algorithms borrowed from the field of speech recognition, Ref [1-5].
  • The main idea is to identify, in the speech signal to be coded, a near-optimal segmentation into elementary units. These units can be obtained from a phonetic transcription, which has the disadvantage of requiring manual correction for an optimal result, or automatically according to spectral stability criteria. From this type of segmentation, and for each segment, a synthesis unit is sought in a dictionary obtained during a prior learning phase and containing reference synthesis units.
  • The coding scheme used consists in modeling the acoustic space of the speaker (or speakers) with hidden Markov models (HMM, Hidden Markov Models). These speaker-dependent or speaker-independent models are obtained during a prior learning phase, using algorithms identical to those used in speech recognition systems.
  • The essential difference lies in the fact that the models are trained on vectors grouped into classes automatically, rather than in a supervised way from a phonetic transcription.
  • The learning procedure then consists in automatically obtaining the segmentation of the training signals (for example using the so-called temporal decomposition method), then grouping the resulting segments into a finite number of classes corresponding to the number of HMM models to be built.
  • The number of models is directly related to the resolution desired to represent the acoustic space of the speaker(s).
  • These models make it possible to segment the signal to be coded using a Viterbi algorithm. The segmentation associates with each segment its class index and its length. Since this information is not sufficient to model the spectral information, for each class one realization of the spectral trajectory, called a synthesis unit, is selected from among several. These units are extracted from the learning base during its segmentation using the HMM models. Context can be taken into account, for example, by using multiple subclasses that capture the transitions from one class to another. A first index indicates the class to which the segment belongs; a second index specifies the subclass to which it belongs, given by the class index of the previous segment. The subclass index therefore does not need to be transmitted, and the class index must be stored for the next segment. The subclasses thus defined make it possible to take into account the different transitions into the class associated with the segment in question. To the spectral information one adds the prosody information, that is, the values of the pitch and energy parameters and their evolution.
  • A classic method consists in first selecting the closest unit from a spectral point of view and then, once the unit has been selected, coding the prosody information independently of the selected unit.
  • The method according to the present invention proposes a novel way of selecting the closest synthesis unit, together with the modeling and quantization of the additional information needed at the decoder to reproduce the speech signal.
  • The information is a speech segment to be coded, and the proximity criteria used are the fundamental frequency (pitch), or the spectral distortion, and/or the energy profile, together with a step merging the criteria used in order to determine the representative synthesis unit.
  • The method comprises, for example, a coding step and/or a pitch correction step that modifies the synthesis profile.
  • The encoding and/or pitch correction step may be a linear transformation of the pitch profile.
  • The method is used, for example, for the selection and/or coding of synthesis units in a very low bit rate speech coder.
  • The speech signal is analyzed frame by frame to extract characteristic parameters (spectral parameters, pitch, energy).
  • This analysis classically uses a sliding window positioned over the frame. The frame has a duration of the order of 20 ms, and the analysis window is updated with a shift of the order of 10 ms to 20 ms.
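The frame-by-frame analysis above can be sketched as follows. Only the ~20 ms frame and 10-20 ms shift come from the text; the 8 kHz sample rate and the Hamming window are illustrative assumptions:

```python
import numpy as np

def frame_signal(signal, sample_rate=8000, frame_ms=20, shift_ms=10):
    """Split a speech signal into overlapping analysis frames.

    Frame duration and shift follow the values given in the text;
    the 8 kHz sample rate and Hamming window are assumptions typical
    of low bit rate coders, not stated in the patent.
    """
    frame_len = int(sample_rate * frame_ms / 1000)   # 160 samples at 8 kHz
    shift = int(sample_rate * shift_ms / 1000)       # 80 samples at 8 kHz
    window = np.hamming(frame_len)                   # sliding analysis window
    n_frames = 1 + max(0, (len(signal) - frame_len) // shift)
    frames = np.empty((n_frames, frame_len))
    for i in range(n_frames):
        frames[i] = window * signal[i * shift : i * shift + frame_len]
    return frames
```

Each frame would then feed the extraction of the characteristic parameters (spectral parameters, pitch, energy).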
  • HMM (Hidden Markov Model) models represent speech segments, i.e. sets of successive frames, corresponding to phonemes if the learning phase is supervised (phonetic segmentation and transcription available), or to spectrally stable sounds when the segmentation is obtained automatically.
  • 64 HMM models are used here, which make it possible, during the recognition phase, to associate with each segment the index of the identified HMM model, and therefore the class to which it belongs.
  • The HMM models are also used, with the help of a Viterbi-type algorithm, to perform during the coding phase the segmentation and classification of each segment (assignment to a class). Each segment is therefore identified by an index between 1 and 64, which is transmitted to the decoder.
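The classification step can be illustrated with a minimal Viterbi recursion. This is a sketch only: each of the 64 HMM models is collapsed to a single state, so it shows the dynamic-programming principle rather than the full segment-level HMM decoding of the patent:

```python
import numpy as np

def viterbi_classes(log_lik, log_trans):
    """Most likely class sequence given per-frame class log-likelihoods.

    Illustrative sketch: each HMM model is collapsed to one state.
    log_lik  : (T, K) log-likelihood of each of K classes per frame
    log_trans: (K, K) log transition probabilities between classes
    """
    T, K = log_lik.shape
    delta = np.full((T, K), -np.inf)     # best path score ending in class k
    psi = np.zeros((T, K), dtype=int)    # backpointers
    delta[0] = log_lik[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans      # (K, K): from i to j
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(K)] + log_lik[t]
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmax(delta[-1]))
    for t in range(T - 2, -1, -1):       # backtrack
        path[t] = psi[t + 1, path[t + 1]]
    return path
```

Runs of identical class indices in the returned path correspond to segments, each then coded by its class index and length.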
  • The decoder uses this index to find the synthesis unit in the dictionary built during the learning phase.
  • The synthesis units that make up the dictionary are simply the parameter sequences associated with the segments obtained on the learning corpus.
  • A dictionary class contains all the units associated with the same HMM model. Each synthesis unit is therefore characterized by a sequence of spectral parameters, a sequence of pitch values (pitch profile), and a sequence of gains (energy profile).
  • Each class (from 1 to 64) of the dictionary is subdivided into 64 subclasses, where each subclass contains the synthesis units that are temporally preceded by a segment belonging to the same class. This approach takes the past context into account, and thus improves the rendering of transient areas from one unit to the next.
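The class/subclass organisation of the dictionary described above might be sketched as follows. The `SynthesisUnit` fields mirror the three parameter sequences of the text, while the container type and method names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class SynthesisUnit:
    spectral: list        # sequence of spectral parameter vectors
    pitch_profile: list   # sequence of pitch values
    energy_profile: list  # sequence of gains

@dataclass
class UnitDictionary:
    # units[class_index][previous_class_index] -> list of candidate units
    units: dict = field(default_factory=dict)

    def add(self, cls, prev_cls, unit):
        self.units.setdefault(cls, {}).setdefault(prev_cls, []).append(unit)

    def candidates(self, cls, prev_cls):
        """Candidate units for a segment of class `cls` preceded by a
        segment of class `prev_cls`. The subclass index is not
        transmitted: the decoder recomputes it from the class index
        of the previous segment."""
        return self.units.get(cls, {}).get(prev_cls, [])
```

This makes explicit why only the class index needs to be sent: the subclass is determined by the stored class of the preceding segment.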
  • The present invention relates in particular to a multi-criteria method for selecting a synthesis unit.
  • The method makes it possible, for example, to take into account simultaneously the pitch, the spectral distortion, and the pitch and energy evolution profiles.
  • In a variant embodiment, the method may comprise a pitch coding step by correction of the synthesis pitch profile, detailed below.
  • The criterion relating to the pitch evolution profile makes it possible, in part, to take the voicing information into account. It can however be disabled when the segment is totally unvoiced, or when the selected subclass is also unvoiced. Indeed, three main types of subclasses can be distinguished: subclasses containing mostly voiced units, subclasses containing mostly unvoiced units, and subclasses containing mostly mixed units.
  • The method according to the invention is not limited to optimizing the bit rate allocated to the prosody information; it also makes it possible to keep, for the coding phase, the entirety of the synthesis units obtained during the learning phase, with a constant number of bits to encode a synthesis unit.
  • The synthesis unit is characterized by both its pitch value and its index. In a speaker-independent coding scheme, this approach makes it possible to cover all possible pitch values and to select the synthesis unit while taking into account, in part, the characteristics of the speaker: for a given speaker there is indeed a correlation between the pitch variation range and the characteristics of the vocal tract (especially its length).
  • The similarity measure may be a spectral distance.
  • Step A9) comprises, for example, a step in which the set of spectra of the same segment is averaged and the similarity measure is a cross-correlation measure.
  • The spectral distortion criterion is, for example, computed on harmonic structures resampled at a constant pitch, or resampled at the pitch of the segment to be coded, after interpolation of the initial harmonic structures.
  • The similarity criterion depends on the spectral parameters used (for example the type of parameters used to represent the envelope).
  • LSP (Line Spectral Pairs), LSF (Line Spectral Frequencies)
  • Cepstral parameters are usually used; they can either be derived from a linear prediction analysis (LPCC, Linear Prediction Cepstrum Coefficients) or be estimated from a filter bank, often on a Mel or Bark perceptual scale (MFCC, Mel Frequency Cepstrum Coefficients).
  • Pre-processing then consists in estimating a spectral envelope from the harmonic amplitudes (linear or polynomial spline interpolation) and resampling the envelope thus obtained, either at the fundamental frequency of the segment to be coded, or at a constant fundamental frequency (100 Hz for example).
  • A constant fundamental frequency makes it possible to pre-compute all the harmonic structures of the synthesis units during the learning phase.
  • Resampling then needs to be performed only on the segment to be coded.
  • The similarity measure can then be estimated simply from the average harmonic structure of the segment to be coded and that of the synthesis unit under consideration.
  • This similarity measure can also be a normalized cross-correlation measure. It can also be noted that the resampling procedure can be carried out on a perceptual frequency scale (Mel or Bark).
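As an illustration of the similarity computation described in the preceding points, the following sketch resamples a harmonic structure onto a constant-fundamental grid (100 Hz, as in the text) and compares average harmonic structures by normalized cross-correlation. The linear interpolation choice and the number of harmonics are assumptions:

```python
import numpy as np

def envelope_at_constant_pitch(harmonic_amps, pitch, f0=100.0, n_harm=40,
                               fmax=4000.0):
    """Resample one frame's harmonic structure onto a constant-f0 grid.

    The amplitudes of harmonics of `pitch` Hz are linearly interpolated
    to give the envelope at multiples of the constant fundamental f0.
    n_harm and fmax are illustrative choices.
    """
    freqs = pitch * np.arange(1, len(harmonic_amps) + 1)
    grid = f0 * np.arange(1, n_harm + 1)
    grid = grid[grid <= min(fmax, freqs[-1])]
    return np.interp(grid, freqs, harmonic_amps)

def similarity(seg_structs, unit_structs):
    """Normalized cross-correlation between the average harmonic
    structure of the segment to be coded and that of a candidate
    synthesis unit (both already on the constant-f0 grid)."""
    a = np.mean(seg_structs, axis=0)
    b = np.mean(unit_structs, axis=0)
    n = min(len(a), len(b))
    a, b = a[:n], b[:n]
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Pre-computing the constant-f0 structures of all synthesis units at learning time means only the segment to be coded has to be resampled at coding time, as noted above.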
  • The method comprises a step of pitch coding by modification of the synthesis profile. This consists in resynthesizing a pitch profile from that of the selected synthesis unit and a gain varying linearly over the duration of the coded segment. It is then sufficient to transmit one additional value to characterize the correction gain over the entire segment.
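A possible reading of this pitch correction step is sketched below: a single value g is fitted in the least-squares sense so that the unit's pitch profile, scaled by a gain ramping linearly from 1 to g across the segment, approximates the target profile. The 1-to-g ramp is an assumption about the "linearly variable gain"; only g would be transmitted:

```python
import numpy as np

def correct_pitch_profile(unit_profile, target_profile):
    """Fit one correction gain g for a linearly varying pitch correction.

    Model (assumed): synth = unit * (1 + (g - 1) * ramp), with the ramp
    going from 0 at segment start to 1 at segment end. Returns the
    single value g to transmit and the resynthesized profile.
    """
    u = np.asarray(unit_profile, dtype=float)
    t = np.asarray(target_profile, dtype=float)
    ramp = np.linspace(0.0, 1.0, len(u))
    x = u * ramp                     # coefficient of g in the model
    r = t - u * (1.0 - ramp)         # part of the target explained by g
    g = float(np.dot(x, r) / np.dot(x, x))   # least-squares solution
    synth = u * (1.0 + (g - 1.0) * ramp)
    return g, synth
```

Transmitting only g per segment is what keeps the prosody bit rate in the 225-300 bits/s range quoted below.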
  • The bit rate associated with the prosody is then between 225 and 300 bits/s, which leads to an overall bit rate between 450 and 600 bits/s.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
EP04105204A 2003-10-24 2004-10-21 Verfahren zum Auswählen von Syntheseneinheiten Expired - Lifetime EP1526508B1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR0312494 2003-10-24
FR0312494A FR2861491B1 (fr) 2003-10-24 2003-10-24 Procede de selection d'unites de synthese

Publications (2)

Publication Number Publication Date
EP1526508A1 true EP1526508A1 (de) 2005-04-27
EP1526508B1 EP1526508B1 (de) 2009-05-27

Family

ID=34385390

Family Applications (1)

Application Number Title Priority Date Filing Date
EP04105204A Expired - Lifetime EP1526508B1 (de) 2003-10-24 2004-10-21 Verfahren zum Auswählen von Syntheseneinheiten

Country Status (6)

Country Link
US (1) US8195463B2 (de)
EP (1) EP1526508B1 (de)
AT (1) ATE432525T1 (de)
DE (1) DE602004021221D1 (de)
ES (1) ES2326646T3 (de)
FR (1) FR2861491B1 (de)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4265501B2 (ja) * 2004-07-15 2009-05-20 ヤマハ株式会社 音声合成装置およびプログラム
JP4025355B2 (ja) * 2004-10-13 2007-12-19 松下電器産業株式会社 音声合成装置及び音声合成方法
US7126324B1 (en) * 2005-11-23 2006-10-24 Innalabs Technologies, Inc. Precision digital phase meter
EP2058803B1 (de) * 2007-10-29 2010-01-20 Harman/Becker Automotive Systems GmbH Partielle Sprachrekonstruktion
US8401849B2 (en) * 2008-12-18 2013-03-19 Lessac Technologies, Inc. Methods employing phase state analysis for use in speech synthesis and recognition
US8731931B2 (en) * 2010-06-18 2014-05-20 At&T Intellectual Property I, L.P. System and method for unit selection text-to-speech using a modified Viterbi approach
US9664518B2 (en) * 2010-08-27 2017-05-30 Strava, Inc. Method and system for comparing performance statistics with respect to location
CN102651217A (zh) * 2011-02-25 2012-08-29 株式会社东芝 用于合成语音的方法、设备以及用于语音合成的声学模型训练方法
US9291713B2 (en) 2011-03-31 2016-03-22 Strava, Inc. Providing real-time segment performance information
US9116922B2 (en) 2011-03-31 2015-08-25 Strava, Inc. Defining and matching segments
US8620646B2 (en) * 2011-08-08 2013-12-31 The Intellisis Corporation System and method for tracking sound pitch across an audio signal using harmonic envelope
US10453479B2 (en) 2011-09-23 2019-10-22 Lessac Technologies, Inc. Methods for aligning expressive speech utterances with text and systems therefor
US8718927B2 (en) 2012-03-12 2014-05-06 Strava, Inc. GPS data repair
US8886539B2 (en) * 2012-12-03 2014-11-11 Chengjun Julian Chen Prosody generation using syllable-centered polynomial representation of pitch contours
CN113412512A (zh) * 2019-02-20 2021-09-17 雅马哈株式会社 音信号合成方法、生成模型的训练方法、音信号合成系统及程序

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020065655A1 (en) * 2000-10-18 2002-05-30 Thales Method for the encoding of prosody for a speech encoder working at very low bit rates

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10260692A (ja) * 1997-03-18 1998-09-29 Toshiba Corp 音声の認識合成符号化/復号化方法及び音声符号化/復号化システム
JP2000056789A (ja) * 1998-06-02 2000-02-25 Sanyo Electric Co Ltd 音声合成装置及び電話機
JP2000075878A (ja) * 1998-08-31 2000-03-14 Canon Inc 音声合成装置およびその方法ならびに記憶媒体
US6581032B1 (en) * 1999-09-22 2003-06-17 Conexant Systems, Inc. Bitstream protocol for transmission of encoded voice signals
US6574593B1 (en) * 1999-09-22 2003-06-03 Conexant Systems, Inc. Codebook tables for encoding and decoding
JP3515039B2 (ja) * 2000-03-03 2004-04-05 沖電気工業株式会社 テキスト音声変換装置におけるピッチパタン制御方法
JP3728172B2 (ja) * 2000-03-31 2005-12-21 キヤノン株式会社 音声合成方法および装置
SE521600C2 (sv) * 2001-12-04 2003-11-18 Global Ip Sound Ab Lågbittaktskodek
CA2388352A1 (en) * 2002-05-31 2003-11-30 Voiceage Corporation A method and device for frequency-selective pitch enhancement of synthesized speech

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020065655A1 (en) * 2000-10-18 2002-05-30 Thales Method for the encoding of prosody for a speech encoder working at very low bit rates

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
BAUDOIN G ; EL CHAMI F: "Corpus based very low bit rate speech coding", 2003 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, vol. 1, 6 April 2003 (2003-04-06) - 10 April 2003 (2003-04-10), HONG KONG, CHINA, pages 792 - 795, XP002285304, ISBN: 0-7803-7663-3 *
C.BAVEREL ET AL., CODAGE DE LA PAROLE À TRÈS BAS DÉBIT PAR INDEXATION D'UNITÉS DE TAILLE VARIABLE, 2001
M. PADELLINI, F. CAPMAN, G. BAUDOIN: "Dynamic Unit Selection for Very Low Bit Rate Coding at 500 bits/sec", TSD 2004, 8 September 2004 (2004-09-08) - 11 September 2004 (2004-09-11), BRNO, CZ, XP002312723 *
M. PADELLINI, G. BAUDOIN AND F. CAPMAN: "Codage de la parole a très bas débit par indexation d'unités de taille variable", RJC'2003, 5ÈMES RENCONTRES JEUNES CHERCHEURS EN PAROLE, 23 September 2003 (2003-09-23) - 26 September 2003 (2003-09-26), GRENOBLE, FRANCE, XP002285303 *

Also Published As

Publication number Publication date
FR2861491A1 (fr) 2005-04-29
US8195463B2 (en) 2012-06-05
FR2861491B1 (fr) 2006-01-06
DE602004021221D1 (de) 2009-07-09
US20050137871A1 (en) 2005-06-23
ES2326646T3 (es) 2009-10-16
ATE432525T1 (de) 2009-06-15
EP1526508B1 (de) 2009-05-27

Similar Documents

Publication Publication Date Title
US7996222B2 (en) Prosody conversion
EP1526508B1 (de) Verfahren zum Auswählen von Syntheseneinheiten
FR2522179A1 (fr) Procede et appareil de reconnaissance de paroles permettant de reconnaitre des phonemes particuliers du signal vocal quelle que soit la personne qui parle
JP2002533772A (ja) 可変レートスピーチコーディング
US20020184009A1 (en) Method and apparatus for improved voicing determination in speech signals containing high levels of jitter
EP1606792B1 (de) Verfahren zur analyse der grundfrequenz, verfahren und vorrichtung zur sprachkonversion unter dessen verwendung
JPH08123484A (ja) 信号合成方法および信号合成装置
WO2005106853A1 (fr) Procede et systeme de conversion rapides d'un signal vocal
Bhatt Simulation and overall comparative evaluation of performance between different techniques for high band feature extraction based on artificial bandwidth extension of speech over proposed global system for mobile full rate narrow band coder
Khonglah et al. Speech enhancement using source information for phoneme recognition of speech with background music
Lee et al. A segmental speech coder based on a concatenative TTS
Fernandez Gallardo et al. Spectral sub-band analysis of speaker verification employing narrowband and wideband speech
Miyamoto et al. Non-linear harmonic generation based blind bandwidth extension considering aliasing artifacts
Albahri et al. Artificial bandwidth extension to improve automatic emotion recognition from narrow-band coded speech
Min et al. Deep vocoder: Low bit rate compression of speech with deep autoencoder
Berisha et al. Bandwidth extension of speech using perceptual criteria
Bachhav et al. Exploiting explicit memory inclusion for artificial bandwidth extension
JP2000514207A (ja) 音声合成システム
JPH1097274A (ja) 話者認識方法及び装置
EP1846918B1 (de) Verfahren zur schätzung einer sprachumsetzungsfunktion
Sharma et al. Non-intrusive bit-rate detection of coded speech
Salor et al. Dynamic programming approach to voice transformation
Do et al. Objective evaluation of HMM-based speech synthesis system using kullback-leibler divergence.
Ali et al. A long term harmonic plus noise model for narrow-band speech coding at very low bit-rates
Hayashi Bandwidth Extension of Speech Signals Using RNN-LSTM Network with Multiresolution Analysis

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL HR LT LV MK

17P Request for examination filed

Effective date: 20051007

AKX Designation fees paid

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PL PT RO SE SI SK TR

17Q First examination report despatched

Effective date: 20070612

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: FRENCH

REF Corresponds to:

Ref document number: 602004021221

Country of ref document: DE

Date of ref document: 20090709

Kind code of ref document: P

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602004021221

Country of ref document: DE

Effective date: 20090709

REG Reference to a national code

Ref country code: ES

Ref legal event code: FG2A

Ref document number: 2326646

Country of ref document: ES

Kind code of ref document: T3

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090927

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090527

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090527

NLV1 Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act
PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090527

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090527

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090527

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090827

REG Reference to a national code

Ref country code: IE

Ref legal event code: FD4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090527

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090527

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090527

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090527

Ref country code: IE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090527

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090527

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090827

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

BERE Be: lapsed

Owner name: THALES

Effective date: 20091031

26N No opposition filed

Effective date: 20100302

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20091031

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602004021221

Country of ref document: DE

Effective date: 20100302

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090828

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20091031

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20091021

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091128

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090527

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090527

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 13

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 14

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20171018

Year of fee payment: 14

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: CH

Payment date: 20171013

Year of fee payment: 14

Ref country code: GB

Payment date: 20171013

Year of fee payment: 14

Ref country code: ES

Payment date: 20171102

Year of fee payment: 14

Ref country code: IT

Payment date: 20171024

Year of fee payment: 14

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 15

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 602004021221

Country of ref document: DE

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20181021

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190501

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20181031

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20181031

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20181021

Ref country code: IT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20181021

REG Reference to a national code

Ref country code: ES

Ref legal event code: FD2A

Effective date: 20191203

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20181022

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230517

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230921

Year of fee payment: 20