EP0309561B1 - Voiced speech signal detector using adaptive threshold values - Google Patents


Info

Publication number
EP0309561B1
EP0309561B1 EP88903995A
Authority
EP
European Patent Office
Prior art keywords
frames
speech
calculating
unvoiced
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP88903995A
Other languages
German (de)
English (en)
Other versions
EP0309561A1 (fr)
Inventor
David Lynn Thomson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AT&T Corp
Original Assignee
American Telephone and Telegraph Co Inc
AT&T Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by American Telephone and Telegraph Co Inc, AT&T Corp filed Critical American Telephone and Telegraph Co Inc
Priority to AT88903995T priority Critical patent/ATE83329T1/de
Publication of EP0309561A1 publication Critical patent/EP0309561A1/fr
Application granted granted Critical
Publication of EP0309561B1 publication Critical patent/EP0309561B1/fr
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/93Discriminating between voiced and unvoiced parts of speech signals

Definitions

  • This invention relates to determining whether or not speech contains a fundamental frequency, which is commonly referred to as the unvoiced/voiced decision. More particularly, the unvoiced/voiced decision is made by a two-stage voiced detector with the final threshold values being adaptively calculated for the speech environment utilizing statistical techniques.
  • A frame of speech is declared voiced if a weighted sum of classifiers is greater than a specified threshold, and unvoiced otherwise.
  • The weights and threshold are chosen to maximize performance on a training set of speech where the voicing of each frame is known.
  • A problem associated with the fixed weighted sum method is that it does not perform well when the speech environment changes.
  • The reason is that the threshold is determined from the training set, which is different from speech subject to background noise, non-linear distortion, and filtering.
  • In one prior arrangement, the adaptive threshold is decremented by one. After the adaptive threshold has been calculated, it is subtracted from an output of an elementary pitch detector. If the result of the subtraction is a positive number, the speech frame is declared voiced; otherwise, the speech frame is declared unvoiced.
  • The problem with the disclosed method is that the parameters themselves are not used in the elementary pitch detector. Hence, the adjustment of the adaptive threshold is ad hoc and is not directly linked to the physical phenomena from which it is calculated. In addition, the threshold cannot adapt to rapidly changing speech environments.
  • The present invention accordingly provides a method and apparatus for making an adaptive voiced/unvoiced determination for frames of speech as claimed in claims 1, 5 or 8.
  • A voicing decision apparatus adapts to a changing environment by utilizing adaptive statistical values to make the voicing decision.
  • The statistical values are adapted to the changing environment by utilizing statistics based on an output of a voiced detector.
  • The statistical parameters are calculated by the voiced detector generating a general value indicating the presence of a fundamental frequency in a speech frame in response to speech attributes of the frame.
  • The means for unvoiced and voiced speech frames are calculated in response to the generated value.
  • The two means are then used to determine decision regions, and the determination of the presence of the fundamental frequency is done in response to the decision regions and the present speech frame.
  • The mean for unvoiced frames is calculated by calculating the probability that the present speech frame is unvoiced, calculating the overall probability that any frame will be unvoiced, and calculating the probability that the present speech frame is voiced.
  • The mean of the unvoiced speech frames is then calculated in response to the probability that the present speech frame is unvoiced and the overall probability.
  • The mean of the voiced speech frames is calculated in response to the probability that the present speech frame is voiced and the overall probability.
  • The calculations of probabilities are performed utilizing a maximum likelihood statistical operation.
  • The generation of the general value is performed utilizing a discriminant analysis procedure, and the speech attributes are speech classifiers.
  • The decision regions are defined by the means of the unvoiced and voiced speech frames and by a weight and threshold value generated in response to the general values of past and present frames and the means of the voiced and unvoiced frames.
  • The method for detecting the presence of a fundamental frequency in speech frames comprises the steps of: generating a general value in response to a set of classifiers defining speech attributes of a present speech frame to indicate the presence of the fundamental frequency, calculating a set of statistical parameters in response to the general value, and determining the presence of the fundamental frequency in response to the general value and the calculated set of statistical parameters.
  • The step of generating the general value is performed utilizing a discriminant analysis procedure.
  • The step of determining the fundamental frequency comprises the step of calculating a weight and a threshold value in response to the set of parameters.
  • FIG. 1 illustrates an apparatus for performing the unvoiced/voiced decision operation by first utilizing a discriminant voiced detector to process voice classifiers in order to generate a discriminant variable or general variable.
  • The latter variable is statistically analyzed to make the voicing decision.
  • The statistical analysis adapts the threshold utilized in making the unvoiced/voiced decision so as to give reliable performance in a variety of voice environments.
  • Classifier generator 100 is responsive to each frame of voice to generate classifiers which advantageously may be the log of the speech energy, the log of the LPC gain, the log area ratio of the first reflection coefficient, and the squared correlation coefficient of two speech segments one frame long which are offset by one pitch period.
  • The calculation of these classifiers involves digitally sampling analog speech, forming frames of the digital samples, and processing those frames, and is well known in the art.
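As an informal illustration of how such classifiers might be computed from a frame of digital samples, the sketch below derives the four named quantities in Python. The frame length, the use of a first-order predictor for the LPC gain, the sign convention of the log area ratio, and the handling of the pitch offset are all assumptions for illustration, not the patent's implementation:

```python
import math

def classifiers(frame, prev_frame, pitch_period):
    """Compute the four classifiers named in the description for one
    frame of sampled speech.  `frame` and `prev_frame` are lists of
    float samples; `pitch_period` is in samples.  Illustrative only."""
    # Log of the speech energy (floor avoids log(0) on silence).
    energy = sum(s * s for s in frame)
    log_energy = math.log(max(energy, 1e-10))

    # Autocorrelation at lags 0 and 1 gives the first reflection coefficient.
    r0 = energy
    r1 = sum(frame[i] * frame[i + 1] for i in range(len(frame) - 1))
    k1 = r1 / r0 if r0 > 0 else 0.0

    # Log area ratio of the first reflection coefficient
    # (sign convention varies between texts).
    lar1 = math.log((1.0 + k1) / (1.0 - k1)) if abs(k1) < 1.0 else 0.0

    # Log of the LPC gain for a first-order predictor (a higher order
    # would follow from Levinson-Durbin; order 1 keeps the sketch short).
    gain = r0 * (1.0 - k1 * k1)
    log_gain = math.log(max(gain, 1e-10))

    # Squared correlation coefficient of two segments one frame long,
    # offset by one pitch period (earlier segment borrows from prev_frame).
    a = prev_frame[-pitch_period:] + frame[:len(frame) - pitch_period]
    b = frame
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))
    corr2 = (num / den) ** 2 if den > 0 else 0.0

    return [log_energy, log_gain, lar1, corr2]
```

On a perfectly periodic input whose period equals the assumed pitch period, the squared correlation classifier approaches 1, which is the behavior a voicing cue needs.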
  • Generator 100 transmits the classifiers to silence detector 101 and discriminant voiced detector 102 via path 106.
  • Discriminant voiced detector 102 is responsive to the classifiers received via path 106 to calculate the discriminant value x = cᵀy + d, where
  • c is a vector comprising the weights,
  • y is a vector comprising the classifiers, and
  • d is a scalar representing a threshold value.
  • The components of vector c are initialized as follows: the component corresponding to the log of the speech energy equals 0.3918606, the component corresponding to the log of the LPC gain equals -0.0520902, the component corresponding to the log area ratio of the first reflection coefficient equals 0.5637082, and the component corresponding to the squared correlation coefficient equals 1.361249; and d initially equals -8.36454.
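With the classifier vector and the initial weights above, the discriminant value reduces to a weighted sum plus an offset. A minimal sketch (the function name is illustrative):

```python
# Initial weights from the description, in the order [log speech energy,
# log LPC gain, log area ratio of the first reflection coefficient,
# squared correlation coefficient].
C = [0.3918606, -0.0520902, 0.5637082, 1.361249]
D = -8.36454

def discriminant(y, c=C, d=D):
    """x = c'y + d: weighted sum of the classifiers plus the scalar
    offset d, as computed by discriminant voiced detector 102."""
    return sum(ci * yi for ci, yi in zip(c, y)) + d
```

For an all-zero classifier vector the discriminant is simply d, i.e. -8.36454, so a frame needs substantial energy and periodicity evidence before x turns positive.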
  • Detector 102 transmits this value via path 111 to statistical calculator 103 and subtracter 107.
  • Silence detector 101 is responsive to the classifiers transmitted via path 106 to determine whether speech is actually present on the data being received on path 109 by classifier generator 100.
  • The indication of the presence of speech is transmitted via path 110 to statistical calculator 103 by silence detector 101.
  • For each frame of speech, detector 102 generates and transmits the discriminant value x via path 111.
  • Statistical calculator 103 maintains an average of the discriminant values received via path 111 by averaging in the discriminant value for the present, non-silence frame with the discriminant values for previous non-silence frames.
  • Statistical calculator 103 is also responsive to the signal received via path 110 to calculate the overall probability that any frame is unvoiced and the probability that any frame is voiced.
  • Statistical calculator 103 calculates the statistical value that the discriminant value for the present frame would have if the frame were unvoiced and the statistical value that the discriminant value for the present frame would have if the frame were voiced.
  • That statistical value may be the mean.
  • Calculator 103 performs these calculations not only on the basis of the discriminant value received for the present frame via path 106 and the average of the classifiers, but also on the basis of a weight and a threshold value, defining whether a frame is unvoiced or voiced, received via path 113 from threshold calculator 104.
  • Calculator 104 is responsive to the probabilities and statistical values for the present frame, as generated by calculator 103 and received via path 112, to recalculate the weight value a and the threshold value b for the present frame. Then, these new values of a and b are transmitted back to statistical calculator 103 via path 113.
  • Calculator 104 transmits the weight, threshold, and statistical values via path 114 to U/V determinator 105.
  • The latter determinator is responsive to the information transmitted via paths 114 and 115 to determine whether the frame is unvoiced or voiced and to transmit this decision via path 116.
  • Statistical calculator 103 implements an improved EM algorithm similar to that suggested in the article by N. E. Day entitled "Estimating the Components of a Mixture of Normal Distributions", Biometrika, Vol. 56, No. 3, pp. 463-474, 1969.
  • The probability that the present frame is voiced is the complement of the probability that it is unvoiced: P(v|x_n) = 1 - P(u|x_n). The means of the unvoiced and voiced frames are then updated recursively as u_n = (1-z)u_{n-1} + z x_n P(u|x_n)/p_n (10) and v_n = (1-z)v_{n-1} + z x_n P(v|x_n)/(1-p_n) (11), where z is a smoothing constant and p_n is the overall probability that any frame is unvoiced.
  • Determinator 105 is responsive to this transmitted information to decide whether the present frame is voiced or unvoiced. If the value a is positive, then a frame is declared voiced if the following equation is true: a x_n - a(u_n + v_n)/2 > 0 (14); or, if the value a is negative, then a frame is declared voiced if the following equation is true: a x_n - a(u_n + v_n)/2 ≤ 0 (15). Equation 14 can also be expressed as: a x_n + b - log[(1-p_n)/p_n] > 0. Equation 15 can also be expressed as: a x_n + b - log[(1-p_n)/p_n] ≤ 0. If the previous conditions are not met, determinator 105 declares the frame unvoiced.
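Putting the recursive statistics and the decision test together, the following sketch shows one way the adaptive loop could behave. The smoothing constant z, the initial values, the fixed variance T, and the rule used to recompute the weight a and threshold b are simplifying assumptions; the exact formulas of threshold calculator 104 are not reproduced in the text above:

```python
import math

class AdaptiveVoicingDetector:
    """Illustrative sketch combining statistical calculator 103,
    threshold calculator 104 and determinator 105.  The (a, b) update
    rule is an assumed equal-variance two-Gaussian rule, not the
    patent's exact formula."""

    def __init__(self, z=0.02):
        self.z = z
        self.p = 0.5    # overall probability that a frame is unvoiced
        self.u = -2.0   # running mean of unvoiced discriminant values
        self.v = 2.0    # running mean of voiced discriminant values
        self.T = 1.0    # variance estimate (held fixed in this sketch)
        self.a = 1.0    # weight value
        self.b = 0.0    # threshold value

    def step(self, x_n):
        """Update the statistics with discriminant value x_n and return
        True if the frame is declared voiced."""
        z = self.z
        # Posterior probability that the frame is unvoiced, in the
        # logistic form implied by equations (14)-(15).
        t = max(min(self.a * x_n + self.b, 50.0), -50.0)
        pu = self.p / (self.p + (1.0 - self.p) * math.exp(t))
        pv = 1.0 - pu
        # Recursive updates of the mixture statistics, cf. eqs. (10)-(11).
        self.p = (1 - z) * self.p + z * pu
        self.u = (1 - z) * self.u + z * x_n * pu / max(self.p, 1e-6)
        self.v = (1 - z) * self.v + z * x_n * pv / max(1.0 - self.p, 1e-6)
        # Assumed equal-variance rule for the weight and threshold.
        self.a = (self.v - self.u) / max(self.T, 1e-6)
        self.b = -self.a * (self.u + self.v) / 2.0
        # Decision: equation (14) for positive a, (15) for negative a.
        margin = self.a * x_n - self.a * (self.u + self.v) / 2.0
        return margin > 0 if self.a > 0 else margin <= 0
```

Feeding the detector discriminant values drawn from two clusters drives u and v toward the cluster centers, after which the decision boundary sits midway between them, which is the adaptive behavior the description attributes to the apparatus.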
  • FIGS. 2 and 3 illustrate, in greater detail, the operations performed by the apparatus of FIG. 1.
  • Block 200 implements block 101 of FIG. 1.
  • Blocks 202 through 218 implement statistical calculator 103.
  • Block 222 implements threshold calculator 104, and blocks 226 through 239 implement block 105 of FIG. 1.
  • Subtracter 107 is implemented by both block 208 and block 224.
  • Block 202 calculates the value which represents the average of the discriminant value for the present frame and all previous frames.
  • Block 200 determines whether speech is present in the present frame; and if speech is not present in the present frame, the mean for the discriminant value is subtracted from the present discriminant value by block 224 before control is transferred to decision block 226.
  • The statistical and weight calculations are performed by blocks 202 through 222.
  • The average value is found in block 202.
  • The second moment value is calculated in block 206.
  • The latter value, along with the mean value X for the present and past frames, is then utilized to calculate the variance, T, also in block 206.
  • The mean X is then subtracted from the discriminant value x_n in block 208.
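Blocks 202 and 206 amount to exponentially weighted running estimates of the mean and second moment of the discriminant value, from which the variance follows. A minimal sketch, with an assumed smoothing constant z (the patent text above does not fix its value):

```python
def update_running_stats(X, S, x_n, z=0.02):
    """One step of the averaging in blocks 202 and 206: update the
    running mean X and second moment S with the new discriminant
    value x_n, and derive the variance T."""
    X = (1 - z) * X + z * x_n
    S = (1 - z) * S + z * x_n * x_n
    T = S - X * X  # variance of the discriminant values
    return X, S, T
```

On a constant input the variance decays toward zero, while an input alternating between two values converges to the spread between them, as a running variance should.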
  • Block 210 calculates the probability that the present frame is unvoiced by utilizing the present weight value a, the present threshold value b, and the discriminant value for the present frame, x n . After calculating the probability that the present frame is unvoiced, the probability that the present frame is voiced is calculated by block 212. Then, the overall probability, p n , that any frame will be unvoiced is calculated by block 214.
  • Blocks 216 and 218 calculate two values: u and v.
  • The value u represents the statistical average value that the discriminant value would have if the frame were unvoiced.
  • The value v represents the statistical average value that the discriminant value would have if the frame were voiced.
  • The actual discriminant values for the present and previous frames are clustered around either value u or value v.
  • The discriminant values for the previous and present frames are clustered around value u if these frames had been found to be unvoiced; otherwise, the previous values are clustered around value v.
  • Block 222 then calculates a new weight value a and a new threshold value b. The values a and b are used in the next sequential frame by the preceding blocks in FIG. 2.
  • Blocks 226 through 239 implement U/V determinator 105 of FIG. 1.
  • Block 226 determines whether the value a for the present frame is greater than zero. If this condition is true, then decision block 228 is executed. The latter decision block determines whether the test for voiced or unvoiced is met. If the frame is found to be voiced in decision block 228, then the frame is marked as voiced by block 230; otherwise, the frame is marked as unvoiced by block 232. If the value a is less than zero for the present frame, blocks 234 through 238 are executed and function in a manner similar to blocks 228 through 232.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Transmission Systems Not Characterized By The Medium Used For Transmission (AREA)
  • Oscillators With Electromechanical Resonators (AREA)
  • Interface Circuits In Exchanges (AREA)
  • Radio Relay Systems (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Telephonic Communication Services (AREA)

Claims (11)

  1. Apparatus for making a voiced/unvoiced determination for speech frames whose voicing is unknown, comprising:
       means (101) for detecting a silence condition so as to select speech frames of unknown voicing;
       means (102) responsive to a set of classifiers, from a classifier generator (100), defining speech attributes of one of the speech frames of unknown voicing, for generating a general value initially indicating a voiced or unvoiced state;
       means (103) responsive to said general value for calculating a set of statistical parameters; and
       means (104) for calculating a threshold value in response to the set of parameters;
       means (104) for calculating a weight value in response to the set of parameters;
       means (105) responsive to the weight value, the threshold value and the calculated set of statistical parameters for determining a voiced/unvoiced state in the present one of the speech frames of unknown voicing;
       characterized by means (113) for communicating the weight value and the threshold value to the means (103) for calculating the set of parameters, for use in calculating another set of parameters for a next one of the speech frames of unknown voicing.
  2. The apparatus of claim 1, wherein the generating means (102) comprise means for performing a discriminant analysis to generate the general value.
  3. The apparatus of claim 2, wherein the means (104) for calculating the set of parameters are further responsive to the communicated weight and threshold values, and to another general value of the other one of said frames, for calculating another set of statistical parameters.
  4. The apparatus of claim 3, wherein the means (104) for calculating the set of parameters further comprise means for calculating the mean of the general values over present and previous ones of the speech frames; and
       means responsive to the mean of the general values for the present and previous ones of the speech frames, to the communicated weight and threshold values and to the other general value, for determining said other set of statistical parameters.
  5. Apparatus for making a voiced/unvoiced determination for speech frames, comprising:
       means (101) for detecting a silence condition so as to select speech frames;
       means (102) responsive to a set of classifiers, from a classifier generator (100), defining speech attributes of each of present and past speech frames, for generating a general value initially indicating a voiced or unvoiced state;
       means (206) for calculating the variance of the general values over the present and previous speech frames;
       means responsive to the present and past frames for calculating (210) the probability that the present one of said frames is unvoiced;
       means (212) for calculating the probability that the present one of said frames is voiced;
       characterized by means responsive to the present and past ones of said frames and to the probability that the present frame is unvoiced, for calculating (214) the overall probability that any frame is unvoiced;
       means responsive to the probability that the present one of said frames is unvoiced, to the overall probability and to the variance, for calculating (216) a mean of the unvoiced frames;
       means responsive to said probability that the present one of said frames is voiced, to the overall probability and to the variance, for calculating (218) a mean of the voiced ones of said frames;
       means responsive to the mean of the unvoiced ones of said frames, to the mean of the voiced ones of said frames and to the variance, for determining (222) decision regions; and
       means (105) for making the voiced/unvoiced determination in response to the decision regions for the present one of said frames.
  6. The apparatus of claim 5, wherein the means for calculating the probability that the present one of said frames is unvoiced perform a maximum likelihood statistical operation.
  7. The apparatus of claim 6, wherein the means for calculating the probability that the present one of said frames is unvoiced are further responsive to weight and threshold values in performing the maximum likelihood statistical operation.
  8. A method for making a voiced/unvoiced determination for speech frames whose voicing is unknown, comprising the steps of:
       detecting a silence condition (200) so as to select speech frames of unknown voicing;
       generating a general value in response to a set of classifiers, from a classifier generator, defining speech attributes of one of the speech frames of unknown voicing, to initially indicate a voiced/unvoiced determination;
       calculating (103) a set of statistical parameters in response to said general value;
       calculating (104) a threshold value in response to the set of parameters;
       calculating (104) a weight value in response to the set of parameters; and
       determining (105) the voiced/unvoiced state of the speech in the present one of the speech frames of unknown voicing in response to the weight value and the threshold value, as well as to the calculated set of statistical parameters;
       characterized in that the weight value and the threshold value are fed back (113) to calculate another set of parameters for a next one of the speech frames.
  9. The method of claim 8, wherein the generating step comprises the step of performing a discriminant analysis to generate said general value.
  10. The method of claim 9, wherein the step of calculating the set of parameters is further responsive to the communicated weight and threshold values, and to another general value of the other one of said frames, so as to calculate another set of statistical parameters.
  11. The method of claim 10, wherein the step of calculating the set of parameters further comprises the steps of calculating the mean value of said general values over the present and previous ones of said speech frames; and
       determining said other set of statistical parameters in response to the mean of the general values for the present and previous ones of said speech frames, and in response to the communicated weight and threshold values and said other general values.
EP88903995A 1987-04-03 1988-01-12 Detecteur de signal vocal voise utilisant des valeurs seuil adaptatives Expired - Lifetime EP0309561B1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AT88903995T ATE83329T1 (de) 1987-04-03 1988-01-12 Detektor fuer stimmhafte laute mit adaptiver schwelle.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US3429887A 1987-04-03 1987-04-03
US34298 1987-04-03

Publications (2)

Publication Number Publication Date
EP0309561A1 EP0309561A1 (fr) 1989-04-05
EP0309561B1 true EP0309561B1 (fr) 1992-12-09

Family

ID=21875533

Family Applications (1)

Application Number Title Priority Date Filing Date
EP88903995A Expired - Lifetime EP0309561B1 (fr) 1987-04-03 1988-01-12 Detecteur de signal vocal voise utilisant des valeurs seuil adaptatives

Country Status (9)

Country Link
EP (1) EP0309561B1 (fr)
JP (1) JPH0795239B2 (fr)
AT (1) ATE83329T1 (fr)
AU (1) AU598933B2 (fr)
CA (1) CA1336208C (fr)
DE (1) DE3876569T2 (fr)
HK (1) HK21794A (fr)
SG (1) SG60993G (fr)
WO (1) WO1988007739A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU599459B2 (en) * 1987-04-03 1990-07-19 American Telephone And Telegraph Company An adaptive multivariate estimating apparatus
US5195138A (en) * 1990-01-18 1993-03-16 Matsushita Electric Industrial Co., Ltd. Voice signal processing device
US5204906A (en) * 1990-02-13 1993-04-20 Matsushita Electric Industrial Co., Ltd. Voice signal processing device
US5220610A (en) * 1990-05-28 1993-06-15 Matsushita Electric Industrial Co., Ltd. Speech signal processing apparatus for extracting a speech signal from a noisy speech signal

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS60114900A (ja) * 1983-11-25 1985-06-21 松下電器産業株式会社 有音・無音判定法
JPS60200300A (ja) * 1984-03-23 1985-10-09 松下電器産業株式会社 音声の始端・終端検出装置
JPS6148898A (ja) * 1984-08-16 1986-03-10 松下電器産業株式会社 音声の有声無声判定装置
AU599459B2 (en) * 1987-04-03 1990-07-19 American Telephone And Telegraph Company An adaptive multivariate estimating apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Atal, Rabiner: "A pattern recognition approach to voiced/unvoiced/silence classification ...", IEEE Trans. ASSP, Vol. 24, No. 3, 1976; Prezas et al. as cited on p. 1 of the description. Note: these documents were cited in the search report of a parallel application (88901684.6) by the same applicant *

Also Published As

Publication number Publication date
EP0309561A1 (fr) 1989-04-05
SG60993G (en) 1993-07-09
DE3876569D1 (de) 1993-01-21
WO1988007739A1 (fr) 1988-10-06
CA1336208C (fr) 1995-07-04
AU598933B2 (en) 1990-07-05
DE3876569T2 (de) 1993-04-08
AU1700788A (en) 1988-11-02
JPH01502858A (ja) 1989-09-28
HK21794A (en) 1994-03-18
JPH0795239B2 (ja) 1995-10-11
ATE83329T1 (de) 1992-12-15

Similar Documents

Publication Publication Date Title
EP0694906B1 (fr) Procédé et appareil pour la reconnaissance de la parole
US6993481B2 (en) Detection of speech activity using feature model adaptation
US4821325A (en) Endpoint detector
EP1083542B1 (fr) Méthode et appareil pour la détection de la parole
EP0335521B1 (fr) Détection de la présence d'un signal de parole
JPH08505715A (ja) 定常的信号と非定常的信号との識別
JP2000099080A (ja) 信頼性尺度の評価を用いる音声認識方法
US5007093A (en) Adaptive threshold voiced detector
US4937870A (en) Speech recognition arrangement
US5046100A (en) Adaptive multivariate estimating apparatus
EP0309561B1 (fr) Detecteur de signal vocal voise utilisant des valeurs seuil adaptatives
EP0308433B1 (fr) Appareil d'estimation de variations multiples utilisant des techniques adaptatives
US4972490A (en) Distance measurement control of a multiple detector system
JP4673828B2 (ja) 音声信号区間推定装置、その方法、そのプログラム及び記録媒体
EP0310636B1 (fr) Commande de mesure de la distance d'un systeme a detecteurs multiples
JP2002258881A (ja) 音声検出装置及び音声検出プログラム
KR970003035B1 (ko) 음성신호의 피치정보 검출 방법
Vlaj et al. Usage of frame dropping and frame attenuation algorithms in automatic speech recognition systems
Dal Degan et al.: Autocorrelation Function

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE DE FR GB IT NL

17P Request for examination filed

Effective date: 19890328

17Q First examination report despatched

Effective date: 19910408

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE DE FR GB IT NL

REF Corresponds to:

Ref document number: 83329

Country of ref document: AT

Date of ref document: 19921215

Kind code of ref document: T

ET Fr: translation filed
REF Corresponds to:

Ref document number: 3876569

Country of ref document: DE

Date of ref document: 19930121

ITF It: translation for a ep patent filed

Owner name: MODIANO & ASSOCIATI S.R.L.

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed
PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20011221

Year of fee payment: 15

REG Reference to a national code

Ref country code: GB

Ref legal event code: IF02

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL

Payment date: 20020107

Year of fee payment: 15

Ref country code: GB

Payment date: 20020107

Year of fee payment: 15

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: BE

Payment date: 20020114

Year of fee payment: 15

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: AT

Payment date: 20020115

Year of fee payment: 15

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20020328

Year of fee payment: 15

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20030112

Ref country code: AT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20030112

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20030131

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20030801

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20030801

GBPC Gb: european patent ceased through non-payment of renewal fee
PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20030930

NLV4 Nl: lapsed or anulled due to non-payment of the annual fee

Effective date: 20030801

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES;WARNING: LAPSES OF ITALIAN PATENTS WITH EFFECTIVE DATE BEFORE 2007 MAY HAVE OCCURRED AT ANY TIME BEFORE 2007. THE CORRECT EFFECTIVE DATE MAY BE DIFFERENT FROM THE ONE RECORDED.

Effective date: 20050112