EP2419900B1 - Method and device for the objective evaluation of the voice quality of a speech signal taking into account the classification of the background noise contained in the signal - Google Patents

Method and device for the objective evaluation of the voice quality of a speech signal taking into account the classification of the background noise contained in the signal Download PDF

Info

Publication number
EP2419900B1
EP2419900B1 (application EP10723655A)
Authority
EP
European Patent Office
Prior art keywords
signal
noise
noise signal
classification
speech
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP10723655A
Other languages
English (en)
French (fr)
Other versions
EP2419900A1 (de)
Inventor
Julien Faure
Adrien Leman
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orange SA
Original Assignee
France Telecom SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by France Telecom SA filed Critical France Telecom SA
Publication of EP2419900A1 publication Critical patent/EP2419900A1/de
Application granted granted Critical
Publication of EP2419900B1 publication Critical patent/EP2419900B1/de
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/69 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for evaluating synthetic or decoded voice signals
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 Noise filtering

Definitions

  • the present invention relates generally to the processing of speech signals and in particular to voice signals transmitted in telecommunications systems.
  • the invention relates to a method and a device for objectively evaluating the speech quality of a speech signal taking into account the classification of the background noise contained in the signal.
  • the invention applies in particular to speech signals transmitted during a telephone call through a communication network, for example a mobile telephony network or a circuit-switched or packet-switched telephone network.
  • background noise may include various noises: sounds from engines (cars, motorcycles), aircraft passing through the sky, conversation / whispering noises - for example in a restaurant or café environment -, music, and many other audible noises.
  • background noise may be an additional element of communication that can provide useful information to listeners (mobility context, geographic location, environment sharing).
  • figure 1 annexed to this description is derived from the above-mentioned Document [1] (see section 3.5, Figure 2 of that document) and represents the mean opinion scores (MOS-LQSN) with the associated confidence intervals, calculated from scores given by listeners to audio messages containing six different types of background noise, according to the ACR (Absolute Category Rating) method.
  • the various types of noise are: pink noise, stationary speech noise (BPS), electrical noise, city noise, restaurant noise, television noise or voice, each noise being considered at three different levels of perceived loudness.
  • the invention thus provides an evaluation closer to the real voice quality, that is, the quality actually perceived by users, than the known methods of objective evaluation of voice quality.
  • MOS_CLi voice quality score
  • the function f (N) is the natural logarithm, Ln (N), of the total loudness N expressed in sones.
  • the total loudness of the noise signal is estimated according to an objective model of loudness estimation, for example the Zwicker model or the Moore model.
  • the step of calculating audio parameters of the noise signal comprises the calculation of a first parameter (IND_TMP), called temporal indicator, relating to the temporal evolution of the noise signal, and a second parameter (IND_FRQ), called frequency indicator, relating to the frequency spectrum of the noise signal.
  • IND_TMP first parameter
  • IND_FRQ second parameter
  • the time indicator (IND_TMP) is obtained from a calculation of the variation of the sound level of the noise signal
  • the frequency indicator (IND_FRQ) is obtained from a calculation of variation of the amplitude of the frequency spectrum of the noise signal.
  • the invention relates to a computer program on an information medium, this program comprising instructions adapted to the implementation of a method according to the invention as briefly defined above, when the program is loaded and executed in a computer.
  • the method of objective evaluation of the voice quality of a speech signal according to the invention is remarkable in that it uses the result of the classification phase of the background noise contained in the speech signal, to estimate the voice quality of the signal.
  • the classification phase of the background noise contained in the speech signal is based on the implementation of a previously constructed background noise classification model, the method of construction of which according to the invention is described hereinafter.
  • the construction of a noise classification model takes place conventionally in three successive phases.
  • the first phase consists in determining a sound base composed of audio signals containing various background noises, each audio signal being labeled as belonging to a given noise class.
  • in a second phase, a number of predefined characteristic parameters, forming a set of indicators, is extracted from each sound sample of the base.
  • the set of pairs, each composed of the set of indicators and the associated noise class, is provided to a learning engine intended to produce a classification model capable of classifying any sound sample on the basis of specific indicators, the latter being selected as the most relevant of the various indicators used during the learning phase.
  • the classification model obtained then makes it possible, based on indicators extracted from any sound sample (not part of the sound database), to provide a noise class to which this sample belongs.
  • the sound base used consists, on the one hand, of the audio signals used for the subjective tests described in Document [1], and on the other hand of audio signals originating from public sound bases.
  • the audio signals from public sound bases used to complete the sound base include noises such as line noise, wind, cars, vacuum cleaners, hair dryers, confused murmurs ("babble" in English), sounds from the natural environment (birds, running water, rain, etc.), and music.
  • Each noise is sampled at 8 kHz, filtered with the IRS8 tool, coded and decoded in G.711 as well as in G.729 in the narrow-band case (300 - 3400 Hz); each sound is then sampled at 16 kHz, filtered with the tool described in ITU-T Recommendation P.341 ("Transmission characteristics for wideband (150-7000 Hz) digital hands-free telephony terminals", 1998), and finally coded and decoded in G.722 (wideband, 50 - 7000 Hz). These three degraded conditions are then rendered at two levels whose signal-to-noise ratios (SNR) are respectively 16 and 32. Each noise lasts four seconds. A total of 288 different audio signals is thus obtained.
  • SNR signal-to-noise ratio
  • the sound base used to develop the classification model finally consists of 632 audio signals.
  • Each sound sample of the sound base is manually labeled to identify the background noise class to which it belongs.
  • the classes chosen were defined following the subjective tests mentioned in Document [1]; more precisely, they were determined according to the leniency toward the perceived noise shown by the human subjects tested when judging voice quality as a function of the type of background noise (among the six types mentioned above).
  • the classification model is obtained by learning using a decision tree (cf. figure 1), carried out using the statistical tool called "classregtree" of the MATLAB® environment marketed by The MathWorks.
  • the algorithm used is developed from techniques described in the book entitled "Classification and Regression Trees" by Leo Breiman et al., published by Chapman and Hall in 1993.
  • Each sample of background noise from the sound base is described by the eight indicators mentioned above together with the class to which the sample belongs (1: intelligible, 2: environment, 3: breath, 4: sizzle).
  • the decision tree then calculates the various possible solutions in order to obtain an optimum classification, closest to the manually labeled classes.
  • the most relevant audio indicators are selected, and value thresholds associated with these indicators are defined, these thresholds making it possible to separate the different classes and subclasses of background noise.
  • the resulting classification uses only two of the original eight indicators to classify the 500 learning background noises into the four predefined classes.
  • the indicators selected are indicators (3) and (6) of the list introduced above, which respectively represent the variation of the acoustic level and the spectral flux of the background noise signals.
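As a rough illustration of this learning step, the sketch below shows how a CART-style decision tree selects the most discriminative indicator and its threshold using the Gini criterion. The data are entirely hypothetical; the actual model was built with the MATLAB "classregtree" tool on the 632-signal base.

```python
# Minimal sketch of CART-style split selection (Gini criterion).
# The samples stand in for the eight audio indicators; values and
# labels are invented for illustration only.

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for c in labels:
        counts[c] = counts.get(c, 0) + 1
    return 1.0 - sum((k / n) ** 2 for k in counts.values())

def best_split(samples, labels):
    """Return (indicator_index, threshold) minimising the weighted Gini impurity."""
    best = (None, None, float("inf"))
    n_indicators = len(samples[0])
    for j in range(n_indicators):
        values = sorted({s[j] for s in samples})
        # candidate thresholds: midpoints between successive observed values
        for lo, hi in zip(values, values[1:]):
            th = (lo + hi) / 2.0
            left = [l for s, l in zip(samples, labels) if s[j] <= th]
            right = [l for s, l in zip(samples, labels) if s[j] > th]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
            if score < best[2]:
                best = (j, th, score)
    return best[0], best[1]

# Toy example: indicator 0 (e.g. level variation) separates stationary
# from non-stationary noises; indicator 1 is uninformative here.
samples = [(0.1, 5.0), (0.2, 4.0), (0.9, 5.5), (1.1, 4.2)]
labels = ["stationary", "stationary", "non-stationary", "non-stationary"]
print(best_split(samples, labels))  # indicator 0, threshold ~ 0.55
```

The tree is grown by applying such splits recursively; the indicators that never yield a useful split are discarded, which is how the learning phase retains only two of the eight indicators.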
  • the "environment" class obtains a lower classification score than the other classes. This result is due to the differentiation between "breath" and "environment" sounds, which can sometimes be difficult to perform because of the similarity of certain sounds that could be placed in either class, for example wind noise or the sound of a hair dryer.
  • the indicators selected for the classification model according to the invention are defined in greater detail below.
  • the time indicator, characteristic of the variation of the sound level of any noise signal, is defined as the standard deviation of the power values of all the considered frames of the signal.
  • a power value is determined for each of the frames.
  • Each frame is composed of 512 samples, with an overlap of 256 samples between successive frames. For a sampling frequency of 8000 Hz, this corresponds to a duration of 64 ms (milliseconds) per frame, with an overlap of 32 ms. This 50% overlap is used to obtain continuity between successive frames, as defined in Document [5]: "P.56 Objective Measurement of Active Voice Level", ITU-T Recommendation, 1993.
  • the power of each frame is given by:

    P_frame = log( (1/L_frame) × Σ_{i=1..L_frame} x_i² )

    where:
  • frame denotes the number of the frame being evaluated;
  • L_frame denotes the length of the frame (512 samples);
  • x_i corresponds to the amplitude of sample i;
  • log denotes the decimal logarithm; it is applied to the computed mean to obtain one power value per frame.
  • the time indicator is then the standard deviation of these per-frame powers:

    IND_TMP = sqrt( (1/N_frame) × Σ_{i=1..N_frame} (P_i − <P>)² )

    where:
  • N_frame represents the number of frames present in the background noise considered;
  • P_i represents the power value for frame i;
  • <P> is the average power over all frames.
  • the more non-stationary a sound is, the higher the value obtained for the time indicator IND_TMP.
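The per-frame power and the time indicator defined above can be sketched in plain Python. This is a rough illustration, not the P.56-calibrated implementation; the two test signals are hypothetical.

```python
import math
import random

FRAME_LEN = 512   # samples per frame (64 ms at 8 kHz)
HOP = 256         # 50% overlap between successive frames

def frame_powers(x):
    """Decimal log of the mean squared amplitude, one value per frame."""
    powers = []
    for start in range(0, len(x) - FRAME_LEN + 1, HOP):
        frame = x[start:start + FRAME_LEN]
        mean_sq = sum(s * s for s in frame) / FRAME_LEN
        powers.append(math.log10(mean_sq))
    return powers

def ind_tmp(x):
    """Time indicator: standard deviation of the per-frame power values."""
    p = frame_powers(x)
    mean_p = sum(p) / len(p)
    return math.sqrt(sum((pi - mean_p) ** 2 for pi in p) / len(p))

# A stationary noise yields a low IND_TMP; an amplitude-modulated
# (non-stationary) version of the same noise yields a higher value.
random.seed(0)
stationary = [random.uniform(-1, 1) for _ in range(8000)]
bursty = [s * (1.0 if (i // 2000) % 2 else 0.01) for i, s in enumerate(stationary)]
print(ind_tmp(stationary) < ind_tmp(bursty))  # True
```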
  • the frequency indicator, designated in the rest of the description by "IND_FRQ" and characteristic of the spectral flux of the noise signal, is calculated from the spectral power density (DSP) of the signal.
  • DSP Spectral Power Density
  • this indicator is determined per frame of 256 samples, corresponding to a duration of 32 ms for a sampling frequency of 8 kHz. There is no frame overlap, unlike for the time indicator.
  • Spectral flux, also referred to as "spectrum amplitude variation", is a measure of the rate of change of the power spectrum of a signal over time. This indicator is calculated from the normalized cross-correlation between two successive spectrum amplitudes a_k(t-1) and a_k(t).
  • k is an index representing the different frequency components
  • t is an index representing successive frames without overlapping, consisting of 256 samples each.
  • a value of the spectral flux corresponds to the amplitude difference of the spectral vector between two successive frames. This value is close to zero if the successive spectra are similar, and is close to 1 for very different successive spectra.
  • the value of the spectral flux is high for a music signal, because a musical signal varies greatly from one frame to another. For speech, with its alternation of periods of stability (vowels) and transitions (consonant/vowel), the measure of spectral flux takes very different values and varies strongly over the course of a sentence.
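A common formulation of this normalized cross-correlation, consistent with the behaviour described above (values near 0 for similar spectra, near 1 for very different ones), is SF(t) = 1 − Σ_k a_k(t−1)·a_k(t) / sqrt(Σ_k a_k(t−1)² × Σ_k a_k(t)²). The sketch below assumes this formulation; the exact expression used by the patent is not reproduced in this text.

```python
import cmath
import math

FRAME = 256  # 32 ms at 8 kHz, no overlap

def spectrum(frame):
    """Magnitudes of the DFT of one frame (naive DFT, kept for clarity)."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]

def spectral_flux(x):
    """Mean of 1 - normalized cross-correlation between successive spectra."""
    spectra = [spectrum(x[i:i + FRAME]) for i in range(0, len(x) - FRAME + 1, FRAME)]
    fluxes = []
    for a, b in zip(spectra, spectra[1:]):
        num = sum(u * v for u, v in zip(a, b))
        den = math.sqrt(sum(u * u for u in a) * sum(v * v for v in b))
        fluxes.append(1.0 - num / den)
    return sum(fluxes) / len(fluxes)

# A fixed 440 Hz tone has near-zero flux; a tone whose frequency jumps
# at every frame boundary has a clearly higher flux.
steady = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(4 * FRAME)]
varying = [math.sin(2 * math.pi * (300 + 400 * (t // FRAME)) * t / 8000)
           for t in range(4 * FRAME)]
print(spectral_flux(steady) < spectral_flux(varying))  # True
```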
  • the classification model of the invention, obtained as explained above, is used according to the invention to determine, on the basis of indicators extracted from any noisy audio signal, the noise class to which this noisy signal belongs among the set of classes defined for the classification model.
  • the figures 3a and 3b represent a flowchart illustrating a method of objective evaluation of the voice quality of a speech signal, according to an embodiment of the invention. According to the invention, the method of classification of background noise is implemented prior to the actual phase of evaluation of voice quality.
  • the first step S1 is to obtain an audio signal, which in the embodiment presented here is a speech signal obtained in analog or digital form.
  • a voice activity detection (DAV) operation is then applied to the speech signal.
  • the purpose of this voice activity detection is to separate, in the input audio signal, the periods containing speech (possibly noisy) from the periods containing no speech (periods of silence), which can therefore contain only noise.
  • the active areas of the signal, that is to say those carrying the noisy voice message, are thus separated from the noisy inactive areas.
  • the voice activity detection technique implemented is that described in Document [5] cited above (" P.56 Objective Measurement of Active Voice Level ", ITU-T Recommendation, 1993 ).
  • the background noise signal generated is the signal consisting of the periods of the audio signal for which the result of the speech activity detection is zero.
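The patent uses the P.56 voice activity detection method for this extraction. The sketch below is a crude energy-threshold stand-in that conveys the idea; the frame length and the -30 dB threshold are illustrative assumptions, not values from the patent.

```python
import math
import random

def extract_noise(x, frame_len=256, threshold_db=-30.0):
    """Keep only frames whose power falls well below the overall signal
    power: a crude stand-in for P.56 voice activity detection. Returns
    the concatenated inactive (noise-only) periods."""
    total_power = sum(s * s for s in x) / len(x)
    noise = []
    for start in range(0, len(x) - frame_len + 1, frame_len):
        frame = x[start:start + frame_len]
        power = sum(s * s for s in frame) / frame_len
        level_db = 10 * math.log10(power / total_power + 1e-12)
        if level_db < threshold_db:      # inactive frame: noise only
            noise.extend(frame)
    return noise

# Synthetic example: 1024 samples of a quiet noise floor followed by
# 1024 samples of loud "speech"; only the quiet frames survive.
random.seed(1)
floor = [random.uniform(-0.001, 0.001) for _ in range(2048)]
speech = floor[:1024] + [random.uniform(-1, 1) for _ in range(1024)]
sig_n = extract_noise(speech)
print(len(sig_n))  # 1024
```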
  • the audio parameters consisting of the two indicators mentioned above (time indicator IND_TMP and frequency indicator IND_FRQ), which were selected when the classification model was obtained (learning phase), are extracted from the noise signal in step S7.
  • step S9 the value of the time indicator (IND_TMP) obtained for the noise signal is compared with the first threshold TH1 mentioned above. If the value of the time indicator is greater than the threshold TH1 (S9, no) then the noise signal is of non-stationary type and then the test of step S11 is applied.
  • IND_TMP time indicator
  • the frequency indicator (IND_FRQ) is compared to the second threshold TH2 mentioned above. If the indicator IND_FRQ is greater than the threshold TH2 (S11, no), the class (CL) of the noise signal is determined (step S13) as CL1: "intelligible noise"; otherwise the class of the noise signal is determined (step S15) as CL2: "environment noise". The classification of the analyzed noise signal is then complete, and the evaluation of the voice quality of the speech signal can be performed (Fig. 3b, step S23).
  • otherwise, the noise signal is of stationary type and the test of step S17 is applied (Fig. 3b).
  • the value of the frequency indicator IND_FRQ is compared with the third threshold TH3 (defined above). If the indicator IND_FRQ is greater (S17, no) than the threshold TH3, the class (CL) of the noise signal is determined (step S19) as being CL3: "Breath noise”; otherwise the class of the noise signal is determined (step S21) as being CL4: "Sizzling noise”.
  • the classification of the analyzed noise signal is then completed and the voice quality evaluation of the speech signal can then be performed ( Fig. 3b step S23).
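The decision logic of figures 3a and 3b can be sketched as follows. The threshold values TH1 to TH3 are placeholders: the values learned by the decision tree are not reproduced in this text.

```python
# TH1-TH3 are illustrative placeholders only; the actual values come
# from the decision-tree learning phase.
TH1, TH2, TH3 = 1.0, 0.5, 0.3

def classify_noise(ind_tmp, ind_frq):
    """Decision logic of figures 3a/3b: first test stationarity
    (IND_TMP vs TH1), then spectral flux (IND_FRQ vs TH2 or TH3)."""
    if ind_tmp > TH1:  # non-stationary noise
        return "CL1: intelligible noise" if ind_frq > TH2 else "CL2: environment noise"
    else:              # stationary noise
        return "CL3: breath noise" if ind_frq > TH3 else "CL4: sizzling noise"

print(classify_noise(2.0, 0.8))  # CL1: intelligible noise
print(classify_noise(0.2, 0.1))  # CL4: sizzling noise
```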
  • figure 4 details the step (Fig. 3b, S23) of evaluating the voice quality of a speech signal according to the classification of the background noise contained in the speech signal.
  • the voice quality evaluation operation starts with step S231, in which the total loudness of the noise signal (SIG_N) is estimated.
  • loudness is defined as the subjective intensity of a sound; it is expressed in sones or phons.
  • the total loudness, measured subjectively, can be estimated using known objective models such as the Zwicker model or the Moore model.
  • the Zwicker model is described for example in the document entitled “ Psychoacoustics: Facts and Models "by E. Zwicker and H. Fastl - Berlin, Springer, 2nd updated edition, 14 April 1999 .
  • in the embodiment described here, the total loudness of the noise signal is estimated using the Zwicker model; the invention can, however, also be implemented using the Moore model. Moreover, the more accurate the loudness estimation model used, the more precise the voice quality evaluation according to the invention will be.
  • the total loudness estimate, expressed in sones, of the noise signal SIG_N, obtained using the Zwicker model, is referred to herein as "N".
  • the voice quality score for the speech signal, MOS_CLi is obtained, on the one hand, as a function of the classification obtained relating to the background noise present in the speech signal - by the choice of the coefficients ( C i-1 ; C i ) of the mathematical formula which correspond to the background noise class - and on the other hand, according to the estimated loudness N for the background noise.
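The scoring step, MOS_CLi = C_(i-1) + C_i × Ln(N), can be sketched as below. The per-class coefficient values are invented for illustration; the fitted coefficients of the patent are not reproduced here.

```python
import math

# Hypothetical per-class coefficients (C_(i-1), C_i): the patent defines
# the general formula but the fitted values per class are not given here.
COEFFS = {
    "CL1": (4.2, -0.8),
    "CL2": (4.4, -0.6),
    "CL3": (4.1, -0.9),
    "CL4": (3.9, -1.0),
}

def mos_score(noise_class, loudness_sones):
    """Objective quality score from the noise class and total loudness N."""
    c_prev, c = COEFFS[noise_class]
    return c_prev + c * math.log(loudness_sones)  # natural logarithm Ln(N)

# The louder the background noise, the lower the predicted score, with a
# slope that depends on the background noise class.
print(round(mos_score("CL2", 4.6), 2))  # 3.48
```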
  • the loudness levels of the various types of background noise are obtained in this test, subjectively.
  • each test audio signal can thus be characterized by its background noise class (CL1-CL4), its perceived loudness level (in sones: 1.67, 4.6, 8.2, 14) and the MOS-LQSN (Listening Quality Subjective Narrowband) score assigned to it in the preliminary subjective test (Document [1], "Preliminary Experiment"). In summary, in this test 24 subjects assessed the overall quality of audio signals according to the ACR method. 152 MOS-LQSN scores were obtained by taking the average score given by the 24 subjects for each of the 152 audio test signals, which are divided among the four classes of background noise defined according to the invention.
  • the figure 5 graphically shows the result of the aforementioned subjective tests.
  • the 152 test conditions are represented by points, each corresponding, on the abscissa, to a loudness level and, on the ordinate, to the assigned quality score (MOS-LQSN); the points are further differentiated according to the class of the background noise contained in the corresponding audio signal.
  • the value associated with R 2 corresponds to the correlation coefficient between the results obtained from the subjective test and the corresponding logarithmic regression.
  • in the objective method, the perceived loudness value N (a value obtained subjectively in the context of the aforementioned subjective tests) is instead obtained by estimation according to a known loudness estimation model, the Zwicker model in the embodiment set forth herein.
  • the figure 6 graphically shows the degree of correlation between the quality scores obtained in the subjective tests and those obtained using the objective quality evaluation method, according to the present invention.
  • This voice quality evaluation device is designed to implement the voice quality evaluation method according to the invention which has just been described above.
  • the device 1 for evaluating the voice quality of a speech signal comprises a module 11 for extracting from the audio signal (SIG) of a background noise signal (SIG_N), said noise signal.
  • the speech signal (SIG) input to the voice quality evaluation device 1 can be delivered to the device 1 from a communication network 2, such as a voice over IP network for example.
  • the module 11 is in practice a voice activity detection module.
  • the module DAV 11 then provides a noise signal SIG_N, which is input to a module 13 for extracting parameters, that is to say for calculating the parameters constituted by the time and frequency indicators, IND_TMP and IND_FRQ respectively.
  • the calculated indicators are then provided to a classification module 15, implementing the classification model according to the invention described above, which determines, as a function of the values of the indicators used, the background noise class (CL) to which the noise signal SIG_N belongs, according to the algorithm described in connection with figures 3a and 3b.
  • the result of the classification performed by the background noise classification module 15 is then provided to voice quality evaluation module 17.
  • voice quality evaluation module 17 implements the voice quality evaluation algorithm described above in connection with the figure 4 to ultimately deliver an objective speech quality score relating to the input speech signal (SIG).
  • the voice quality evaluation device is implemented in the form of software means, that is to say computer program modules, performing the functions described in connection with the figures 3a , 3b , 4 and 5 .
  • the voice quality evaluation module 17 can be incorporated in a computer machine separate from that housing the other modules.
  • the background noise class information (CL) can be routed via a communication network to the machine or server responsible for performing the voice quality evaluation.
  • each voice quality score calculated by the module 17 is sent to a collection equipment, local or on the network, responsible for collecting this quality information in order to establish an overall quality score, established for example as a function of time and/or of the type of communication and/or of other types of quality scores.
  • the aforementioned program modules are implemented when they are loaded and executed in a computer or computer device.
  • a computing device may also be constituted by any processor system integrated in a communication terminal or in a communication network equipment.
  • a computer program according to the invention can be stored on an information carrier of various types.
  • an information carrier may be constituted by any entity or device capable of storing a program according to the invention.
  • the medium in question may comprise a hardware storage means, such as a memory, for example a CD ROM or a ROM or RAM microelectronic circuit memory, or a magnetic recording means, for example a Hard disk.
  • a computer program according to the invention can use any programming language and be in the form of source code, object code, or intermediate code between source code and object code (for example, a partially compiled form), or in any other form desirable for implementing a method according to the invention.


Claims (15)

  1. Method for the objective evaluation of the voice quality of a speech signal, characterized in that it comprises the following steps:
    - classification (S3-S21) of the background noise contained in the speech signal according to a predefined set of background noise classes (CL1-CL4);
    - evaluation (S23) of the voice quality of the speech signal as a function of at least the classification obtained for the background noise present in the speech signal.
  2. Method according to claim 1, wherein the step of classifying the background noise contained in the speech signal comprises the following steps:
    - extraction (S3, S5) of a background noise signal, called noise signal, from the speech signal;
    - calculation (S7) of audio parameters of the noise signal;
    - classification (S9-S21) of the background noise contained in the noise signal, as a function of the calculated audio parameters, according to the set of background noise classes (CL1-CL4).
  3. Method according to claim 2, wherein the step (S23) of evaluating the voice quality of the speech signal comprises the following steps:
    - estimation (S231) of the total loudness (N) of the noise signal (SIG_N);
    - calculation of a voice quality score (MOS_CLi) as a function of the class (CLi) of the background noise contained in the speech signal and of the total loudness (N) estimated for the noise signal.
  4. Method according to claim 3, wherein a voice quality score (MOS_CLi) is obtained according to a mathematical formula of the following general form: MOS_CLi = C_(i-1) + C_i × f(N),
    where:
    MOS_CLi is the score calculated for the noise signal;
    f(N) is a mathematical function of the total loudness N estimated for the noise signal;
    C_(i-1) and C_i are two coefficients defined for the background noise class (CLi) obtained for the noise signal.
  5. Method according to claim 4, wherein the function f(N) is the natural logarithm Ln(N) of the total loudness N, expressed in sones.
  6. Method according to any one of claims 3 to 5, wherein the total loudness of the noise signal is estimated according to an objective loudness estimation model.
  7. Method according to any one of claims 2 to 6, wherein the step (S7) of calculating audio parameters of the noise signal comprises the calculation of a first parameter (IND_TMP), called time indicator, relating to the temporal variation of the noise signal, and of a second parameter (IND_FRQ), called frequency indicator, relating to the frequency spectrum of the noise signal.
  8. Method according to claim 7, wherein the time indicator (IND_TMP) is obtained from a calculation of the variation of the sound level of the noise signal, and the frequency indicator (IND_FRQ) is obtained from a calculation of the variation of the amplitude of the frequency spectrum of the noise signal.
  9. Method according to any one of the preceding claims, wherein, in order to classify the background noise associated with the noise signal, the method comprises the steps of:
    - comparing (S9) the value of the time indicator (IND_TMP) obtained for the noise signal with a first threshold (TH1) and determining, as a function of the result of this comparison, whether or not the noise signal is stationary;
    - if the noise signal is identified as non-stationary, comparing (S11) the value of the frequency indicator with a second threshold (TH2) and determining (S13, S15), as a function of the result of this comparison, whether the noise signal belongs to a first class (CL1) or to a second class (CL2) of background noise;
    - if the noise signal is identified as stationary, comparing (S17) the value of the frequency indicator with a third threshold (TH3) and determining (S19, S21), as a function of the result of this comparison, whether the noise signal belongs to a third class (CL3) or to a fourth class (CL4) of background noise.
  10. Method according to any one of the preceding claims, wherein the set of classes comprises at least the following classes:
    - intelligible noise;
    - environment noise;
    - breath noise;
    - sizzling noise.
  11. Method according to any one of claims 2 to 10, wherein the noise signal is extracted by applying a voice activity detection operation to the speech signal, the areas of the speech signal exhibiting no voice activity constituting the noise signal.
  12. Device for the objective evaluation of the voice quality of a speech signal, characterized in that it comprises:
    - means (11-15) for classifying the background noise contained in the speech signal according to a predefined set of background noise classes (CL1-CL4);
    - means (17) for evaluating the voice quality of the speech signal as a function of at least the classification obtained for the background noise present in the speech signal.
  13. Device according to claim 12, comprising:
    - a module (11) for extracting a background noise signal, called noise signal, from the speech signal (SIG);
    - a module (13) for calculating audio parameters of the noise signal;
    - a module (15) for classifying the background noise contained in the noise signal, as a function of the calculated audio parameters, according to a predefined set of background noise classes (CL);
    - a module (17) for evaluating the voice quality of the speech signal as a function of at least the classification obtained for the background noise present in the speech signal.
  14. Device according to claim 13, further comprising means adapted to carry out a method according to any one of claims 2 to 11.
  15. Computer program on an information medium, the program comprising program instructions adapted to carry out a method according to any one of claims 1 to 11, when the program is loaded into and executed on a computer.
EP10723655A 2009-04-17 2010-04-12 Method and device for the objective evaluation of the voice quality of a speech signal taking into account the classification of the background noise contained in the signal Active EP2419900B1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR0952531A FR2944640A1 (fr) 2009-04-17 2009-04-17 Procede et dispositif d'evaluation objective de la qualite vocale d'un signal de parole prenant en compte la classification du bruit de fond contenu dans le signal.
PCT/FR2010/050699 WO2010119216A1 (fr) 2009-04-17 2010-04-12 Procede et dispositif d'evaluation objective de la qualite vocale d'un signal de parole prenant en compte la classification du bruit de fond contenu dans le signal

Publications (2)

Publication Number Publication Date
EP2419900A1 EP2419900A1 (de) 2012-02-22
EP2419900B1 true EP2419900B1 (de) 2013-03-13

Family

ID=41137230

Family Applications (1)

Application Number Title Priority Date Filing Date
EP10723655A Active EP2419900B1 (de) 2009-04-17 2010-04-12 Verfahren und einrichtung zur objektiven evaluierung der sprachqualität eines sprachsignals unter berücksichtigung der klassifikation der in dem signal enthaltenen hintergrundgeräusche

Country Status (4)

Country Link
US (1) US8886529B2 (de)
EP (1) EP2419900B1 (de)
FR (1) FR2944640A1 (de)
WO (1) WO2010119216A1 (de)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2944640A1 (fr) * 2009-04-17 2010-10-22 France Telecom Procede et dispositif d'evaluation objective de la qualite vocale d'un signal de parole prenant en compte la classification du bruit de fond contenu dans le signal.
WO2010146711A1 (ja) * 2009-06-19 2010-12-23 富士通株式会社 音声信号処理装置及び音声信号処理方法
CN103168326A (zh) * 2010-08-11 2013-06-19 骨声通信有限公司 为隐私和个性化使用而消除背景声
CN102231279B (zh) * 2011-05-11 2012-09-26 武汉大学 基于听觉关注度的音频质量客观评价系统及方法
KR101406398B1 (ko) * 2012-06-29 2014-06-13 인텔렉추얼디스커버리 주식회사 사용자 음원 평가 장치, 방법 및 기록 매체
US9830905B2 (en) 2013-06-26 2017-11-28 Qualcomm Incorporated Systems and methods for feature extraction
CN106409310B (zh) 2013-08-06 2019-11-19 华为技术有限公司 一种音频信号分类方法和装置
US10148526B2 (en) * 2013-11-20 2018-12-04 International Business Machines Corporation Determining quality of experience for communication sessions
US11888919B2 (en) 2013-11-20 2024-01-30 International Business Machines Corporation Determining quality of experience for communication sessions
US10079031B2 (en) * 2015-09-23 2018-09-18 Marvell World Trade Ltd. Residual noise suppression
US9749733B1 (en) * 2016-04-07 2017-08-29 Harman Intenational Industries, Incorporated Approach for detecting alert signals in changing environments
US10141005B2 (en) 2016-06-10 2018-11-27 Apple Inc. Noise detection and removal systems, and related methods
US10311863B2 (en) * 2016-09-02 2019-06-04 Disney Enterprises, Inc. Classifying segments of speech based on acoustic features and context
CN107093432B (zh) * 2017-05-19 2019-12-13 江苏百应信息技术有限公司 一种用于通信系统的语音质量评价系统
US10504538B2 (en) 2017-06-01 2019-12-10 Sorenson Ip Holdings, Llc Noise reduction by application of two thresholds in each frequency band in audio signals
CN111326169B (zh) * 2018-12-17 2023-11-10 中国移动通信集团北京有限公司 一种语音质量的评价方法及装置
US11350885B2 (en) * 2019-02-08 2022-06-07 Samsung Electronics Co., Ltd. System and method for continuous privacy-preserved audio collection
CN110610723B (zh) * 2019-09-20 2022-02-22 中国第一汽车股份有限公司 车内声品质的评价方法、装置、设备及存储介质
CN115699172A (zh) * 2020-05-29 2023-02-03 弗劳恩霍夫应用研究促进协会 用于处理初始音频信号的方法和装置
CN113393863B (zh) * 2021-06-10 2023-11-03 北京字跳网络技术有限公司 一种语音评价方法、装置和设备
CN114486286B (zh) * 2022-01-12 2024-05-17 中国重汽集团济南动力有限公司 一种车辆关门声品质评价方法及设备
CN115334349B (zh) * 2022-07-15 2024-01-02 北京达佳互联信息技术有限公司 音频处理方法、装置、电子设备及存储介质
CN117636907B (zh) * 2024-01-25 2024-04-12 中国传媒大学 基于广义互相关的音频数据处理方法、装置及存储介质

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5504473A (en) * 1993-07-22 1996-04-02 Digital Security Controls Ltd. Method of analyzing signal quality
JP3484757B2 (ja) * 1994-05-13 2004-01-06 ソニー株式会社 音声信号の雑音低減方法及び雑音区間検出方法
JP3484801B2 (ja) * 1995-02-17 2004-01-06 ソニー株式会社 音声信号の雑音低減方法及び装置
US5684921A (en) * 1995-07-13 1997-11-04 U S West Technologies, Inc. Method and system for identifying a corrupted speech message signal
US6202046B1 (en) * 1997-01-23 2001-03-13 Kabushiki Kaisha Toshiba Background noise/speech classification method
US6330532B1 (en) * 1999-07-19 2001-12-11 Qualcomm Incorporated Method and apparatus for maintaining a target bit rate in a speech coder
AU5472199A (en) * 1999-08-10 2001-03-05 Telogy Networks, Inc. Background energy estimation
SG97885A1 (en) * 2000-05-05 2003-08-20 Univ Nanyang Noise canceler system with adaptive cross-talk filters
US7472059B2 (en) * 2000-12-08 2008-12-30 Qualcomm Incorporated Method and apparatus for robust speech classification
DE10142846A1 (de) * 2001-08-29 2003-03-20 Deutsche Telekom Ag Verfahren zur Korrektur von gemessenen Sprachqualitätswerten
US7461003B1 (en) * 2003-10-22 2008-12-02 Tellabs Operations, Inc. Methods and apparatus for improving the quality of speech signals
WO2005119193A1 (en) * 2004-06-04 2005-12-15 Philips Intellectual Property & Standards Gmbh Performance prediction for an interactive speech recognition system
US7729275B2 (en) * 2004-06-15 2010-06-01 Nortel Networks Limited Method and apparatus for non-intrusive single-ended voice quality assessment in VoIP
WO2006136900A1 (en) * 2005-06-15 2006-12-28 Nortel Networks Limited Method and apparatus for non-intrusive single-ended voice quality assessment in voip
FR2894707A1 (fr) * 2005-12-09 2007-06-15 France Telecom Procede de mesure de la qualite percue d'un signal audio degrade par la presence de bruit
FR2944640A1 (fr) * 2009-04-17 2010-10-22 France Telecom Procede et dispositif d'evaluation objective de la qualite vocale d'un signal de parole prenant en compte la classification du bruit de fond contenu dans le signal.

Also Published As

Publication number Publication date
US8886529B2 (en) 2014-11-11
FR2944640A1 (fr) 2010-10-22
WO2010119216A1 (fr) 2010-10-21
EP2419900A1 (de) 2012-02-22
US20120059650A1 (en) 2012-03-08

Similar Documents

Publication Publication Date Title
EP2419900B1 (de) Verfahren und einrichtung zur objektiven evaluierung der sprachqualität eines sprachsignals unter berücksichtigung der klassifikation der in dem signal enthaltenen hintergrundgeräusche
EP2415047B1 (de) Klassifizieren von in einem Tonsignal enthaltenem Hintergrundrauschen
Malfait et al. P. 563—The ITU-T standard for single-ended speech quality assessment
EP0867856B1 (de) Verfahren und Vorrichtung zur Sprachdetektion
EP1468416B1 (de) Verfahren zur qualitativen bewertung eines digitalen audiosignals
EP1593116B1 (de) Verfahren zur differenzierten digitalen Sprach- und Musikbearbeitung, Rauschfilterung, Erzeugung von Spezialeffekten und Einrichtung zum Ausführen des Verfahrens
EP1849157B1 (de) Verfahren zur messung von durch geräusche in einem audiosignal verursachten beeinträchtigungen
WO2018146305A1 (fr) Methode et appareil de modification dynamique du timbre de la voix par decalage en fréquence des formants d'une enveloppe spectrale
EP1451548A2 (de) Einrichtung zur sprachdetektion in einem audiosignal bei lauter umgebung
EP2795618B1 (de) Verfahren zur erkennung eines vorgegebenen frequenzbandes in einem audiodatensignal, erkennungsvorrichtung und computerprogramm dafür
EP0685833B1 (de) Verfahren zur Sprachkodierung mittels linearer Prädiktion
WO2007066049A1 (fr) Procede de mesure de la qualite percue d'un signal audio degrade par la presence de bruit
Sharma et al. Non-intrusive estimation of speech signal parameters using a frame-based machine learning approach
Xie et al. Noisy-to-noisy voice conversion framework with denoising model
EP3627510A1 (de) Filterung eines tonsignals, das durch ein stimmerkennungssystem erfasst wurde
Jaiswal Influence of silence and noise filtering on speech quality monitoring
EP1792305A1 (de) Verfahren und vorrichtung zur effizienzbewertung einer lärmreduzierenden funktion für audiosignale
FR2627887A1 (fr) Systeme de reconnaissance de parole et procede de formation de modeles pouvant etre utilise dans ce systeme
Barry et al. Audio Inpainting based on Self-similarity for Sound Source Separation Applications
Jaiswal Performance Analysis of Deep Learning Based Speech Quality Model with Mixture of Features
Santos A non-intrusive objective speech intelligibility metric tailored for cochlear implant users in complex listening environments
FR2856506A1 (fr) Procede et dispositif de detection de parole dans un signal audio
FR2847706A1 (fr) Analyse de la qualite de signal vocal selon des criteres de qualite

Legal Events

PUAI: Public reference made under article 153(3) EPC to a published international application that has entered the European phase (original code: 0009012)
17P: Request for examination filed (effective date: 20111115)
AK: Designated contracting states; kind code of ref document: A1; designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR
DAX: Request for extension of the European patent (deleted)
GRAP: Despatch of communication of intention to grant a patent (original code: EPIDOSNIGR1)
GRAS: Grant fee paid (original code: EPIDOSNIGR3)
GRAA: (Expected) grant (original code: 0009210)
AK: Designated contracting states; kind code of ref document: B1; designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR
REG (GB): legal event code FG4D (NOT ENGLISH)
REG (AT): legal event code REF; ref document number 601216, kind code T; effective date: 20130315. REG (CH): legal event code EP
REG (IE): legal event code FG4D (language of EP document: French)
REG (DE): legal event code R096; ref document number 602010005476; effective date: 20130508
PG25, lapsed in a contracting state [announced via postgrant information from national office to EPO], lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time limit: NO (20130613), LT (20130313), BG (20130613), SE (20130313), ES (20130624)
REG (AT): legal event code MK05; ref document number 601216, kind code T; effective date: 20130313
REG (NL): legal event code VDEP; effective date: 20130313
REG (LT): legal event code MG4D
PG25, lapse because of failure to submit a translation or to pay the fee within the prescribed time limit: SI (20130313), FI (20130313), LV (20130313), GR (20130614)
REG (CH): legal event code PUE; owner name: ORANGE, FR; former owner: FRANCE TELECOM, FR
RAP2: Party data changed (patent owner data changed or rights of a patent transferred); owner name: ORANGE
PG25, lapse because of failure to submit a translation or to pay the fee within the prescribed time limit: HR (20130313)
BERE: Be: lapsed; owner name: FRANCE TELECOM; effective date: 20130430
PG25, lapse because of failure to submit a translation or to pay the fee within the prescribed time limit: EE (20130313), NL (20130313), RO (20130313), CZ (20130313), IS (20130713), SK (20130313), PT (20130715), AT (20130313)
PG25, lapse because of failure to submit a translation or to pay the fee within the prescribed time limit: PL (20130313)
PG25, lapse because of failure to submit a translation or to pay the fee within the prescribed time limit: MC (20130313)
PLBE: No opposition filed within time limit (original code: 0009261)
STAA: Information on the status of an EP patent application or granted EP patent (status: no opposition filed within time limit)
REG (IE): legal event code MM4A
PG25: BE (20130430, lapse because of non-payment of due fees), DK (20130313, lapse because of failure to submit a translation or to pay the fee within the prescribed time limit)
26N: No opposition filed (effective date: 20131216)
PG25, lapse because of failure to submit a translation or to pay the fee within the prescribed time limit: IT (20130313)
REG (DE): legal event code R097; ref document number 602010005476; effective date: 20131216
PG25, lapse because of non-payment of due fees: IE (20130412)
REG (CH): legal event code PL
PG25, lapse because of non-payment of due fees: CH (20140430), LI (20140430)
PG25, lapse because of failure to submit a translation or to pay the fee within the prescribed time limit: MT (20130313)
PG25, lapse because of failure to submit a translation or to pay the fee within the prescribed time limit: SM (20130313)
PG25, lapse because of failure to submit a translation or to pay the fee within the prescribed time limit: CY (20130313), TR (20130313)
PG25: MK (20130313, failure to submit a translation or to pay the fee), LU (20130412, non-payment of due fees), HU (20100412, failure to submit a translation or to pay the fee; invalid ab initio)
REG (FR): legal event code PLFP; year of fee payment: 7
REG (FR): legal event code PLFP; year of fee payment: 8
REG (FR): legal event code PLFP; year of fee payment: 9
PGFP, annual fee paid to national office [announced via postgrant information from national office to EPO]: GB (payment date 20240320, year of fee payment 15)
PGFP: FR (payment date 20240320, year of fee payment 15)
PGFP: DE (payment date 20240320, year of fee payment 15)