EP2419900B1 - Method and device for the objective evaluation of the voice quality of a speech signal taking into account the classification of the background noise contained in the signal


Info

Publication number
EP2419900B1
EP2419900B1 (application EP10723655A)
Authority
EP
European Patent Office
Prior art keywords
signal
noise
noise signal
classification
speech
Prior art date
Legal status
Active
Application number
EP10723655A
Other languages
German (de)
French (fr)
Other versions
EP2419900A1 (en)
Inventor
Julien Faure
Adrien Leman
Current Assignee
Orange SA
Original Assignee
France Telecom SA
Priority date
Filing date
Publication date
Application filed by France Telecom SA
Publication of EP2419900A1
Application granted
Publication of EP2419900B1
Legal status: Active

Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/69 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for evaluating synthetic or decoded voice signals
    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 — Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 — Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 — Noise filtering

Definitions

  • the present invention relates generally to the processing of speech signals and in particular the voice signals transmitted in telecommunications systems.
  • the invention relates to a method and a device for objectively evaluating the speech quality of a speech signal taking into account the classification of the background noise contained in the signal.
  • the invention applies in particular to speech signals transmitted during a telephone call through a communication network, for example a mobile telephony network or a switched network or packet network telephony network.
  • background noise may include various noises: sounds from engines (cars, motorcycles), aircraft passing through the sky, conversation / whispering noises - for example in a restaurant or café environment -, music, and many other audible noises.
  • background noise may be an additional element of communication that can provide useful information to listeners (mobility context, geographic location, environment sharing).
  • the figure 1 annexed to this description is derived from the above-mentioned Document [1] (see section 3.5, Figure 2 of that document) and represents the mean opinion scores (MOS LQSN) with the associated confidence intervals, calculated from scores given by test listeners to audio messages containing six different types of background noise, according to the ACR (Absolute Category Rating) method.
  • the various types of noise are: pink noise, stationary speech noise (BPS), electrical noise, city noise, restaurant noise, television noise or voice, each noise being considered at three different levels of perceived loudness.
  • taking the background noise into account yields an evaluation closer to the subjective evaluation of voice quality - that is, the quality actually perceived by users - than the known methods of objective evaluation of voice quality allow.
  • the function f (N) is the natural logarithm, Ln (N), of the total loudness N expressed in sones.
  • the total loudness of the noise signal is estimated according to an objective model of loudness estimation, for example the Zwicker model or the Moore model.
  • the step of calculating audio parameters of the noise signal comprises the calculation of a first parameter (IND_TMP), called temporal indicator, relating to the temporal evolution of the noise signal, and a second parameter (IND_FRQ), called frequency indicator, relating to the frequency spectrum of the noise signal.
  • the time indicator (IND_TMP) is obtained from a calculation of the variation of the sound level of the noise signal
  • the frequency indicator (IND_FRQ) is obtained from a calculation of variation of the amplitude of the frequency spectrum of the noise signal.
  • the invention relates to a computer program on an information medium, this program comprising instructions adapted to the implementation of a method according to the invention as briefly defined above, when the program is loaded and executed in a computer.
  • the method of objective evaluation of the voice quality of a speech signal according to the invention is remarkable in that it uses the result of the classification phase of the background noise contained in the speech signal, to estimate the voice quality of the signal.
  • the classification phase of the background noise contained in the speech signal is based on the implementation of a previously constructed background noise classification model, whose method of construction according to the invention is described hereafter.
  • the construction of a noise classification model takes place conventionally in three successive phases.
  • the first phase consists in determining a sound base composed of audio signals containing various background noises, each audio signal being labeled as belonging to a given noise class.
  • in a second phase, a number of predefined characteristic parameters forming a set of indicators are extracted from each sound sample of the base.
  • the set of pairs, each composed of the set of indicators and the associated noise class, is provided to a learning engine intended to produce a classification model able to classify any sound sample on the basis of selected indicators, the latter being chosen as the most relevant among the various indicators used during the learning phase.
  • the classification model obtained then makes it possible, based on indicators extracted from any sound sample (not part of the sound database), to provide a noise class to which this sample belongs.
  • the sound base used consists, on the one hand, of the audio signals used for the subjective tests described in Document [1], and on the other hand of audio signals originating from public sound bases.
  • the audio signals from public sound bases, used to complete the sound base, contain noises such as line noise, wind, cars, vacuum cleaners, hair dryers, confused murmurs ("babble"), sounds from the natural environment (birds, running water, rain, etc.), and music.
  • Each noise is sampled at 8 kHz, filtered with the IRS8 tool, then coded and decoded in G.711 as well as in G.729 for the narrowband case (300-3400 Hz); each sound is also sampled at 16 kHz, filtered with the tool described in ITU-T Recommendation P.341 ("Transmission characteristics for wideband (150-7000 Hz) digital hands-free telephony terminals", 1998), and finally coded and decoded in G.722 (wideband, 50-7000 Hz). These three degraded conditions are then rendered at two levels whose signal-to-noise ratios (SNR) are respectively 16 and 32 (a sketch of SNR mixing follows below). Each noise lasts four seconds. In total, 288 different audio signals are obtained.
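
To make the noise-preparation step concrete, here is a minimal sketch (not from the patent) of mixing a noise signal with speech at a target signal-to-noise ratio; it assumes the ratios of 16 and 32 quoted above are SNR values in dB computed over full-band signal powers, and the function name `mix_at_snr` is illustrative only.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that the speech-to-noise power ratio of the
    mixture equals `snr_db` (in dB), then add it to the speech.
    Both inputs are 1-D float arrays of equal length."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12   # guard against silent noise
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
    return speech + gain * noise
```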
  • the sound base used to develop the classification model finally consists of 632 audio signals.
  • Each sound sample of the sound base is manually tagged to identify the background noise class it belongs to.
  • the classes chosen were defined following the subjective tests mentioned in Document [1]; more precisely, they were determined according to the indulgence toward the perceived noise shown by the tested human subjects when judging voice quality as a function of the type of background noise (among the six types mentioned above).
  • the classification model is obtained by learning using a decision tree (cf. figure 2), built with the "classregtree" statistical tool of the MATLAB® environment marketed by The MathWorks.
  • the algorithm used is developed from techniques described in the book "Classification and Regression Trees" by Leo Breiman et al., published by Chapman and Hall in 1993.
  • Each background noise sample of the sound base is characterized by the eight indicators mentioned above and by the class the sample belongs to (1: intelligible, 2: environment, 3: breath, 4: sizzle).
  • the decision tree then calculates the various possible solutions in order to obtain an optimum classification, closest to the manually labeled classes.
  • the most relevant audio indicators are selected, and value thresholds associated with these indicators are defined, these thresholds making it possible to separate the different classes and subclasses of background noise.
  • the resulting classification uses only two of the original eight indicators to classify the 500 training background noises into the four predefined classes.
  • the indicators selected are indicators (3) and (6) of the list introduced above, respectively representing the variation of the acoustic level and the spectral flux of the background noise signals; an illustrative learning sketch follows below.
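
The patent performs this learning step with the MATLAB "classregtree" tool; purely as an illustrative stand-in, a comparable decision tree can be trained with scikit-learn. The indicator matrix and labels below are random placeholders, not the patent's sound base.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Placeholder data: one row per labeled noise sample, one column per
# candidate indicator; the real sound base and its 8 indicators are
# not reproduced here.
X = np.random.rand(500, 8)             # hypothetical indicator matrix
y = np.random.randint(1, 5, size=500)  # labels 1..4 (intelligible,
                                       # environment, breath, sizzle)

tree = DecisionTreeClassifier(max_depth=3)  # shallow tree => few thresholds
tree.fit(X, y)

# Feature importances show which indicators the tree actually uses;
# in the patent only two of the eight (level variation and spectral
# flux) are retained.
print(tree.feature_importances_)
```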
  • the "environment” class gets a lower classification result than for the other classes. This result is due to the differentiation between “breath” and “environmental” sounds, which can sometimes be difficult to perform, because of the similarity of certain sounds that can be arranged in both classes, for example sounds such as wind noise or the sound of a hair dryer.
  • the indicators selected for the classification model according to the invention are defined in greater detail below.
  • the time indicator, characteristic of the variation of the sound level of any noise signal, is defined as the standard deviation of the power values of all the considered frames of the signal.
  • a power value is determined for each of the frames.
  • Each frame is composed of 512 samples, with an overlap of 256 samples between successive frames. For a sampling frequency of 8000 Hz, this corresponds to a duration of 64 ms (milliseconds) per frame, with an overlap of 32 ms. This 50% overlap is used to obtain continuity between successive frames, as defined in Document [5]: "P.56 Objective Measurement of Active Voice Level", ITU-T Recommendation, 1993.
  • the power of each frame is computed as

    $$P_{frame} = \log_{10}\!\left(\frac{1}{L_{frame}}\sum_{i=1}^{L_{frame}} x_i^2\right)$$

    where frame is the number of the frame to be evaluated, L_frame is the length of the frame (512 samples), x_i is the amplitude of sample i, and log is the decimal logarithm; taking the logarithm of the computed average yields one power value per frame.
  • the time indicator is then the standard deviation of these per-frame powers:

    $$IND\_TMP = \sqrt{\frac{1}{N_{frame}}\sum_{i=1}^{N_{frame}} \left(P_i - \langle P \rangle\right)^2}$$

    where N_frame is the number of frames present in the considered background noise, P_i is the power value for frame i, and ⟨P⟩ is the average power over all frames.
  • the more non-stationary a sound is, the higher the value obtained for the time indicator IND_TMP, as illustrated in the sketch below.
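
As a hedged illustration of the definitions above, the following Python sketch computes the time indicator as the standard deviation of per-frame decimal-log powers (512-sample frames with 50% overlap, as stated for 8 kHz signals); the small epsilon guarding against log of zero is our addition.

```python
import numpy as np

def ind_tmp(noise, frame_len=512, hop=256):
    """Time indicator: standard deviation of the per-frame decimal-log
    powers, with 512-sample frames and 50% overlap (64 ms frames,
    32 ms overlap at 8 kHz)."""
    powers = []
    for start in range(0, len(noise) - frame_len + 1, hop):
        frame = noise[start:start + frame_len]
        # P_frame = log10(mean of squared amplitudes); epsilon guards
        # against log(0) on silent frames (our addition).
        powers.append(np.log10(np.mean(frame ** 2) + 1e-12))
    powers = np.asarray(powers)
    # Standard deviation of the per-frame powers around their mean.
    return float(np.sqrt(np.mean((powers - powers.mean()) ** 2)))
```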
  • the frequency indicator, designated in the rest of the description by "IND_FRQ" and characteristic of the spectral flux of the noise signal, is calculated from the power spectral density (PSD) of the signal.
  • this indicator is determined per frame of 256 samples, corresponding to a duration of 32 ms for a sampling frequency of 8 kHz. There is no frame overlap, unlike for the time indicator.
  • spectral flux, also referred to as "spectrum amplitude variation", is a measure of the rate of change of the power spectrum of a signal over time. This indicator is calculated from the normalized cross-correlation between the spectral amplitudes of two successive frames, a_k(t-1) and a_k(t).
  • k is an index representing the different frequency components
  • t is an index representing successive frames without overlapping, consisting of 256 samples each.
  • a value of the spectral flux corresponds to the amplitude difference of the spectral vector between two successive frames. This value is close to zero if the successive spectra are similar, and is close to 1 for very different successive spectra.
  • the value of the spectral flux is high for a music signal, because a musical signal varies greatly from one frame to the next. For speech, with its alternation of periods of stability (vowels) and transitions (consonant/vowel), the measurement of the spectral flux takes very different values and varies strongly over the course of a sentence; a sketch of this indicator follows below.
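
The following sketch shows one way to compute the frequency indicator that is consistent with the behaviour described (values near 0 for similar successive spectra, near 1 for very different ones). The exact normalization used in the patent is not reproduced here, so treat the formula `1 - normalized cross-correlation` and the averaging over frames as assumptions.

```python
import numpy as np

def ind_frq(noise, frame_len=256):
    """Frequency indicator: mean spectral flux over non-overlapping
    256-sample frames (32 ms at 8 kHz). Flux between two frames is
    taken as 1 minus the normalized cross-correlation of their
    spectral amplitudes: ~0 for similar spectra, ~1 for very
    different ones."""
    n_frames = len(noise) // frame_len
    spectra = [np.abs(np.fft.rfft(noise[i * frame_len:(i + 1) * frame_len]))
               for i in range(n_frames)]
    if len(spectra) < 2:
        return 0.0
    fluxes = []
    for prev, cur in zip(spectra, spectra[1:]):
        denom = np.sqrt(np.sum(prev ** 2) * np.sum(cur ** 2)) + 1e-12
        fluxes.append(1.0 - np.sum(prev * cur) / denom)
    return float(np.mean(fluxes))
```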
  • the classification model of the invention, obtained as explained above, is used according to the invention to determine, from indicators extracted from any noisy audio signal, the noise class to which this noisy signal belongs, among the set of classes defined for the classification model.
  • the figures 3a and 3b represent a flowchart illustrating a method of objective evaluation of the voice quality of a speech signal, according to an embodiment of the invention. According to the invention, the method of classification of background noise is implemented prior to the actual phase of evaluation of voice quality.
  • the first step S1 is to obtain an audio signal, which in the embodiment presented here is a speech signal obtained in analog or digital form.
  • a voice activity detection (DAV) operation is then applied to the speech signal.
  • the purpose of this voice activity detection is to separate, in the input audio signal, the periods containing speech (possibly noisy) from the periods containing no speech (periods of silence), which can therefore contain only noise.
  • the active areas of the signal, that is to say those carrying the noisy voice message, are thus separated from the noisy inactive areas.
  • the voice activity detection technique implemented is that described in Document [5] cited above (" P.56 Objective Measurement of Active Voice Level ", ITU-T Recommendation, 1993 ).
  • the background noise signal generated is the signal consisting of the periods of the audio signal for which the result of the speech activity detection is zero.
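
The patent uses the ITU-T P.56 method for voice activity detection; the sketch below substitutes a much cruder fixed-threshold energy detector, purely to illustrate how the speech-free periods can be concatenated into the noise signal SIG_N. The threshold value and function name are illustrative assumptions.

```python
import numpy as np

def extract_noise_signal(audio, frame_len=256, threshold_db=-35.0):
    """Concatenate the speech-free periods of `audio` into the noise
    signal SIG_N, using a fixed energy threshold as a crude stand-in
    for the P.56-based voice activity detection."""
    noise_frames = []
    for start in range(0, len(audio) - frame_len + 1, frame_len):
        frame = audio[start:start + frame_len]
        level_db = 10 * np.log10(np.mean(frame ** 2) + 1e-12)
        if level_db < threshold_db:      # inactive (no speech) frame
            noise_frames.append(frame)
    return np.concatenate(noise_frames) if noise_frames else np.zeros(0)
```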
  • the audio parameters consisting of the two indicators mentioned above (time indicator IND_TMP and frequency indicator IND_FRQ), which were selected when the classification model was obtained (learning phase), are extracted from the noise signal in step S7.
  • in step S9, the value of the time indicator (IND_TMP) obtained for the noise signal is compared with the first threshold TH1 mentioned above. If the value of the time indicator is greater than the threshold TH1 (S9, no), the noise signal is of non-stationary type and the test of step S11 is then applied.
  • in step S11, the frequency indicator (IND_FRQ) is compared with the second threshold TH2 mentioned above. If the indicator IND_FRQ is greater than the threshold TH2 (S11, no), the class (CL) of the noise signal is determined (step S13) as CL1: "intelligible noise"; otherwise the class of the noise signal is determined (step S15) as CL2: "environment noise". The classification of the analyzed noise signal is then complete and the evaluation of the voice quality of the speech signal can be performed (Fig. 3b, step S23).
  • otherwise (S9, yes), the noise signal is of stationary type and the test of step S17 is then applied (Fig. 3b).
  • in step S17, the value of the frequency indicator IND_FRQ is compared with the third threshold TH3 (defined above). If the indicator IND_FRQ is greater than the threshold TH3 (S17, no), the class (CL) of the noise signal is determined (step S19) as CL3: "breath noise"; otherwise the class of the noise signal is determined (step S21) as CL4: "sizzling noise".
  • the classification of the analyzed noise signal is then completed and the voice quality evaluation of the speech signal can then be performed ( Fig. 3b step S23).
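
The decision logic of steps S9 to S21 can be summarized by the following sketch; the threshold values TH1, TH2 and TH3 come from the learning phase and are not reproduced in this text, so they are left as parameters.

```python
def classify_noise(ind_tmp_val, ind_frq_val, th1, th2, th3):
    """Decision logic of steps S9 to S21 (figures 3a and 3b)."""
    if ind_tmp_val > th1:          # S9, no: non-stationary noise
        if ind_frq_val > th2:      # S11, no
            return "CL1"           # S13: intelligible noise
        return "CL2"               # S15: environment noise
    # S9, yes: stationary noise
    if ind_frq_val > th3:          # S17, no
        return "CL3"               # S19: breath noise
    return "CL4"                   # S21: sizzling noise
```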
  • figure 4 details the step (Fig. 3b, S23) of evaluating the voice quality of a speech signal according to the classification of the background noise contained in the speech signal.
  • the voice quality evaluation operation starts with step S231, in which the total loudness of the noise signal (SIG_N) is estimated.
  • loudness is defined as the subjective intensity of a sound; it is expressed in sones or phons.
  • the total loudness, measured subjectively, can be estimated using known objective models such as the Zwicker model or the Moore model.
  • the Zwicker model is described, for example, in "Psychoacoustics: Facts and Models" by E. Zwicker and H. Fastl, Berlin, Springer, 2nd updated edition, April 14, 1999.
  • here, the total loudness of the noise signal is estimated using the Zwicker model; the invention can, however, also be implemented using the Moore model. Moreover, the more accurate the loudness estimation model used, the more precise the voice quality evaluation according to the invention will be.
  • the total loudness estimate, expressed in sones, of the noise signal SIG_N obtained using the Zwicker model is referred to herein as "N".
  • the voice quality score for the speech signal, MOS_CLi, is obtained, on the one hand, as a function of the classification obtained for the background noise present in the speech signal - through the choice of the coefficients (C_{i-1}; C_i) of the mathematical formula corresponding to the background noise class - and, on the other hand, as a function of the loudness N estimated for the background noise.
  • the loudness levels of the various types of background noise were obtained subjectively in this test.
  • each test audio signal can thus be characterized by its background noise class (CL1-CL4), its perceived loudness level (in sones: 1.67, 4.6, 8.2, 14) and the MOS-LQSN (Listening Quality Subjective Narrowband) score assigned to it in the preliminary subjective test (Document [1], "Preliminary Experiment"). In summary, in this test 24 subjects assessed the overall quality of audio signals according to the ACR method. Finally, 152 MOS-LQSN scores were obtained by taking the average score given by the 24 subjects for each of the 152 audio test signals, which are divided among the four classes of background noise defined according to the invention.
  • the figure 5 graphically shows the result of the aforementioned subjective tests.
  • the 152 test conditions are represented by points, each corresponding, on the abscissa, to a loudness level and, on the ordinate, to the assigned quality score (MOS-LQSN); the points are further differentiated according to the class of the background noise contained in the corresponding audio signal.
  • the value associated with R 2 corresponds to the correlation coefficient between the results obtained from the subjective test and the corresponding logarithmic regression.
  • the perceived loudness value N - a value obtained subjectively in the context of the aforementioned subjective tests - is here obtained by estimation according to a known loudness estimation method, the Zwicker model in the embodiment set forth herein.
  • the figure 6 graphically shows the degree of correlation between the quality scores obtained in the subjective tests and those obtained using the objective quality evaluation method, according to the present invention.
  • This voice quality evaluation device is designed to implement the voice quality evaluation method according to the invention which has just been described above.
  • the device 1 for evaluating the voice quality of a speech signal comprises a module 11 for extracting from the audio signal (SIG) a background noise signal (SIG_N), called the noise signal.
  • the speech signal (SIG) input to the voice quality evaluation device 1 can be delivered to the device 1 from a communication network 2, such as a voice over IP network, for example.
  • the module 11 is in practice a voice activity detection module.
  • the DAV module 11 then provides a noise signal SIG_N, which is input to a module 13 for extracting parameters, that is to say for calculating the parameters constituted by the time and frequency indicators, IND_TMP and IND_FRQ respectively.
  • the calculated indicators are then provided to a classification module 15, implementing the classification model according to the invention described above, which determines, as a function of the values of these indicators, the background noise class (CL) to which the noise signal SIG_N belongs, according to the algorithm described in connection with figures 3a and 3b.
  • the result of the classification performed by the background noise classification module 15 is then provided to the voice quality evaluation module 17.
  • the voice quality evaluation module 17 implements the voice quality evaluation algorithm described above in connection with figure 4, to ultimately deliver an objective voice quality score for the input speech signal (SIG); a sketch of the full chain follows below.
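
Putting the modules together, a minimal sketch of the processing chain of device 1 might look as follows; it reuses the helper functions sketched earlier in this text, and `estimate_total_loudness` stands in for a Zwicker-model loudness estimator, which is assumed rather than implemented here.

```python
import math

def evaluate_voice_quality(sig, th1, th2, th3, coeffs):
    """Sketch of device 1: modules 11 (DAV), 13 (indicators),
    15 (classification) and 17 (scoring) chained end to end."""
    sig_n = extract_noise_signal(sig)            # module 11
    t = ind_tmp(sig_n)                           # module 13
    f = ind_frq(sig_n)
    cl = classify_noise(t, f, th1, th2, th3)     # module 15
    n = estimate_total_loudness(sig_n)           # assumed Zwicker-model helper
    c0, c1 = coeffs[cl]                          # per-class (C_{i-1}, C_i)
    return c0 + c1 * math.log(n)                 # module 17: MOS_CLi
```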
  • the voice quality evaluation device is implemented in the form of software means, that is to say computer program modules, performing the functions described in connection with figures 3a, 3b, 4 and 5.
  • the voice quality evaluation module 17 can be incorporated in a computer machine separate from that housing the other modules.
  • the background noise class information (CL) can be routed via a communication network to the machine or server responsible for performing the voice quality evaluation.
  • each voice quality score calculated by the module 17 is sent to collection equipment, local or on the network, responsible for gathering this quality information in order to establish an overall quality score, computed for example as a function of time and/or according to the type of communication and/or according to other types of quality scores.
  • the aforementioned program modules are implemented when they are loaded and executed in a computer or computer device.
  • a computing device may also be constituted by any processor system integrated in a communication terminal or in a communication network equipment.
  • a computer program according to the invention can be stored on an information carrier of various types.
  • an information carrier may be constituted by any entity or device capable of storing a program according to the invention.
  • the medium in question may comprise a hardware storage means, such as a memory, for example a CD-ROM or a ROM or RAM microelectronic circuit memory, or a magnetic recording means, for example a hard disk.
  • a computer program according to the invention can use any programming language and be in the form of source code, object code, or intermediate code between source code and object code (for example, a partially compiled form), or in any other form desirable for implementing a method according to the invention.

Description

The present invention relates generally to the processing of speech signals, and in particular to voice signals transmitted in telecommunications systems. The invention relates more specifically to a method and a device for objectively evaluating the voice quality of a speech signal taking into account the classification of the background noise contained in the signal. The invention applies in particular to speech signals transmitted during a telephone call through a communication network, for example a mobile telephony network or a telephony network over a switched network or a packet network.

In the field of voice communication, the noise included in a speech signal, referred to as "background noise", may include various noises: sounds from engines (cars, motorcycles), aircraft passing overhead, conversation and whispering noises - for example in a restaurant or café environment -, music, and many other audible noises. In some cases, background noise may be an additional element of the communication that provides useful information to listeners (mobility context, geographic location, shared ambiance).

Since the advent of mobile telephony, the ability to communicate from any location has increased the presence of background noise in transmitted speech signals, and has therefore made it necessary to process background noise in order to maintain an acceptable level of communication quality. Moreover, in addition to noises coming from the environment where the sound is captured, spurious noises produced in particular during the coding and transmission of the audio signal over the network (packet losses, for example, in voice over IP) can also interact with the background noise.

In this context, it can therefore be assumed that the perceived quality of the transmitted speech depends on the interaction between the different types of noises making up the background noise. Thus, the document "Influence of informational content of background noise on speech quality evaluation for VoIP application" (hereinafter "Document [1]"), by A. Leman, J. Faure and E. Parizet - presented at the Acoustics'08 conference held in Paris from June 29 to July 4, 2008 - describes subjective tests which not only show that the sound level of background noise plays a major role in the evaluation of voice quality in the context of a voice over IP (VoIP) application, but also demonstrate that the type of background noise (environment noise, line noise, etc.) superimposed on the voice signal (the useful signal) plays an important role in the evaluation of the voice quality of the communication.

Figure 1, annexed to this description, is taken from the above-mentioned Document [1] (see section 3.5, Figure 2 of that document) and represents the mean opinion scores (MOS LQSN) with the associated confidence intervals, calculated from scores given by test listeners to audio messages containing six different types of background noise, according to the ACR (Absolute Category Rating) method. The various types of noise are: pink noise, stationary speech noise (BPS), electrical noise, city noise, restaurant noise, and television noise or voices, each noise being considered at three different levels of perceived loudness.

The horizontal line above the other curves represents the score corresponding to an audio signal containing no background noise. The scores given, "MOS LQSN" - for "Mean Opinion Score of Listening Quality obtained with Subjective method for Narrow band signals" - comply with ITU-T Recommendations P.800 and P.800.1, entitled respectively "Methods for subjective determination of transmission quality" and "Mean Opinion Score (MOS) terminology". As can be seen in figure 1, the scores given for the same useful signal (i.e. the speech signal contained in the tested audio signal) vary not only according to the type of background noise contained in the audio signal, but also according to the perceived sound level (loudness) of the background noise considered.

However, to date, the type of background noise present in a given audio signal is not taken into account in the known methods for objectively evaluating the voice quality of a speech signal, whether, for example, the PESQ model (cf. ITU-T Rec. P.862), the E-model (described for example in ITU-T Rec. G.107, "The E-model, a computational model for use in transmission planning", 2003), or non-intrusive methods such as the one described in "P.563 - The ITU-T Standard for Single-Ended Speech Quality Assessment", by L. Malfait, J. Berger, and M. Kastner, IEEE Transactions on Audio, Speech, and Language Processing, vol. 14(6), pp. 1924-1934, 2006.

Document EP1288914A2 also discloses an objective quality measurement method according to ITU-T standard P.862 "PESQ", comprising a correction of the quality measurement that takes into account the intensity of the background noise.

Document US5,84,921 also discloses an audio signal transmission method that determines the quality of the signal by classifying it, according to the noise level present in it, as non-noisy, slightly noisy, noisy or highly noisy.

Thus, in view of the foregoing, there is a real need for an objective voice quality evaluation model that takes into account the type of background noise present in the audio signal to be evaluated.

The present invention aims in particular to meet the aforementioned need by proposing, according to a first aspect, a method for objectively evaluating the voice quality of a speech signal. According to the invention, this method comprises the steps of:

  • classification of the background noise contained in the speech signal according to a predefined set of background noise classes;
  • evaluation of the voice quality of the speech signal, as a function of at least the obtained classification relating to the background noise present in the speech signal.

According to the invention, taking into account the type of background noise present in the speech signal in the objective evaluation of the voice quality makes it possible to obtain a quality evaluation closer to the subjective evaluation of voice quality - that is, the quality actually perceived by users - than the known methods of objective voice quality evaluation allow.

According to one embodiment of the invention, the step of evaluating the voice quality of the speech signal comprises the steps of:

  • estimation of the total loudness (N) of the noise signal (SIG_N);
  • calculation of a voice quality score as a function of the class of the background noise present in the speech signal and of the total loudness estimated for the noise signal.

In practice, a voice quality score (MOS_CLi) according to the invention is obtained from a mathematical formula of the following general form:

$$MOS\_CL_i = C_{i-1} + C_i \times f(N)$$

where:

  • MOS_CLi is the score calculated for the noise signal;
  • f(N) is a mathematical function of the total loudness, N, estimated for the noise signal;
  • C_{i-1} and C_i are two coefficients defined for the background noise class (CLi) obtained for the noise signal.
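
As a worked illustration of this formula, the sketch below computes MOS_CLi for hypothetical coefficient values; the actual coefficients are derived from the logarithmic regressions on the subjective scores (figure 5) and are not listed in this text.

```python
import math

# Hypothetical per-class coefficients (C_{i-1}, C_i); the actual
# values come from the logarithmic regressions on the subjective
# scores of figure 5 and are not listed in this text.
COEFFS = {
    "CL1": (4.4, -0.8),  # intelligible noise
    "CL2": (4.2, -0.6),  # environment noise
    "CL3": (4.3, -0.9),  # breath noise
    "CL4": (4.1, -1.0),  # sizzling noise
}

def mos_cli(noise_class, loudness_sones):
    """MOS_CLi = C_{i-1} + C_i * ln(N), with N the total loudness
    of the noise signal in sones (Zwicker model in the patent)."""
    c0, c1 = COEFFS[noise_class]
    return c0 + c1 * math.log(loudness_sones)

print(mos_cli("CL2", 4.6))  # e.g. environment noise at 4.6 sones
```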

More particularly, according to a particular embodiment of the invention, the function f(N) is the natural logarithm, Ln(N), of the total loudness N expressed in sones.

In particular, according to one embodiment of the invention, the total loudness of the noise signal is estimated according to an objective loudness estimation model, for example the Zwicker model or the Moore model.

According to other features of the invention, the step of classifying the background noise contained in the speech signal includes the steps of:

  • extraction from the speech signal of a background noise signal, referred to as the noise signal;
  • calculation of audio parameters of the noise signal;
  • classification of the background noise contained in the noise signal, as a function of the calculated audio parameters, according to said set of background noise classes.

According to a particular embodiment of the invention, the step of calculating audio parameters of the noise signal comprises the calculation of a first parameter (IND_TMP), called the temporal indicator, relating to the temporal evolution of the noise signal, and of a second parameter (IND_FRQ), called the frequency indicator, relating to the frequency spectrum of the noise signal.

In practice, the temporal indicator (IND_TMP) is obtained from a calculation of the variation of the sound level of the noise signal, and the frequency indicator (IND_FRQ) is obtained from a calculation of the variation of the amplitude of the frequency spectrum of the noise signal.

The combination of these two indicators makes it possible to obtain a low rate of classification errors, while their computation consumes few computing resources.

According to a particular implementation of the aforementioned classification step, to perform this classification of the background noise associated with the noise signal, the method of the invention implements steps consisting of:

  • comparing the value of the temporal indicator (IND_TMP) obtained for the noise signal with a first threshold (TH1), and determining from the result of this comparison whether the noise signal is stationary or not;
  • when the noise signal is identified as non-stationary, comparing the value of the frequency indicator with a second threshold (TH2), and determining from the result of this comparison whether the noise signal belongs to a first class or a second class of background noise;
  • when the noise signal is identified as stationary, comparing the value of the frequency indicator with a third threshold (TH3), and determining from the result of this comparison whether the noise signal belongs to a third class or a fourth class of background noise.

Moreover, in this embodiment, the set of classes obtained according to the invention comprises at least the following classes:

  • intelligible noise;
  • environment noise;
  • breath noise;
  • sizzling noise.

The use of the three aforementioned thresholds TH1, TH2, TH3 in a simple tree classification structure makes it possible to classify a noise signal sample quickly. Furthermore, by computing the class of a sample over short-duration windows, a real-time update of the background noise class of the analyzed noise signal can be obtained.

Correlatively, according to a second aspect, the invention relates to a device for objectively evaluating the voice quality of a speech signal. According to the invention, this device comprises:

  • means for classifying the background noise contained in the speech signal according to a predefined set of background noise classes;
  • means for evaluating the voice quality of the speech signal, as a function of at least the obtained classification relating to the background noise present in the speech signal.

According to particular features of the invention, this objective voice quality evaluation device comprises:

  • a module for extracting from the speech signal a background noise signal, referred to as the noise signal;
  • a module for calculating audio parameters of the noise signal;
  • a module for classifying the background noise contained in the noise signal, as a function of the calculated audio parameters, according to a predefined set of background noise classes;
  • a module for evaluating the voice quality of the speech signal, as a function of at least the obtained classification relating to the background noise present in the speech signal.

According to another aspect, the invention relates to a computer program on an information medium, this program comprising instructions adapted to the implementation of a method according to the invention, as briefly defined above, when the program is loaded and executed in a computer.

The advantages provided by the objective voice quality evaluation device and the aforementioned computer program are identical to those mentioned above in connection with the method for objectively evaluating the voice quality of a speech signal.

The invention will be better understood from the following detailed description, made with reference to the appended drawings, in which:

  • figure 1, already discussed, is a graphical representation of the average subjective scores given by test listeners to audio messages containing various types of background noise at several loudness levels, according to a known prior art study;
  • figure 2 represents a software window displayed on a computer screen showing the selection tree obtained by learning to define a background noise classification model used according to the invention;
  • figures 3a and 3b represent a flowchart illustrating a method for objectively evaluating the voice quality of a speech signal, according to an embodiment of the invention;
  • figure 4 is a flowchart detailing the step (fig. 3b, S23) of evaluating the voice quality of a speech signal according to the classification of the background noise contained in the speech signal;
  • figure 5 graphically shows the results of subjective voice quality evaluation tests according to the invention, as well as the curves obtained by logarithmic regression, which relate the perceived quality scores to the perceived loudness for audio signals corresponding to the background noise classes defined according to the invention;
  • figure 6 graphically shows the degree of correlation between the quality scores obtained in the subjective tests and those obtained according to the objective quality evaluation method of the present invention;
  • figure 7 represents a block diagram of a device for objectively evaluating the voice quality of a speech signal, according to the invention.

The method for objectively evaluating the voice quality of a speech signal according to the invention is remarkable in that it uses the result of the phase of classifying the background noise contained in the speech signal to estimate the voice quality of the signal. This classification phase relies on the implementation of a previously constructed background noise classification model, whose method of construction according to the invention is described hereafter.

Construction of the background noise classification model

The construction of a noise classification model conventionally takes place in three successive phases. The first phase consists in building a sound base composed of audio signals containing various background noises, each audio signal being labeled as belonging to a given noise class. Then, during a second phase, a number of predefined characteristic parameters forming a set of indicators are extracted from each sound sample of the base. Finally, during the third phase, called the learning phase, the set of pairs, each composed of the set of indicators and the associated noise class, is supplied to a learning engine intended to produce a classification model able to classify any sound sample on the basis of selected indicators, the latter being chosen as the most relevant among the various indicators used during the learning phase. The resulting classification model then makes it possible, from indicators extracted from any sound sample (not belonging to the sound base), to provide the noise class to which this sample belongs.

In Document [1] cited above, it is shown that voice quality can be influenced by the meaning of the noise in the context of telephony. Thus, if users identify a noise as coming from a sound source in the speaker's environment, a certain indulgence is observed in their evaluation of the perceived quality. Two tests made it possible to verify this: the first concerned the interaction of the characteristics and sound levels of background noises with the perceived voice quality, and the second concerned the interaction of the characteristics of background noises with the degradations due to voice over IP transmission. Based on the results of the study presented in the aforementioned document, the inventors of the present invention sought to define parameters (indicators) of an audio signal making it possible to measure and quantify the meaning of the background noise present in this signal, and then to define a statistical classification method for the background noise as a function of the selected indicators.

Phase 1 - Building a sound database of audio signals

For the construction of the classification model of the present invention, the sound database used consists, on the one hand, of the audio signals used for the subjective tests described in Document [1], and on the other hand, of audio signals taken from public sound databases.

With regard to the audio signals from the aforementioned subjective tests, 152 sound samples are used in the first test (see Document [1], section 3.2). These samples are obtained from eight sentences of the same duration (8 seconds), selected from a standardized list of double sentences and produced by four speakers (two men and two women). These sentences are then mixed with six types of background noise (detailed below) at three different loudness levels. Sentences without background noise are also included. All the samples are then encoded with a G.711 codec. The results of this first test are illustrated in figure 1, described above.

In the second test (see Document [1], section 4.1), the same sentences are mixed with the six types of background noise at a medium loudness level, then four types of degradation due to voice over IP transmission are introduced (G.711 codec with 0% and 3% packet loss; G.729 codec with 0% and 3% packet loss). In total, 192 sound samples are obtained for the second test.

The six types of background noise used in the aforementioned subjective tests are the following:

  • pink noise, considered as the reference (stationary noise with -3 dB/octave of frequency content);
  • stationary speech noise (BPS), that is, random noise with a frequency content similar to the standardized human voice (stationary);
  • electrical noise, that is, a harmonic sound with a fundamental frequency of 50 Hz simulating circuit noise (stationary);
  • city environment noise with cars, horns, etc. (non-stationary);
  • restaurant environment noise with murmurs, the sound of glasses, laughter, etc. (non-stationary);
  • intelligible voice recorded from a TV source (non-stationary).

All sounds are sampled at 8 kHz (16 bits), and an Intermediate Reference System (IRS) bandpass filter is used to simulate a real telephone network. The six types of noise listed above are repeated with degradations related to G.711 and G.729 coding, with packet losses, as well as at several playback levels.

As regards the audio signals from public sound databases, used to complete the sound database, these are 48 additional audio signals containing various noises, such as line noise, wind, car, vacuum cleaner, hair dryer, babble, sounds from the natural environment (birds, running water, rain, etc.), and music.

These 48 noises were then subjected to six degradation conditions, as explained below.

Each noise is sampled at 8 kHz, filtered with the IRS8 tool, and coded and decoded in G.711 as well as in G.729 for the narrowband case (300 - 3400 Hz); then each sound is sampled at 16 kHz, filtered with the tool described in ITU-T Recommendation P.341 ("Transmission characteristics for wideband (150-7000 Hz) digital hands-free telephony terminals", 1998), and finally coded and decoded in G.722 (wideband, 50 - 7000 Hz). These three degraded conditions are then rendered at two levels, with signal-to-noise ratios (SNR) of 16 and 32 respectively. Each noise lasts four seconds. In total, 288 different audio signals are obtained.

Thus, the sound database used to develop the classification model finally consists of 632 audio signals.

Each sound sample of the sound database is manually labelled to identify the background noise class to which it belongs. The classes retained were defined following the subjective tests mentioned in Document [1]; more precisely, they were determined according to the indulgence towards the perceived noises shown by the human subjects tested when judging voice quality as a function of the type of background noise (among the six types mentioned above).

Thus, four classes of background noise (BDF, for the French "bruit de fond") were retained:

  • Class 1: "intelligible" BDF - noise of an intelligible nature such as music, speech, etc. This class of background noise causes a strong indulgence in the judgment of perceived voice quality, compared with breath noise of the same level.
  • Class 2: "environment" BDF - noises carrying informational content and providing information about the speaker's environment, such as city, restaurant or nature noises. This class of noise causes a slight indulgence in the judgment of the voice quality perceived by users, compared with breath noise of the same level.
  • Class 3: "breath" BDF - noises of a stationary nature with no informational content, for example pink noise, stationary wind noise, or stationary speech noise (BPS).
  • Class 4: "sizzle" BDF - noises with no informational content, such as electrical noise, noisy non-stationary noise, etc. This class of noise causes a strong degradation of the voice quality perceived by users, compared with breath noise of the same level.

Phase 2 - Extraction of parameters from the audio signals of the sound database

For each of the audio signals of the sound database, eight parameters or indicators, known per se, are computed. These indicators are the following:

  • (1) The correlation of the signal: an indicator using the Bravais-Pearson correlation coefficient applied between the whole signal and the same signal shifted by one digital sample;
  • (2) The zero crossing rate (ZCR) of the signal;
  • (3) The variation of the acoustic level of the signal;
  • (4) The spectral centroid of the signal;
  • (5) The spectral roughness of the signal;
  • (6) The spectral flux of the signal;
  • (7) The spectral rolloff point of the signal;
  • (8) The harmonic coefficient of the signal.

Phase 3 - Obtaining the classification model

The classification model is obtained by learning using a decision tree (cf. figure 2), built with the statistical tool called "classregtree" of the MATLAB® environment marketed by The MathWorks. The algorithm used is developed from techniques described in the book "Classification and Regression Trees" by Leo Breiman et al., published by Chapman and Hall in 1993.

Each background noise sample of the sound database is described by the eight aforementioned indicators and by the class to which the sample belongs (1: intelligible; 2: environment; 3: breath; 4: sizzle). The decision tree then evaluates the various possible solutions in order to obtain an optimum classification, as close as possible to the manually labelled classes. During this learning phase, the most relevant audio indicators are retained, and value thresholds associated with these indicators are defined, these thresholds making it possible to separate the different classes and subclasses of background noise.

For the learning, 500 background noises of different types are randomly selected from the 632 of the sound database. The result of the classification obtained by learning is shown in figure 2.
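For illustration only, this learning phase can be reproduced with any standard CART implementation. The sketch below uses Python's scikit-learn as a stand-in for MATLAB's "classregtree"; the variable names and the placeholder data are assumptions, not part of the patent:

```python
# Minimal sketch of the learning phase, assuming scikit-learn as a
# stand-in for MATLAB's "classregtree" (both are CART-style trees).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: one row per background-noise sample,
# eight columns for the eight indicators listed above; labels 1..4
# are the manual classes (intelligible, environment, breath, sizzle).
X = np.random.rand(500, 8)        # placeholder for the real indicators
y = np.random.randint(1, 5, 500)  # placeholder for the manual labels

tree = DecisionTreeClassifier().fit(X, y)

# On the real data, the tree retains only indicators (3) and (6),
# with the thresholds TH1, TH2, TH3 separating the four classes.
```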

As can be seen in the decision tree shown in figure 2, the resulting classification uses only two of the original eight indicators to classify the 500 background noises of the learning set into the four predefined classes. The indicators selected are indicators (3) and (6) of the list introduced above, representing respectively the variation of the acoustic level and the spectral flux of the background noise signals.

As shown in figure 2, the classification model obtained by learning begins by separating the background noises according to their stationarity. This stationarity is revealed by the time indicator characteristic of the variation of the acoustic level (indicator (3)). Thus, if this indicator has a value lower than a first threshold, TH1 = 1.03485, the background noise is considered stationary (left branch); otherwise the background noise is considered non-stationary (right branch). Then, the frequency indicator characteristic of the spectral flux (indicator (6)) in turn filters each of the two categories (stationary/non-stationary) selected with indicator (3).

Thus, when the noise signal is considered non-stationary, if the frequency indicator is lower than a second threshold, TH2 = 0.280607, then the noise signal belongs to the "environment" class; otherwise the noise signal belongs to the "intelligible" class. On the other hand, when the noise signal is considered stationary, if the frequency indicator (indicator (6), spectral flux) is lower than a third threshold, TH3 = 0.145702, then the noise signal belongs to the "sizzle" class; otherwise the noise signal belongs to the "breath" class.
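Since the learned tree retains only two indicators and three thresholds, the resulting decision rule can be written out directly. A minimal sketch in Python, assuming the threshold values quoted above (function and variable names are illustrative):

```python
TH1 = 1.03485   # stationarity threshold on the time indicator
TH2 = 0.280607  # spectral-flux threshold, non-stationary branch
TH3 = 0.145702  # spectral-flux threshold, stationary branch

def classify_noise(ind_tmp: float, ind_frq: float) -> int:
    """Return the background-noise class (1..4) from the two indicators."""
    if ind_tmp < TH1:                        # stationary (left branch)
        return 4 if ind_frq < TH3 else 3     # 4: sizzle, 3: breath
    else:                                    # non-stationary (right branch)
        return 2 if ind_frq < TH2 else 1     # 2: environment, 1: intelligible
```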

The decision tree (fig. 2), obtained with the two aforementioned indicators, correctly classified 86.2% of the background noise signals among the 500 audio signals used for learning. More precisely, the proportions of correct classification obtained for each class are the following:

  • 100% for the "sizzle" class,
  • 96.4% for the "breath" class,
  • 79.2% for the "environment" class,
  • 95.9% for the "intelligible" class.

It can be noted that the "environment" class obtains a lower correct classification rate than the other classes. This result is due to the differentiation between "breath" and "environment" noises, which can sometimes be difficult to make, owing to the similarity of certain sounds that could be placed in either of these two classes, for example the sound of wind or of a hair dryer.

The indicators retained for the classification model according to the invention are defined in greater detail below.

The time indicator, referred to in the remainder of the description as "IND_TMP", is characteristic of the variation of the sound level of any noise signal and is defined as the standard deviation of the power values of all the considered frames of the signal. First, a power value is determined for each frame. Each frame is composed of 512 samples, with an overlap of 256 samples between successive frames. For a sampling frequency of 8000 Hz, this corresponds to a duration of 64 ms (milliseconds) per frame, with an overlap of 32 ms. This 50% overlap is used to obtain continuity between successive frames, as defined in Document [5]: "P.56 Objective measurement of active speech level", ITU-T Recommendation, 1993.

When the noise to be classified is longer than one frame, the acoustic power value for each frame can be defined by the following formula:

$$P_{\mathrm{frame}} = 10\,\log_{10}\!\left(\frac{1}{L_{\mathrm{frame}}}\sum_{i=1}^{L_{\mathrm{frame}}} x_i^2\right)\qquad(1)$$

Where: "frame" denotes the index of the frame being evaluated; L_frame denotes the length of the frame (512 samples); x_i is the amplitude of sample i; and "log" denotes the decimal logarithm. The logarithm of the computed mean is thus taken to obtain one power value per frame.

The value of the time indicator "IND_TMP" of the considered background noise is then defined as the standard deviation of all the power values obtained, by the following relation:

$$\mathrm{IND\_TMP} = \sqrt{\frac{1}{N_{\mathrm{frame}}}\sum_{i=1}^{N_{\mathrm{frame}}}\left(P_i - \langle P\rangle\right)^2}\qquad(2)$$

Where: N_frame represents the number of frames present in the considered background noise; P_i represents the power value for frame i; and ⟨P⟩ is the mean power over all frames.

According to the time indicator IND_TMP, the more non-stationary a sound, the higher the value obtained for this indicator.
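A direct transcription of equations (1) and (2) into Python, assuming a NumPy array of samples at 8 kHz (the function name and the small floor avoiding log(0) are assumptions):

```python
import numpy as np

def ind_tmp(x: np.ndarray, frame_len: int = 512, hop: int = 256) -> float:
    """Time indicator: standard deviation of per-frame powers in dB,
    computed over 512-sample frames with 50% overlap (eqs. (1), (2))."""
    powers = []
    for start in range(0, len(x) - frame_len + 1, hop):
        frame = x[start:start + frame_len]
        # eq. (1): per-frame power in dB; a tiny floor avoids log10(0)
        powers.append(10.0 * np.log10(np.mean(frame ** 2) + 1e-12))
    return float(np.std(powers))  # eq. (2): std over all frame powers
```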

The frequency indicator, referred to in the remainder of the description as "IND_FRQ" and characteristic of the spectral flux of the noise signal, is computed from the power spectral density (PSD) of the signal. The PSD of a signal, derived from the Fourier transform of the autocorrelation function of the signal, characterizes the spectral envelope of the signal, in order to obtain information on the frequency content of the signal at a given moment, such as formants, harmonics, etc. In the embodiment presented, this indicator is determined per frame of 256 samples, corresponding to a duration of 32 ms for a sampling frequency of 8 kHz. Unlike the time indicator, there is no frame overlap.

The spectral flux (SF), also referred to as the "variation of the spectrum amplitude", is a measure evaluating the rate of change of the power spectrum of a signal over time. This indicator is computed from the normalized cross-correlation between two successive amplitude spectra a_k(t-1) and a_k(t). The spectral flux (SF) can be defined by the following formula:

$$SF_{\mathrm{frame}} = 1 - \frac{\sum_k a_k(t-1)\,a_k(t)}{\sqrt{\sum_k a_k(t-1)^2}\,\sqrt{\sum_k a_k(t)^2}}\qquad(3)$$

Où : "k" est un indice représentant les différentes composantes fréquentielles, et "t" un indice représentant les trames successives sans recouvrement, composées de 256 échantillons chacune.Where: " k " is an index representing the different frequency components, and " t " is an index representing successive frames without overlapping, consisting of 256 samples each.

In other words, a value of the spectral flux (SF) corresponds to the difference in amplitude of the spectral vector between two successive frames. This value is close to zero if the successive spectra are similar, and close to 1 for very different successive spectra. The value of the spectral flux is high for a music signal, because a musical signal varies greatly from one frame to the next. For speech, with its alternation of periods of stability (vowels) and transitions (consonant/vowel), the spectral flux measure takes very different values and varies strongly over the course of a sentence.

When the noise to be classified is longer than one frame, the final expression retained for the frequency indicator is defined as the mean of the spectral flux values over all the frames of the signal, as defined in the following equation:

$$\mathrm{IND\_FRQ} = \frac{1}{N_{\mathrm{frame}}}\sum_{i=1}^{N_{\mathrm{frame}}} SF_i\qquad(4)$$
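The two equations above translate directly into code. A sketch assuming magnitude spectra obtained by FFT as the amplitudes a_k (the patent computes them from the PSD, so this is an approximation; names are illustrative):

```python
import numpy as np

def ind_frq(x: np.ndarray, frame_len: int = 256) -> float:
    """Frequency indicator: mean spectral flux over successive
    non-overlapping 256-sample frames (eqs. (3), (4))."""
    n = len(x) // frame_len
    spectra = [np.abs(np.fft.rfft(x[i * frame_len:(i + 1) * frame_len]))
               for i in range(n)]
    flux = []
    for prev, cur in zip(spectra, spectra[1:]):
        # eq. (3): one minus the normalized cross-correlation
        denom = np.sqrt(np.sum(prev ** 2) * np.sum(cur ** 2)) + 1e-12
        flux.append(1.0 - np.sum(prev * cur) / denom)
    return float(np.mean(flux))  # eq. (4): mean over all frame pairs
```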

Using the background noise classification model

The classification model of the invention, obtained as explained above, is used according to the invention to determine, on the basis of indicators extracted from any noisy audio signal, the noise class to which this noisy signal belongs among the set of classes defined for the classification model.

Figures 3a and 3b show a flowchart illustrating a method for the objective evaluation of the voice quality of a speech signal, according to an embodiment of the invention. According to the invention, the background noise classification method is implemented prior to the voice quality evaluation phase proper.

As shown in figure 3a, the first step S1 consists in obtaining an audio signal which, in the embodiment presented here, is a speech signal obtained in analog or digital form. In this embodiment, as illustrated by step S3, a voice activity detection (DAV) operation is then applied to the speech signal. The purpose of this voice activity detection is to separate, in the input audio signal, the periods of the signal containing speech (possibly noisy) from the periods of the signal containing no speech (periods of silence), which can therefore contain only noise. Thus, during this step, the active zones of the signal, that is, those carrying the noisy voice message, are separated from the noisy inactive zones. In practice, in this embodiment, the voice activity detection technique implemented is the one described in the aforementioned Document [5] ("P.56 Objective measurement of active speech level", ITU-T Recommendation, 1993).

In summary, the principle of the DAV technique used consists in:

  • detecting the envelope of the signal,
  • comparing the envelope of the signal with a fixed threshold, taking into account a speech hang-over time,
  • determining the signal frames whose envelope is above the threshold (DAV = 1 for active frames) and below it (DAV = 0 for background noise). This threshold is set at 15.9 dB (decibels) below the average active speech level (signal power over the active frames).

Once voice activity detection has been performed on the audio signal, the background noise signal generated (step S5) is the signal consisting of the periods of the audio signal for which the result of the voice activity detection is zero.
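The full P.56 algorithm involves an iterative active-level estimate and hang-over logic; the sketch below is a deliberately simplified envelope-threshold VAD that only conveys the principle of steps S3 and S5 (the crude active-level estimate is an assumption):

```python
import numpy as np

def extract_noise(x: np.ndarray, frame_len: int = 256,
                  margin_db: float = 15.9) -> np.ndarray:
    """Simplified stand-in for the P.56-based DAV: return the
    concatenated inactive (DAV = 0) frames, i.e. the noise signal."""
    n = len(x) // frame_len
    frames = x[:n * frame_len].reshape(n, frame_len)
    env_db = 10.0 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    active_level = env_db.max()                   # crude active-level estimate
    inactive = env_db < active_level - margin_db  # 15.9 dB below active level
    return frames[inactive].ravel()
```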

Once the noise signal has been generated, the audio parameters consisting of the two indicators mentioned above (time indicator IND_TMP and frequency indicator IND_FRQ), which were selected when the classification model was obtained (learning phase), are extracted from the noise signal in step S7.

The tests S9, S11 (fig. 3a) and S17 (fig. 3b), and the associated decision branches, then correspond to the decision tree described above in relation to figure 2. Thus, in step S9, the value of the time indicator (IND_TMP) obtained for the noise signal is compared with the first threshold TH1 mentioned above. If the value of the time indicator is greater than the threshold TH1 (S9, no), then the noise signal is of the non-stationary type and the test of step S11 is applied.

In test S11, the frequency indicator (IND_FRQ) is in turn compared with the second threshold TH2 mentioned above. If the indicator IND_FRQ is greater than the threshold TH2 (S11, no), the class (CL) of the noise signal is determined (step S13) as CL1: "intelligible noise"; otherwise the class of the noise signal is determined (step S15) as CL2: "environment noise". The classification of the analyzed noise signal is then complete, and the evaluation of the voice quality of the speech signal can then be performed (fig. 3b, step S23).

In the initial test S9, if the value of the time indicator is below the threshold TH1 (S9, yes), then the noise signal is of the stationary type and the test of step S17 is applied (fig. 3b). In test S17, the value of the frequency indicator IND_FRQ is compared with the third threshold TH3 (defined above). If the indicator IND_FRQ is greater than the threshold TH3 (S17, no), the class (CL) of the noise signal is determined (step S19) as CL3: "breath noise"; otherwise the class of the noise signal is determined (step S21) as CL4: "sizzle noise". The classification of the analyzed noise signal is then complete, and the evaluation of the voice quality of the speech signal can then be performed (fig. 3b, step S23).

Figure 4 details the step (fig. 3b, S23) of evaluating the voice quality of a speech signal as a function of the classification of the background noise contained in the speech signal. As shown in figure 4, the voice quality evaluation operation begins with step S231, in which the total loudness of the noise signal (SIG_N) is estimated.

It is recalled here that loudness is defined as the subjective intensity of a sound; it is expressed in sones or in phons. The total loudness measured subjectively (perceived loudness) can nevertheless be estimated using known objective models such as the Zwicker model or the Moore model.

The Zwicker model is described, for example, in "Psychoacoustics: Facts and Models", by E. Zwicker and H. Fastl, Berlin, Springer, 2nd updated edition, 14 April 1999.

The Moore model is described, for example, in "A Model for the Prediction of Thresholds, Loudness, and Partial Loudness", by B.C.J. Moore, B.R. Glasberg and T. Baer, Journal of the Audio Engineering Society 45(4): 224-240, 1997.

In the embodiment set forth here, the total loudness of the noise signal is estimated using the Zwicker model; however, the invention can also be implemented using the Moore model. Moreover, the more accurate the objective loudness estimation model used, the better the voice quality evaluation according to the invention will be.

The estimate of the total loudness, expressed in sones, of the noise signal SIG_N, obtained using the Zwicker model, is referred to here as "N". Thus, at the end of step S231 shown in figure 4, an estimate of the loudness of the noise signal is obtained.

The following step S233 is the actual step of evaluating the voice quality of the speech signal. According to the method, a mathematical formula to be used is first selected from among four, as a function of the noise class CLi (i = 1, 2, 3, 4) obtained during the prior background noise classification phase (the derivation of the aforementioned formulas is detailed below).

The general expression of the selected formula is the following:

$$\mathrm{MOS\_CL}i = C_{i-1} + C_i \times f(N)\qquad(5)$$

Where:

  • MOS_CLi is the score computed for the noise signal SIG_N of class CLi;
  • f(N) is a mathematical function of the total loudness N estimated for the noise signal, according to a loudness model such as the Zwicker model;
  • C_{i-1} and C_i are two coefficients defined for the mathematical formula associated with class CLi.

The mathematical expression of formula (5) above highlights the fact that, according to the invention, a voice quality evaluation model is available for each background noise class (CL1-CL4), as a function of the total loudness of the background noise.

Thus, in the embodiment set forth here, the voice quality score for the speech signal, MOS_CLi, is obtained, on the one hand, as a function of the classification obtained for the background noise present in the speech signal, through the choice of the coefficients (C_{i-1}; C_i) of the mathematical formula corresponding to the background noise class, and on the other hand, as a function of the loudness N estimated for the background noise.

Obtaining the voice quality evaluation models per background noise class

The way the voice quality evaluation models are obtained for each background noise class (CL1-CL4) will now be detailed. Figure 1, described above and taken from the aforementioned Document [1], shows the mean opinion scores (MOS LQSN) with the associated confidence intervals, computed from the scores given by listening testers to audio messages containing six different types of background noise, according to the ACR (Absolute Category Rating) method. The various types of noise are the following: pink noise, stationary speech noise (BPS), electrical noise, city noise, restaurant noise, and television noise or voices, each noise being considered at three different levels of perceived loudness. The loudness levels of the various types of background noise are obtained subjectively in this test.

More precisely, the sound database used for the first test described in Document [1] (see section 2 of the document) consists of eight sentences, half of which are spoken by two men and the other half by two women. Each of these spoken sentences constitutes a speech signal (8 speech signals). Then, each of the six aforementioned background noises is added to each of these speech signals, yielding 48 noisy speech signals (8 signals per type of background noise). During the test, each of these noisy speech signals is played to the listening testers at three different equal-loudness levels, which amounts to 144 different noisy signals. In addition, pink background noise (SNR = 44) is added to each of the 8 initial speech signals (spoken sentences), to represent the condition corresponding to a speech signal without background noise. In all, 152 speech signals were used in the first test.

As regards the equal-loudness levels used, these were determined beforehand by the adjustment test of the first test described in Document [1] (section 2). This loudness adjustment test is consistent with the results described in "La sonie des sons impulsionnels : Perception, Mesures et Modèles", doctoral thesis of Isabelle Boullet, Université de Aix-Marseille 2, 2005. In short, this test consists in asking people to modify the level of each noise signal so that the loudness of the signal equals the loudness of the reference signal, which is the pink noise. In practice, the three loudness levels (expressed in sones) determined for each of the six types of background noise used are the following: 4.6 sones; 8.2 sones; 14 sones. The loudness level of each of the reference speech signals without background noise (that is, containing only pink noise with SNR = 44) is 1.67 sones.

From the results of the test illustrated in figure 1, the six types of background noise used made it possible to define the four background noise classes used according to the invention, as follows:

  • class 1 (CL1: "intelligible") corresponds to the TV/speech noises;
  • class 2 (CL2: "environment") corresponds to the grouping of city noises and restaurant noises;
  • class 3 (CL3: "breath") groups pink noise and stationary speech noise (BPS); and
  • class 4 (CL4: "sizzle") corresponds to the electrical noises.

Thus, each test audio signal can be characterized by its background noise class (CL1-CL4), its perceived loudness level (in sones: 1.67; 4.6; 8.2; 14) and the MOS-LQSN (Listening Quality Subjective Narrowband) score assigned to it in the preliminary subjective test (Document [1], "Preliminary Experiment"). In summary, in this test, 24 subjects took part in an evaluation of the overall quality of the audio signals, according to the ACR method. In the end, 152 MOS-LQSN scores were obtained by taking the mean of the scores given by the 24 subjects for each of the 152 test audio signals, which are distributed among the four background noise classes defined according to the invention.

Figure 5 graphically shows the result of the aforementioned subjective tests. The 152 test conditions are represented by points, each point corresponding, on the x-axis, to a loudness level and, on the y-axis, to the assigned quality score (MOS-LQSN); the points are furthermore differentiated according to the class of the background noise contained in the corresponding audio signal.

According to the invention, starting from the point clouds resulting from the subjective tests, the modeling of the voice quality evaluation per background noise class was carried out by mathematical regression. In practice, several types of regression were tested (polynomial, linear), but it is the logarithmic regression as a function of the perceived loudness, expressed in sones, that yields the best correlations with the perceived voice quality scores.

In figure 5, the curves obtained by logarithmic regression can be observed, relating the perceived quality scores to the perceived loudness, expressed in sones, for audio signals corresponding to the background noise classes defined according to the invention. Figure 5 also gives the equations obtained for each of the four curves obtained by logarithmic regression. Thus the first equation at the top right corresponds to class 1, the second to class 2, the third to class 3, and the fourth to class 4.

For each of these equations, the value associated with R² corresponds to the correlation coefficient between the results from the subjective test and the corresponding logarithmic regression.

Thus, equation (5) set forth above is instantiated, in practice, for the different classes as follows:

$$\mathrm{MOS\_CL}i = C_{i-1} + C_i \times \ln(N)\qquad(6)$$

With:

  • ln(N): natural logarithm of the total loudness value N, computed and expressed in sones;
  • (C_{i-1}; C_i) = (4.4554; -0.5888) for i = 1 (class 1);
  • (C_{i-1}; C_i) = (4.7046; -0.7869) for i = 2 (class 2);
  • (C_{i-1}; C_i) = (4.9015; -0.9592) for i = 3 (class 3);
  • (C_{i-1}; C_i) = (4.7489; -0.9608) for i = 4 (class 4).
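Equation (6) and the four coefficient pairs above translate directly into code. A minimal sketch, assuming the loudness value N comes from an external Zwicker-model implementation (function and variable names are illustrative):

```python
import math

# (C_{i-1}, C_i) per background-noise class, from the regressions above.
COEFFS = {1: (4.4554, -0.5888),   # class 1: intelligible
          2: (4.7046, -0.7869),   # class 2: environment
          3: (4.9015, -0.9592),   # class 3: breath
          4: (4.7489, -0.9608)}   # class 4: sizzle

def mos_cli(noise_class: int, loudness_sones: float) -> float:
    """Objective score per eq. (6): MOS_CLi = C_{i-1} + C_i * ln(N)."""
    c0, c1 = COEFFS[noise_class]
    return c0 + c1 * math.log(loudness_sones)
```

For example, a "breath" class noise (class 3) with an estimated total loudness of 8.2 sones yields mos_cli(3, 8.2) ≈ 2.88.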

In the context of the objective voice quality evaluation model according to the invention, the perceived loudness value N, a value obtained subjectively in the aforementioned subjective tests, is obtained by estimation according to a known loudness estimation method, namely the Zwicker model in the embodiment set forth here.

Figure 6 shows graphically the degree of correlation between the quality scores obtained in the subjective tests and those obtained using the objective quality evaluation method according to the present invention. As can be seen in Figure 6, a very good correlation, of the order of 93% (r = 0.93205), is obtained between the MOS-LQSN scores resulting from the subjective test described above (x-axis) and the objective MOS scores (y-axis) obtained with the quality evaluation model according to the invention, as defined by equation (6) above.
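Checking such a figure amounts to computing a Pearson correlation between the two score series; a minimal sketch, with hypothetical sample values:

```python
import numpy as np

subjective = np.array([4.2, 3.9, 3.1, 2.6, 2.0])  # MOS-LQSN from listeners (hypothetical)
objective = np.array([4.0, 3.8, 3.3, 2.5, 2.2])   # MOS from the model (hypothetical)

# Pearson correlation coefficient r between the two score series
r = np.corrcoef(subjective, objective)[0, 1]
print(f"r = {r:.5f}")
```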

With reference to Figure 7, a device for the objective evaluation of the voice quality of a speech signal according to the invention will now be described in functional terms. This voice quality evaluation device is designed to implement the voice quality evaluation method according to the invention that has just been described above.

As shown in Figure 7, the device 1 for evaluating the voice quality of a speech signal comprises a module 11 for extracting, from the audio signal (SIG), a background noise signal (SIG_N), referred to as the noise signal.

The speech signal (SIG) supplied as input to the voice quality evaluation device 1 can be delivered to the device 1 from a communication network 2, such as a voice over IP network, for example.

According to the embodiment described, the module 11 is in practice a voice activity detection (VAD) module. The VAD module 11 then provides a noise signal SIG_N which is fed to a module 13 for extracting parameters, that is to say for calculating the parameters constituted by the time and frequency indicators, IND_TMP and IND_FRQ respectively. The calculated indicators are then supplied to a classification module 15, implementing the classification model according to the invention described above, which determines, as a function of the values of the indicators used, the background noise class (CL) to which the noise signal SIG_N belongs, according to the algorithm described in connection with Figures 3a and 3b.
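Putting modules 11, 13 and 15 together, the processing chain can be sketched as follows. This is a minimal illustration only: the actual VAD, the exact indicator definitions, the threshold values TH1, TH2 and TH3, and the direction of each comparison are not fixed by the text reproduced here, so every concrete choice below is an assumption.

```python
import numpy as np

def extract_noise(signal, fs, frame_ms=20, energy_thresh=1e-4):
    """Placeholder VAD (module 11): keep frames whose energy is below a
    threshold, i.e. the regions without voice activity form the noise signal."""
    n = int(fs * frame_ms / 1000)
    frames = [signal[i:i + n] for i in range(0, len(signal) - n, n)]
    noise = [f for f in frames if np.mean(f ** 2) < energy_thresh]
    return np.concatenate(noise) if noise else np.array([])

def indicators(noise, fs, frame_ms=20):
    """Module 13 (placeholder definitions): IND_TMP from the variation of the
    frame sound level, IND_FRQ from the variation of the spectrum amplitude."""
    n = int(fs * frame_ms / 1000)
    levels = [10 * np.log10(np.mean(noise[i:i + n] ** 2) + 1e-12)
              for i in range(0, len(noise) - n, n)]
    ind_tmp = float(np.std(levels))
    spectrum = np.abs(np.fft.rfft(noise))
    ind_frq = float(np.std(np.diff(spectrum)) / (np.mean(spectrum) + 1e-12))
    return ind_tmp, ind_frq

def classify(ind_tmp, ind_frq, th1=2.0, th2=1.0, th3=0.5):
    """Module 15: decision tree of Figures 3a/3b (threshold values and
    comparison directions assumed)."""
    if ind_tmp > th1:                      # non-stationary noise
        return 1 if ind_frq > th2 else 2   # class 1 or class 2
    return 3 if ind_frq > th3 else 4       # stationary: class 3 or class 4
```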

The result of the classification performed by the background noise classification module 15 is then supplied to the voice quality evaluation module 17. The latter implements the voice quality evaluation algorithm described above in connection with Figure 4, so as ultimately to deliver an objective voice quality score relating to the input speech signal (SIG).

In practice, the voice quality evaluation device according to the invention is implemented in the form of software means, that is to say computer program modules, performing the functions described in connection with Figures 3a, 3b, 4 and 5.

Moreover, in a particular implementation of the invention, the voice quality evaluation module 17 can be incorporated in a computer machine separate from the one housing the other modules. In particular, the background noise class information (CL) can be routed via a communication network to the machine or server responsible for performing the voice quality evaluation. Furthermore, according to a particular application of the invention, for example in the field of voice quality supervision on a communication network, each voice quality score calculated by the module 17 is sent to a piece of collection equipment, local or on the network, responsible for collecting this quality information in order to establish an overall quality score, established for example as a function of time and/or of the type of communication and/or of other types of quality scores.
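For this distributed variant, where the class information and the resulting scores travel over the network, a reporting call might be sketched as follows; the endpoint URL, the payload fields and the use of JSON over HTTP are purely illustrative assumptions, since no wire format is specified here.

```python
import json
import urllib.request

def report_measurement(cl, loudness, url="http://collector.example/scores"):
    """Ship the background noise class and total loudness to a remote
    evaluation/collection server. Endpoint, payload and transport are all
    illustrative; the patent does not specify a wire format."""
    payload = json.dumps({"class": cl, "loudness_sones": loudness}).encode()
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status
```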

The aforementioned program modules are implemented when they are loaded and executed in a computer or computing device. Such a computing device may also be constituted by any processor-based system integrated in a communication terminal or in a piece of communication network equipment.

It will also be noted that a computer program according to the invention, the purpose of which is the implementation of the invention when it is executed by an appropriate computer system, can be stored on information carriers of various types. Indeed, such an information carrier may be constituted by any entity or device capable of storing a program according to the invention.

For example, the carrier in question may comprise a hardware storage means, such as a memory, for example a CD-ROM or a microelectronic circuit memory of the ROM or RAM type, or else a magnetic recording means, for example a hard disk.

From a design point of view, a computer program according to the invention can use any programming language and be in the form of source code, object code, or code intermediate between source code and object code (e.g. a partially compiled form), or in any other form desirable for implementing a method according to the invention.

Claims (15)

  1. Method for objective evaluation of the voice quality of a speech signal, characterized in that it comprises the steps for:
    - classification (S3-S21) of the background noises contained in the speech signal according to a predefined set of classes of background noises (CL1-CL4);
    - evaluation (S23) of the voice quality of the speech signal, according to at least the classification obtained relating to the background noises present in the speech signal.
  2. Method according to Claim 1, in which the step for classification of the background noises contained in the speech signal includes the steps for:
    - extraction (S3, S5) from the speech signal of a background noise signal, referred to as noise signal;
    - calculation (S7) of audio parameters of the noise signal;
    - classification (S9-S21) of the background noises contained in the noise signal as a function of the calculated audio parameters, according to said set of classes of background noise (CL1-CL4).
  3. Method according to Claim 2, in which the step (S23) for evaluation of the voice quality of the speech signal comprises the steps for:
    - estimation (S231) of the total loudness (N) of the noise signal (SIG_N);
    - calculation of a voice quality score (MOS_CLi) as a function of the class (CLi) of background noise present in the speech signal, and of the total loudness (N) estimated for the noise signal.
  4. Method according to Claim 3, in which a voice quality score (MOS_CLi) is obtained according to a mathematical formula of the following general form: MOS_CLi = Ci-1 + Ci × f(N)

    where:
    MOS_CLi is the score calculated for the noise signal;
    f(N) is a mathematical function of the total loudness, N, estimated for the noise signal;
    Ci-1 and Ci are two coefficients defined for the class (CLi) of background noise obtained for the noise signal.
  5. Method according to Claim 4, in which the function f(N) is the natural logarithm, Ln(N), of the total loudness N expressed in sones.
  6. Method according to one of Claims 3 to 5, in which the total loudness of the noise signal is estimated according to an objective model for estimation of the loudness.
  7. Method according to any one of Claims 2 to 6, in which the step (S7) for calculation of audio parameters of the noise signal comprises the calculation of a first parameter (IND_TMP), referred to as time indicator, relating to the time variation of the noise signal, and of a second parameter (IND_FRQ), referred to as frequency indicator, relating to the frequency spectrum of the noise signal.
  8. Method according to Claim 7, in which the time indicator (IND_TMP) is obtained from a calculation of variation of the sound level of the noise signal, and the frequency indicator (IND_FRQ) is obtained from a calculation of variation of the amplitude of the frequency spectrum of the noise signal.
  9. Method according to any one of the preceding claims, in which, in order to classify the background noises associated with the noise signal, the method comprises the steps consisting in:
    - comparing (S9) the value of the time indicator (IND_TMP) obtained for the noise signal with a first threshold (TH1) and determining, depending on the result of this comparison, whether the noise signal is stationary or not;
    - when the noise signal is identified as non-stationary, comparing (S11) the value of the frequency indicator with a second threshold (TH2) and determining (S13, S15), depending on the result of this comparison, whether the noise signal belongs to a first class (CL1) or to a second class (CL2) of background noise;
    - when the noise signal is identified as stationary, comparing (S17) the value of the frequency indicator with a third threshold (TH3) and determining (S19, S21), depending on the result of this comparison, whether the noise signal belongs to a third class (CL3) or to a fourth class (CL4) of background noise.
  10. Method according to any one of the preceding claims, in which the set of classes comprises at least the following classes:
    - intelligible noise;
    - environmental noise;
    - blowing noise;
    - crackling noise.
  11. Method according to any one of Claims 2 to 10, in which the noise signal is extracted by application to the speech signal of an operation for detection of voice activity, the regions of the speech signal not exhibiting voice activity constituting the noise signal.
  12. Device for objective evaluation of the voice quality of a speech signal, characterized in that it comprises:
    - means of classification (11-15) of the background noises contained in the speech signal according to a predefined set of classes of background noise (CL1-CL4);
    - means of evaluation (17) of the voice quality of the speech signal as a function of at least the classification obtained relating to the background noises present in the speech signal.
  13. Device according to Claim 12, comprising:
    - a module (11) for extraction from the speech signal (SIG) of a background noise signal, referred to as noise signal;
    - a module (13) for calculation of audio parameters of the noise signal;
    - a module (15) for classification of the background noises contained in the noise signal as a function of the calculated audio parameters, according to a predefined set of classes of background noise (CL);
    - a module (17) for evaluation of the voice quality of the speech signal as a function of at least the classification obtained relating to the background noises present in the speech signal.
  14. Device according to Claim 13, furthermore comprising means designed for the implementation of a method according to any one of Claims 2 to 11.
  15. Computer program on information media, said program comprising program instructions designed for the implementation of a method according to any one of Claims 1 to 11, when said program is loaded and executed in a computer.
EP10723655A 2009-04-17 2010-04-12 Method and device for the objective evaluation of the voice quality of a speech signal taking into account the classification of the background noise contained in the signal Active EP2419900B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR0952531A FR2944640A1 (en) 2009-04-17 2009-04-17 METHOD AND DEVICE FOR OBJECTIVE EVALUATION OF THE VOICE QUALITY OF A SPEECH SIGNAL TAKING INTO ACCOUNT THE CLASSIFICATION OF THE BACKGROUND NOISE CONTAINED IN THE SIGNAL.
PCT/FR2010/050699 WO2010119216A1 (en) 2009-04-17 2010-04-12 Method and device for the objective evaluation of the voice quality of a speech signal taking into account the classification of the background noise contained in the signal

Publications (2)

Publication Number Publication Date
EP2419900A1 EP2419900A1 (en) 2012-02-22
EP2419900B1 true EP2419900B1 (en) 2013-03-13

Family

ID=41137230

Family Applications (1)

Application Number Title Priority Date Filing Date
EP10723655A Active EP2419900B1 (en) 2009-04-17 2010-04-12 Method and device for the objective evaluation of the voice quality of a speech signal taking into account the classification of the background noise contained in the signal

Country Status (4)

Country Link
US (1) US8886529B2 (en)
EP (1) EP2419900B1 (en)
FR (1) FR2944640A1 (en)
WO (1) WO2010119216A1 (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2944640A1 (en) * 2009-04-17 2010-10-22 France Telecom METHOD AND DEVICE FOR OBJECTIVE EVALUATION OF THE VOICE QUALITY OF A SPEECH SIGNAL TAKING INTO ACCOUNT THE CLASSIFICATION OF THE BACKGROUND NOISE CONTAINED IN THE SIGNAL.
EP2444966B1 (en) * 2009-06-19 2019-07-10 Fujitsu Limited Audio signal processing device and audio signal processing method
WO2012020394A2 (en) * 2010-08-11 2012-02-16 Bone Tone Communications Ltd. Background sound removal for privacy and personalization use
CN102231279B (en) * 2011-05-11 2012-09-26 武汉大学 Objective evaluation system and method of voice frequency quality based on hearing attention
KR101406398B1 (en) * 2012-06-29 2014-06-13 인텔렉추얼디스커버리 주식회사 Apparatus, method and recording medium for evaluating user sound source
US9679555B2 (en) 2013-06-26 2017-06-13 Qualcomm Incorporated Systems and methods for measuring speech signal quality
CN106409310B (en) * 2013-08-06 2019-11-19 华为技术有限公司 A kind of audio signal classification method and apparatus
US10148526B2 (en) * 2013-11-20 2018-12-04 International Business Machines Corporation Determining quality of experience for communication sessions
US11888919B2 (en) 2013-11-20 2024-01-30 International Business Machines Corporation Determining quality of experience for communication sessions
US10079031B2 (en) * 2015-09-23 2018-09-18 Marvell World Trade Ltd. Residual noise suppression
US9749733B1 (en) * 2016-04-07 2017-08-29 Harman Intenational Industries, Incorporated Approach for detecting alert signals in changing environments
US9984701B2 (en) 2016-06-10 2018-05-29 Apple Inc. Noise detection and removal systems, and related methods
US10311863B2 (en) * 2016-09-02 2019-06-04 Disney Enterprises, Inc. Classifying segments of speech based on acoustic features and context
CN107093432B (en) * 2017-05-19 2019-12-13 江苏百应信息技术有限公司 Voice quality evaluation system for communication system
US10504538B2 (en) 2017-06-01 2019-12-10 Sorenson Ip Holdings, Llc Noise reduction by application of two thresholds in each frequency band in audio signals
CN111326169B (en) * 2018-12-17 2023-11-10 中国移动通信集团北京有限公司 Voice quality evaluation method and device
US11350885B2 (en) * 2019-02-08 2022-06-07 Samsung Electronics Co., Ltd. System and method for continuous privacy-preserved audio collection
CN110610723B (en) * 2019-09-20 2022-02-22 中国第一汽车股份有限公司 Method, device, equipment and storage medium for evaluating sound quality in vehicle
CN113393863B (en) * 2021-06-10 2023-11-03 北京字跳网络技术有限公司 Voice evaluation method, device and equipment
CN114486286A (en) * 2022-01-12 2022-05-13 中国重汽集团济南动力有限公司 Method and equipment for evaluating quality of door closing sound of vehicle
CN115334349B (en) * 2022-07-15 2024-01-02 北京达佳互联信息技术有限公司 Audio processing method, device, electronic equipment and storage medium

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5504473A (en) * 1993-07-22 1996-04-02 Digital Security Controls Ltd. Method of analyzing signal quality
JP3484757B2 (en) * 1994-05-13 2004-01-06 ソニー株式会社 Noise reduction method and noise section detection method for voice signal
JP3484801B2 (en) * 1995-02-17 2004-01-06 ソニー株式会社 Method and apparatus for reducing noise of audio signal
US5684921A (en) * 1995-07-13 1997-11-04 U S West Technologies, Inc. Method and system for identifying a corrupted speech message signal
US6202046B1 (en) * 1997-01-23 2001-03-13 Kabushiki Kaisha Toshiba Background noise/speech classification method
US6330532B1 (en) * 1999-07-19 2001-12-11 Qualcomm Incorporated Method and apparatus for maintaining a target bit rate in a speech coder
US6157670A (en) * 1999-08-10 2000-12-05 Telogy Networks, Inc. Background energy estimation
SG97885A1 (en) * 2000-05-05 2003-08-20 Univ Nanyang Noise canceler system with adaptive cross-talk filters
US7472059B2 (en) * 2000-12-08 2008-12-30 Qualcomm Incorporated Method and apparatus for robust speech classification
DE10142846A1 (en) * 2001-08-29 2003-03-20 Deutsche Telekom Ag Procedure for the correction of measured speech quality values
US7461003B1 (en) * 2003-10-22 2008-12-02 Tellabs Operations, Inc. Methods and apparatus for improving the quality of speech signals
US20090187402A1 (en) * 2004-06-04 2009-07-23 Koninklijke Philips Electronics, N.V. Performance Prediction For An Interactive Speech Recognition System
WO2006035269A1 (en) * 2004-06-15 2006-04-06 Nortel Networks Limited Method and apparatus for non-intrusive single-ended voice quality assessment in voip
WO2006136900A1 (en) * 2005-06-15 2006-12-28 Nortel Networks Limited Method and apparatus for non-intrusive single-ended voice quality assessment in voip
FR2894707A1 (en) * 2005-12-09 2007-06-15 France Telecom METHOD FOR MEASURING THE PERCUSED QUALITY OF A DEGRADED AUDIO SIGNAL BY THE PRESENCE OF NOISE
FR2944640A1 (en) * 2009-04-17 2010-10-22 France Telecom METHOD AND DEVICE FOR OBJECTIVE EVALUATION OF THE VOICE QUALITY OF A SPEECH SIGNAL TAKING INTO ACCOUNT THE CLASSIFICATION OF THE BACKGROUND NOISE CONTAINED IN THE SIGNAL.

Also Published As

Publication number Publication date
EP2419900A1 (en) 2012-02-22
FR2944640A1 (en) 2010-10-22
WO2010119216A1 (en) 2010-10-21
US20120059650A1 (en) 2012-03-08
US8886529B2 (en) 2014-11-11

Similar Documents

Publication Publication Date Title
EP2419900B1 (en) Method and device for the objective evaluation of the voice quality of a speech signal taking into account the classification of the background noise contained in the signal
EP2415047B1 (en) Classifying background noise contained in an audio signal
Malfait et al. P. 563—The ITU-T standard for single-ended speech quality assessment
EP0867856B1 (en) Method and apparatus for vocal activity detection
EP1468416B1 (en) Method for qualitative evaluation of a digital audio signal
EP1593116B1 (en) Method for differentiated digital voice and music processing, noise filtering, creation of special effects and device for carrying out said method
EP1849157B1 (en) Method of measuring annoyance caused by noise in an audio signal
WO2018146305A1 (en) Method and apparatus for dynamic modifying of the timbre of the voice by frequency shift of the formants of a spectral envelope
WO2003048711A2 (en) Speech detection system in an audio signal in noisy surrounding
EP2795618B1 (en) Method of detecting a predetermined frequency band in an audio data signal, detection device and computer program corresponding thereto
EP0685833B1 (en) Method for speech coding using linear prediction
WO2007066049A1 (en) Method for measuring an audio signal perceived quality degraded by a noise presence
Xie et al. Noisy-to-noisy voice conversion framework with denoising model
Sharma et al. Non-intrusive estimation of speech signal parameters using a frame-based machine learning approach
EP3627510A1 (en) Filtering of an audio signal acquired by a voice recognition system
Jaiswal Influence of silence and noise filtering on speech quality monitoring
WO2006032751A1 (en) Method and device for evaluating the efficiency of a noise reducing function for audio signals
FR2627887A1 (en) SPEECH RECOGNITION SYSTEM AND METHOD OF FORMING MODELS THAT CAN BE USED IN THIS SYSTEM
Barry et al. Audio Inpainting based on Self-similarity for Sound Source Separation Applications
Jaiswal Performance Analysis of Deep Learning Based Speech Quality Model with Mixture of Features
Santos A non-intrusive objective speech intelligibility metric tailored for cochlear implant users in complex listening environments
FR2856506A1 (en) Speech detection method for voice recognition system, involves calculating parameter representative of frame unit at group of fixed frame corresponding to speech frame with respect to another group of frame corresponding to noise frame
FR2847706A1 (en) Voice transformation/speech recognition system having modules transforming input/providing representative characteristic and module processing set providing quality level selected signal

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20111115

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR

DAX Request for extension of the european patent (deleted)
GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 601216

Country of ref document: AT

Kind code of ref document: T

Effective date: 20130315

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: FRENCH

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602010005476

Country of ref document: DE

Effective date: 20130508

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130613

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130313

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130613

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130313

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130624

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 601216

Country of ref document: AT

Kind code of ref document: T

Effective date: 20130313

REG Reference to a national code

Ref country code: NL

Ref legal event code: VDEP

Effective date: 20130313

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130313

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130313

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130313

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130614

REG Reference to a national code

Ref country code: CH

Ref legal event code: PUE

Owner name: ORANGE, FR

Free format text: FORMER OWNER: FRANCE TELECOM, FR

RAP2 Party data changed (patent owner data changed or rights of a patent transferred)

Owner name: ORANGE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130313

BERE Be: lapsed

Owner name: FRANCE TELECOM

Effective date: 20130430

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130313

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130313

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130313

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130313

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130713

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130313

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130715

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130313

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130313

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130313

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20130430

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130313

26N No opposition filed

Effective date: 20131216

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130313

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602010005476

Country of ref document: DE

Effective date: 20131216

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20130412

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20140430

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20140430

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130313

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130313

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130313

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130313

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130313

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20130412

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20100412

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 7

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 8

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 9

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230321

Year of fee payment: 14

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20230321

Year of fee payment: 14

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20230321

Year of fee payment: 14