EP1016072B1 - Method and apparatus for suppressing noise in a digital speech signal - Google Patents

Method and apparatus for suppressing noise in a digital speech signal Download PDF

Info

Publication number
EP1016072B1
EP1016072B1 (application EP98943999A)
Authority
EP
European Patent Office
Prior art keywords
signal
speech signal
noise
frame
spectral
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP98943999A
Other languages
German (de)
French (fr)
Other versions
EP1016072A1 (en
Inventor
Philip Lockwood
Stéphane LUBIARZ
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nortel Networks France SAS
Original Assignee
Matra Nortel Communications SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matra Nortel Communications SAS filed Critical Matra Nortel Communications SAS
Publication of EP1016072A1 publication Critical patent/EP1016072A1/en
Application granted granted Critical
Publication of EP1016072B1 publication Critical patent/EP1016072B1/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 Noise filtering
    • G10L21/0216 Noise filtering characterised by the method used for estimating noise
    • G10L21/0232 Processing in the frequency domain
    • G10L21/0264 Noise filtering characterised by the type of parameter measurement, e.g. correlation techniques, zero crossing techniques or predictive techniques

Definitions

  • The present invention relates to digital techniques for denoising speech signals. It relates more particularly to denoising by nonlinear spectral subtraction.
  • This technique allows acceptable denoising to be obtained for strongly voiced signals, but completely distorts the speech signal. Faced with relatively coherent noise, such as that caused by the contact of car tires or the clicking of an engine, the noise may prove more easily predictable than the unvoiced speech signal. There is then a tendency to project the speech signal into part of the noise vector space.
  • the method takes no account of the speech signal, in particular of the unvoiced speech regions where predictability is reduced.
  • predicting the speech signal from a reduced set of parameters cannot take into account all the intrinsic richness of speech. One sees here the limits of techniques based solely on mathematical considerations that ignore the special character of speech.
  • a main object of the present invention is to propose a new denoising technique that takes into account the characteristics of speech perception by the human ear, allowing effective denoising without deteriorating the perception of speech.
  • a method as set out in claim 1 and a device as set out in claim 19 are provided.
  • the second subtracted quantity can in particular be limited to the fraction of the increased estimate of the corresponding spectral component of the noise which exceeds the masking curve. This way of proceeding is based on the observation that it suffices to denoise the audible noise frequencies. Conversely, there is no point in eliminating noise that is masked by the speech.
  • the overestimation of the spectral envelope of the noise is generally desirable so that the increased estimate thus obtained is robust to sudden variations of the noise.
  • this overestimation usually has the downside of distorting the speech signal when it becomes too large. It tends to degrade the voiced character of the speech signal by removing some of its predictability.
  • This disadvantage is particularly inconvenient in telephony conditions, because it is during the voiced zones that the speech signal is the most energetic.
  • the invention makes it possible to greatly reduce this drawback.
  • the denoising system shown in FIG. 1 processes a digital speech signal s.
  • the signal frame is transformed in the frequency domain by a module 11 applying a conventional fast Fourier transform (TFR) algorithm to calculate the module of the signal spectrum.
  • TFR fast Fourier transform
  • the frequency resolution available at the output of the fast Fourier transform is not used as such; instead a lower resolution is used, determined by a number I of frequency bands covering the band [0, F_e/2] of the signal.
  • a module 12 calculates the respective averages of the spectral components S_{n,f} of the speech signal over the bands, for example with a uniform weighting such that:
  • This averaging reduces the fluctuations between the bands by averaging the noise contributions in these bands, which will decrease the variance of the estimator of noise. In addition, this averaging allows a large reduction of the complexity of the system.
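As a concrete illustration, the band averaging performed by module 12 can be sketched as follows. The function name and the uniform split into equal bands are illustrative assumptions, not the patent's exact implementation:

```python
import numpy as np

def band_averages(spectrum_mag, num_bands):
    """Average the FFT magnitudes S_{n,f} into uniform bands (sketch of module 12).

    spectrum_mag: magnitudes of the positive-frequency half of the spectrum.
    Returns one averaged component S_{n,i} per band i.
    """
    bands = np.array_split(spectrum_mag, num_bands)
    return np.array([b.mean() for b in bands])

# Illustrative use on one 256-sample frame.
frame = np.random.randn(256)
mag = np.abs(np.fft.rfft(frame))[:128]   # bins covering [0, F_e/2)
s_bands = band_averages(mag, 16)
print(s_bands.shape)  # (16,)
```

Averaging over bands, as the text notes, both reduces the variance of the per-band noise estimator and lowers the overall complexity.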
  • the averaged spectral components S_{n,i} are supplied to a voice activity detection module 15 and to a noise estimation module 16. These two modules 15, 16 operate jointly, in the sense that the degrees of vocal activity γ_{n,i} measured for the different bands by the module 15 are used by the module 16 to estimate the long-term energy of the noise in the different bands, while these long-term estimates B̂_{n,i} are used by the module 15 to carry out an a priori denoising of the speech signal in the different bands in order to determine the degrees of vocal activity γ_{n,i}.
  • modules 15 and 16 can correspond to the flowcharts represented in the figures 2 and 3.
  • the module 15 first carries out an a priori denoising of the speech signal in the different bands i for the signal frame n.
  • This a priori denoising is carried out according to a conventional process of non-linear spectral subtraction from noise estimates obtained during one or more previous frames.
  • τ1 and τ2 are delays expressed in number of frames (τ1 ≥ 1, τ2 ≥ 0), and α'_{n,i} is a noise overestimation coefficient whose determination will be explained later.
  • the spectral components Êp_{n,i} are calculated according to: where β^p_i is a floor coefficient close to 0, conventionally used to prevent the spectrum of the denoised signal from taking negative or too-low values which would cause musical noise.
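The core of such a nonlinear spectral subtraction with a floor can be sketched like this. The fixed values of `alpha` and `beta_floor` are illustrative; in the patent the overestimation coefficient α'_{n,i} is computed adaptively per band:

```python
import numpy as np

def spectral_subtract(s, noise_est, alpha=2.0, beta_floor=0.05):
    """Nonlinear spectral subtraction with a spectral floor (a sketch).

    Subtracts an overestimated noise spectrum (alpha * noise_est) from the
    band components s, then clamps the result to a floor beta_floor * noise_est
    so the denoised spectrum never takes the negative or near-zero values
    that cause musical noise.
    """
    return np.maximum(s - alpha * noise_est, beta_floor * noise_est)
```

For example, a band well above the noise is reduced by the overestimated noise, while a band below it is clamped to the floor rather than going negative.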
  • Steps 17 to 20 therefore essentially consist in subtracting from the spectrum of the signal an estimate of the noise spectrum estimated a priori, increased by the coefficient α'_{n−τ1,i}.
  • the module 15 calculates, for each band i (0 ≤ i ≤ I), a quantity ΔE_{n,i} representing the short-term variation of the energy of the denoised signal in band i, as well as a long-term value Ē_{n,i} of the energy of the denoised signal in band i.
  • in step 25, the quantity ΔE_{n,i} is compared with a threshold ε1. If the threshold ε1 is not reached, the counter b_i is incremented by one unit in step 26.
  • in step 27, the long-term estimator ba_i is compared with the smoothed energy value Ē_{n,i}. If ba_i ≥ Ē_{n,i}, the estimator ba_i is set equal to the smoothed value Ē_{n,i} in step 28, and the counter b_i is reset to zero.
  • the quantity ρ_i, which is taken equal to the ratio ba_i / Ē_{n,i} (step 36), is then equal to 1.
  • if step 27 shows that ba_i < Ē_{n,i}, the counter b_i is compared with a limit value bmax in step 29. If b_i > bmax, the signal is considered too stationary to support vocal activity.
  • Bm represents an update coefficient between 0.90 and 1. Its value differs depending on the state of a voice activity detection automaton (steps 30 to 32). This state δ_{n−1} is the one determined during the processing of the previous frame.
  • the coefficient Bm takes a value Bmp very close to 1, so that the noise estimator is only very slightly updated in the presence of speech. Otherwise, the coefficient Bm takes a lower value Bms, to allow a more significant update of the noise estimator in the silence phase.
  • the difference ba_i − bi_i between the long-term estimator and the internal noise estimator is compared with a threshold ε2. If the threshold ε2 is not reached, the long-term estimator ba_i is updated with the value of the internal estimator bi_i in step 35. Otherwise, the long-term estimator ba_i remains unchanged. This avoids sudden variations due to a speech signal leading to an update of the noise estimator.
  • After having obtained the quantities ρ_i, the module 15 makes the voice activity decisions in step 37.
  • the module 15 first updates the state of the detection automaton according to the quantity ρ_0 calculated for the whole of the signal band.
  • the new state δ_n of the automaton depends on the previous state δ_{n−1} and on ρ_0, as shown in Figure 4.
  • the module 15 also calculates the degrees of vocal activity γ_{n,i} in each band i ≥ 1.
  • This function has for example the appearance shown in FIG. 5.
  • Module 16 calculates the per-band noise estimates, which will be used in the denoising process, using the successive values of the components S_{n,i} and the degrees of vocal activity γ_{n,i}. This corresponds to steps 40 to 42 of FIG. 3.
  • in step 40, it is determined whether the voice activity detection automaton has just gone from the rising state to the speech state. If so, the last two estimates B̂_{n−1,i} and B̂_{n−2,i} previously calculated for each band i ≥ 1 are corrected according to the value of the previous estimate B̂_{n−3,i}.
  • in step 42, the module 16 updates the noise estimates per band according to the formulas: where λ_B denotes a forgetting factor such that 0 < λ_B < 1.
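A minimal sketch of a per-band noise update weighted by the degree of vocal activity, in the spirit of the recursion described here (the patent's exact formulas are not reproduced; `lam` plays the role of the forgetting factor λ_B and `gamma` that of γ_{n,i}):

```python
def update_noise_estimate(b_prev, s_band, gamma, lam=0.9):
    """Recursive per-band noise update weighted by the vocal activity degree.

    b_prev : previous long-term noise estimate in the band
    s_band : current averaged spectral component S_{n,i}
    gamma  : degree of vocal activity in [0, 1]
    lam    : forgetting factor (0 < lam < 1)
    """
    # Exponentially forgotten smoothing toward the current observation.
    smoothed = lam * b_prev + (1.0 - lam) * s_band
    # gamma = 0 (silence): full update; gamma = 1 (speech): estimate frozen.
    return (1.0 - gamma) * smoothed + gamma * b_prev
```

With gamma = 1 the previous estimate is kept unchanged, which is how a non-binary activity degree freezes the noise tracker during speech.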
  • Formula (6) shows how the non-binary degree of vocal activity γ_{n,i} is taken into account.
  • the long-term noise estimates B̂_{n,i} are overestimated by a module 45 (FIG. 1) before proceeding to the denoising by nonlinear spectral subtraction.
  • Module 45 calculates the overestimation coefficient α'_{n,i} previously mentioned, as well as an increased estimate B̂'_{n,i} which essentially corresponds to α'_{n,i} · B̂_{n,i}.
  • the organization of the overestimation module 45 is shown in FIG. 6.
  • the increased estimate B̂'_{n,i} is obtained by combining the long-term estimate B̂_{n,i} with a measure ΔB^max_{n,i} of the variability of the noise component in band i around its long-term estimate.
  • this combination is essentially a simple sum made by an adder 46. It could also be a weighted sum.
  • the measure ΔB^max_{n,i} of the noise variability reflects the variance of the noise estimator. It is obtained as a function of the values of S_{n,i} and of B̂_{n,i} calculated for a certain number of previous frames over which the speech signal does not present any vocal activity in band i. It is a function of the deviations S_{n−k,i} − B̂_{n−k,i} calculated for a number K of silence frames (n−k < n). In the example shown, this function is simply the maximum (block 50).
  • the degree of vocal activity γ_{n,i} is compared with a threshold (block 51) to decide whether the difference S_{n,i} − B̂_{n,i}, calculated in 52-53, may or may not be loaded into a queue 54 of K locations organized in first-in-first-out (FIFO) mode. If γ_{n,i} does not exceed the threshold (which can be equal to 0 if the function g() has the form of FIG. 5), the FIFO 54 is supplied; otherwise it is not. The maximum value contained in FIFO 54 is then provided as the variability measure ΔB^max_{n,i}.
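The FIFO-based variability measure of blocks 50-54 can be sketched as follows (class and parameter names are illustrative assumptions; `deque(maxlen=K)` stands in for the K-location FIFO):

```python
from collections import deque

class NoiseVariability:
    """Track the max deviation |S - B| over the last K silence frames
    (a sketch of blocks 50-54)."""

    def __init__(self, k=10, threshold=0.0):
        self.fifo = deque(maxlen=k)   # K-location first-in-first-out queue
        self.threshold = threshold

    def update(self, s_band, b_band, gamma):
        # Only frames without vocal activity feed the FIFO.
        if gamma <= self.threshold:
            self.fifo.append(abs(s_band - b_band))
        # The variability measure is the maximum deviation currently stored.
        return max(self.fifo) if self.fifo else 0.0
```

Frames with vocal activity leave the FIFO untouched, so the measure keeps reflecting the noise-only variability.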
  • the measure of variability ΔB^max_{n,i} can, as a variant, be obtained as a function of the values S_{n,f} (and not S_{n,i}) and B̂_{n,i}.
  • in this case, FIFO 54 does not contain S_{n−k,i} − B̂_{n−k,i} for each of the bands i, but rather
  • the increased estimator B̂'_{n,i} gives the denoising process excellent robustness against musical noise.
  • a first phase of the spectral subtraction is carried out by the module 55 shown in FIG. 1.
  • This phase provides, with the resolution of the bands i (1 ≤ i ≤ I), the frequency response H^1_{n,i} of a first denoising filter, as a function of the components S_{n,i} and B̂_{n,i} and of the overestimation coefficients α'_{n,i}.
  • the coefficient β^1_i represents, like the coefficient β^p_i of formula (3), a floor conventionally used to avoid negative or too-low values of the denoised signal.
  • the overestimation coefficient α'_{n,i} could be replaced in formula (7) by another coefficient equal to a function of α'_{n,i} and of an estimate of the signal-to-noise ratio (for example S_{n,i} / B̂_{n,i}), this function decreasing with the estimated value of the signal-to-noise ratio.
  • This function is then equal to α'_{n,i} for the lowest values of the signal-to-noise ratio. Indeed, when the signal is very noisy, there is a priori no point in reducing the overestimation factor.
  • this function decreases to zero for the highest values of the signal-to-noise ratio. This protects the most energetic zones of the spectrum, where the speech signal is most significant, the quantity subtracted from the signal then tending towards zero.
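A possible shape for such an SNR-dependent overestimation factor, sketched under the assumption of a simple linear taper between two SNR bounds (the bounds and the linear form are illustrative, not prescribed by the text):

```python
def effective_overestimation(alpha, snr, snr_low=1.0, snr_high=10.0):
    """Scale the overestimation coefficient by a function decreasing with SNR.

    Full overestimation alpha at low SNR (very noisy signal), zero at high
    SNR (energetic speech zones are protected), linear in between.
    """
    if snr <= snr_low:
        return alpha
    if snr >= snr_high:
        return 0.0
    return alpha * (snr_high - snr) / (snr_high - snr_low)
```

At the high-SNR end the subtracted quantity tends to zero, which is exactly the protection of the energetic spectral zones described above.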
  • This strategy can be refined by applying it selectively to the harmonics of the pitch frequency of the speech signal when it exhibits vocal activity.
  • a second denoising phase is carried out by a module 56 for protecting harmonics.
  • the module 57 can apply any known method for analyzing the speech signal of the frame to determine the period T_p, expressed as an integer or fractional number of samples, for example a linear prediction method.
  • the protection provided by the module 56 may consist in carrying out, for each frequency f belonging to a band i:
  • if H^2_{n,f} = 1, the quantity subtracted from the component S_{n,f} will be zero.
  • the floor coefficients β^2_i express the fact that certain harmonics of the pitch frequency f_p can be masked by noise, so that protecting them is useless.
  • This protection strategy is preferably applied for each of the frequencies closest to the harmonics η × f_p of the pitch frequency, η being any integer.
  • the difference between the η-th harmonic of the real pitch frequency and its estimate η × f_p (condition (9)) can be as large as η × δf_p / 2.
  • for large values of η, this difference can be greater than the spectral half-resolution Δf / 2 of the Fourier transform.
  • the corrected frequency response H^2_{n,f} can be equal to 1 as indicated above, which corresponds to the subtraction of a zero quantity in the spectral subtraction, i.e. full protection of the frequency in question. More generally, this corrected frequency response H^2_{n,f} can be taken equal to a value between 1 and H^1_{n,f} depending on the degree of protection desired, which corresponds to subtracting a quantity smaller than the one that would be subtracted if the frequency in question were not protected.
  • S^2_{n,f} = H^2_{n,f} · S_{n,f}.
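The harmonic protection of module 56 can be sketched as follows, assuming full protection (H^2 = 1) at the FFT bin nearest each harmonic of f_p. The bin-rounding logic and all parameter names are illustrative assumptions:

```python
import numpy as np

def protect_harmonics(h1, f_p, fe, num_bins, protection=1.0):
    """Raise the denoising-filter response at bins nearest the harmonics of f_p.

    h1       : first-phase filter response H^1 over num_bins bins on [0, Fe/2)
    f_p      : estimated pitch frequency in Hz
    fe       : sampling frequency in Hz
    protection : value between h1 and 1; 1 means full protection (no subtraction)
    """
    h2 = h1.copy()
    df = fe / (2.0 * num_bins)               # bin spacing over [0, Fe/2)
    eta = 1
    while eta * f_p < fe / 2.0:              # every harmonic below Nyquist
        k = int(round(eta * f_p / df))       # bin closest to the eta-th harmonic
        if k < num_bins:
            h2[k] = protection
        eta += 1
    return h2
```

Setting H^2 = 1 at a bin makes the subtracted quantity there zero, which is the full-protection case described above.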
  • This signal S^2_{n,f} is supplied to a module 60 which calculates, for each frame n, a masking curve by applying a psychoacoustic model of auditory perception by the human ear.
  • the masking phenomenon is a known principle of the functioning of the human ear. When two frequencies are heard simultaneously, one of the two may no longer be audible; it is then said to be masked.
  • the masking curve is obtained as the convolution of the spectral spreading function of the basilar membrane, in the Bark domain, with the exciting signal, constituted in the present application by the signal S^2_{n,f}.
  • the spectral spreading function can be modeled as shown in Figure 7.
  • R_q depends on the more or less voiced character of the signal. A degree of voicing of the speech signal is used, varying between zero (no voicing) and 1 (strongly voiced signal).
  • the denoising system also includes a module 62 which corrects the frequency response of the denoising filter, as a function of the masking curve M_{n,q} calculated by the module 60 and of the increased estimates B̂'_{n,i} calculated by the module 45.
  • Module 62 decides the level of denoising which must really be reached.
  • the new response H^3_{n,f}, for a frequency f belonging to the band i defined by the module 12 and to the Bark band q, thus depends on the relative difference between the increased estimate B̂'_{n,i} of the corresponding spectral component of the noise and the masking curve M_{n,q}, as follows:
  • the quantity subtracted from a spectral component S_{n,f} in the spectral subtraction process having the frequency response H^3_{n,f} is substantially equal to the minimum of, on the one hand, the quantity subtracted from this spectral component in the spectral subtraction process having the frequency response H^2_{n,f}, and, on the other hand, the fraction of the increased estimate B̂'_{n,i} of the corresponding spectral component of the noise which, where applicable, exceeds the masking curve M_{n,q}.
  • FIG. 8 illustrates the principle of the correction applied by the module 62. It schematically shows an example of a masking curve M_{n,q} calculated on the basis of the spectral components S^2_{n,f} of the denoised signal, as well as the increased estimate B̂'_{n,i} of the noise spectrum.
  • the quantity finally subtracted from the components S_{n,f} is that represented by the hatched areas, i.e. limited to the fraction of the increased estimate B̂'_{n,i} of the spectral components of the noise which exceeds the masking curve.
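The correction of module 62, which limits the subtracted quantity to the audible part of the overestimated noise, can be sketched per frequency bin as follows. This is a simplified per-bin version: the patent works with band-wise noise estimates B̂'_{n,i} and Bark-band masking values M_{n,q}, which are here assumed already expanded to per-bin arrays:

```python
import numpy as np

def masked_response(h2, s, b_over, mask):
    """Correct the filter response so the subtracted amount never exceeds
    the part of the overestimated noise that is above the masking curve.

    h2     : second-phase response H^2 per bin
    s      : signal spectral components S_{n,f}
    b_over : overestimated noise spectrum per bin
    mask   : masking curve per bin
    """
    audible_noise = np.maximum(b_over - mask, 0.0)   # fraction above the curve
    # Subtract the minimum of the previous amount and the audible noise.
    subtracted = np.minimum((1.0 - h2) * s, audible_noise)
    return 1.0 - subtracted / np.maximum(s, 1e-12)
```

When the noise is entirely masked (mask ≥ b_over), nothing is subtracted and the response stays at 1, matching the observation that masked noise need not be removed.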
  • This subtraction is carried out by multiplying the frequency response H^3_{n,f} of the denoising filter by the spectral components S_{n,f} of the speech signal (multiplier 64).
  • TFRI inverse fast Fourier transform
  • FIG. 9 shows a preferred embodiment of a denoising system implementing the invention.
  • This system comprises a certain number of elements similar to corresponding elements of the system of FIG. 1, for which the same reference numbers have been used.
  • modules 10, 11, 12, 15, 16, 45 and 55 provide in particular the quantities S_{n,i}, B̂_{n,i}, α'_{n,i}, B̂'_{n,i} and H^1_{n,f} used to perform the selective denoising.
  • the frequency resolution of the fast Fourier transform 11 is a limitation of the system of FIG. 1.
  • the frequency for which protection is provided by the module 56 is not necessarily the precise pitch frequency f_p, but the frequency closest to it in the discrete spectrum. In some cases, frequencies relatively far from the harmonics of the pitch frequency may then be protected.
  • the system of FIG. 9 overcomes this drawback thanks to an appropriate conditioning of the speech signal.
  • the sampling frequency of the signal is modified so that the period 1/f_p covers exactly an integer number of sample periods of the conditioned signal.
  • This size N is usually a power of 2 for the implementation of the TFR. It is 256 in the example considered.
  • This choice is made by a module 70 according to the value of the delay T p supplied by the harmonic analysis module 57.
  • the module 70 provides the ratio K between the sampling frequencies to three frequency-change modules 71, 72, 73.
  • the module 71 is used to transform the values S_{n,i}, B̂_{n,i}, α'_{n,i}, B̂'_{n,i} and H^1_{n,f}, relating to the bands i defined by the module 12, to the scale of the modified frequencies (sampling frequency f_e). This transformation simply consists in dilating the bands i by the factor K. The values thus transformed are supplied to the module 56 for protecting the harmonics.
  • the module 72 performs the oversampling of the frame of N samples provided by the windowing module 10.
  • the conditioned signal frame supplied by the module 72 includes KN samples at the frequency f e . These samples are sent to a module 75 which calculates their Fourier transform.
  • the two blocks therefore have an overlap of (2 − K) × 100%.
  • the autocorrelations A (k) are calculated by a module 76, for example according to the formula:
  • a module 77 then calculates the normalized entropy H, and supplies it to module 60 for the calculation of the masking curve (see SA McClellan et al: “Spectral Entropy: an Alternative Indicator for Rate Allocation?”, Proc. ICASSP'94 , pages 201-204):
  • the normalized entropy H constitutes a voicing measure that is very robust to noise and to variations of the pitch frequency.
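A rough sketch of a normalized-entropy voicing measure computed from short-term autocorrelations. The exact normalization used in the patent and in McClellan et al. is not reproduced here; this only illustrates the idea that the entropy of a normalized autocorrelation profile yields a bounded measure in [0, 1]:

```python
import numpy as np

def normalized_entropy(x, max_lag=16):
    """Normalized entropy of the short-term autocorrelation profile (sketch).

    The autocorrelations A(k) for lags 1..max_lag are normalized into a
    probability-like profile, whose entropy is divided by log(max_lag) so
    the result lies in [0, 1].
    """
    ac = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(1, max_lag + 1)])
    p = np.abs(ac)
    total = p.sum()
    p = p / total if total > 0 else np.full(max_lag, 1.0 / max_lag)
    return -np.sum(p * np.log(np.maximum(p, 1e-12))) / np.log(max_lag)
```

Because of the normalization by log(max_lag), the measure is dimensionless and bounded, which is convenient for thresholding or for feeding the masking model.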
  • the correction module 62 operates in the same way as in the system of FIG. 1, taking into account the overestimated noise B̂'_{n,i} rescaled by the frequency-change module 71. It provides the frequency response H^3_{n,f} of the final denoising filter, which is multiplied by the spectral components S_{n,f} of the conditioned signal (multiplier 64). The resulting components S^3_{n,f} are brought back into the time domain by the module TFRI 65. At the output of this TFRI 65, a module 80 combines, for each frame, the two signal blocks resulting from the processing of the two overlapping blocks delivered by the TFR 75. This combination can consist of a Hamming-weighted sum of the samples, to form a denoised conditioned signal frame of KN samples.
  • the management module 82 controls the windowing module 10 so that the overlap between the current frame and the next one corresponds to N − M samples. This overlap of N − M samples will be used in the overlap-add carried out by the module 66 during the processing of the next frame.
  • the pitch frequency is estimated as an average over the frame.
  • the pitch frequency may vary somewhat over this duration. It is possible to take these variations into account in the context of the present invention, by conditioning the signal so as to artificially obtain a constant pitch frequency within the frame.
  • the harmonic analysis module 57 provides the time intervals between the consecutive breaks in the speech signal due to closures of the glottis of the speaker occurring during the frame.
  • Methods usable to detect such micro-breaks are well known in the field of harmonic analysis of speech signals.
  • the principle of these methods is to perform a statistical test between two models, one short-term and the other long-term. Both models are adaptive linear prediction models.
  • the value of this statistical test w_m is the cumulative sum of the a posteriori likelihood ratio of the two distributions, corrected by the Kullback divergence. For a distribution of residuals having a Gaussian statistic, this value w_m is given by: where e^0_m and σ²_0 represent the residual calculated at sample m of the frame and the variance of the long-term model, e^1_m and σ²_1 likewise representing the residual and the variance of the short-term model. The closer the two models are, the closer the value w_m of the statistical test is to 0. Conversely, when the two models are far apart, this value w_m becomes negative, which indicates a break R of the signal.
  • FIG. 10 thus shows a possible example of evolution of the value w m , showing the breaks R of the speech signal.
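For illustration, one per-sample term of such a model-comparison statistic can be sketched under Gaussian assumptions. This is a generic log-likelihood-ratio form, not the patent's exact expression for w_m with its Kullback-divergence correction; w_m would be the running sum of such terms:

```python
import math

def divergence_term(e0, var0, e1, var1):
    """One per-sample term comparing long-term (residual e0, variance var0)
    and short-term (e1, var1) Gaussian prediction-residual models (sketch).

    The term is 0 when the two models agree exactly; large discrepancies
    between the models drive the cumulative statistic away from 0.
    """
    return 0.5 * (math.log(var0 / var1) + e0 * e0 / var0 - e1 * e1 / var1)
```

In a detector, the cumulative sum of these terms would be tracked along the frame and a break R flagged where it drops sharply.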
  • FIG. 11 shows the means used to calculate the conditioning of the signal in the latter case.
  • the harmonic analysis module 57 is designed to implement the above analysis method and to provide the intervals t_r relating to the signal frame produced by the module 10.
  • These oversampling ratios K_r are supplied to the frequency-change modules 72 and 73, so that the interpolations are carried out with the sampling ratio K_r over the corresponding time interval t_r.
  • the largest T_p of the time intervals t_r supplied by the module 57 for a frame is selected by the module 70 (block 91 in FIG. 11) to obtain a pair of parameters as indicated in Table I.
  • This embodiment of the invention also involves an adaptation of the window management module 82.
  • the number M of samples of the denoised signal to be saved on the current frame here corresponds to an integer number of consecutive time intervals t r between two glottal breaks (see FIG. 10). This arrangement avoids the problems of phase discontinuity between frames, while taking into account the possible variations of the time intervals t r on a frame.


Description

The present invention relates to digital techniques for denoising speech signals. It relates more particularly to denoising by nonlinear spectral subtraction.

Due to the spread of new forms of communication, in particular mobile telephones, communications increasingly take place in highly noisy environments. Noise added to the speech then tends to disrupt communications, by preventing optimal compression of the speech signal and by creating an unnatural background noise. Moreover, noise makes understanding the spoken message difficult and tiring.

Many algorithms have been studied in attempts to reduce the effects of noise in a communication. S. F. Boll ("Suppression of acoustic noise in speech using spectral subtraction", IEEE Trans. on Acoustics, Speech and Signal Processing, Vol. ASSP-27, No. 2, April 1979) proposed an algorithm based on spectral subtraction. This technique consists in estimating the spectrum of the noise during silence phases and subtracting it from the received signal. It allows a reduction of the received noise level. Its main flaw is that it creates musical noise that is particularly annoying because it is unnatural.

This work, taken up and improved by D. B. Paul ("The spectral envelope estimation vocoder", IEEE Trans. on Acoustics, Speech and Signal Processing, Vol. ASSP-29, No. 4, August 1981) and by P. Lockwood and J. Boudy ("Experiments with a nonlinear spectral subtractor (NSS), Hidden Markov Models and the projection, for robust speech recognition in cars", Speech Communication, Vol. 11, June 1992, pages 215-228, and EP-A-0 534 837), made it possible to significantly reduce the noise level while preserving its natural character. Moreover, this contribution had the merit of incorporating for the first time the masking principle in the calculation of the denoising filter. Starting from this idea, a first attempt was made by S. Nandkumar and J. H. L. Hansen ("Speech enhancement on a new set of auditory constrained parameters", Proc. ICASSP 94, pages I.1-I.4) to use explicitly calculated masking curves in the spectral subtraction. Despite the disappointing results of this technique, this contribution had the merit of emphasizing the importance of not distorting the speech signal during denoising.

Other methods, based on the decomposition of the speech signal into singular values, and hence on a projection of the speech signal into a smaller space, were studied by Bart De Moor ("The singular value decomposition and long and short spaces of noisy matrices", IEEE Trans. on Signal Processing, Vol. 41, No. 9, September 1993, pages 2826-2838) and by S. H. Jensen et al ("Reduction of broad-band noise in speech by truncated QSVD", IEEE Trans. on Speech and Audio Processing, Vol. 3, No. 6, November 1995). The principle of this technique is to consider the speech signal and the noise signal as totally uncorrelated, and to consider that the speech signal has sufficient predictability to be predicted from a restricted set of parameters. This technique allows acceptable denoising to be obtained for strongly voiced signals, but completely distorts the speech signal. Faced with relatively coherent noise, such as that caused by the contact of car tires or the clicking of an engine, the noise may prove more easily predictable than the unvoiced speech signal. There is then a tendency to project the speech signal into part of the noise vector space. The method takes no account of the speech signal, in particular of the unvoiced speech regions where predictability is reduced. Moreover, predicting the speech signal from a reduced set of parameters cannot take into account all the intrinsic richness of speech. One sees here the limits of techniques based solely on mathematical considerations that ignore the special character of speech.

Finally, other techniques are based on coherence criteria. The coherence function is particularly well developed by J. A. Cadzow and O. M. Solomon ("Linear modeling and the coherence function", IEEE Trans. on Acoustics, Speech and Signal Processing, Vol. ASSP-35, No. 1, January 1987, pages 19-28), and its application to denoising has been studied by R. Le Bouquin ("Enhancement of noisy speech signals: application to mobile radio communications", Speech Communication, Vol. 18, pages 3-19). This method relies on the fact that the speech signal is markedly more coherent than the noise, provided that several independent channels are used. The results obtained seem fairly encouraging, but unfortunately this technique requires several sound pick-up sources, which are not always available.

A main object of the present invention is to propose a new denoising technique that takes account of how speech is perceived by the human ear, thus allowing effective denoising without degrading the perception of speech. According to the invention, a method as set out in claim 1 and a device as set out in claim 19 are provided.

The invention thus proposes a method for denoising a digital speech signal processed by successive frames, in which:

  • spectral components of the speech signal are calculated for each frame;
  • overestimates of the spectral components of the noise contained in the speech signal are calculated for each frame;
  • a spectral subtraction is carried out, comprising at least a first subtraction step in which a first quantity is subtracted from each spectral component of the speech signal on the frame, this quantity depending on parameters including the overestimate of the corresponding spectral component of the noise for said frame, so as to obtain the spectral components of a first denoised signal.

A transform to the time domain can be applied to the result of the spectral subtraction in order to construct a denoised speech signal.

According to the invention, the spectral subtraction further comprises the following steps:

  • calculating a masking curve by applying an auditory perception model to the spectral components of the first denoised signal;
  • comparing the overestimates of the spectral components of the noise for the frame with the calculated masking curve; and
  • a second subtraction step in which a second quantity is subtracted from each spectral component of the speech signal on the frame, this quantity depending on parameters including the difference between the overestimate of the corresponding spectral component of the noise and the calculated masking curve.

The second subtracted quantity may in particular be limited to the fraction of the overestimate of the corresponding spectral component of the noise which exceeds the masking curve. This approach is based on the observation that it suffices to denoise the audible noise frequencies; conversely, there is no point in eliminating noise that is masked by speech.
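This limitation can be sketched per spectral component; names and values below are illustrative:

```python
import numpy as np

def second_quantity(B_over, mask):
    """Limit the second subtracted quantity to the fraction of the
    overestimated noise component that exceeds the masking curve:
    noise lying entirely below the curve is inaudible and left alone."""
    return np.maximum(B_over - mask, 0.0)

B_over = np.array([1.0, 0.4, 0.9])   # overestimated noise per component
mask = np.array([0.5, 0.6, 0.9])     # masking curve from the perception model
print(second_quantity(B_over, mask)) # only the audible excess is subtracted
```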

Overestimating the spectral envelope of the noise is generally desirable so that the resulting overestimate is robust to sudden variations of the noise. However, this overestimation usually has the drawback of distorting the speech signal when it becomes too large: it affects the voiced character of the speech signal by removing part of its predictability. This drawback is particularly troublesome under telephony conditions, since it is during voiced regions that the speech signal carries the most energy. By limiting the subtracted quantity when all or part of an overestimated noise frequency component turns out to be masked by speech, the invention greatly attenuates this drawback.

Other features and advantages of the present invention will appear in the following description of non-limiting exemplary embodiments, with reference to the appended drawings, in which:

  • Figure 1 is a block diagram of a denoising system implementing the present invention;
  • Figures 2 and 3 are flowcharts of procedures used by a voice activity detector of the system of Figure 1;
  • Figure 4 is a diagram representing the states of a voice activity detection automaton;
  • Figure 5 is a graph illustrating the variations of a degree of voice activity;
  • Figure 6 is a block diagram of a noise overestimation module of the system of Figure 1;
  • Figure 7 is a graph illustrating the calculation of a masking curve;
  • Figure 8 is a graph illustrating the use of the masking curves in the system of Figure 1;
  • Figure 9 is a block diagram of another denoising system implementing the present invention;
  • Figure 10 is a graph illustrating a harmonic analysis method usable in a method according to the invention; and
  • Figure 11 partially shows a variant of the block diagram of Figure 9.

The denoising system shown in Figure 1 processes a digital speech signal s. A windowing module 10 splits this signal s into successive windows or frames, each consisting of a number N of digital signal samples. Conventionally, these frames may overlap one another. In the remainder of this description it will be assumed, without this being limiting, that the frames consist of N = 256 samples at a sampling frequency Fe of 8 kHz, with Hamming weighting in each window and 50% overlap between consecutive windows.
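The framing performed by module 10 can be sketched as follows (a minimal illustration; the helper name `frames` and the toy signal are not from the patent):

```python
import numpy as np

Fe = 8000          # sampling frequency (Hz)
N = 256            # frame length in samples
STEP = N // 2      # 50% overlap between consecutive windows

def frames(signal):
    """Split the signal into Hamming-weighted frames of N samples
    with 50% overlap, as the windowing module does."""
    w = np.hamming(N)
    return [w * signal[k:k + N]
            for k in range(0, len(signal) - N + 1, STEP)]

x = np.arange(1024, dtype=float)
print(len(frames(x)))   # (1024 - 256) / 128 + 1 = 7 frames
```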

The signal frame is transformed into the frequency domain by a module 11 applying a conventional fast Fourier transform (FFT) algorithm to compute the modulus of the signal spectrum. Module 11 then delivers a set of N = 256 frequency components of the speech signal, denoted Sn,f, where n denotes the number of the current frame and f a frequency of the discrete spectrum. Owing to the properties of digital signals in the frequency domain, only the first N/2 = 128 samples are used.
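As a sketch of module 11's role (illustrative function name; for a real-valued frame only the first N/2 magnitude bins carry information, which is why the text keeps 128 of 256 samples):

```python
import numpy as np

N = 256

def spectrum_magnitude(frame):
    """FFT of an N-sample frame, keeping the modulus of the
    first N/2 bins (the rest mirror them for a real signal)."""
    return np.abs(np.fft.fft(frame, N))[:N // 2]

frame = np.cos(2 * np.pi * 16 * np.arange(N) / N)  # pure tone in bin 16
S = spectrum_magnitude(frame)
print(int(np.argmax(S)))   # -> 16
```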

To calculate the estimates of the noise contained in the signal s, the frequency resolution available at the output of the fast Fourier transform is not used; instead a lower resolution is used, determined by a number I of frequency bands covering the band [0, Fe/2] of the signal. Each band i (1 ≤ i ≤ I) extends between a lower frequency f(i-1) and an upper frequency f(i), with f(0) = 0 and f(I) = Fe/2. This division into frequency bands may be uniform (f(i) - f(i-1) = Fe/2I). It may also be non-uniform (for example according to a Bark scale). A module 12 calculates the respective averages of the spectral components Sn,f of the speech signal per band, for example with uniform weighting such as:

$$S_{n,i} = \frac{1}{f(i)-f(i-1)} \sum_{f(i-1) \leq f < f(i)} S_{n,f}$$

This averaging reduces the fluctuations between bands by averaging the noise contributions within them, which lowers the variance of the noise estimator. In addition, it greatly reduces the complexity of the system.
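The per-band averaging of module 12 can be sketched as follows (band edges and values are illustrative):

```python
import numpy as np

def band_average(S_f, edges):
    """Average the spectral components S_f over I bands delimited by
    the bin indices edges[0..I] (uniform weighting within each band)."""
    return np.array([S_f[edges[i]:edges[i + 1]].mean()
                     for i in range(len(edges) - 1)])

S_f = np.arange(8, dtype=float)        # 8 toy spectral components
edges = [0, 4, 8]                      # I = 2 uniform bands
print(band_average(S_f, edges))        # -> [1.5 5.5]
```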

The averaged spectral components Sn,i are fed to a voice activity detection module 15 and to a noise estimation module 16. These two modules 15, 16 operate jointly, in the sense that the degrees of voice activity γn,i measured for the different bands by module 15 are used by module 16 to estimate the long-term energy of the noise in the different bands, while these long-term estimates B̂n,i are used by module 15 to perform an a priori denoising of the speech signal in the different bands in order to determine the degrees of voice activity γn,i.

The operation of modules 15 and 16 may correspond to the flowcharts shown in Figures 2 and 3.

In steps 17 to 20, module 15 performs the a priori denoising of the speech signal in the different bands i for signal frame n. This a priori denoising follows a conventional non-linear spectral subtraction scheme based on noise estimates obtained during one or more previous frames. In step 17, module 15 calculates, with the resolution of the bands i, the frequency response Hpn,i of the a priori denoising filter, according to the formula:

$$Hp_{n,i} = \frac{S_{n,i} - \alpha'_{n-\tau_1,i}\,\hat{B}_{n-\tau_1,i}}{S_{n-\tau_2,i}}$$

where τ1 and τ2 are delays expressed in numbers of frames (τ1 ≥ 1, τ2 ≥ 0), and α'n,i is a noise overestimation coefficient whose determination will be explained later. The delay τ1 may be fixed (for example τ1 = 1) or variable: the more confident the voice activity detection, the smaller it can be.

In steps 18 to 20, the spectral components Êpn,i are calculated as:

$$\hat{Ep}_{n,i} = \max\!\left(Hp_{n,i},\,\beta p_i\right) S_{n,i}$$

where βpi is a floor coefficient close to 0, conventionally used to prevent the spectrum of the denoised signal from taking negative or excessively low values, which would cause musical noise.

Steps 17 to 20 thus essentially amount to subtracting from the signal spectrum an estimate of the a priori estimated noise spectrum, scaled up by the coefficient α'n-τ1,i.
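Steps 17 to 20 can be sketched jointly as follows; the function name, the floor coefficient and the toy values are illustrative:

```python
import numpy as np

def apriori_filter(S_n, S_delayed, B_prev, alpha_prev, beta=0.01):
    """A priori denoising (steps 17-20, sketch): frequency response
    Hp = (S_n - alpha' * B_hat) / S_delayed, then floor it at beta
    so the denoised spectrum Ep = max(Hp, beta) * S_n stays positive."""
    Hp = (S_n - alpha_prev * B_prev) / S_delayed
    return np.maximum(Hp, beta) * S_n

S_n = np.array([2.0, 1.0])
out = apriori_filter(S_n, S_delayed=S_n, B_prev=np.array([0.5, 2.0]),
                     alpha_prev=np.array([2.0, 1.0]))
print(out)   # the second band is fully noise-dominated, only the floor remains
```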

In step 21, module 15 calculates the energy of the a priori denoised signal in the different bands i for frame n: En,i = (Êpn,i)². It also calculates an overall average En,0 of the energy of the a priori denoised signal, as a sum of the per-band energies En,i weighted by the widths of those bands. In the notations below, the index i = 0 will be used to designate the overall band of the signal.

In steps 22 and 23, module 15 calculates, for each band i (0 ≤ i ≤ I), a quantity ΔEn,i representing the short-term variation of the energy of the denoised signal in band i, as well as a long-term value Ēn,i of that energy. The quantity ΔEn,i may be calculated by a simplified differentiation formula:

$$\Delta E_{n,i} = \frac{E_{n-4,i} + E_{n-3,i} - E_{n-1,i} - E_{n,i}}{10}$$

As for the long-term energy Ēn,i, it may be calculated using a forgetting factor B1 such that 0 < B1 < 1, namely:

$$\bar{E}_{n,i} = B1 \cdot \bar{E}_{n-1,i} + (1 - B1) \cdot E_{n,i}$$
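A minimal numeric sketch of these two quantities, assuming the simplified derivative and smoothing forms given above (the energy history and B1 value are illustrative):

```python
def long_term_energy(E_hist, E_bar_prev, B1=0.9):
    """Steps 22-23 (sketch): short-term variation of the denoised energy
    via the simplified derivative (E[n-4] + E[n-3] - E[n-1] - E[n]) / 10,
    and long-term energy smoothed with forgetting factor B1."""
    dE = (E_hist[-5] + E_hist[-4] - E_hist[-2] - E_hist[-1]) / 10.0
    E_bar = B1 * E_bar_prev + (1.0 - B1) * E_hist[-1]
    return dE, E_bar

E_hist = [4.0, 3.0, 2.0, 1.0, 0.0]   # energies E[n-4] .. E[n] for one band
print(long_term_energy(E_hist, E_bar_prev=1.0))
```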

Having calculated the energies En,i of the denoised signal, their short-term variations ΔEn,i and their long-term values Ēn,i in the manner indicated in Figure 2, module 15 calculates, for each band i (0 ≤ i ≤ I), a value ρi representative of the evolution of the energy of the denoised signal. This calculation is carried out in steps 25 to 36 of Figure 3, executed for each band i from i = 0 to i = I. It uses a long-term estimator bai of the noise envelope, an internal estimator bii and a counter bi of noisy frames.

In step 25, the quantity ΔEn,i is compared with a threshold ε1. If the threshold ε1 is not reached, the counter bi is incremented by one in step 26. In step 27, the long-term estimator bai is compared with the smoothed energy value Ēn,i. If bai ≥ Ēn,i, the estimator bai is set equal to the smoothed value Ēn,i in step 28, and the counter bi is reset to zero. The quantity ρi, taken equal to the ratio bai/Ēn,i (step 36), is then equal to 1.

If step 27 shows that bai < Ēn,i, the counter bi is compared with a limit value bmax in step 29. If bi > bmax, the signal is considered too stationary to support voice activity, and the aforementioned step 28, which amounts to considering that the frame contains only noise, is executed. If bi ≤ bmax in step 29, the internal estimator bii is calculated in step 33 according to:

$$bi_i = (1 - Bm) \cdot \bar{E}_{n,i} + Bm \cdot ba_i$$

In this formula, Bm represents an update coefficient between 0.90 and 1. Its value differs according to the state of a voice activity detection automaton (steps 30 to 32). This state δn-1 is the one determined during the processing of the previous frame. If the automaton is in a speech detection state (δn-1 = 2 in step 30), the coefficient Bm takes a value Bmp very close to 1, so that the noise estimator is updated only very slightly in the presence of speech. Otherwise, the coefficient Bm takes a lower value Bms, allowing a more significant update of the noise estimator during silence. In step 34, the difference bai - bii between the long-term estimator and the internal noise estimator is compared with a threshold ε2. If the threshold ε2 is not reached, the long-term estimator bai is updated with the value of the internal estimator bii in step 35; otherwise the long-term estimator bai remains unchanged. This prevents sudden variations due to a speech signal from leading to an update of the noise estimator.
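The estimator update of steps 25 to 35 can be sketched for one band as follows; the thresholds ε1, ε2, bmax and the coefficients Bmp, Bms are illustrative values, not the patent's tuned ones:

```python
def update_ba(ba, bi_count, dE, E_bar, speech, eps1=0.1, eps2=0.2,
              bmax=10, Bmp=0.999, Bms=0.95):
    """Steps 25-35 (sketch): update the long-term noise-envelope
    estimator ba for one band and return (ba, bi_count)."""
    if dE < eps1:
        bi_count += 1                   # another frame that looks noisy
    if ba >= E_bar or bi_count > bmax:
        return E_bar, 0                 # frame treated as noise only
    Bm = Bmp if speech else Bms         # slow update while speech is present
    bi = (1 - Bm) * E_bar + Bm * ba     # internal estimator
    if ba - bi < eps2:
        ba = bi                         # accept the update
    return ba, bi_count

print(update_ba(ba=0.5, bi_count=0, dE=0.0, E_bar=1.0, speech=False))
```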

Having obtained the quantities ρi, module 15 takes the voice activity decisions in step 37. Module 15 first updates the state of the detection automaton according to the quantity ρ0 calculated over the whole signal band. The new state δn of the automaton depends on the previous state δn-1 and on ρ0, as shown in Figure 4.

Four states are possible: δ = 0 corresponds to silence, or absence of speech; δ = 2 to the presence of voice activity; and the states δ = 1 and δ = 3 are intermediate rising and falling states. When the automaton is in the silence state (δn-1 = 0), it stays there if ρ0 does not exceed a first threshold SE1, and moves to the rising state otherwise. In the rising state (δn-1 = 1), it returns to the silence state if ρ0 is smaller than the threshold SE1, moves to the speech state if ρ0 is greater than a second threshold SE2 larger than SE1, and remains in the rising state if SE1 ≤ ρ0 ≤ SE2. When the automaton is in the speech state (δn-1 = 2), it stays there if ρ0 exceeds a third threshold SE3 smaller than SE2, and moves to the falling state otherwise. In the falling state (δn-1 = 3), the automaton returns to the speech state if ρ0 is greater than the threshold SE2, returns to the silence state if ρ0 is below a fourth threshold SE4 smaller than SE2, and remains in the falling state if SE4 ≤ ρ0 ≤ SE2.
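The four-state automaton of Figure 4 can be sketched directly from this description; the threshold values below are illustrative, since the text only constrains SE1, SE3 and SE4 to be smaller than SE2:

```python
SILENCE, RISE, SPEECH, FALL = 0, 1, 2, 3

def next_state(state, rho0, SE1=1.1, SE2=1.5, SE3=1.3, SE4=1.2):
    """One transition of the voice activity detection automaton,
    driven by the full-band quantity rho0 (sketch)."""
    if state == SILENCE:
        return RISE if rho0 > SE1 else SILENCE
    if state == RISE:
        if rho0 < SE1:
            return SILENCE
        return SPEECH if rho0 > SE2 else RISE
    if state == SPEECH:
        return SPEECH if rho0 > SE3 else FALL
    # falling state
    if rho0 > SE2:
        return SPEECH
    return SILENCE if rho0 < SE4 else FALL

print(next_state(SILENCE, 1.2))  # -> 1 (rising)
print(next_state(RISE, 1.6))     # -> 2 (speech)
```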

In step 37, module 15 also calculates the degrees of voice activity γn,i in each band i ≥ 1. This degree γn,i is preferably a non-binary parameter, i.e. the function γn,i = g(ρi) varies continuously between 0 and 1 as a function of the values taken by the quantity ρi. This function has, for example, the shape shown in Figure 5.

Module 16 calculates the per-band noise estimates, which will be used in the denoising process, using the successive values of the components Sn,i and of the degrees of voice activity γn,i. This corresponds to steps 40 to 42 of Figure 3. In step 40, it is determined whether the voice activity detection automaton has just moved from the rising state to the speech state. If so, the last two estimates B̂n-1,i and B̂n-2,i previously calculated for each band i ≥ 1 are corrected according to the value of the preceding estimate B̂n-3,i. This correction is made to account for the fact that, during the rising phase (δ = 1), the long-term noise energy estimates in the voice activity detection process (steps 30 to 33) may have been calculated as if the signal contained only noise (Bm = Bms), so that they risk being tainted with error.

In step 42, module 16 updates the per-band noise estimates according to the formulas:

Figure 00120001
Figure 00120002

where λB denotes a forgetting factor such that 0 < λB < 1. Formula (6) shows how the non-binary degree of voice activity γn,i is taken into account.
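Formulas (6) survive here only as image placeholders, so the exact update cannot be reproduced. As a hedged sketch, a common recursive form consistent with the surrounding text is an exponential average whose effective forgetting factor grows with the voice-activity degree γn,i, so that speech frames contribute little to the noise estimate (all names and the exact weighting are illustrative, not the patent's formulas):

```python
import numpy as np

def update_noise_estimate(B_prev, S, gamma, lam=0.9):
    """Sketch of a per-band noise update in the spirit of formulas (6):
    the larger the voice-activity degree gamma in a band, the less the
    current component S moves the estimate.  lam is the forgetting
    factor lambda_B (0 < lam < 1)."""
    # effective forgetting factor: lam for pure noise, 1 for pure speech
    lam_eff = lam + (1.0 - lam) * gamma
    return lam_eff * B_prev + (1.0 - lam_eff) * S
```

With gamma = 0 (silence) the estimate tracks S with time constant set by lam; with gamma = 1 (speech) it is frozen.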

As indicated above, the long-term noise estimates B̂n,i are overestimated by a module 45 (Figure 1) before denoising by nonlinear spectral subtraction is carried out. Module 45 computes the previously mentioned overestimation coefficient α'n,i, as well as an increased estimate B̂'n,i which essentially corresponds to α'n,i·B̂n,i.

The organization of the overestimation module 45 is shown in Figure 6. The increased estimate B̂'n,i is obtained by combining the long-term estimate B̂n,i and a measure ΔBmax n,i of the variability of the noise component in band i around its long-term estimate. In the example considered, this combination is essentially a simple sum performed by an adder 46. It could also be a weighted sum.

The overestimation coefficient α'n,i is equal to the ratio between the sum B̂n,i + ΔBmax n,i delivered by the adder 46 and the delayed long-term estimate B̂n-τ3,i (divider 47), capped at a limit value αmax, for example αmax = 4 (block 48). The delay τ3 is used to correct, if necessary, during the rising phases (δ=1), the value of the overestimation coefficient α'n,i before the long-term estimates have been corrected by steps 40 and 41 of Figure 3 (for example τ3 = 3).

The increased estimate B̂'n,i is finally taken equal to α'n,i·B̂n-τ3,i (multiplier 49).
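The data path of module 45 (adder 46, divider 47, cap 48, multiplier 49) can be sketched as follows; variable names are illustrative:

```python
import numpy as np

def overestimate(B_long, dB_max, B_delayed, alpha_max=4.0):
    """Sketch of overestimation module 45: alpha' is the ratio of
    (B + dBmax) to the delayed long-term estimate, capped at alpha_max;
    the increased estimate is alpha' times the delayed estimate."""
    s = B_long + dB_max                                   # adder 46
    alpha = np.minimum(s / np.maximum(B_delayed, 1e-12),  # divider 47
                       alpha_max)                         # cap, block 48
    return alpha, alpha * B_delayed                       # multiplier 49
```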

The measure ΔBmax n,i of the noise variability reflects the variance of the noise estimator. It is obtained as a function of the values of Sn,i and B̂n,i computed for a certain number of previous frames over which the speech signal shows no voice activity in band i. It is a function of the deviations Sn-k,i − B̂n-k,i computed for a number K of silence frames (n-k ≤ n). In the example shown, this function is simply the maximum (block 50). For each frame n, the degree of voice activity γn,i is compared with a threshold (block 51) to decide whether the deviation Sn,i − B̂n,i, computed at 52-53, should be loaded into a queue 54 of K locations organized in first-in-first-out (FIFO) mode. If γn,i does not exceed the threshold (which may be equal to 0 if the function g() has the shape of Figure 5), the FIFO 54 is fed with the deviation; otherwise it is not. The maximum value contained in the FIFO 54 is then provided as the variability measure ΔBmax n,i.
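The FIFO mechanism of blocks 50 to 54 can be sketched in a few lines (names illustrative):

```python
from collections import deque

def variability_measure(fifo, S, B, gamma, threshold=0.0):
    """Sketch of blocks 50-54: on frames whose voice-activity degree
    gamma stays at or below the threshold (block 51), the deviation
    S - B (blocks 52-53) is pushed into the FIFO 54; the variability
    measure is the largest deviation currently held (block 50)."""
    if gamma <= threshold:
        fifo.append(S - B)
    return max(fifo) if fifo else 0.0

fifo = deque(maxlen=8)   # FIFO 54 with K = 8 locations, K illustrative
```

The bounded deque makes old silence frames fall out automatically, matching the K-location FIFO of the figure.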

The variability measure ΔBmax n,i can, as a variant, be obtained as a function of the values Sn,f (and not Sn,i) and B̂n,i. The procedure is then the same, except that the FIFO 54 contains, instead of Sn-k,i − B̂n-k,i for each of the bands i:

Figure 00140001

Thanks to the independent estimates of the long-term fluctuations of the noise B̂n,i and of its short-term variability ΔBmax n,i, the increased estimator B̂'n,i gives the denoising process excellent robustness to musical noise.

A first phase of the spectral subtraction is carried out by the module 55 shown in Figure 1. This phase provides, with the resolution of the bands i (1 ≤ i ≤ I), the frequency response H1n,i of a first denoising filter, as a function of the components Sn,i and B̂n,i and of the overestimation coefficients α'n,i. This calculation can be performed for each band i according to the formula:

Figure 00140002

where τ4 is a determined integer delay such that τ4 ≥ 0 (for example τ4 = 0). In expression (7), the coefficient β1i represents, like the coefficient βpi of formula (3), a floor conventionally used to avoid negative or excessively small values of the denoised signal.

In a known manner (EP-A-0 534 837), the overestimation coefficient α'n,i could be replaced in formula (7) by another coefficient equal to a function of α'n,i and of an estimate of the signal-to-noise ratio (for example Sn,i/B̂n,i), this function decreasing with the estimated value of the signal-to-noise ratio. This function is then equal to α'n,i for the lowest values of the signal-to-noise ratio. Indeed, when the signal is very noisy, there is a priori no point in reducing the overestimation factor. Advantageously, this function decreases towards zero for the highest values of the signal-to-noise ratio. This protects the most energetic areas of the spectrum, where the speech signal is most significant, the quantity subtracted from the signal then tending towards zero.

This strategy can be refined by applying it selectively to the harmonics of the pitch frequency of the speech signal when the latter exhibits voice activity.

Thus, in the embodiment shown in Figure 1, a second denoising phase is carried out by a harmonic protection module 56. This module computes, with the resolution of the Fourier transform, the frequency response H2n,f of a second denoising filter as a function of the parameters H1n,i, α'n,i, B̂n,i, δn, Sn,i and of the pitch frequency fp = Fe/Tp computed outside the silence phases by a harmonic analysis module 57. During silence phases (δn = 0), the module 56 is not in service, i.e. H2n,f = H1n,i for each frequency f of a band i. Module 57 can apply any known method of analysis of the speech signal of the frame to determine the period Tp, expressed as an integer or fractional number of samples, for example a linear prediction method.

The protection provided by module 56 may consist in carrying out, for each frequency f belonging to a band i:

Figure 00160001

Δf = Fe/N represents the spectral resolution of the Fourier transform. When H2n,f = 1, the quantity subtracted from the component Sn,f will be zero. In this calculation, the floor coefficients β2i (for example β2i = β1i) express the fact that certain harmonics of the pitch frequency fp can be masked by noise, so that there is no point in protecting them.

This protection strategy is preferably applied for each of the frequencies closest to the harmonics of fp, that is to say for any integer η.

If δfp denotes the frequency resolution with which the analysis module 57 produces the estimated pitch frequency fp, i.e. the real pitch frequency lies between fp − δfp/2 and fp + δfp/2, then the deviation between the η-th harmonic of the real pitch frequency and its estimate η×fp (condition (9)) can reach ±η×δfp/2. For high values of η, this deviation can be greater than the spectral half-resolution Δf/2 of the Fourier transform. To take account of this uncertainty and to guarantee good protection of the harmonics of the real pitch frequency, each of the frequencies of the interval

Figure 00160002

can be protected, i.e. condition (9) above is replaced with:

Figure 00160003

This way of proceeding (condition (9')) is of particular interest when the values of η can be large, in particular when the method is used in a wideband system.

For each protected frequency, the corrected frequency response H2n,f can be equal to 1 as indicated above, which corresponds to the subtraction of a zero quantity in the context of the spectral subtraction, that is to say full protection of the frequency in question. More generally, this corrected frequency response H2n,f could be taken equal to a value between H1n,f and 1 depending on the degree of protection desired, which corresponds to the subtraction of a quantity smaller than the one that would be subtracted if the frequency in question were not protected.

The spectral components S2n,f of a denoised signal are computed by a multiplier 58: S2n,f = H2n,f·Sn,f.

This signal S2n,f is supplied to a module 60 which computes, for each frame n, a masking curve by applying a psychoacoustic model of auditory perception by the human ear.

The masking phenomenon is a known principle of the functioning of the human ear. When two frequencies are heard simultaneously, it is possible that one of the two is no longer audible; it is then said to be masked.

There are various methods for computing masking curves. One can for example use the one developed by J.D. Johnston ("Transform Coding of Audio Signals Using Perceptual Noise Criteria", IEEE Journal on Selected Areas in Communications, Vol. 6, No. 2, February 1988). In this method, one works in the Bark frequency scale. The masking curve is seen as the convolution of the spectral spreading function of the basilar membrane in the Bark domain with the exciting signal, constituted in the present application by the signal S2n,f. The spectral spreading function can be modeled as shown in Figure 7. For each Bark band, the contribution of the lower and upper bands convolved with the spreading function of the basilar membrane is computed:

Figure 00180001

where the indices q and q' denote the Bark bands (0 ≤ q, q' ≤ Q), and S2n,q' represents the mean of the components S2n,f of the denoised exciting signal for the discrete frequencies f belonging to the Bark band q'.

The masking threshold Mn,q is obtained by module 60 for each Bark band q according to the formula: Mn,q = Cn,q/Rq, where Rq depends on the more or less voiced character of the signal. In a known manner, a possible form of Rq is: 10·log10(Rq) = (A+q)·χ + B·(1−χ), with A = 14.5 and B = 5.5. χ denotes a degree of voicing of the speech signal, varying between zero (no voicing) and 1 (strongly voiced signal). The parameter χ can be of the known form:

Figure 00180002

where SFM represents, in decibels, the ratio between the arithmetic mean and the geometric mean of the energy of the Bark bands, and SFMmax = −60 dB.
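The χ formula is only present as an image; in Johnston's method the standard form is χ = min(SFM/SFMmax, 1), with SFM computed as the geometric-to-arithmetic energy ratio in dB (≤ 0 dB), which is the convention assumed in this sketch (the patent's exact expression may differ):

```python
import numpy as np

def masking_threshold(C, energies, A=14.5, B=5.5, SFM_max=-60.0):
    """Hedged sketch of the Johnston-style threshold M_q = C_q / R_q:
    the spectral flatness measure of the Bark-band energies gives a
    voicing degree chi, which sets the dB offset R_q applied to the
    convolved spectrum C_q."""
    energies = np.asarray(energies, dtype=float)
    geo = np.exp(np.mean(np.log(energies)))     # geometric mean
    arith = np.mean(energies)                   # arithmetic mean
    SFM = 10.0 * np.log10(geo / arith)          # flatness in dB, <= 0
    chi = min(SFM / SFM_max, 1.0)               # degree of voicing
    q = np.arange(len(C))
    R_dB = (A + q) * chi + B * (1.0 - chi)      # 10*log10(R_q)
    return np.asarray(C) / (10.0 ** (R_dB / 10.0))
```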

The denoising system further comprises a module 62 which corrects the frequency response of the denoising filter, as a function of the masking curve Mn,q computed by module 60 and of the increased estimates B̂'n,i computed by module 45. Module 62 decides on the level of denoising which must actually be achieved.

By comparing the envelope of the increased noise estimate with the envelope formed by the masking thresholds Mn,q, it is decided to denoise the signal only insofar as the increased estimate B̂'n,i exceeds the masking curve. This avoids needlessly removing noise that is masked by speech.

The new response H3n,f, for a frequency f belonging to the band i defined by module 12 and to the Bark band q, thus depends on the relative difference between the increased estimate B̂'n,i of the corresponding spectral component of the noise and the masking curve Mn,q, in the following way:

Figure 00190001

In other words, the quantity subtracted from a spectral component Sn,f in the spectral subtraction process having the frequency response H3n,f is substantially equal to the minimum of, on the one hand, the quantity subtracted from this spectral component in the spectral subtraction process having the frequency response H2n,f and, on the other hand, the fraction of the increased estimate B̂'n,i of the corresponding spectral component of the noise which, if applicable, exceeds the masking curve Mn,q.
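This "minimum of the two subtractions" rule can be sketched directly; the closed form of the figure is only an image, so the code below implements the verbal description (names illustrative):

```python
import numpy as np

def corrected_response(H2, S, B_over, M):
    """Sketch of module 62: subtract per bin the minimum of what the
    H2 filter would remove, (1 - H2)*S, and the part of the increased
    noise estimate B_over that rises above the masking threshold M."""
    sub_h2 = (1.0 - H2) * S                  # quantity H2 would subtract
    sub_mask = np.maximum(B_over - M, 0.0)   # only unmasked noise
    sub = np.minimum(sub_h2, sub_mask)
    return 1.0 - sub / np.maximum(S, 1e-12)  # back to a response H3
```

When the noise is entirely below the masking curve, sub_mask is zero and H3 = 1: no subtraction, as the text requires.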

Figure 8 illustrates the principle of the correction applied by module 62. It schematically shows an example of a masking curve Mn,q computed on the basis of the spectral components S2n,f of the denoised signal, as well as the increased estimate B̂'n,i of the noise spectrum. The quantity finally subtracted from the components Sn,f will be that represented by the hatched areas, that is to say limited to the fraction of the increased estimate B̂'n,i of the spectral components of the noise which exceeds the masking curve.

This subtraction is carried out by multiplying the frequency response H3n,f of the denoising filter by the spectral components Sn,f of the speech signal (multiplier 64). A module 65 then reconstructs the denoised signal in the time domain by applying the inverse fast Fourier transform (IFFT) to the frequency samples S3n,f delivered by the multiplier 64. For each frame, only the first N/2 = 128 samples of the signal produced by module 65 are delivered as the final denoised signal s3, after reconstruction by overlap-add with the last N/2 = 128 samples of the previous frame (module 66).
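The IFFT and overlap-add of modules 65 and 66 can be sketched as follows (names illustrative):

```python
import numpy as np

def synthesize(S3, prev_tail, N=256):
    """Sketch of modules 65-66: inverse FFT of the denoised spectrum
    S3, then overlap-add of the first N/2 samples of the current
    frame with the last N/2 samples kept from the previous frame."""
    frame = np.fft.ifft(S3, n=N).real
    out = frame[:N // 2] + prev_tail    # overlap-add (module 66)
    new_tail = frame[N // 2:]           # saved for the next frame
    return out, new_tail
```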

Figure 9 shows a preferred embodiment of a denoising system implementing the invention. This system comprises a certain number of elements similar to corresponding elements of the system of Figure 1, for which the same reference numerals have been used. Thus, modules 10, 11, 12, 15, 16, 45 and 55 provide in particular the quantities Sn,i, B̂n,i, α'n,i, B̂'n,i and H1n,f in order to carry out the selective denoising.

The frequency resolution of the fast Fourier transform 11 is a limitation of the system of Figure 1. Indeed, the frequency protected by module 56 is not necessarily the precise pitch frequency fp, but the frequency closest to it in the discrete spectrum. In some cases, harmonics relatively far from those of the pitch frequency may then be protected. The system of Figure 9 overcomes this drawback thanks to an appropriate conditioning of the speech signal.

In this conditioning, the sampling frequency of the signal is modified so that the period 1/fp covers exactly an integer number of sample times of the conditioned signal.

Many harmonic analysis methods that can be implemented by module 57 are capable of providing a fractional value of the delay Tp, expressed as a number of samples at the initial sampling frequency Fe. A new sampling frequency fe is then chosen so as to be equal to an integer multiple of the estimated pitch frequency, i.e. fe = p·fp = p·Fe/Tp = K·Fe, with p an integer. In order not to lose signal samples, fe should be greater than Fe. It can in particular be required to lie between Fe and 2Fe (1 ≤ K ≤ 2), to facilitate the implementation of the conditioning.

Of course, if no voice activity is detected on the current frame (δn = 0), or if the delay Tp estimated by module 57 is an integer, it is not necessary to condition the signal.

So that each of the harmonics of the pitch frequency also corresponds to an integer number of samples of the conditioned signal, the integer p must be a divisor of the size N of the signal window produced by module 10: N = α·p, with α an integer. This size N is usually a power of 2 for the implementation of the FFT. It is 256 in the example considered.

The spectral resolution Δf of the discrete Fourier transform of the conditioned signal is given by Δf = p·fp/N = fp/α. It is therefore advantageous to choose p small so as to maximize α, but large enough to oversample. In the example considered, where Fe = 8 kHz and N = 256, the values chosen for the parameters p and α are indicated in Table I:

    500 Hz   < fp < 1000 Hz      8   < Tp < 16      p = 16     α = 16
    250 Hz   < fp < 500 Hz       16  < Tp < 32      p = 32     α = 8
    125 Hz   < fp < 250 Hz       32  < Tp < 64      p = 64     α = 4
    62.5 Hz  < fp < 125 Hz       64  < Tp < 128     p = 128    α = 2
    31.25 Hz < fp < 62.5 Hz      128 < Tp < 256     p = 256    α = 1
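The selection in Table I amounts to picking p as the smallest power of two above Tp, which automatically keeps K = p/Tp between 1 and 2. A sketch of this choice (function name illustrative):

```python
def choose_conditioning(T_p, N=256):
    """Sketch of module 70's choice per Table I: p is the smallest
    power of two strictly greater than T_p, alpha = N/p, and the
    resampling ratio K = p/T_p then lies between 1 and 2."""
    p = 1
    while p <= T_p:
        p *= 2
    alpha = N // p
    K = p / T_p
    return p, alpha, K
```

For example, Tp = 100 samples (fp = 80 Hz at Fe = 8 kHz) falls in the fourth row of the table and yields p = 128, α = 2.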

This choice is made by a module 70 according to the value of the delay Tp supplied by the harmonic analysis module 57. Module 70 supplies the ratio K between the sampling frequencies to three frequency-change modules 71, 72, 73.

Module 71 is used to transform the values Sn,i, B̂n,i, α'n,i, B̂'n,i and H1n,f, relating to the bands i defined by module 12, into the modified frequency scale (sampling frequency fe). This transformation simply consists in dilating the bands i by the factor K. The values thus transformed are supplied to the harmonic protection module 56.

The latter then operates in the same way as before to supply the frequency response H²n,f of the noise-suppression filter. This response H²n,f is obtained in the same way as in the case of Figure 1 (conditions (8) and (9)), except that, in condition (9), the pitch frequency fp = fe/p is defined according to the value of the integer delay p supplied by module 70, the frequency resolution Δf also being supplied by this module 70.
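The harmonic protection principle (the quantities subtracted at the spectral lines closest to integer multiples of the pitch frequency are substantially zero, per claims 5 and 6) can be illustrated schematically. This is a sketch with names of our own, not the patent's implementation:

```python
def protect_harmonics(h1, f_p, delta_f):
    """Derive H2 from H1: force the filter response to 1 (no attenuation)
    at each spectral bin closest to an integer multiple of the pitch fp,
    so that the harmonics of the voiced speech are preserved."""
    h2 = list(h1)
    n_bins = len(h1)
    k = 1
    while k * f_p < n_bins * delta_f:   # walk the harmonics k * fp
        idx = round(k * f_p / delta_f)  # bin closest to k * fp
        if idx < n_bins:
            h2[idx] = 1.0               # protected: nothing is subtracted
        k += 1
    return h2

h1 = [0.5] * 16                         # toy attenuation profile
h2 = protect_harmonics(h1, f_p=100.0, delta_f=50.0)
```

Here every even bin (a multiple of fp = 100 Hz at Δf = 50 Hz resolution) keeps a response of 1 while the others keep their original attenuation.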

Module 72 oversamples the frame of N samples supplied by the windowing module 10. Oversampling by a rational factor K (K = K1/K2) consists in first oversampling by the integer factor K1, then subsampling by the integer factor K2. This integer-factor oversampling and subsampling can be performed conventionally by means of polyphase filter banks.
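The up-by-K1, down-by-K2 structure can be sketched as follows. The patent uses polyphase filter banks; this minimal illustration substitutes linear interpolation for the interpolation filter, and the names are ours:

```python
def resample_rational(x, k1, k2):
    """Resample x by the rational factor K = K1/K2: oversample by the
    integer K1 (here via linear interpolation, standing in for the
    polyphase filters of the text), then subsample by the integer K2."""
    up = []
    for i in range(len(x) - 1):
        for j in range(k1):             # K1 points per input interval
            up.append(x[i] + (x[i + 1] - x[i]) * j / k1)
    up.append(x[-1])
    return up[::k2]                     # keep every K2-th sample

frame = list(range(8))                  # toy ramp frame of N = 8 samples
y = resample_rational(frame, 3, 2)      # K = 3/2
```

On a ramp the output is again a ramp with 2/3 of the original step, as expected for a 3/2 rate increase.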

The conditioned signal frame s′ supplied by module 72 comprises KN samples at the frequency fe. These samples are sent to a module 75 which computes their Fourier transform. The transformation can be carried out on two blocks of N = 256 samples: one consisting of the first N samples of the conditioned signal frame of length KN, and the other of the last N samples of this frame. The two blocks therefore overlap by (2−K)×100%. For each of the two blocks, a set of Fourier components Sn,f is obtained. These components Sn,f are supplied to the multiplier 58, which multiplies them by the spectral response H²n,f to deliver the spectral components S²n,f of the first noise-suppressed signal.

These components S²n,f are sent to module 60, which computes the masking curves in the manner indicated above.

Preferably, in this computation of the masking curves, the quantity χ designating the degree of voicing of the speech signal (formula (13)) is taken to be of the form χ = 1 − H, where H is an entropy of the autocorrelation of the spectral components S²n,f of the noise-suppressed conditioned signal. The autocorrelations A(k) are computed by a module 76, for example according to the formula:

[Equation image in the original: normalized autocorrelation A(k) of the spectral components S²n,f]

A module 77 then computes the normalized entropy H, and supplies it to module 60 for the computation of the masking curve (see S. A. McClellan et al.: "Spectral Entropy: an Alternative Indicator for Rate Allocation?", Proc. ICASSP'94, pages 201-204):

[Equation image in the original: normalized entropy H of the autocorrelations A(k)]

Thanks to the conditioning of the signal, and to its noise suppression by the filter H²n,f, the normalized entropy H constitutes a voicing measure that is very robust to noise and to variations in the pitch frequency.
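The reason this entropy works as a voicing measure can be seen in a small sketch. The patent's exact normalizations live in the equation images above and are not reproduced here; this follows the general recipe of McClellan et al. (autocorrelation of spectral magnitudes, entropy normalized to [0, 1]) with names of our own. A harmonic (comb-like) spectrum concentrates its autocorrelation on a few lags, giving low entropy and hence χ = 1 − H close to 1; a flat, noise-like spectrum spreads it, giving high entropy and χ close to 0:

```python
from math import log

def voicing(spectrum):
    """chi = 1 - H, where H is the normalized entropy of the
    autocorrelation of the spectral magnitudes (illustrative formula,
    not the patent's exact one)."""
    n = len(spectrum)
    # raw autocorrelation A(k), k = 1 .. N-1
    a = [sum(spectrum[f] * spectrum[f + k] for f in range(n - k))
         for k in range(1, n)]
    total = sum(a)
    p = [v / total for v in a if v > 0]       # normalize to a distribution
    h = -sum(v * log(v) for v in p) / log(n)  # entropy, scaled by log N
    return 1.0 - h

comb = [1.0 if f % 8 == 0 else 0.0 for f in range(64)]  # harmonic spectrum
flat = [1.0] * 64                                       # noise-like spectrum
```

The comb spectrum yields a markedly higher degree of voicing than the flat one.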

The correction module 62 operates in the same way as that of the system of Figure 1, taking account of the overestimated noise B̂′n,i rescaled by the sampling-rate conversion module 71. It supplies the frequency response H³n,f of the final noise-suppression filter, which the multiplier 64 multiplies by the spectral components Sn,f of the conditioned signal. The resulting components S³n,f are brought back into the time domain by the inverse FFT module 65. At the output of this module 65, a module 80 combines, for each frame, the two signal blocks resulting from the processing of the two overlapping blocks delivered by the FFT module 75. This combination can consist of a Hamming-weighted sum of the samples, to form a noise-suppressed conditioned signal frame of KN samples.

The noise-suppressed conditioned signal supplied by module 80 undergoes a sampling-rate conversion in module 73. Its sampling frequency is brought back to Fe = fe/K by operations which are the inverse of those performed by module 72. Module 73 delivers N = 256 samples per frame. After overlap-add reconstruction with the last N/2 = 128 samples of the previous frame, only the first N/2 = 128 samples of the current frame are finally retained to form the final noise-suppressed signal s3 (module 66).
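The final reconstruction step can be sketched as follows: overlap-add the saved tail of the previous frame onto the head of the current one, then retain only the first half. A minimal illustration of module 66's role, with names of our own:

```python
def finalize_frame(current, prev_tail, n=256):
    """Overlap-add the saved tail of the previous frame onto the start
    of the current frame, then keep only the first N/2 samples as the
    final noise-suppressed output chunk."""
    out = list(current)
    for i, v in enumerate(prev_tail):
        out[i] += v                    # overlap-add region
    return out[: n // 2]               # retain the first N/2 samples

frame = [1.0] * 256                    # toy denoised frame
tail = [0.5] * 128                     # saved tail of the previous frame
chunk = finalize_frame(frame, tail)
```

Each call emits N/2 = 128 output samples; the second half of the frame is held back to be overlap-added with the next one.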

In a preferred embodiment, a module 82 manages the windows formed by module 10 and saved by module 66, in such a way that the number M of samples saved is equal to an integer multiple of Tp = Fe/fp. This avoids phase discontinuity problems between frames. Correspondingly, the management module 82 controls the windowing module 10 so that the overlap between the current frame and the next corresponds to N − M. This overlap of N − M samples is taken into account in the overlap-add performed by module 66 when the next frame is processed. From the value of Tp supplied by the harmonic analysis module 57, module 82 computes the number of samples to be saved as M = Tp × E[N/(2Tp)], E[·] denoting the integer part, and controls modules 10 and 66 accordingly.
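The pitch-synchronous window management reduces to one line of arithmetic, sketched here (names are ours):

```python
def samples_to_save(t_p, n=256):
    """M = Tp * E[N / (2 Tp)]: the largest multiple of the pitch period
    Tp not exceeding N/2, so that frame boundaries fall on whole pitch
    periods and phase discontinuities between frames are avoided."""
    return t_p * (n // (2 * t_p))

m = samples_to_save(20)   # Tp = 20 -> M = 20 * E[256/40] = 20 * 6 = 120
overlap = 256 - m         # the next frame overlaps this one by N - M
```

For Tp = 20 this saves M = 120 samples instead of the fixed N/2 = 128, and the inter-frame overlap grows from 128 to 136 samples.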

In the embodiment just described, the pitch frequency is estimated as an average over the frame. The pitch frequency may, however, vary somewhat over this period. It is possible to take these variations into account in the context of the present invention, by conditioning the signal so as to artificially obtain a constant pitch frequency within the frame.

To do this, the harmonic analysis module 57 must supply the time intervals between consecutive breaks in the speech signal attributable to closures of the speaker's glottis occurring during the frame. Methods that can be used to detect such micro-breaks are well known in the field of harmonic analysis of speech signals. In this respect, the following articles may be consulted: M. Basseville et al., "Sequential detection of abrupt changes in spectral characteristics of digital signals", IEEE Trans. on Information Theory, 1983, Vol. IT-29, No. 5, pages 708-723; R. André-Obrecht, "A new statistical approach for the automatic segmentation of continuous speech signals", IEEE Trans. on Acoustics, Speech and Signal Processing, Vol. 36, No. 1, January 1988; and C. Murgia et al., "An algorithm for the estimation of glottal closure instants using the sequential detection of abrupt changes in speech signals", Signal Processing VII, 1994, pages 1685-1688.

The principle of these methods is to perform a statistical test between two models, one short-term and the other long-term. Both models are adaptive linear prediction models. The value of this statistical test wm is the cumulative sum of the a posteriori likelihood ratio of the two distributions, corrected by the Kullback divergence. For a distribution of residuals with Gaussian statistics, this value wm is given by:

[Equation image in the original: expression of the test statistic wm as a function of e⁰m, σ²₀, e¹m and σ²₁]

where e⁰m and σ²₀ represent the residual computed at sample m of the frame and the variance of the long-term model, e¹m and σ²₁ likewise representing the residual and the variance of the short-term model. The closer the two models are to each other, the closer the value wm of the statistical test is to 0. On the other hand, when the two models diverge from each other, this value wm becomes negative, which indicates a break R in the signal.

Figure 10 thus shows a possible example of the evolution of the value wm, showing the breaks R in the speech signal. The time intervals tr (r = 1, 2, …) between two consecutive breaks R are computed, and expressed as a number of samples of the speech signal. Each of these intervals tr is inversely proportional to the pitch frequency fp, which is thus estimated locally: fp = Fe/tr over the r-th interval.

The temporal variations of the pitch frequency (that is, the fact that the intervals tr are not all equal within a given frame) can then be corrected, so as to have a constant pitch frequency in each of the analysis frames. This correction is carried out by modifying the sampling frequency over each interval tr, so as to obtain, after oversampling, constant intervals between two glottal breaks. The duration between two breaks is therefore modified by oversampling at a variable ratio, so as to align with the largest interval. In addition, care is taken to comply with the conditioning constraint according to which the oversampling frequency is a multiple of the estimated pitch frequency.

Figure 11 shows the means used to compute the conditioning of the signal in this latter case. The harmonic analysis module 57 is designed to implement the above analysis method, and to supply the intervals tr relating to the signal frame produced by module 10. For each of these intervals, module 70 (block 90 in Figure 11) computes the oversampling ratio Kr = pr/tr, where the integer pr is given by the third column of Table I when tr takes the values indicated in the second column. These oversampling ratios Kr are supplied to the sampling-rate conversion modules 72 and 73, so that the interpolations are carried out with the ratio Kr over the corresponding time interval tr.
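The per-interval computation of block 90 can be sketched as follows, reusing the Table I lookup (illustrative names of our own):

```python
def oversampling_ratios(intervals):
    """For each inter-glottal interval t_r (in samples at Fe), take p_r
    from the third column of Table I (the row whose second column
    contains t_r) and return the variable ratio K_r = p_r / t_r."""
    def table_i_p(t):
        for p in (16, 32, 64, 128, 256):  # Table I rows: p/2 < t < p
            if t < p:
                return p
        raise ValueError("interval out of the handled range")
    return [table_i_p(t) / t for t in intervals]

# three slightly unequal pitch periods within one frame
ratios = oversampling_ratios([78, 80, 82])
```

All three intervals fall in the row 64 < Tp < 128, so pr = 128 and each interval gets its own ratio Kr slightly above or below 1.6, stretching every period to the same conditioned length.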

The largest, Tp, of the time intervals tr supplied by module 57 for a frame is selected by module 70 (block 91 in Figure 11) to obtain a pair (p, α) as indicated in Table I. The modified sampling frequency is then fe = p·Fe/Tp as before, the spectral resolution Δf of the discrete Fourier transform of the conditioned signal still being given by Δf = Fe/(α·Tp). For the sampling-rate conversion module 71, the oversampling ratio K is given by K = p/Tp (block 92). The module 56 for protecting the harmonics of the pitch frequency operates in the same way as before, using for condition (9) the spectral resolution Δf supplied by block 91 and the pitch frequency fp = fe/p defined according to the value of the integer delay p supplied by block 91.

This embodiment of the invention also involves adapting the window management module 82. The number M of samples of the noise-suppressed signal to be saved for the current frame here corresponds to an integer number of consecutive time intervals tr between two glottal breaks (see Figure 10). This arrangement avoids phase discontinuity problems between frames, while taking account of the possible variations of the time intervals tr within a frame.

Claims (19)

  1. Method of suppressing noise in a digital speech signal (s) processed by successive frames, comprising the steps of:
    computing spectral components (Sn,f, Sn,i) of the speech signal of each frame;
    computing, for each frame, overestimates (B̂′n,i) of spectral components of the noise included in the speech signal;
    performing a spectral subtraction including at least a first subtraction step in which a respective first quantity dependent on parameters including the overestimate (B̂′n,i) of the corresponding spectral component of the noise for said frame is subtracted from each spectral component (Sn,f) of the speech signal of the frame, to obtain spectral components (S²n,f) of a first noise-suppressed signal,
       characterized in that the spectral subtraction further includes the following steps:
    computing a masking curve (Mn,q) by applying an auditory perception model on the basis of spectral components (S²n,f) of the first noise-suppressed signal;
    comparing overestimates (B̂′n,i) of the spectral components of the noise for the frame to the computed masking curve (Mn,q); and
    a second subtraction step in which a respective second quantity depending on parameters including a difference between the overestimate of the corresponding spectral component of the noise and the computed masking curve is subtracted from each spectral component (Sn,f) of the speech signal of the frame.
  2. Method according to claim 1, wherein said second quantity relating to a spectral component (Sn,f) of the speech signal of the frame is substantially equal to whichever is the lower of the corresponding first quantity and the fraction of the overestimate (B̂′n,i) of the corresponding spectral component of the noise which exceeds the masking curve (Mn,q).
  3. Method according to claim 1 or 2, comprising the step of performing a harmonic analysis of the speech signal to estimate a pitch frequency (fp) of the speech signal in each frame in which the speech signal features vocal activity.
  4. Method according to claim 3, wherein the parameters on which the first subtracted quantities depend include the estimated pitch frequency (fp).
  5. Method according to claim 4, wherein the first quantity subtracted from a given spectral component (Sn,f) of the speech signal is lower if said spectral component corresponds to the frequency closest to an integer multiple of the estimated pitch frequency (fp) than if said spectral component does not correspond to the frequency closest to an integer multiple of the estimated pitch frequency.
  6. Method according to claim 4 or 5, wherein the respective quantities subtracted from the spectral components (Sn,f) of the speech signal corresponding to the frequencies closest to integer multiples of the estimated pitch frequency (fp) are substantially zero.
  7. Method according to any of claims 3 to 6, wherein, after estimating the pitch frequency (fp) of the speech signal in a frame, the speech signal of the frame is conditioned by oversampling it at an oversampling frequency (fe) which is a multiple of the estimated pitch frequency and the spectral components (Sn,f) of the speech signal are computed for the frame on the basis of the conditioned signal (s') to subtract said quantities therefrom.
  8. Method according to claim 7, wherein spectral components (Sn,f) of the speech signal are computed by distributing the conditioned signal (s') into blocks of N samples transformed into the frequency domain and wherein the ratio (p) between the oversampling frequency (fe) and the estimated pitch frequency is a factor of the number N.
  9. Method according to claim 7 or 8, wherein a degree of voicing (χ) of the speech signal is estimated for the frame on the basis of a computation of the entropy (H) of the autocorrelation of the spectral components computed on the basis of the conditioned signal.
  10. Method according to claim 9, wherein said spectral components (S²n,f) whose autocorrelation is computed are those computed on the basis of the conditioned signal (s') after subtraction of said first quantities.
  11. Method according to claim 9 or 10, wherein the degree of voicing (χ) is measured on the basis of a normalized entropy (H) of the form:
    [Equation image in the original: normalized entropy H]
    where N is the number of samples used to calculate the spectral components (Sn,f) on the basis of the conditioned signal (s') and A(k) is the normalized autocorrelation defined by:
    [Equation image in the original: normalized autocorrelation A(k)]
    S²n,f designating the spectral component of rank f computed on the basis of the conditioned signal.
  12. Method according to claim 11, wherein the computation of the masking curve (Mn,q) uses the degree of voicing (χ) measured by the normalized entropy H.
  13. Method according to any of claims 3 to 12, wherein, after processing each frame, a number (M) of the samples of the noise-suppressed speech signal supplied by such processing is retained which is equal to an integer multiple of the ratio (Tp) between the sampling frequency (Fe) and the estimated pitch frequency (fp).
  14. Method according to any of claims 3 to 12, wherein the estimation of the pitch frequency of the speech signal over a frame includes the steps of:
    estimating time intervals (tr) between two consecutive breaks (R) of the signal which can be attributed to glottal closures of the speaker occurring during the frame, the estimated pitch frequency being inversely proportional to said time intervals; and
    interpolating the speech signal in said time intervals so that the conditioned signal (s') resulting from such interpolation has a constant time interval between two consecutive breaks.
  15. Method according to claim 14, wherein, after processing each frame, a number (M) of the noise-suppressed speech signal samples supplied by such processing is retained which corresponds to an integer number of estimated time intervals (tr).
  16. Method according to any one of the preceding claims, wherein values of a signal-to-noise ratio of the speech signal (s) are estimated in the spectral domain for each frame and the parameters on which the first subtracted quantities depend include the estimated values of the signal-to-noise ratio, the first quantity subtracted from each spectral component (Sn,f) of the speech signal in the frame being a decreasing function of the corresponding estimated value of the signal-to-noise ratio.
  17. Method according to claim 16, wherein said function decreases toward zero for the highest values of the signal-to-noise ratio.
  18. Method according to any one of the preceding claims, wherein the result of the spectral subtraction is transformed to the time domain to construct a noise-suppressed speech signal (s3).
  19. A device for suppressing noise in a speech signal, comprising processing means arranged to implement a method according to any one of the preceding claims.
EP98943999A 1997-09-18 1998-09-16 Method and apparatus for suppressing noise in a digital speech signal Expired - Lifetime EP1016072B1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
FR9711643 1997-09-18
FR9711643A FR2768547B1 (en) 1997-09-18 1997-09-18 METHOD FOR NOISE REDUCTION OF A DIGITAL SPEAKING SIGNAL
PCT/FR1998/001980 WO1999014738A1 (en) 1997-09-18 1998-09-16 Method for suppressing noise in a digital speech signal

Publications (2)

Publication Number Publication Date
EP1016072A1 EP1016072A1 (en) 2000-07-05
EP1016072B1 true EP1016072B1 (en) 2002-01-16

Family

ID=9511230

Family Applications (1)

Application Number Title Priority Date Filing Date
EP98943999A Expired - Lifetime EP1016072B1 (en) 1997-09-18 1998-09-16 Method and apparatus for suppressing noise in a digital speech signal

Country Status (7)

Country Link
US (1) US6477489B1 (en)
EP (1) EP1016072B1 (en)
AU (1) AU9168998A (en)
CA (1) CA2304571A1 (en)
DE (1) DE69803203T2 (en)
FR (1) FR2768547B1 (en)
WO (1) WO1999014738A1 (en)

Families Citing this family (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6510408B1 (en) * 1997-07-01 2003-01-21 Patran Aps Method of noise reduction in speech signals and an apparatus for performing the method
US6549586B2 (en) * 1999-04-12 2003-04-15 Telefonaktiebolaget L M Ericsson System and method for dual microphone signal noise reduction using spectral subtraction
US6717991B1 (en) 1998-05-27 2004-04-06 Telefonaktiebolaget Lm Ericsson (Publ) System and method for dual microphone signal noise reduction using spectral subtraction
FR2797343B1 (en) * 1999-08-04 2001-10-05 Matra Nortel Communications VOICE ACTIVITY DETECTION METHOD AND DEVICE
JP3454206B2 (en) * 1999-11-10 2003-10-06 三菱電機株式会社 Noise suppression device and noise suppression method
US6804640B1 (en) * 2000-02-29 2004-10-12 Nuance Communications Signal noise reduction using magnitude-domain spectral subtraction
US6766292B1 (en) * 2000-03-28 2004-07-20 Tellabs Operations, Inc. Relative noise ratio weighting techniques for adaptive noise cancellation
JP2002221988A (en) * 2001-01-25 2002-08-09 Toshiba Corp Method and device for suppressing noise in voice signal and voice recognition device
US20020150264A1 (en) * 2001-04-11 2002-10-17 Silvia Allegro Method for eliminating spurious signal components in an input signal of an auditory system, application of the method, and a hearing aid
US6985709B2 (en) * 2001-06-22 2006-01-10 Intel Corporation Noise dependent filter
DE10150519B4 (en) * 2001-10-12 2014-01-09 Hewlett-Packard Development Co., L.P. Method and arrangement for speech processing
US7103539B2 (en) * 2001-11-08 2006-09-05 Global Ip Sound Europe Ab Enhanced coded speech
US20040078199A1 (en) * 2002-08-20 2004-04-22 Hanoh Kremer Method for auditory based noise reduction and an apparatus for auditory based noise reduction
US7398204B2 (en) * 2002-08-27 2008-07-08 Her Majesty In Right Of Canada As Represented By The Minister Of Industry Bit rate reduction in audio encoders by exploiting inharmonicity effects and auditory temporal masking
WO2004036549A1 (en) * 2002-10-14 2004-04-29 Koninklijke Philips Electronics N.V. Signal filtering
KR101141247B1 * 2003-10-10 2012-05-04 Agency for Science, Technology and Research Method for encoding a digital signal into a scalable bitstream; method for decoding a scalable bitstream
US7725314B2 (en) * 2004-02-16 2010-05-25 Microsoft Corporation Method and apparatus for constructing a speech filter using estimates of clean speech and noise
US7729908B2 (en) * 2005-03-04 2010-06-01 Panasonic Corporation Joint signal and model based noise matching noise robustness method for automatic speech recognition
US20060206320A1 (en) * 2005-03-14 2006-09-14 Li Qi P Apparatus and method for noise reduction and speech enhancement with microphones and loudspeakers
KR100927897B1 (en) * 2005-09-02 2009-11-23 닛본 덴끼 가부시끼가이샤 Noise suppression method and apparatus, and computer program
US8126706B2 (en) * 2005-12-09 2012-02-28 Acoustic Technologies, Inc. Music detector for echo cancellation and noise reduction
JP4592623B2 (en) * 2006-03-14 2010-12-01 富士通株式会社 Communications system
US8949120B1 (en) 2006-05-25 2015-02-03 Audience, Inc. Adaptive noise cancelation
JP4757158B2 (en) * 2006-09-20 2011-08-24 富士通株式会社 Sound signal processing method, sound signal processing apparatus, and computer program
US20080162119A1 (en) * 2007-01-03 2008-07-03 Lenhardt Martin L Discourse Non-Speech Sound Identification and Elimination
ES2391228T3 (en) 2007-02-26 2012-11-22 Dolby Laboratories Licensing Corporation Entertainment audio voice enhancement
US8560320B2 (en) * 2007-03-19 2013-10-15 Dolby Laboratories Licensing Corporation Speech enhancement employing a perceptual model
JP5302968B2 (en) * 2007-09-12 2013-10-02 Dolby Laboratories Licensing Corporation Speech improvement with speech clarification
US8538763B2 (en) * 2007-09-12 2013-09-17 Dolby Laboratories Licensing Corporation Speech enhancement with noise level estimation adjustment
EP2192579A4 (en) * 2007-09-19 2016-06-08 Nec Corp Noise suppression device, method, and program
JP5056654B2 (en) * 2008-07-29 2012-10-24 JVC Kenwood Corporation Noise suppression device and noise suppression method
US20110257978A1 (en) * 2009-10-23 2011-10-20 Brainlike, Inc. Time Series Filtering, Data Reduction and Voice Recognition in Communication Device
US9838784B2 (en) 2009-12-02 2017-12-05 Knowles Electronics, Llc Directional audio capture
US8423357B2 (en) * 2010-06-18 2013-04-16 Alon Konchitsky System and method for biometric acoustic noise reduction
US9640194B1 (en) 2012-10-04 2017-05-02 Knowles Electronics, Llc Noise suppression for speech processing based on machine-learning mask estimation
US9536540B2 (en) * 2013-07-19 2017-01-03 Knowles Electronics, Llc Speech signal separation and synthesis based on auditory scene analysis and speech modeling
CN103824562B (en) * 2014-02-10 2016-08-17 太原理工大学 The rearmounted perceptual filter of voice based on psychoacoustic model
DE102014009689A1 (en) * 2014-06-30 2015-12-31 Airbus Operations Gmbh Intelligent sound system / module for cabin communication
DE112015003945T5 (en) 2014-08-28 2017-05-11 Knowles Electronics, Llc Multi-source noise reduction
CN107112025A (en) 2014-09-12 2017-08-29 美商楼氏电子有限公司 System and method for recovering speech components
CN105869652B (en) * 2015-01-21 2020-02-18 北京大学深圳研究院 Psychoacoustic model calculation method and device
US9820042B1 (en) 2016-05-02 2017-11-14 Knowles Electronics, Llc Stereo separation and directional suppression with omni-directional microphones
EP3566229B1 (en) * 2017-01-23 2020-11-25 Huawei Technologies Co., Ltd. An apparatus and method for enhancing a wanted component in a signal
US11017798B2 (en) * 2017-12-29 2021-05-25 Harman Becker Automotive Systems Gmbh Dynamic noise suppression and operations for noisy speech signals

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH03117919A (en) * 1989-09-30 1991-05-20 Sony Corp Digital signal encoding device
AU633673B2 (en) 1990-01-18 1993-02-04 Matsushita Electric Industrial Co., Ltd. Signal processing device
DE69124005T2 (en) 1990-05-28 1997-07-31 Matsushita Electric Ind Co Ltd Speech signal processing device
US5450522A (en) * 1991-08-19 1995-09-12 U S West Advanced Technologies, Inc. Auditory model for parametrization of speech
US5469087A (en) 1992-06-25 1995-11-21 Noise Cancellation Technologies, Inc. Control system using harmonic filters
US5400409A (en) * 1992-12-23 1995-03-21 Daimler-Benz Ag Noise-reduction method for noise-affected voice channels
AU676714B2 (en) * 1993-02-12 1997-03-20 British Telecommunications Public Limited Company Noise reduction
US5623577A (en) * 1993-07-16 1997-04-22 Dolby Laboratories Licensing Corporation Computationally efficient adaptive bit allocation for encoding method and apparatus with allowance for decoder spectral distortions
JP3131542B2 (en) * 1993-11-25 2001-02-05 Sharp Corporation Encoding / decoding device
US5555190A (en) 1995-07-12 1996-09-10 Micro Motion, Inc. Method and apparatus for adaptive line enhancement in Coriolis mass flow meter measurement
FR2739736B1 (en) * 1995-10-05 1997-12-05 Jean Laroche Method for reducing pre-echoes or post-echoes affecting audio recordings
FI100840B (en) * 1995-12-12 1998-02-27 Nokia Mobile Phones Ltd Noise attenuator and method for attenuating background noise from noisy speech and a mobile station
US6144937A (en) * 1997-07-23 2000-11-07 Texas Instruments Incorporated Noise suppression of speech by signal processing including applying a transform to time domain input sequences of digital signals representing audio information

Also Published As

Publication number Publication date
AU9168998A (en) 1999-04-05
FR2768547B1 (en) 1999-11-19
US6477489B1 (en) 2002-11-05
DE69803203D1 (en) 2002-02-21
FR2768547A1 (en) 1999-03-19
WO1999014738A1 (en) 1999-03-25
EP1016072A1 (en) 2000-07-05
DE69803203T2 (en) 2002-08-29
CA2304571A1 (en) 1999-03-25

Similar Documents

Publication Publication Date Title
EP1016072B1 (en) Method and apparatus for suppressing noise in a digital speech signal
EP2002428B1 (en) Method for trained discrimination and attenuation of echoes of a digital signal in a decoder and corresponding device
EP1789956B1 (en) Method of processing a noisy sound signal and device for implementing said method
EP1356461B1 (en) Noise reduction method and device
EP1320087B1 (en) Synthesis of an excitation signal for use in a comfort noise generator
EP1016071B1 (en) Method and apparatus for detecting speech activity
EP1395981B1 (en) Device and method for processing an audio signal
EP1016073B1 (en) Method and apparatus for suppressing noise in a digital speech signal
EP0490740A1 (en) Method and apparatus for pitch period determination of the speech signal in very low bitrate vocoders
JP2003280696A (en) Apparatus and method for emphasizing voice
EP1021805B1 (en) Method and apparatus for conditioning a digital speech signal
EP1429316B1 (en) System and method for multi-referenced correction of spectral voice distortions introduced by a communication network
EP3192073B1 (en) Discrimination and attenuation of pre-echoes in a digital audio signal
EP2515300B1 (en) Method and system for noise reduction
FR2888704A1 (en)
EP4287648A1 (en) Electronic device and associated processing method, acoustic apparatus and computer program
FR2885462A1 (en) Method for attenuating the pre- and post-echoes of a digital audio signal and corresponding device

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20000316

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): DE FR GB

17Q First examination report despatched

Effective date: 20001004

RIC1 Information provided on ipc code assigned before grant

Free format text: 7G 10L 21/02 A

RTI1 Title (correction)

Free format text: METHOD AND APPARATUS FOR SUPPRESSING NOISE IN A DIGITAL SPEECH SIGNAL

GRAG Despatch of communication of intention to grant

Free format text: ORIGINAL CODE: EPIDOS AGRA

GRAH Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOS IGRA

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

REG Reference to a national code

Ref country code: GB

Ref legal event code: IF02

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): DE FR GB

REF Corresponds to:

Ref document number: 69803203

Country of ref document: DE

Date of ref document: 20020221

GBT Gb: translation of ep patent filed (gb section 77(6)(a)/1977)

Effective date: 20020407

RAP2 Party data changed (patent owner data changed or rights of a patent transferred)

Owner name: NORTEL NETWORKS FRANCE

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

REG Reference to a national code

Ref country code: FR

Ref legal event code: CD

Ref country code: FR

Ref legal event code: CA

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20031127

Year of fee payment: 6

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20050401

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20050817

Year of fee payment: 8

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20050902

Year of fee payment: 8

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20060916

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20070531

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20060916

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20061002